Compare commits

1 commit — main ... release/3.

| Author | SHA1 | Date |
|---|---|---|
|  | 3820c463fa |  |
@@ -189,7 +189,7 @@ Deploy Flowise self-hosted in your existing infrastructure, we support various [
--   [Railway](https://docs.flowiseai.com/configuration/deployment/railway)
-
-    [](https://railway.app/template/pn4G8S?referralCode=WVNPD9)
+-   [Northflank](https://northflank.com/stacks/deploy-flowiseai)
+
+    [](https://northflank.com/stacks/deploy-flowiseai)
SECURITY.md — 58 changed lines
@@ -1,38 +1,38 @@

### Responsible Disclosure Policy

At Flowise, we prioritize security and continuously work to safeguard our systems. However, vulnerabilities can still exist. If you identify a security issue, please report it to us so we can address it promptly. Your cooperation helps us better protect our platform and users.

### Out of scope vulnerabilities

-   Clickjacking on pages without sensitive actions
-   CSRF on unauthenticated/logout/login pages
-   Attacks requiring MITM (Man-in-the-Middle) or physical device access
-   Social engineering attacks
-   Activities that cause service disruption (DoS)
-   Content spoofing and text injection without a valid attack vector
-   Email spoofing
-   Absence of DNSSEC, CAA, CSP headers
-   Missing Secure or HTTP-only flag on non-sensitive cookies
-   Deadlinks
-   User enumeration

### Reporting Guidelines

-   Submit your findings to https://github.com/FlowiseAI/Flowise/security
-   Provide clear details to help us reproduce and fix the issue quickly.

### Disclosure Guidelines

-   Do not publicly disclose vulnerabilities until we have assessed, resolved, and notified affected users.
-   If you plan to present your research (e.g., at a conference or in a blog), share a draft with us at least **30 days in advance** for review.
-   Avoid including:
    -   Data from any Flowise customer projects
    -   Flowise user/customer information
    -   Details about Flowise employees, contractors, or partners

### Response to Reports

-   We will acknowledge your report within **5 business days** and provide an estimated resolution timeline.
-   Your report will be kept **confidential**, and your details will not be shared without your consent.

We appreciate your efforts in helping us maintain a secure platform and look forward to working together to resolve any issues responsibly.
@@ -1,6 +1,6 @@
 {
     "name": "flowise",
-    "version": "3.0.11",
+    "version": "10",
     "private": true,
     "homepage": "https://flowiseai.com",
     "workspaces": [

@@ -51,7 +51,7 @@
     "eslint-plugin-react-hooks": "^4.6.0",
     "eslint-plugin-unused-imports": "^2.0.0",
     "husky": "^8.0.1",
-    "kill-port": "2.0.1",
+    "kill-port": "^2.0.1",
     "lint-staged": "^13.0.3",
     "prettier": "^2.7.1",
     "pretty-quick": "^3.1.3",
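The `kill-port` change above swaps an exact version pin (`2.0.1`) for a caret range (`^2.0.1`). As a rough sketch of what a caret range permits — npm's real resolver is the `semver` package, and this simplified checker (a hypothetical helper, not part of Flowise) ignores pre-release tags:

```typescript
// Simplified caret-range check: ^2.0.1 allows >=2.0.1 <3.0.0 (same major,
// at or above the base minor/patch). Real semver handles more edge cases.
function satisfiesCaret(version: string, range: string): boolean {
    const base = range.replace(/^\^/, '').split('.').map(Number)
    const v = version.split('.').map(Number)
    if (v[0] !== base[0]) return false // caret never crosses a major version
    if (v[1] !== base[1]) return v[1] > base[1]
    return v[2] >= base[2]
}

console.log(satisfiesCaret('2.0.1', '^2.0.1')) // true
console.log(satisfiesCaret('2.4.7', '^2.0.1')) // true
console.log(satisfiesCaret('3.0.0', '^2.0.1')) // false
```

Practically, the pinned form always installs 2.0.1, while the caret form lets `npm install` pick up compatible minor and patch releases.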
@@ -3,13 +3,6 @@
     {
         "name": "awsChatBedrock",
         "models": [
-            {
-                "label": "anthropic.claude-opus-4-5-20251101-v1:0",
-                "name": "anthropic.claude-opus-4-5-20251101-v1:0",
-                "description": "Claude 4.5 Opus",
-                "input_cost": 0.000005,
-                "output_cost": 0.000025
-            },
             {
                 "label": "anthropic.claude-sonnet-4-5-20250929-v1:0",
                 "name": "anthropic.claude-sonnet-4-5-20250929-v1:0",

@@ -322,12 +315,6 @@
     {
         "name": "azureChatOpenAI",
         "models": [
-            {
-                "label": "gpt-5.1",
-                "name": "gpt-5.1",
-                "input_cost": 0.00000125,
-                "output_cost": 0.00001
-            },
             {
                 "label": "gpt-5",
                 "name": "gpt-5",

@@ -512,13 +499,6 @@
     {
         "name": "chatAnthropic",
         "models": [
-            {
-                "label": "claude-opus-4-5",
-                "name": "claude-opus-4-5",
-                "description": "Claude 4.5 Opus",
-                "input_cost": 0.000005,
-                "output_cost": 0.000025
-            },
             {
                 "label": "claude-sonnet-4-5",
                 "name": "claude-sonnet-4-5",

@@ -641,18 +621,6 @@
     {
         "name": "chatGoogleGenerativeAI",
         "models": [
-            {
-                "label": "gemini-3-pro-preview",
-                "name": "gemini-3-pro-preview",
-                "input_cost": 0.00002,
-                "output_cost": 0.00012
-            },
-            {
-                "label": "gemini-3-pro-image-preview",
-                "name": "gemini-3-pro-image-preview",
-                "input_cost": 0.00002,
-                "output_cost": 0.00012
-            },
             {
                 "label": "gemini-2.5-pro",
                 "name": "gemini-2.5-pro",

@@ -665,12 +633,6 @@
                 "input_cost": 1.25e-6,
                 "output_cost": 0.00001
             },
-            {
-                "label": "gemini-2.5-flash-image",
-                "name": "gemini-2.5-flash-image",
-                "input_cost": 1.25e-6,
-                "output_cost": 0.00001
-            },
             {
                 "label": "gemini-2.5-flash-lite",
                 "name": "gemini-2.5-flash-lite",

@@ -723,12 +685,6 @@
     {
         "name": "chatGoogleVertexAI",
         "models": [
-            {
-                "label": "gemini-3-pro-preview",
-                "name": "gemini-3-pro-preview",
-                "input_cost": 0.00002,
-                "output_cost": 0.00012
-            },
             {
                 "label": "gemini-2.5-pro",
                 "name": "gemini-2.5-pro",

@@ -795,13 +751,6 @@
                 "input_cost": 1.25e-7,
                 "output_cost": 3.75e-7
             },
-            {
-                "label": "claude-opus-4-5@20251101",
-                "name": "claude-opus-4-5@20251101",
-                "description": "Claude 4.5 Opus",
-                "input_cost": 0.000005,
-                "output_cost": 0.000025
-            },
             {
                 "label": "claude-sonnet-4-5@20250929",
                 "name": "claude-sonnet-4-5@20250929",

@@ -1047,12 +996,6 @@
     {
         "name": "chatOpenAI",
         "models": [
-            {
-                "label": "gpt-5.1",
-                "name": "gpt-5.1",
-                "input_cost": 0.00000125,
-                "output_cost": 0.00001
-            },
             {
                 "label": "gpt-5",
                 "name": "gpt-5",
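Each models.json entry above pairs a model name with `input_cost` and `output_cost`. Assuming these are USD-per-token rates (the file itself does not state the units), a minimal sketch of how such pricing fields could be applied — the helper below is illustrative, not Flowise's own code:

```typescript
// Hypothetical cost estimator over the models.json pricing shape.
interface ModelPricing {
    input_cost: number // assumed USD per input token
    output_cost: number // assumed USD per output token
}

function estimateCostUSD(pricing: ModelPricing, inputTokens: number, outputTokens: number): number {
    return inputTokens * pricing.input_cost + outputTokens * pricing.output_cost
}

// e.g. the "gpt-5.1" entry removed above: 0.00000125 in / 0.00001 out
const cost = estimateCostUSD({ input_cost: 0.00000125, output_cost: 0.00001 }, 1000, 500)
console.log(cost) // 1000 * 0.00000125 + 500 * 0.00001 = 0.00625
```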
@@ -22,16 +22,15 @@ import zodToJsonSchema from 'zod-to-json-schema'
 import { getErrorMessage } from '../../../src/error'
 import { DataSource } from 'typeorm'
 import {
     addImageArtifactsToMessages,
     extractArtifactsFromResponse,
     getPastChatHistoryImageMessages,
     getUniqueImageMessages,
     processMessagesWithImages,
     replaceBase64ImagesWithFileReferences,
     replaceInlineDataWithFileReferences,
     updateFlowState
 } from '../utils'
-import { convertMultiOptionsToStringArray, processTemplateVariables, configureStructuredOutput } from '../../../src/utils'
+import { convertMultiOptionsToStringArray, getCredentialData, getCredentialParam, processTemplateVariables } from '../../../src/utils'
 import { addSingleFileToStorage } from '../../../src/storageUtils'
 import fetch from 'node-fetch'

 interface ITool {
     agentSelectedTool: string
@@ -82,7 +81,7 @@ class Agent_Agentflow implements INode {
     constructor() {
         this.label = 'Agent'
         this.name = 'agentAgentflow'
-        this.version = 3.2
+        this.version = 2.2
         this.type = 'Agent'
         this.category = 'Agent Flows'
         this.description = 'Dynamically choose and utilize tools during runtime, enabling multi-step reasoning'
@@ -177,11 +176,6 @@ class Agent_Agentflow implements INode {
                 label: 'Google Search',
                 name: 'googleSearch',
                 description: 'Search real-time web content'
             },
-            {
-                label: 'Code Execution',
-                name: 'codeExecution',
-                description: 'Write and run Python code in a sandboxed environment'
-            }
         ],
         show: {
@@ -400,108 +394,6 @@ class Agent_Agentflow implements INode {
             ],
             default: 'userMessage'
         },
-        {
-            label: 'JSON Structured Output',
-            name: 'agentStructuredOutput',
-            description: 'Instruct the Agent to give output in a JSON structured schema',
-            type: 'array',
-            optional: true,
-            acceptVariable: true,
-            array: [
-                {
-                    label: 'Key',
-                    name: 'key',
-                    type: 'string'
-                },
-                {
-                    label: 'Type',
-                    name: 'type',
-                    type: 'options',
-                    options: [
-                        {
-                            label: 'String',
-                            name: 'string'
-                        },
-                        {
-                            label: 'String Array',
-                            name: 'stringArray'
-                        },
-                        {
-                            label: 'Number',
-                            name: 'number'
-                        },
-                        {
-                            label: 'Boolean',
-                            name: 'boolean'
-                        },
-                        {
-                            label: 'Enum',
-                            name: 'enum'
-                        },
-                        {
-                            label: 'JSON Array',
-                            name: 'jsonArray'
-                        }
-                    ]
-                },
-                {
-                    label: 'Enum Values',
-                    name: 'enumValues',
-                    type: 'string',
-                    placeholder: 'value1, value2, value3',
-                    description: 'Enum values. Separated by comma',
-                    optional: true,
-                    show: {
-                        'agentStructuredOutput[$index].type': 'enum'
-                    }
-                },
-                {
-                    label: 'JSON Schema',
-                    name: 'jsonSchema',
-                    type: 'code',
-                    placeholder: `{
-    "answer": {
-        "type": "string",
-        "description": "Value of the answer"
-    },
-    "reason": {
-        "type": "string",
-        "description": "Reason for the answer"
-    },
-    "optional": {
-        "type": "boolean"
-    },
-    "count": {
-        "type": "number"
-    },
-    "children": {
-        "type": "array",
-        "items": {
-            "type": "object",
-            "properties": {
-                "value": {
-                    "type": "string",
-                    "description": "Value of the children's answer"
-                }
-            }
-        }
-    }
-}`,
-                    description: 'JSON schema for the structured output',
-                    optional: true,
-                    hideCodeExecute: true,
-                    show: {
-                        'agentStructuredOutput[$index].type': 'jsonArray'
-                    }
-                },
-                {
-                    label: 'Description',
-                    name: 'description',
-                    type: 'string',
-                    placeholder: 'Description of the key'
-                }
-            ]
-        },
         {
             label: 'Update Flow State',
             name: 'agentUpdateState',
@@ -514,7 +406,8 @@ class Agent_Agentflow implements INode {
                 label: 'Key',
                 name: 'key',
                 type: 'asyncOptions',
-                loadMethod: 'listRuntimeStateKeys'
+                loadMethod: 'listRuntimeStateKeys',
+                freeSolo: true
             },
             {
                 label: 'Value',
@@ -877,7 +770,6 @@ class Agent_Agentflow implements INode {
         const memoryType = nodeData.inputs?.agentMemoryType as string
         const userMessage = nodeData.inputs?.agentUserMessage as string
         const _agentUpdateState = nodeData.inputs?.agentUpdateState
-        const _agentStructuredOutput = nodeData.inputs?.agentStructuredOutput
         const agentMessages = (nodeData.inputs?.agentMessages as unknown as ILLMMessage[]) ?? []

         // Extract runtime state and history
@@ -903,8 +795,6 @@ class Agent_Agentflow implements INode {
         const llmWithoutToolsBind = (await newLLMNodeInstance.init(newNodeData, '', options)) as BaseChatModel
         let llmNodeInstance = llmWithoutToolsBind

-        const isStructuredOutput = _agentStructuredOutput && Array.isArray(_agentStructuredOutput) && _agentStructuredOutput.length > 0
-
         const agentToolsBuiltInOpenAI = convertMultiOptionsToStringArray(nodeData.inputs?.agentToolsBuiltInOpenAI)
         if (agentToolsBuiltInOpenAI && agentToolsBuiltInOpenAI.length > 0) {
             for (const tool of agentToolsBuiltInOpenAI) {
@@ -1063,7 +953,7 @@ class Agent_Agentflow implements INode {
         // Initialize response and determine if streaming is possible
         let response: AIMessageChunk = new AIMessageChunk('')
         const isLastNode = options.isLastNode as boolean
-        const isStreamable = isLastNode && options.sseStreamer !== undefined && modelConfig?.streaming !== false && !isStructuredOutput
+        const isStreamable = isLastNode && options.sseStreamer !== undefined && modelConfig?.streaming !== false

         // Start analytics
         if (analyticHandlers && options.parentTraceIds) {
@@ -1071,6 +961,12 @@ class Agent_Agentflow implements INode {
             llmIds = await analyticHandlers.onLLMStart(llmLabel, messages, options.parentTraceIds)
         }

+        // Track execution time
+        const startTime = Date.now()
+
+        // Get initial response from LLM
+        const sseStreamer: IServerSideEventStreamer | undefined = options.sseStreamer
+
         // Handle tool calls with support for recursion
         let usedTools: IUsedTool[] = []
         let sourceDocuments: Array<any> = []
@@ -1083,24 +979,12 @@ class Agent_Agentflow implements INode {
         const messagesBeforeToolCalls = [...messages]
         let _toolCallMessages: BaseMessageLike[] = []

-        /**
-         * Add image artifacts from previous assistant responses as user messages
-         * Images are converted from FILE-STORAGE::<image_path> to base 64 image_url format
-         */
-        await addImageArtifactsToMessages(messages, options)
-
         // Check if this is hummanInput for tool calls
         const _humanInput = nodeData.inputs?.humanInput
         const humanInput: IHumanInput = typeof _humanInput === 'string' ? JSON.parse(_humanInput) : _humanInput
         const humanInputAction = options.humanInputAction
         const iterationContext = options.iterationContext

-        // Track execution time
-        const startTime = Date.now()
-
-        // Get initial response from LLM
-        const sseStreamer: IServerSideEventStreamer | undefined = options.sseStreamer
-
         if (humanInput) {
             if (humanInput.type !== 'proceed' && humanInput.type !== 'reject') {
                 throw new Error(`Invalid human input type. Expected 'proceed' or 'reject', but got '${humanInput.type}'`)
@@ -1118,8 +1002,7 @@ class Agent_Agentflow implements INode {
                 llmWithoutToolsBind,
                 isStreamable,
                 isLastNode,
-                iterationContext,
-                isStructuredOutput
+                iterationContext
             })

             response = result.response
@@ -1148,14 +1031,7 @@ class Agent_Agentflow implements INode {
             }
         } else {
             if (isStreamable) {
-                response = await this.handleStreamingResponse(
-                    sseStreamer,
-                    llmNodeInstance,
-                    messages,
-                    chatId,
-                    abortController,
-                    isStructuredOutput
-                )
+                response = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
             } else {
                 response = await llmNodeInstance.invoke(messages, { signal: abortController?.signal })
             }
@@ -1177,8 +1053,7 @@ class Agent_Agentflow implements INode {
                 llmNodeInstance,
                 isStreamable,
                 isLastNode,
-                iterationContext,
-                isStructuredOutput
+                iterationContext
             })

             response = result.response
@@ -1205,20 +1080,11 @@ class Agent_Agentflow implements INode {
                     sseStreamer.streamArtifactsEvent(chatId, flatten(artifacts))
                 }
             }
-        } else if (!humanInput && !isStreamable && isLastNode && sseStreamer && !isStructuredOutput) {
+        } else if (!humanInput && !isStreamable && isLastNode && sseStreamer) {
             // Stream whole response back to UI if not streaming and no tool calls
-            // Skip this if structured output is enabled - it will be streamed after conversion
             let finalResponse = ''
             if (response.content && Array.isArray(response.content)) {
-                finalResponse = response.content
-                    .map((item: any) => {
-                        if ((item.text && !item.type) || (item.type === 'text' && item.text)) {
-                            return item.text
-                        }
-                        return ''
-                    })
-                    .filter((text: string) => text)
-                    .join('\n')
+                finalResponse = response.content.map((item: any) => item.text).join('\n')
             } else if (response.content && typeof response.content === 'string') {
                 finalResponse = response.content
             } else {
@@ -1247,53 +1113,9 @@ class Agent_Agentflow implements INode {
         // Prepare final response and output object
         let finalResponse = ''
         if (response.content && Array.isArray(response.content)) {
-            // Process items and concatenate consecutive text items
-            const processedParts: string[] = []
-            let currentTextBuffer = ''
-
-            for (const item of response.content) {
-                const itemAny = item as any
-                const isTextItem = (itemAny.text && !itemAny.type) || (itemAny.type === 'text' && itemAny.text)
-
-                if (isTextItem) {
-                    // Accumulate consecutive text items
-                    currentTextBuffer += itemAny.text
-                } else {
-                    // Flush accumulated text before processing other types
-                    if (currentTextBuffer) {
-                        processedParts.push(currentTextBuffer)
-                        currentTextBuffer = ''
-                    }
-
-                    // Process non-text items
-                    if (itemAny.type === 'executableCode' && itemAny.executableCode) {
-                        // Format executable code as a code block
-                        const language = itemAny.executableCode.language?.toLowerCase() || 'python'
-                        processedParts.push(`\n\`\`\`${language}\n${itemAny.executableCode.code}\n\`\`\`\n`)
-                    } else if (itemAny.type === 'codeExecutionResult' && itemAny.codeExecutionResult) {
-                        // Format code execution result
-                        const outcome = itemAny.codeExecutionResult.outcome || 'OUTCOME_OK'
-                        const output = itemAny.codeExecutionResult.output || ''
-                        if (outcome === 'OUTCOME_OK' && output) {
-                            processedParts.push(`**Code Output:**\n\`\`\`\n${output}\n\`\`\`\n`)
-                        } else if (outcome !== 'OUTCOME_OK') {
-                            processedParts.push(`**Code Execution Error:**\n\`\`\`\n${output}\n\`\`\`\n`)
-                        }
-                    }
-                }
-            }
-
-            // Flush any remaining text
-            if (currentTextBuffer) {
-                processedParts.push(currentTextBuffer)
-            }
-
-            finalResponse = processedParts.filter((text) => text).join('\n')
+            finalResponse = response.content.map((item: any) => item.text).join('\n')
         } else if (response.content && typeof response.content === 'string') {
             finalResponse = response.content
-        } else if (response.content === '') {
-            // Empty response content, this could happen when there is only image data
-            finalResponse = ''
         } else {
             finalResponse = JSON.stringify(response, null, 2)
         }
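The hunk above replaces a type-aware flattener with a plain `.map((item: any) => item.text).join('\n')`. A minimal standalone sketch of why the longer version filtered by part type — the part shapes and helper names here are illustrative, simplified from the diff:

```typescript
// LangChain-style message content can be an array of typed parts.
type ContentPart =
    | { type: 'text'; text: string }
    | { type: 'executableCode'; executableCode: { language?: string; code: string } }

// The simplified approach: non-text parts have no `text`, so they become
// empty entries, leaving stray '\n' separators in the joined string.
function flattenNaive(parts: ContentPart[]): string {
    return parts.map((item: any) => item.text).join('\n')
}

// The type-aware approach: keep only text parts before joining.
function flattenSafe(parts: ContentPart[]): string {
    return parts
        .map((item) => (item.type === 'text' ? item.text : ''))
        .filter((text) => text)
        .join('\n')
}

const parts: ContentPart[] = [
    { type: 'text', text: 'hello' },
    { type: 'executableCode', executableCode: { code: 'print(1)' } }
]
console.log(JSON.stringify(flattenNaive(parts))) // "hello\n" — trailing separator from the code part
console.log(JSON.stringify(flattenSafe(parts))) // "hello"
```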
@@ -1309,13 +1131,10 @@ class Agent_Agentflow implements INode {
             }
         }

-        // Extract artifacts from annotations in response metadata and replace inline data
+        // Extract artifacts from annotations in response metadata
         if (response.response_metadata) {
-            const {
-                artifacts: extractedArtifacts,
-                fileAnnotations: extractedFileAnnotations,
-                savedInlineImages
-            } = await extractArtifactsFromResponse(response.response_metadata, newNodeData, options)
+            const { artifacts: extractedArtifacts, fileAnnotations: extractedFileAnnotations } =
+                await this.extractArtifactsFromResponse(response.response_metadata, newNodeData, options)
             if (extractedArtifacts.length > 0) {
                 artifacts = [...artifacts, ...extractedArtifacts]
@@ -1333,11 +1152,6 @@ class Agent_Agentflow implements INode {
                 sseStreamer.streamFileAnnotationsEvent(chatId, fileAnnotations)
             }
         }
-
-            // Replace inlineData base64 with file references in the response
-            if (savedInlineImages && savedInlineImages.length > 0) {
-                replaceInlineDataWithFileReferences(response, savedInlineImages)
-            }
         }

         // Replace sandbox links with proper download URLs. Example: [Download the script](sandbox:/mnt/data/dummy_bar_graph.py)
@@ -1345,23 +1159,6 @@ class Agent_Agentflow implements INode {
             finalResponse = await this.processSandboxLinks(finalResponse, options.baseURL, options.chatflowid, chatId)
         }

-        // If is structured output, then invoke LLM again with structured output at the very end after all tool calls
-        if (isStructuredOutput) {
-            llmNodeInstance = configureStructuredOutput(llmNodeInstance, _agentStructuredOutput)
-            const prompt = 'Convert the following response to the structured output format: ' + finalResponse
-            response = await llmNodeInstance.invoke(prompt, { signal: abortController?.signal })
-
-            if (typeof response === 'object') {
-                finalResponse = '```json\n' + JSON.stringify(response, null, 2) + '\n```'
-            } else {
-                finalResponse = response
-            }
-
-            if (isLastNode && sseStreamer) {
-                sseStreamer.streamTokenEvent(chatId, finalResponse)
-            }
-        }
-
         const output = this.prepareOutputObject(
             response,
             availableTools,
@@ -1374,8 +1171,7 @@ class Agent_Agentflow implements INode {
             artifacts,
             additionalTokens,
             isWaitingForHumanInput,
-            fileAnnotations,
-            isStructuredOutput
+            fileAnnotations
         )

         // End analytics tracking
@@ -1396,15 +1192,9 @@ class Agent_Agentflow implements INode {
         // Process template variables in state
         newState = processTemplateVariables(newState, finalResponse)

-        /**
-         * Remove the temporarily added image artifact messages before storing
-         * This is to avoid storing the actual base64 data into database
-         */
-        const messagesToStore = messages.filter((msg: any) => !msg._isTemporaryImageMessage)
-
         // Replace the actual messages array with one that includes the file references for images instead of base64 data
         const messagesWithFileReferences = replaceBase64ImagesWithFileReferences(
-            messagesToStore,
+            messages,
             runtimeImageMessagesWithFileRef,
             pastImageMessagesWithFileRef
         )
@@ -1543,12 +1333,7 @@ class Agent_Agentflow implements INode {
         // Handle Gemini googleSearch tool
         if (groundingMetadata && groundingMetadata.webSearchQueries && Array.isArray(groundingMetadata.webSearchQueries)) {
-            // Check for duplicates
-            const isDuplicate = builtInUsedTools.find(
-                (tool) =>
-                    tool.tool === 'googleSearch' &&
-                    JSON.stringify((tool.toolInput as any)?.queries) === JSON.stringify(groundingMetadata.webSearchQueries)
-            )
-            if (!isDuplicate) {
+            if (!builtInUsedTools.find((tool) => tool.tool === 'googleSearch')) {
                 builtInUsedTools.push({
                     tool: 'googleSearch',
                     toolInput: {
@@ -1562,12 +1347,7 @@ class Agent_Agentflow implements INode {
         // Handle Gemini urlContext tool
         if (urlContextMetadata && urlContextMetadata.urlMetadata && Array.isArray(urlContextMetadata.urlMetadata)) {
-            // Check for duplicates
-            const isDuplicate = builtInUsedTools.find(
-                (tool) =>
-                    tool.tool === 'urlContext' &&
-                    JSON.stringify((tool.toolInput as any)?.urlMetadata) === JSON.stringify(urlContextMetadata.urlMetadata)
-            )
-            if (!isDuplicate) {
+            if (!builtInUsedTools.find((tool) => tool.tool === 'urlContext')) {
                 builtInUsedTools.push({
                     tool: 'urlContext',
                     toolInput: {
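The googleSearch and urlContext hunks above trade a payload-aware duplicate check for a name-only one: the longer branch treats two invocations with different inputs as distinct, the shorter branch records at most one entry per tool name. A compact standalone sketch of the difference (the `UsedTool` shape is simplified from the diff's `IUsedTool`; both helpers are illustrative):

```typescript
interface UsedTool {
    tool: string
    toolInput: Record<string, unknown>
}

// Payload-aware: duplicate only if tool name AND serialized input both match.
function isDuplicateByPayload(tools: UsedTool[], candidate: UsedTool): boolean {
    return tools.some(
        (t) => t.tool === candidate.tool && JSON.stringify(t.toolInput) === JSON.stringify(candidate.toolInput)
    )
}

// Name-only: duplicate whenever the tool name has been seen, inputs ignored.
function isDuplicateByName(tools: UsedTool[], candidate: UsedTool): boolean {
    return tools.some((t) => t.tool === candidate.tool)
}

const seen: UsedTool[] = [{ tool: 'googleSearch', toolInput: { queries: ['a'] } }]
const next: UsedTool = { tool: 'googleSearch', toolInput: { queries: ['b'] } }
console.log(isDuplicateByPayload(seen, next)) // false: different queries, so it would be recorded
console.log(isDuplicateByName(seen, next)) // true: same tool name, so it would be dropped
```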
@@ -1578,55 +1358,47 @@ class Agent_Agentflow implements INode {
                 }
             }
         }

-        // Handle Gemini codeExecution tool
-        if (response.content && Array.isArray(response.content)) {
-            for (let i = 0; i < response.content.length; i++) {
-                const item = response.content[i]
-
-                if (item.type === 'executableCode' && item.executableCode) {
-                    const language = item.executableCode.language || 'PYTHON'
-                    const code = item.executableCode.code || ''
-                    let toolOutput = ''
-
-                    // Check for duplicates
-                    const isDuplicate = builtInUsedTools.find(
-                        (tool) =>
-                            tool.tool === 'codeExecution' &&
-                            (tool.toolInput as any)?.language === language &&
-                            (tool.toolInput as any)?.code === code
-                    )
-                    if (isDuplicate) {
-                        continue
-                    }
-
-                    // Check the next item for the output
-                    const nextItem = i + 1 < response.content.length ? response.content[i + 1] : null
-
-                    if (nextItem) {
-                        if (nextItem.type === 'codeExecutionResult' && nextItem.codeExecutionResult) {
-                            const outcome = nextItem.codeExecutionResult.outcome
-                            const output = nextItem.codeExecutionResult.output || ''
-                            toolOutput = outcome === 'OUTCOME_OK' ? output : `Error: ${output}`
-                        } else if (nextItem.type === 'inlineData') {
-                            toolOutput = 'Generated image data'
-                        }
-                    }
-
-                    builtInUsedTools.push({
-                        tool: 'codeExecution',
-                        toolInput: {
-                            language,
-                            code
-                        },
-                        toolOutput
-                    })
-                }
-            }
-        }
-
         return builtInUsedTools
     }

+    /**
+     * Saves base64 image data to storage and returns file information
+     */
+    private async saveBase64Image(
+        outputItem: any,
+        options: ICommonObject
+    ): Promise<{ filePath: string; fileName: string; totalSize: number } | null> {
+        try {
+            if (!outputItem.result) {
+                return null
+            }
+
+            // Extract base64 data and create buffer
+            const base64Data = outputItem.result
+            const imageBuffer = Buffer.from(base64Data, 'base64')
+
+            // Determine file extension and MIME type
+            const outputFormat = outputItem.output_format || 'png'
+            const fileName = `generated_image_${outputItem.id || Date.now()}.${outputFormat}`
+            const mimeType = outputFormat === 'png' ? 'image/png' : 'image/jpeg'
+
+            // Save the image using the existing storage utility
+            const { path, totalSize } = await addSingleFileToStorage(
+                mimeType,
+                imageBuffer,
+                fileName,
+                options.orgId,
+                options.chatflowid,
+                options.chatId
+            )
+
+            return { filePath: path, fileName, totalSize }
+        } catch (error) {
+            console.error('Error saving base64 image:', error)
+            return null
+        }
+    }
+
     /**
      * Handles memory management based on the specified memory type
      */
@@ -1789,62 +1561,32 @@ class Agent_Agentflow implements INode {
         llmNodeInstance: BaseChatModel,
         messages: BaseMessageLike[],
         chatId: string,
-        abortController: AbortController,
-        isStructuredOutput: boolean = false
+        abortController: AbortController
     ): Promise<AIMessageChunk> {
         let response = new AIMessageChunk('')

         try {
             for await (const chunk of await llmNodeInstance.stream(messages, { signal: abortController?.signal })) {
-                if (sseStreamer && !isStructuredOutput) {
+                if (sseStreamer) {
                     let content = ''

-                    if (typeof chunk === 'string') {
-                        content = chunk
-                    } else if (Array.isArray(chunk.content) && chunk.content.length > 0) {
-                        content = chunk.content
-                            .map((item: any) => {
-                                if ((item.text && !item.type) || (item.type === 'text' && item.text)) {
-                                    return item.text
-                                } else if (item.type === 'executableCode' && item.executableCode) {
-                                    const language = item.executableCode.language?.toLowerCase() || 'python'
-                                    return `\n\`\`\`${language}\n${item.executableCode.code}\n\`\`\`\n`
-                                } else if (item.type === 'codeExecutionResult' && item.codeExecutionResult) {
-                                    const outcome = item.codeExecutionResult.outcome || 'OUTCOME_OK'
-                                    const output = item.codeExecutionResult.output || ''
-                                    if (outcome === 'OUTCOME_OK' && output) {
-                                        return `**Code Output:**\n\`\`\`\n${output}\n\`\`\`\n`
-                                    } else if (outcome !== 'OUTCOME_OK') {
-                                        return `**Code Execution Error:**\n\`\`\`\n${output}\n\`\`\`\n`
-                                    }
-                                }
-                                return ''
-                            })
-                            .filter((text: string) => text)
-                            .join('')
-                    } else if (chunk.content) {
+                    if (Array.isArray(chunk.content) && chunk.content.length > 0) {
+                        const contents = chunk.content as MessageContentText[]
+                        content = contents.map((item) => item.text).join('')
+                    } else {
+                        content = chunk.content.toString()
+                    }
                     sseStreamer.streamTokenEvent(chatId, content)
                 }

-                const messageChunk = typeof chunk === 'string' ? new AIMessageChunk(chunk) : chunk
-                response = response.concat(messageChunk)
+                response = response.concat(chunk)
             }
         } catch (error) {
             console.error('Error during streaming:', error)
             throw error
         }

-        // Only convert to string if all content items are text (no inlineData or other special types)
         if (Array.isArray(response.content) && response.content.length > 0) {
-            const hasNonTextContent = response.content.some(
-                (item: any) => item.type === 'inlineData' || item.type === 'executableCode' || item.type === 'codeExecutionResult'
-            )
-            if (!hasNonTextContent) {
-                const responseContents = response.content as MessageContentText[]
-                response.content = responseContents.map((item) => item.text).join('')
-            }
+            const responseContents = response.content as MessageContentText[]
+            response.content = responseContents.map((item) => item.text).join('')
         }
         return response
     }
@@ -1864,8 +1606,7 @@ class Agent_Agentflow implements INode {
         artifacts: any[],
         additionalTokens: number = 0,
         isWaitingForHumanInput: boolean = false,
-        fileAnnotations: any[] = [],
-        isStructuredOutput: boolean = false
+        fileAnnotations: any[] = []
     ): any {
         const output: any = {
             content: finalResponse,
@@ -1900,15 +1641,6 @@ class Agent_Agentflow implements INode {
             output.responseMetadata = response.response_metadata
         }

-        if (isStructuredOutput && typeof response === 'object') {
-            const structuredOutput = response as Record<string, any>
-            for (const key in structuredOutput) {
-                if (structuredOutput[key] !== undefined && structuredOutput[key] !== null) {
-                    output[key] = structuredOutput[key]
-                }
-            }
-        }
-
         // Add used tools, source documents and artifacts to output
         if (usedTools && usedTools.length > 0) {
             output.usedTools = flatten(usedTools)
@ -1974,8 +1706,7 @@ class Agent_Agentflow implements INode {
|
|||
llmNodeInstance,
|
||||
isStreamable,
|
||||
isLastNode,
|
||||
iterationContext,
|
||||
isStructuredOutput = false
|
||||
iterationContext
|
||||
}: {
|
||||
response: AIMessageChunk
|
||||
messages: BaseMessageLike[]
|
||||
|
|
@ -1989,7 +1720,6 @@ class Agent_Agentflow implements INode {
|
|||
isStreamable: boolean
|
||||
isLastNode: boolean
|
||||
iterationContext: ICommonObject
|
||||
isStructuredOutput?: boolean
|
||||
}): Promise<{
|
||||
response: AIMessageChunk
|
||||
usedTools: IUsedTool[]
|
||||
|
|
@ -2069,9 +1799,7 @@ class Agent_Agentflow implements INode {
|
|||
const toolCallDetails = '```json\n' + JSON.stringify(toolCall, null, 2) + '\n```'
|
||||
const responseContent = response.content + `\nAttempting to use tool:\n${toolCallDetails}`
|
||||
response.content = responseContent
|
||||
if (!isStructuredOutput) {
|
||||
sseStreamer?.streamTokenEvent(chatId, responseContent)
|
||||
}
|
||||
sseStreamer?.streamTokenEvent(chatId, responseContent)
|
||||
return { response, usedTools, sourceDocuments, artifacts, totalTokens, isWaitingForHumanInput: true }
|
||||
}
|
||||
|
||||
|
|
@ -2177,7 +1905,7 @@ class Agent_Agentflow implements INode {
|
|||
const lastToolOutput = usedTools[0]?.toolOutput || ''
|
||||
const lastToolOutputString = typeof lastToolOutput === 'string' ? lastToolOutput : JSON.stringify(lastToolOutput, null, 2)
|
||||
|
||||
if (sseStreamer && !isStructuredOutput) {
|
||||
if (sseStreamer) {
|
||||
sseStreamer.streamTokenEvent(chatId, lastToolOutputString)
|
||||
}
|
||||
|
||||
|
|
@ -2206,19 +1934,12 @@ class Agent_Agentflow implements INode {
|
|||
let newResponse: AIMessageChunk
|
||||
|
||||
if (isStreamable) {
|
||||
newResponse = await this.handleStreamingResponse(
|
||||
sseStreamer,
|
||||
llmNodeInstance,
|
||||
messages,
|
||||
chatId,
|
||||
abortController,
|
||||
isStructuredOutput
|
||||
)
|
||||
newResponse = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
|
||||
} else {
|
||||
newResponse = await llmNodeInstance.invoke(messages, { signal: abortController?.signal })
|
||||
|
||||
// Stream non-streaming response if this is the last node
|
||||
if (isLastNode && sseStreamer && !isStructuredOutput) {
|
||||
if (isLastNode && sseStreamer) {
|
||||
let responseContent = JSON.stringify(newResponse, null, 2)
|
||||
if (typeof newResponse.content === 'string') {
|
||||
responseContent = newResponse.content
|
||||
|
|
@ -2253,8 +1974,7 @@ class Agent_Agentflow implements INode {
|
|||
llmNodeInstance,
|
||||
isStreamable,
|
||||
isLastNode,
|
||||
iterationContext,
|
||||
isStructuredOutput
|
||||
iterationContext
|
||||
})
|
||||
|
||||
// Merge results from recursive tool calls
|
||||
|
|
@ -2285,8 +2005,7 @@ class Agent_Agentflow implements INode {
|
|||
llmWithoutToolsBind,
|
||||
isStreamable,
|
||||
isLastNode,
|
||||
iterationContext,
|
||||
isStructuredOutput = false
|
||||
iterationContext
|
||||
}: {
|
||||
humanInput: IHumanInput
|
||||
humanInputAction: Record<string, any> | undefined
|
||||
|
|
@ -2301,7 +2020,6 @@ class Agent_Agentflow implements INode {
|
|||
isStreamable: boolean
|
||||
isLastNode: boolean
|
||||
iterationContext: ICommonObject
|
||||
isStructuredOutput?: boolean
|
||||
}): Promise<{
|
||||
response: AIMessageChunk
|
||||
usedTools: IUsedTool[]
|
||||
|
|
@ -2504,7 +2222,7 @@ class Agent_Agentflow implements INode {
|
|||
const lastToolOutput = usedTools[0]?.toolOutput || ''
|
||||
const lastToolOutputString = typeof lastToolOutput === 'string' ? lastToolOutput : JSON.stringify(lastToolOutput, null, 2)
|
||||
|
||||
if (sseStreamer && !isStructuredOutput) {
|
||||
if (sseStreamer) {
|
||||
sseStreamer.streamTokenEvent(chatId, lastToolOutputString)
|
||||
}
|
||||
|
||||
|
|
@ -2535,19 +2253,12 @@ class Agent_Agentflow implements INode {
|
|||
}
|
||||
|
||||
if (isStreamable) {
|
||||
newResponse = await this.handleStreamingResponse(
|
||||
sseStreamer,
|
||||
llmNodeInstance,
|
||||
messages,
|
||||
chatId,
|
||||
abortController,
|
||||
isStructuredOutput
|
||||
)
|
||||
newResponse = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
|
||||
} else {
|
||||
newResponse = await llmNodeInstance.invoke(messages, { signal: abortController?.signal })
|
||||
|
||||
// Stream non-streaming response if this is the last node
|
||||
if (isLastNode && sseStreamer && !isStructuredOutput) {
|
||||
if (isLastNode && sseStreamer) {
|
||||
let responseContent = JSON.stringify(newResponse, null, 2)
|
||||
if (typeof newResponse.content === 'string') {
|
||||
responseContent = newResponse.content
|
||||
|
|
@ -2582,8 +2293,7 @@ class Agent_Agentflow implements INode {
|
|||
llmNodeInstance,
|
||||
isStreamable,
|
||||
isLastNode,
|
||||
iterationContext,
|
||||
isStructuredOutput
|
||||
iterationContext
|
||||
})
|
||||
|
||||
// Merge results from recursive tool calls
|
||||
|
|
@@ -2598,6 +2308,190 @@ class Agent_Agentflow implements INode {
        return { response: newResponse, usedTools, sourceDocuments, artifacts, totalTokens, isWaitingForHumanInput }
    }

+   /**
+    * Extracts artifacts from response metadata (both annotations and built-in tools)
+    */
+   private async extractArtifactsFromResponse(
+       responseMetadata: any,
+       modelNodeData: INodeData,
+       options: ICommonObject
+   ): Promise<{ artifacts: any[]; fileAnnotations: any[] }> {
+       const artifacts: any[] = []
+       const fileAnnotations: any[] = []
+
+       if (!responseMetadata?.output || !Array.isArray(responseMetadata.output)) {
+           return { artifacts, fileAnnotations }
+       }
+
+       for (const outputItem of responseMetadata.output) {
+           // Handle container file citations from annotations
+           if (outputItem.type === 'message' && outputItem.content && Array.isArray(outputItem.content)) {
+               for (const contentItem of outputItem.content) {
+                   if (contentItem.annotations && Array.isArray(contentItem.annotations)) {
+                       for (const annotation of contentItem.annotations) {
+                           if (annotation.type === 'container_file_citation' && annotation.file_id && annotation.filename) {
+                               try {
+                                   // Download and store the file content
+                                   const downloadResult = await this.downloadContainerFile(
+                                       annotation.container_id,
+                                       annotation.file_id,
+                                       annotation.filename,
+                                       modelNodeData,
+                                       options
+                                   )
+
+                                   if (downloadResult) {
+                                       const fileType = this.getArtifactTypeFromFilename(annotation.filename)
+
+                                       if (fileType === 'png' || fileType === 'jpeg' || fileType === 'jpg') {
+                                           const artifact = {
+                                               type: fileType,
+                                               data: downloadResult.filePath
+                                           }
+
+                                           artifacts.push(artifact)
+                                       } else {
+                                           fileAnnotations.push({
+                                               filePath: downloadResult.filePath,
+                                               fileName: annotation.filename
+                                           })
+                                       }
+                                   }
+                               } catch (error) {
+                                   console.error('Error processing annotation:', error)
+                               }
+                           }
+                       }
+                   }
+               }
+           }
+
+           // Handle built-in tool artifacts (like image generation)
+           if (outputItem.type === 'image_generation_call' && outputItem.result) {
+               try {
+                   const savedImageResult = await this.saveBase64Image(outputItem, options)
+                   if (savedImageResult) {
+                       // Replace the base64 result with the file path in the response metadata
+                       outputItem.result = savedImageResult.filePath
+
+                       // Create artifact in the same format as other image artifacts
+                       const fileType = this.getArtifactTypeFromFilename(savedImageResult.fileName)
+                       artifacts.push({
+                           type: fileType,
+                           data: savedImageResult.filePath
+                       })
+                   }
+               } catch (error) {
+                   console.error('Error processing image generation artifact:', error)
+               }
+           }
+       }
+
+       return { artifacts, fileAnnotations }
+   }
+
+   /**
+    * Downloads file content from container file citation
+    */
+   private async downloadContainerFile(
+       containerId: string,
+       fileId: string,
+       filename: string,
+       modelNodeData: INodeData,
+       options: ICommonObject
+   ): Promise<{ filePath: string; totalSize: number } | null> {
+       try {
+           const credentialData = await getCredentialData(modelNodeData.credential ?? '', options)
+           const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, modelNodeData)
+
+           if (!openAIApiKey) {
+               console.warn('No OpenAI API key available for downloading container file')
+               return null
+           }
+
+           // Download the file using OpenAI Container API
+           const response = await fetch(`https://api.openai.com/v1/containers/${containerId}/files/${fileId}/content`, {
+               method: 'GET',
+               headers: {
+                   Accept: '*/*',
+                   Authorization: `Bearer ${openAIApiKey}`
+               }
+           })
+
+           if (!response.ok) {
+               console.warn(
+                   `Failed to download container file ${fileId} from container ${containerId}: ${response.status} ${response.statusText}`
+               )
+               return null
+           }
+
+           // Extract the binary data from the Response object
+           const data = await response.arrayBuffer()
+           const dataBuffer = Buffer.from(data)
+           const mimeType = this.getMimeTypeFromFilename(filename)
+
+           // Store the file using the same storage utility as OpenAIAssistant
+           const { path, totalSize } = await addSingleFileToStorage(
+               mimeType,
+               dataBuffer,
+               filename,
+               options.orgId,
+               options.chatflowid,
+               options.chatId
+           )
+
+           return { filePath: path, totalSize }
+       } catch (error) {
+           console.error('Error downloading container file:', error)
+           return null
+       }
+   }
+
+   /**
+    * Gets MIME type from filename extension
+    */
+   private getMimeTypeFromFilename(filename: string): string {
+       const extension = filename.toLowerCase().split('.').pop()
+       const mimeTypes: { [key: string]: string } = {
+           png: 'image/png',
+           jpg: 'image/jpeg',
+           jpeg: 'image/jpeg',
+           gif: 'image/gif',
+           pdf: 'application/pdf',
+           txt: 'text/plain',
+           csv: 'text/csv',
+           json: 'application/json',
+           html: 'text/html',
+           xml: 'application/xml'
+       }
+       return mimeTypes[extension || ''] || 'application/octet-stream'
+   }
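The extension-to-MIME lookup above can be exercised in isolation. A minimal self-contained sketch follows; `mimeTypeFor` is an illustrative reimplementation of the same pattern, not the node's private method:

```typescript
// Same lookup pattern as getMimeTypeFromFilename: lowercase the name,
// take the last dot-separated segment, default to a binary MIME type
function mimeTypeFor(filename: string): string {
    const extension = filename.toLowerCase().split('.').pop()
    const mimeTypes: { [key: string]: string } = {
        png: 'image/png',
        jpg: 'image/jpeg',
        jpeg: 'image/jpeg',
        csv: 'text/csv',
        json: 'application/json'
    }
    return mimeTypes[extension || ''] || 'application/octet-stream'
}

console.log(mimeTypeFor('Report.CSV'))  // 'text/csv' — case-insensitive match
console.log(mimeTypeFor('archive.zip')) // 'application/octet-stream' — unknown extension falls back
```

Note that a filename with no dot (e.g. `README`) also hits the fallback, since the whole lowercased name is looked up and misses.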
+   /**
+    * Gets artifact type from filename extension for UI rendering
+    */
+   private getArtifactTypeFromFilename(filename: string): string {
+       const extension = filename.toLowerCase().split('.').pop()
+       const artifactTypes: { [key: string]: string } = {
+           png: 'png',
+           jpg: 'jpeg',
+           jpeg: 'jpeg',
+           html: 'html',
+           htm: 'html',
+           md: 'markdown',
+           markdown: 'markdown',
+           json: 'json',
+           js: 'javascript',
+           javascript: 'javascript',
+           tex: 'latex',
+           latex: 'latex',
+           txt: 'text',
+           csv: 'text',
+           pdf: 'text'
+       }
+       return artifactTypes[extension || ''] || 'text'
+   }
+
+   /**
+    * Processes sandbox links in the response text and converts them to file annotations
+    */
@@ -60,7 +60,7 @@ class CustomFunction_Agentflow implements INode {
    constructor() {
        this.label = 'Custom Function'
        this.name = 'customFunctionAgentflow'
-       this.version = 1.1
+       this.version = 1.0
        this.type = 'CustomFunction'
        this.category = 'Agent Flows'
        this.description = 'Execute custom function'

@@ -107,7 +107,8 @@ class CustomFunction_Agentflow implements INode {
        label: 'Key',
        name: 'key',
        type: 'asyncOptions',
-       loadMethod: 'listRuntimeStateKeys'
+       loadMethod: 'listRuntimeStateKeys',
+       freeSolo: true
    },
    {
        label: 'Value',

@@ -133,7 +134,7 @@ class CustomFunction_Agentflow implements INode {
    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
        const javascriptFunction = nodeData.inputs?.customFunctionJavascriptFunction as string
-       const functionInputVariables = (nodeData.inputs?.customFunctionInputVariables as ICustomFunctionInputVariables[]) ?? []
+       const functionInputVariables = nodeData.inputs?.customFunctionInputVariables as ICustomFunctionInputVariables[]
        const _customFunctionUpdateState = nodeData.inputs?.customFunctionUpdateState

        const state = options.agentflowRuntime?.state as ICommonObject

@@ -146,17 +147,11 @@ class CustomFunction_Agentflow implements INode {
        const variables = await getVars(appDataSource, databaseEntities, nodeData, options)
        const flow = {
-           input,
-           state,
-           chatflowId: options.chatflowid,
-           sessionId: options.sessionId,
-           chatId: options.chatId,
-           rawOutput: options.postProcessing?.rawOutput || '',
-           chatHistory: options.postProcessing?.chatHistory || [],
-           sourceDocuments: options.postProcessing?.sourceDocuments,
-           usedTools: options.postProcessing?.usedTools,
-           artifacts: options.postProcessing?.artifacts,
-           fileAnnotations: options.postProcessing?.fileAnnotations
+           input,
+           state
        }

        // Create additional sandbox variables for custom function inputs
@@ -30,7 +30,7 @@ class ExecuteFlow_Agentflow implements INode {
    constructor() {
        this.label = 'Execute Flow'
        this.name = 'executeFlowAgentflow'
-       this.version = 1.2
+       this.version = 1.1
        this.type = 'ExecuteFlow'
        this.category = 'Agent Flows'
        this.description = 'Execute another flow'

@@ -102,7 +102,8 @@ class ExecuteFlow_Agentflow implements INode {
        label: 'Key',
        name: 'key',
        type: 'asyncOptions',
-       loadMethod: 'listRuntimeStateKeys'
+       loadMethod: 'listRuntimeStateKeys',
+       freeSolo: true
    },
    {
        label: 'Value',
@@ -241,11 +241,8 @@ class HumanInput_Agentflow implements INode {
        if (isStreamable) {
            const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
            for await (const chunk of await llmNodeInstance.stream(messages)) {
-               const content = typeof chunk === 'string' ? chunk : chunk.content.toString()
-               sseStreamer.streamTokenEvent(chatId, content)
-
-               const messageChunk = typeof chunk === 'string' ? new AIMessageChunk(chunk) : chunk
-               response = response.concat(messageChunk)
+               sseStreamer.streamTokenEvent(chatId, chunk.content.toString())
+               response = response.concat(chunk)
            }
            humanInputDescription = response.content as string
        } else {
@@ -2,19 +2,17 @@ import { BaseChatModel } from '@langchain/core/language_models/chat_models'
 import { ICommonObject, IMessage, INode, INodeData, INodeOptionsValue, INodeParams, IServerSideEventStreamer } from '../../../src/Interface'
 import { AIMessageChunk, BaseMessageLike, MessageContentText } from '@langchain/core/messages'
 import { DEFAULT_SUMMARIZER_TEMPLATE } from '../prompt'
+import { z } from 'zod'
 import { AnalyticHandler } from '../../../src/handler'
-import { ILLMMessage } from '../Interface.Agentflow'
+import { ILLMMessage, IStructuredOutput } from '../Interface.Agentflow'
 import {
-    addImageArtifactsToMessages,
-    extractArtifactsFromResponse,
     getPastChatHistoryImageMessages,
     getUniqueImageMessages,
     processMessagesWithImages,
     replaceBase64ImagesWithFileReferences,
-    replaceInlineDataWithFileReferences,
     updateFlowState
 } from '../utils'
-import { processTemplateVariables, configureStructuredOutput } from '../../../src/utils'
+import { processTemplateVariables } from '../../../src/utils'
 import { flatten } from 'lodash'

 class LLM_Agentflow implements INode {

@@ -34,7 +32,7 @@ class LLM_Agentflow implements INode {
    constructor() {
        this.label = 'LLM'
        this.name = 'llmAgentflow'
-       this.version = 1.1
+       this.version = 1.0
        this.type = 'LLM'
        this.category = 'Agent Flows'
        this.description = 'Large language models to analyze user-provided inputs and generate responses'

@@ -290,7 +288,8 @@ class LLM_Agentflow implements INode {
        label: 'Key',
        name: 'key',
        type: 'asyncOptions',
-       loadMethod: 'listRuntimeStateKeys'
+       loadMethod: 'listRuntimeStateKeys',
+       freeSolo: true
    },
    {
        label: 'Value',

@@ -450,16 +449,10 @@ class LLM_Agentflow implements INode {
        }
        delete nodeData.inputs?.llmMessages

-       /**
-        * Add image artifacts from previous assistant responses as user messages
-        * Images are converted from FILE-STORAGE::<image_path> to base 64 image_url format
-        */
-       await addImageArtifactsToMessages(messages, options)
-
        // Configure structured output if specified
        const isStructuredOutput = _llmStructuredOutput && Array.isArray(_llmStructuredOutput) && _llmStructuredOutput.length > 0
        if (isStructuredOutput) {
-           llmNodeInstance = configureStructuredOutput(llmNodeInstance, _llmStructuredOutput)
+           llmNodeInstance = this.configureStructuredOutput(llmNodeInstance, _llmStructuredOutput)
        }

        // Initialize response and determine if streaming is possible

@@ -475,11 +468,9 @@ class LLM_Agentflow implements INode {

        // Track execution time
        const startTime = Date.now()

-       const sseStreamer: IServerSideEventStreamer | undefined = options.sseStreamer
-
        /*
         * Invoke LLM
         */
        if (isStreamable) {
            response = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
        } else {

@@ -504,40 +495,6 @@ class LLM_Agentflow implements INode {
        const endTime = Date.now()
        const timeDelta = endTime - startTime

-       // Extract artifacts and file annotations from response metadata
-       let artifacts: any[] = []
-       let fileAnnotations: any[] = []
-       if (response.response_metadata) {
-           const {
-               artifacts: extractedArtifacts,
-               fileAnnotations: extractedFileAnnotations,
-               savedInlineImages
-           } = await extractArtifactsFromResponse(response.response_metadata, newNodeData, options)
-
-           if (extractedArtifacts.length > 0) {
-               artifacts = extractedArtifacts
-
-               // Stream artifacts if this is the last node
-               if (isLastNode && sseStreamer) {
-                   sseStreamer.streamArtifactsEvent(chatId, artifacts)
-               }
-           }
-
-           if (extractedFileAnnotations.length > 0) {
-               fileAnnotations = extractedFileAnnotations
-
-               // Stream file annotations if this is the last node
-               if (isLastNode && sseStreamer) {
-                   sseStreamer.streamFileAnnotationsEvent(chatId, fileAnnotations)
-               }
-           }
-
-           // Replace inlineData base64 with file references in the response
-           if (savedInlineImages && savedInlineImages.length > 0) {
-               replaceInlineDataWithFileReferences(response, savedInlineImages)
-           }
-       }
-
        // Update flow state if needed
        let newState = { ...state }
        if (_llmUpdateState && Array.isArray(_llmUpdateState) && _llmUpdateState.length > 0) {

@@ -557,22 +514,10 @@ class LLM_Agentflow implements INode {
            finalResponse = response.content.map((item: any) => item.text).join('\n')
        } else if (response.content && typeof response.content === 'string') {
            finalResponse = response.content
-       } else if (response.content === '') {
-           // Empty response content, this could happen when there is only image data
-           finalResponse = ''
        } else {
            finalResponse = JSON.stringify(response, null, 2)
        }
-       const output = this.prepareOutputObject(
-           response,
-           finalResponse,
-           startTime,
-           endTime,
-           timeDelta,
-           isStructuredOutput,
-           artifacts,
-           fileAnnotations
-       )
+       const output = this.prepareOutputObject(response, finalResponse, startTime, endTime, timeDelta, isStructuredOutput)

        // End analytics tracking
        if (analyticHandlers && llmIds) {

@@ -584,23 +529,12 @@ class LLM_Agentflow implements INode {
            this.sendStreamingEvents(options, chatId, response)
        }

-       // Stream file annotations if any were extracted
-       if (fileAnnotations.length > 0 && isLastNode && sseStreamer) {
-           sseStreamer.streamFileAnnotationsEvent(chatId, fileAnnotations)
-       }
-
        // Process template variables in state
        newState = processTemplateVariables(newState, finalResponse)

-       /**
-        * Remove the temporarily added image artifact messages before storing
-        * This is to avoid storing the actual base64 data into database
-        */
-       const messagesToStore = messages.filter((msg: any) => !msg._isTemporaryImageMessage)
-
        // Replace the actual messages array with one that includes the file references for images instead of base64 data
        const messagesWithFileReferences = replaceBase64ImagesWithFileReferences(
-           messagesToStore,
+           messages,
            runtimeImageMessagesWithFileRef,
            pastImageMessagesWithFileRef
        )

@@ -651,13 +585,7 @@ class LLM_Agentflow implements INode {
            {
                role: returnRole,
                content: finalResponse,
-               name: nodeData?.label ? nodeData?.label.toLowerCase().replace(/\s/g, '_').trim() : nodeData?.id,
-               ...(((artifacts && artifacts.length > 0) || (fileAnnotations && fileAnnotations.length > 0)) && {
-                   additional_kwargs: {
-                       ...(artifacts && artifacts.length > 0 && { artifacts }),
-                       ...(fileAnnotations && fileAnnotations.length > 0 && { fileAnnotations })
-                   }
-               })
+               name: nodeData?.label ? nodeData?.label.toLowerCase().replace(/\s/g, '_').trim() : nodeData?.id
            }
        ]
    }

@@ -827,6 +755,59 @@ class LLM_Agentflow implements INode {
        }
    }

+   /**
+    * Configures structured output for the LLM
+    */
+   private configureStructuredOutput(llmNodeInstance: BaseChatModel, llmStructuredOutput: IStructuredOutput[]): BaseChatModel {
+       try {
+           const zodObj: ICommonObject = {}
+           for (const sch of llmStructuredOutput) {
+               if (sch.type === 'string') {
+                   zodObj[sch.key] = z.string().describe(sch.description || '')
+               } else if (sch.type === 'stringArray') {
+                   zodObj[sch.key] = z.array(z.string()).describe(sch.description || '')
+               } else if (sch.type === 'number') {
+                   zodObj[sch.key] = z.number().describe(sch.description || '')
+               } else if (sch.type === 'boolean') {
+                   zodObj[sch.key] = z.boolean().describe(sch.description || '')
+               } else if (sch.type === 'enum') {
+                   const enumValues = sch.enumValues?.split(',').map((item: string) => item.trim()) || []
+                   zodObj[sch.key] = z
+                       .enum(enumValues.length ? (enumValues as [string, ...string[]]) : ['default'])
+                       .describe(sch.description || '')
+               } else if (sch.type === 'jsonArray') {
+                   const jsonSchema = sch.jsonSchema
+                   if (jsonSchema) {
+                       try {
+                           // Parse the JSON schema
+                           const schemaObj = JSON.parse(jsonSchema)
+
+                           // Create a Zod schema from the JSON schema
+                           const itemSchema = this.createZodSchemaFromJSON(schemaObj)
+
+                           // Create an array schema of the item schema
+                           zodObj[sch.key] = z.array(itemSchema).describe(sch.description || '')
+                       } catch (err) {
+                           console.error(`Error parsing JSON schema for ${sch.key}:`, err)
+                           // Fallback to generic array of records
+                           zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
+                       }
+                   } else {
+                       // If no schema provided, use generic array of records
+                       zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
+                   }
+               }
+           }
+           const structuredOutput = z.object(zodObj)
+
+           // @ts-ignore
+           return llmNodeInstance.withStructuredOutput(structuredOutput)
+       } catch (exception) {
+           console.error(exception)
+           return llmNodeInstance
+       }
+   }

    /**
     * Handles streaming response from the LLM
     */

@@ -843,20 +824,16 @@ class LLM_Agentflow implements INode {
        for await (const chunk of await llmNodeInstance.stream(messages, { signal: abortController?.signal })) {
            if (sseStreamer) {
-               let content = ''
-
-               if (typeof chunk === 'string') {
-                   content = chunk
-               } else if (Array.isArray(chunk.content) && chunk.content.length > 0) {
+               if (Array.isArray(chunk.content) && chunk.content.length > 0) {
                    const contents = chunk.content as MessageContentText[]
                    content = contents.map((item) => item.text).join('')
-               } else if (chunk.content) {
+               } else {
                    content = chunk.content.toString()
                }
                sseStreamer.streamTokenEvent(chatId, content)
            }

-           const messageChunk = typeof chunk === 'string' ? new AIMessageChunk(chunk) : chunk
-           response = response.concat(messageChunk)
+           response = response.concat(chunk)
        }
    } catch (error) {
        console.error('Error during streaming:', error)

@@ -878,9 +855,7 @@ class LLM_Agentflow implements INode {
        startTime: number,
        endTime: number,
        timeDelta: number,
-       isStructuredOutput: boolean,
-       artifacts: any[] = [],
-       fileAnnotations: any[] = []
+       isStructuredOutput: boolean
    ): any {
        const output: any = {
            content: finalResponse,

@@ -899,10 +874,6 @@ class LLM_Agentflow implements INode {
            output.usageMetadata = response.usage_metadata
        }

-       if (response.response_metadata) {
-           output.responseMetadata = response.response_metadata
-       }
-
        if (isStructuredOutput && typeof response === 'object') {
            const structuredOutput = response as Record<string, any>
            for (const key in structuredOutput) {

@@ -912,14 +883,6 @@ class LLM_Agentflow implements INode {
            }
        }

-       if (artifacts && artifacts.length > 0) {
-           output.artifacts = flatten(artifacts)
-       }
-
-       if (fileAnnotations && fileAnnotations.length > 0) {
-           output.fileAnnotations = fileAnnotations
-       }
-
        return output
    }

@@ -944,6 +907,107 @@ class LLM_Agentflow implements INode {

        sseStreamer.streamEndEvent(chatId)
    }

+   /**
+    * Creates a Zod schema from a JSON schema object
+    * @param jsonSchema The JSON schema object
+    * @returns A Zod schema
+    */
+   private createZodSchemaFromJSON(jsonSchema: any): z.ZodTypeAny {
+       // If the schema is an object with properties, create an object schema
+       if (typeof jsonSchema === 'object' && jsonSchema !== null) {
+           const schemaObj: Record<string, z.ZodTypeAny> = {}
+
+           // Process each property in the schema
+           for (const [key, value] of Object.entries(jsonSchema)) {
+               if (value === null) {
+                   // Handle null values
+                   schemaObj[key] = z.null()
+               } else if (typeof value === 'object' && !Array.isArray(value)) {
+                   // Check if the property has a type definition
+                   if ('type' in value) {
+                       const type = value.type as string
+                       const description = ('description' in value ? (value.description as string) : '') || ''
+
+                       // Create the appropriate Zod type based on the type property
+                       if (type === 'string') {
+                           schemaObj[key] = z.string().describe(description)
+                       } else if (type === 'number') {
+                           schemaObj[key] = z.number().describe(description)
+                       } else if (type === 'boolean') {
+                           schemaObj[key] = z.boolean().describe(description)
+                       } else if (type === 'array') {
+                           // If it's an array type, check if items is defined
+                           if ('items' in value && value.items) {
+                               const itemSchema = this.createZodSchemaFromJSON(value.items)
+                               schemaObj[key] = z.array(itemSchema).describe(description)
+                           } else {
+                               // Default to array of any if items not specified
+                               schemaObj[key] = z.array(z.any()).describe(description)
+                           }
+                       } else if (type === 'object') {
+                           // If it's an object type, check if properties is defined
+                           if ('properties' in value && value.properties) {
+                               const nestedSchema = this.createZodSchemaFromJSON(value.properties)
+                               schemaObj[key] = nestedSchema.describe(description)
+                           } else {
+                               // Default to record of any if properties not specified
+                               schemaObj[key] = z.record(z.any()).describe(description)
+                           }
+                       } else {
+                           // Default to any for unknown types
+                           schemaObj[key] = z.any().describe(description)
+                       }
+
+                       // Check if the property is optional
+                       if ('optional' in value && value.optional === true) {
+                           schemaObj[key] = schemaObj[key].optional()
+                       }
+                   } else if (Array.isArray(value)) {
+                       // Array values without a type property
+                       if (value.length > 0) {
+                           // If the array has items, recursively create a schema for the first item
+                           const itemSchema = this.createZodSchemaFromJSON(value[0])
+                           schemaObj[key] = z.array(itemSchema)
+                       } else {
+                           // Empty array, allow any array
+                           schemaObj[key] = z.array(z.any())
+                       }
+                   } else {
+                       // It's a nested object without a type property, recursively create schema
+                       schemaObj[key] = this.createZodSchemaFromJSON(value)
+                   }
+               } else if (Array.isArray(value)) {
+                   // Array values
+                   if (value.length > 0) {
+                       // If the array has items, recursively create a schema for the first item
+                       const itemSchema = this.createZodSchemaFromJSON(value[0])
+                       schemaObj[key] = z.array(itemSchema)
+                   } else {
+                       // Empty array, allow any array
+                       schemaObj[key] = z.array(z.any())
+                   }
+               } else {
+                   // For primitive values (which shouldn't be in the schema directly)
+                   // Use the corresponding Zod type
+                   if (typeof value === 'string') {
+                       schemaObj[key] = z.string()
+                   } else if (typeof value === 'number') {
+                       schemaObj[key] = z.number()
+                   } else if (typeof value === 'boolean') {
+                       schemaObj[key] = z.boolean()
+                   } else {
+                       schemaObj[key] = z.any()
+                   }
+               }
+           }
+
+           return z.object(schemaObj)
+       }
+
+       // Fallback to any for unknown types
+       return z.any()
+   }
 }

 module.exports = { nodeClass: LLM_Agentflow }
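`createZodSchemaFromJSON` above walks an example-shaped JSON object and emits a matching Zod schema. The same recursion can be sketched without the `zod` dependency as a plain type-descriptor builder; `describeShape` and `ShapeDescriptor` below are hypothetical simplifications for illustration (they drop the `type`/`description`/`optional` annotation handling of the real method and only infer shapes from values):

```typescript
type ShapeDescriptor =
    | { kind: 'string' | 'number' | 'boolean' | 'null' | 'any' }
    | { kind: 'array'; items: ShapeDescriptor }
    | { kind: 'object'; fields: Record<string, ShapeDescriptor> }

// Recursively describe a JSON value's shape, mirroring the walk in
// createZodSchemaFromJSON: objects recurse per key, arrays take the
// shape of their first item, primitives map to their typeof
function describeShape(value: any): ShapeDescriptor {
    if (value === null) return { kind: 'null' }
    if (Array.isArray(value)) {
        return { kind: 'array', items: value.length > 0 ? describeShape(value[0]) : { kind: 'any' } }
    }
    if (typeof value === 'object') {
        const fields: Record<string, ShapeDescriptor> = {}
        for (const [key, v] of Object.entries(value)) {
            fields[key] = describeShape(v)
        }
        return { kind: 'object', fields }
    }
    if (typeof value === 'string') return { kind: 'string' }
    if (typeof value === 'number') return { kind: 'number' }
    if (typeof value === 'boolean') return { kind: 'boolean' }
    return { kind: 'any' }
}

const shape = describeShape({ name: 'Ada', tags: ['a'], meta: { active: true } })
console.log(JSON.stringify(shape, null, 2))
```

The real method additionally honors explicit `type`/`items`/`properties` keys so that schema-shaped input (rather than example-shaped input) also produces sensible Zod types.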
@@ -20,7 +20,7 @@ class Loop_Agentflow implements INode {
    constructor() {
        this.label = 'Loop'
        this.name = 'loopAgentflow'
-       this.version = 1.2
+       this.version = 1.1
        this.type = 'Loop'
        this.category = 'Agent Flows'
        this.description = 'Loop back to a previous node'

@@ -64,7 +64,8 @@ class Loop_Agentflow implements INode {
        label: 'Key',
        name: 'key',
        type: 'asyncOptions',
-       loadMethod: 'listRuntimeStateKeys'
+       loadMethod: 'listRuntimeStateKeys',
+       freeSolo: true
    },
    {
        label: 'Value',
@@ -36,7 +36,7 @@ class Retriever_Agentflow implements INode {
     constructor() {
         this.label = 'Retriever'
         this.name = 'retrieverAgentflow'
-        this.version = 1.1
+        this.version = 1.0
         this.type = 'Retriever'
         this.category = 'Agent Flows'
         this.description = 'Retrieve information from vector database'
@@ -87,7 +87,8 @@ class Retriever_Agentflow implements INode {
                 label: 'Key',
                 name: 'key',
                 type: 'asyncOptions',
-                loadMethod: 'listRuntimeStateKeys'
+                loadMethod: 'listRuntimeStateKeys',
+                freeSolo: true
             },
             {
                 label: 'Value',
@@ -29,7 +29,7 @@ class Tool_Agentflow implements INode {
     constructor() {
         this.label = 'Tool'
         this.name = 'toolAgentflow'
-        this.version = 1.2
+        this.version = 1.1
        this.type = 'Tool'
         this.category = 'Agent Flows'
         this.description = 'Tools allow LLM to interact with external systems'
@@ -80,7 +80,8 @@ class Tool_Agentflow implements INode {
                 label: 'Key',
                 name: 'key',
                 type: 'asyncOptions',
-                loadMethod: 'listRuntimeStateKeys'
+                loadMethod: 'listRuntimeStateKeys',
+                freeSolo: true
             },
             {
                 label: 'Value',
@@ -1,11 +1,10 @@
-import { BaseMessage, MessageContentImageUrl, AIMessageChunk } from '@langchain/core/messages'
+import { BaseMessage, MessageContentImageUrl } from '@langchain/core/messages'
 import { getImageUploads } from '../../src/multiModalUtils'
-import { addSingleFileToStorage, getFileFromStorage } from '../../src/storageUtils'
-import { ICommonObject, IFileUpload, INodeData } from '../../src/Interface'
+import { getFileFromStorage } from '../../src/storageUtils'
+import { ICommonObject, IFileUpload } from '../../src/Interface'
 import { BaseMessageLike } from '@langchain/core/messages'
 import { IFlowState } from './Interface.Agentflow'
-import { getCredentialData, getCredentialParam, handleEscapeCharacters, mapMimeTypeToInputField } from '../../src/utils'
-import fetch from 'node-fetch'
+import { handleEscapeCharacters, mapMimeTypeToInputField } from '../../src/utils'

 export const addImagesToMessages = async (
     options: ICommonObject,
@@ -19,8 +18,7 @@ export const addImagesToMessages = async (
     for (const upload of imageUploads) {
         let bf = upload.data
         if (upload.type == 'stored-file') {
-            const fileName = upload.name.replace(/^FILE-STORAGE::/, '')
-            const contents = await getFileFromStorage(fileName, options.orgId, options.chatflowid, options.chatId)
+            const contents = await getFileFromStorage(upload.name, options.orgId, options.chatflowid, options.chatId)
             // as the image is stored in the server, read the file and convert it to base64
             bf = 'data:' + upload.mime + ';base64,' + contents.toString('base64')
@@ -91,9 +89,8 @@ export const processMessagesWithImages = async (
         if (item.type === 'stored-file' && item.name && item.mime.startsWith('image/')) {
             hasImageReferences = true
             try {
-                const fileName = item.name.replace(/^FILE-STORAGE::/, '')
                 // Get file contents from storage
-                const contents = await getFileFromStorage(fileName, options.orgId, options.chatflowid, options.chatId)
+                const contents = await getFileFromStorage(item.name, options.orgId, options.chatflowid, options.chatId)

                 // Create base64 data URL
                 const base64Data = 'data:' + item.mime + ';base64,' + contents.toString('base64')
@@ -325,8 +322,7 @@ export const getPastChatHistoryImageMessages = async (
     const imageContents: MessageContentImageUrl[] = []
     for (const upload of uploads) {
         if (upload.type === 'stored-file' && upload.mime.startsWith('image/')) {
-            const fileName = upload.name.replace(/^FILE-STORAGE::/, '')
-            const fileData = await getFileFromStorage(fileName, options.orgId, options.chatflowid, options.chatId)
+            const fileData = await getFileFromStorage(upload.name, options.orgId, options.chatflowid, options.chatId)
             // as the image is stored in the server, read the file and convert it to base64
             const bf = 'data:' + upload.mime + ';base64,' + fileData.toString('base64')
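The lines removed in the hunks above strip an optional `FILE-STORAGE::` marker from stored file names before hitting storage. The anchored regex only touches a leading marker, never one embedded mid-name; a minimal sketch of that behavior (hypothetical `stripStoragePrefix` helper):

```typescript
// Hypothetical helper mirroring the removed prefix-stripping lines:
// the ^ anchor means only a leading "FILE-STORAGE::" marker is removed.
const stripStoragePrefix = (name: string): string => name.replace(/^FILE-STORAGE::/, '')

// stripStoragePrefix('FILE-STORAGE::photo.png') → 'photo.png'
// stripStoragePrefix('photo.png')               → unchanged
```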
@@ -460,437 +456,6 @@ export const getPastChatHistoryImageMessages = async (
     }
 }

-/**
- * Gets MIME type from filename extension
- */
-export const getMimeTypeFromFilename = (filename: string): string => {
-    const extension = filename.toLowerCase().split('.').pop()
-    const mimeTypes: { [key: string]: string } = {
-        png: 'image/png',
-        jpg: 'image/jpeg',
-        jpeg: 'image/jpeg',
-        gif: 'image/gif',
-        pdf: 'application/pdf',
-        txt: 'text/plain',
-        csv: 'text/csv',
-        json: 'application/json',
-        html: 'text/html',
-        xml: 'application/xml'
-    }
-    return mimeTypes[extension || ''] || 'application/octet-stream'
-}
-
-/**
- * Gets artifact type from filename extension for UI rendering
- */
-export const getArtifactTypeFromFilename = (filename: string): string => {
-    const extension = filename.toLowerCase().split('.').pop()
-    const artifactTypes: { [key: string]: string } = {
-        png: 'png',
-        jpg: 'jpeg',
-        jpeg: 'jpeg',
-        html: 'html',
-        htm: 'html',
-        md: 'markdown',
-        markdown: 'markdown',
-        json: 'json',
-        js: 'javascript',
-        javascript: 'javascript',
-        tex: 'latex',
-        latex: 'latex',
-        txt: 'text',
-        csv: 'text',
-        pdf: 'text'
-    }
-    return artifactTypes[extension || ''] || 'text'
-}
-
-/**
- * Saves base64 image data to storage and returns file information
- */
-export const saveBase64Image = async (
-    outputItem: any,
-    options: ICommonObject
-): Promise<{ filePath: string; fileName: string; totalSize: number } | null> => {
-    try {
-        if (!outputItem.result) {
-            return null
-        }
-
-        // Extract base64 data and create buffer
-        const base64Data = outputItem.result
-        const imageBuffer = Buffer.from(base64Data, 'base64')
-
-        // Determine file extension and MIME type
-        const outputFormat = outputItem.output_format || 'png'
-        const fileName = `generated_image_${outputItem.id || Date.now()}.${outputFormat}`
-        const mimeType = outputFormat === 'png' ? 'image/png' : 'image/jpeg'
-
-        // Save the image using the existing storage utility
-        const { path, totalSize } = await addSingleFileToStorage(
-            mimeType,
-            imageBuffer,
-            fileName,
-            options.orgId,
-            options.chatflowid,
-            options.chatId
-        )
-
-        return { filePath: path, fileName, totalSize }
-    } catch (error) {
-        console.error('Error saving base64 image:', error)
-        return null
-    }
-}
-
-/**
- * Saves Gemini inline image data to storage and returns file information
- */
-export const saveGeminiInlineImage = async (
-    inlineItem: any,
-    options: ICommonObject
-): Promise<{ filePath: string; fileName: string; totalSize: number } | null> => {
-    try {
-        if (!inlineItem.data || !inlineItem.mimeType) {
-            return null
-        }
-
-        // Extract base64 data and create buffer
-        const base64Data = inlineItem.data
-        const imageBuffer = Buffer.from(base64Data, 'base64')
-
-        // Determine file extension from MIME type
-        const mimeType = inlineItem.mimeType
-        let extension = 'png'
-        if (mimeType.includes('jpeg') || mimeType.includes('jpg')) {
-            extension = 'jpg'
-        } else if (mimeType.includes('png')) {
-            extension = 'png'
-        } else if (mimeType.includes('gif')) {
-            extension = 'gif'
-        } else if (mimeType.includes('webp')) {
-            extension = 'webp'
-        }
-
-        const fileName = `gemini_generated_image_${Date.now()}.${extension}`
-
-        // Save the image using the existing storage utility
-        const { path, totalSize } = await addSingleFileToStorage(
-            mimeType,
-            imageBuffer,
-            fileName,
-            options.orgId,
-            options.chatflowid,
-            options.chatId
-        )
-
-        return { filePath: path, fileName, totalSize }
-    } catch (error) {
-        console.error('Error saving Gemini inline image:', error)
-        return null
-    }
-}
-
-/**
- * Downloads file content from container file citation
- */
-export const downloadContainerFile = async (
-    containerId: string,
-    fileId: string,
-    filename: string,
-    modelNodeData: INodeData,
-    options: ICommonObject
-): Promise<{ filePath: string; totalSize: number } | null> => {
-    try {
-        const credentialData = await getCredentialData(modelNodeData.credential ?? '', options)
-        const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, modelNodeData)
-
-        if (!openAIApiKey) {
-            console.warn('No OpenAI API key available for downloading container file')
-            return null
-        }
-
-        // Download the file using OpenAI Container API
-        const response = await fetch(`https://api.openai.com/v1/containers/${containerId}/files/${fileId}/content`, {
-            method: 'GET',
-            headers: {
-                Accept: '*/*',
-                Authorization: `Bearer ${openAIApiKey}`
-            }
-        })
-
-        if (!response.ok) {
-            console.warn(
-                `Failed to download container file ${fileId} from container ${containerId}: ${response.status} ${response.statusText}`
-            )
-            return null
-        }
-
-        // Extract the binary data from the Response object
-        const data = await response.arrayBuffer()
-        const dataBuffer = Buffer.from(data)
-        const mimeType = getMimeTypeFromFilename(filename)
-
-        // Store the file using the same storage utility as OpenAIAssistant
-        const { path, totalSize } = await addSingleFileToStorage(
-            mimeType,
-            dataBuffer,
-            filename,
-            options.orgId,
-            options.chatflowid,
-            options.chatId
-        )
-
-        return { filePath: path, totalSize }
-    } catch (error) {
-        console.error('Error downloading container file:', error)
-        return null
-    }
-}
-
-/**
- * Replace inlineData base64 with file references in the response content
- */
-export const replaceInlineDataWithFileReferences = (
-    response: AIMessageChunk,
-    savedInlineImages: Array<{ filePath: string; fileName: string; mimeType: string }>
-): void => {
-    // Check if content is an array
-    if (!Array.isArray(response.content)) {
-        return
-    }
-
-    // Replace base64 data with file references in response content
-    let savedImageIndex = 0
-    for (let i = 0; i < response.content.length; i++) {
-        const contentItem = response.content[i]
-        if (
-            typeof contentItem === 'object' &&
-            contentItem.type === 'inlineData' &&
-            contentItem.inlineData &&
-            savedImageIndex < savedInlineImages.length
-        ) {
-            const savedImage = savedInlineImages[savedImageIndex]
-            // Replace with file reference
-            response.content[i] = {
-                type: 'stored-file',
-                name: savedImage.fileName,
-                mime: savedImage.mimeType,
-                path: savedImage.filePath
-            }
-            savedImageIndex++
-        }
-    }
-
-    // Clear the inlineData from response_metadata to avoid duplication
-    if (response.response_metadata?.inlineData) {
-        delete response.response_metadata.inlineData
-    }
-}
-
-/**
- * Extracts artifacts from response metadata (both annotations and built-in tools)
- */
-export const extractArtifactsFromResponse = async (
-    responseMetadata: any,
-    modelNodeData: INodeData,
-    options: ICommonObject
-): Promise<{
-    artifacts: any[]
-    fileAnnotations: any[]
-    savedInlineImages?: Array<{ filePath: string; fileName: string; mimeType: string }>
-}> => {
-    const artifacts: any[] = []
-    const fileAnnotations: any[] = []
-    const savedInlineImages: Array<{ filePath: string; fileName: string; mimeType: string }> = []
-
-    // Handle Gemini inline data (image generation)
-    if (responseMetadata?.inlineData && Array.isArray(responseMetadata.inlineData)) {
-        for (const inlineItem of responseMetadata.inlineData) {
-            if (inlineItem.type === 'gemini_inline_data' && inlineItem.data && inlineItem.mimeType) {
-                try {
-                    const savedImageResult = await saveGeminiInlineImage(inlineItem, options)
-                    if (savedImageResult) {
-                        // Create artifact in the same format as other image artifacts
-                        const fileType = getArtifactTypeFromFilename(savedImageResult.fileName)
-                        artifacts.push({
-                            type: fileType,
-                            data: savedImageResult.filePath
-                        })
-
-                        // Track saved image for replacing base64 data in content
-                        savedInlineImages.push({
-                            filePath: savedImageResult.filePath,
-                            fileName: savedImageResult.fileName,
-                            mimeType: inlineItem.mimeType
-                        })
-                    }
-                } catch (error) {
-                    console.error('Error processing Gemini inline image artifact:', error)
-                }
-            }
-        }
-    }
-
-    if (!responseMetadata?.output || !Array.isArray(responseMetadata.output)) {
-        return { artifacts, fileAnnotations, savedInlineImages: savedInlineImages.length > 0 ? savedInlineImages : undefined }
-    }
-
-    for (const outputItem of responseMetadata.output) {
-        // Handle container file citations from annotations
-        if (outputItem.type === 'message' && outputItem.content && Array.isArray(outputItem.content)) {
-            for (const contentItem of outputItem.content) {
-                if (contentItem.annotations && Array.isArray(contentItem.annotations)) {
-                    for (const annotation of contentItem.annotations) {
-                        if (annotation.type === 'container_file_citation' && annotation.file_id && annotation.filename) {
-                            try {
-                                // Download and store the file content
-                                const downloadResult = await downloadContainerFile(
-                                    annotation.container_id,
-                                    annotation.file_id,
-                                    annotation.filename,
-                                    modelNodeData,
-                                    options
-                                )
-
-                                if (downloadResult) {
-                                    const fileType = getArtifactTypeFromFilename(annotation.filename)
-
-                                    if (fileType === 'png' || fileType === 'jpeg' || fileType === 'jpg') {
-                                        const artifact = {
-                                            type: fileType,
-                                            data: downloadResult.filePath
-                                        }
-
-                                        artifacts.push(artifact)
-                                    } else {
-                                        fileAnnotations.push({
-                                            filePath: downloadResult.filePath,
-                                            fileName: annotation.filename
-                                        })
-                                    }
-                                }
-                            } catch (error) {
-                                console.error('Error processing annotation:', error)
-                            }
-                        }
-                    }
-                }
-            }
-        }
-
-        // Handle built-in tool artifacts (like image generation)
-        if (outputItem.type === 'image_generation_call' && outputItem.result) {
-            try {
-                const savedImageResult = await saveBase64Image(outputItem, options)
-                if (savedImageResult) {
-                    // Replace the base64 result with the file path in the response metadata
-                    outputItem.result = savedImageResult.filePath
-
-                    // Create artifact in the same format as other image artifacts
-                    const fileType = getArtifactTypeFromFilename(savedImageResult.fileName)
-                    artifacts.push({
-                        type: fileType,
-                        data: savedImageResult.filePath
-                    })
-                }
-            } catch (error) {
-                console.error('Error processing image generation artifact:', error)
-            }
-        }
-    }
-
-    return { artifacts, fileAnnotations, savedInlineImages: savedInlineImages.length > 0 ? savedInlineImages : undefined }
-}
-
-/**
- * Add image artifacts from previous assistant messages as user messages
- * This allows the LLM to see and reference the generated images in the conversation
- * Messages are marked with a special flag for later removal
- */
-export const addImageArtifactsToMessages = async (messages: BaseMessageLike[], options: ICommonObject): Promise<void> => {
-    const imageExtensions = ['png', 'jpg', 'jpeg', 'gif', 'webp']
-    const messagesToInsert: Array<{ index: number; message: any }> = []
-
-    // Iterate through messages to find assistant messages with image artifacts
-    for (let i = 0; i < messages.length; i++) {
-        const message = messages[i] as any
-
-        // Check if this is an assistant message with artifacts
-        if (
-            (message.role === 'assistant' || message.role === 'ai') &&
-            message.additional_kwargs?.artifacts &&
-            Array.isArray(message.additional_kwargs.artifacts)
-        ) {
-            const artifacts = message.additional_kwargs.artifacts
-            const imageArtifacts: Array<{ type: string; name: string; mime: string }> = []
-
-            // Extract image artifacts
-            for (const artifact of artifacts) {
-                if (artifact.type && artifact.data) {
-                    // Check if this is an image artifact by file type
-                    if (imageExtensions.includes(artifact.type.toLowerCase())) {
-                        // Extract filename from the file path
-                        const fileName = artifact.data.split('/').pop() || artifact.data
-                        const mimeType = `image/${artifact.type.toLowerCase()}`
-
-                        imageArtifacts.push({
-                            type: 'stored-file',
-                            name: fileName,
-                            mime: mimeType
-                        })
-                    }
-                }
-            }
-
-            // If we found image artifacts, prepare to insert a user message after this assistant message
-            if (imageArtifacts.length > 0) {
-                // Check if the next message already contains these image artifacts to avoid duplicates
-                const nextMessage = messages[i + 1] as any
-                const shouldInsert =
-                    !nextMessage ||
-                    nextMessage.role !== 'user' ||
-                    !Array.isArray(nextMessage.content) ||
-                    !nextMessage.content.some(
-                        (item: any) =>
-                            (item.type === 'stored-file' || item.type === 'image_url') &&
-                            imageArtifacts.some((artifact) => {
-                                // Compare with and without FILE-STORAGE:: prefix
-                                const artifactName = artifact.name.replace('FILE-STORAGE::', '')
-                                const itemName = item.name?.replace('FILE-STORAGE::', '') || ''
-                                return artifactName === itemName
-                            })
-                    )
-
-                if (shouldInsert) {
-                    messagesToInsert.push({
-                        index: i + 1,
-                        message: {
-                            role: 'user',
-                            content: imageArtifacts,
-                            _isTemporaryImageMessage: true // Mark for later removal
-                        }
-                    })
-                }
-            }
-        }
-    }
-
-    // Insert messages in reverse order to maintain correct indices
-    for (let i = messagesToInsert.length - 1; i >= 0; i--) {
-        const { index, message } = messagesToInsert[i]
-        messages.splice(index, 0, message)
-    }
-
-    // Convert stored-file references to base64 image_url format
-    if (messagesToInsert.length > 0) {
-        const { updatedMessages } = await processMessagesWithImages(messages, options)
-        // Replace the messages array content with the updated messages
-        messages.length = 0
-        messages.push(...updatedMessages)
-    }
-}
-
 /**
  * Updates the flow state with new values
  */
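The removed `getMimeTypeFromFilename` helper above is a plain extension lookup with an `application/octet-stream` fallback. A standalone sketch of that pattern (table trimmed to a few entries for illustration):

```typescript
// Sketch of the removed extension-to-MIME lookup: lowercase the name, take the
// last dot-separated segment, and fall back to application/octet-stream.
const MIME_TYPES: { [key: string]: string } = {
    png: 'image/png',
    jpg: 'image/jpeg',
    jpeg: 'image/jpeg',
    json: 'application/json',
    txt: 'text/plain'
}

function getMimeTypeFromFilename(filename: string): string {
    const extension = filename.toLowerCase().split('.').pop()
    return MIME_TYPES[extension || ''] || 'application/octet-stream'
}
```

Note that `split('.').pop()` on a name without a dot returns the whole name, which simply misses the table and hits the fallback.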
@@ -5,7 +5,7 @@ import { RunnableSequence } from '@langchain/core/runnables'
 import { BaseChatModel } from '@langchain/core/language_models/chat_models'
 import { ChatPromptTemplate, MessagesPlaceholder, HumanMessagePromptTemplate, PromptTemplate } from '@langchain/core/prompts'
 import { formatToOpenAIToolMessages } from 'langchain/agents/format_scratchpad/openai_tools'
-import { getBaseClasses, transformBracesWithColon, convertChatHistoryToText, convertBaseMessagetoIMessage } from '../../../src/utils'
+import { getBaseClasses, transformBracesWithColon } from '../../../src/utils'
 import { type ToolsAgentStep } from 'langchain/agents/openai/output_parser'
 import {
     FlowiseMemory,
@@ -23,10 +23,8 @@ import { Moderation, checkInputs, streamResponse } from '../../moderation/Modera
 import { formatResponse } from '../../outputparsers/OutputParserHelpers'
 import type { Document } from '@langchain/core/documents'
 import { BaseRetriever } from '@langchain/core/retrievers'
-import { RESPONSE_TEMPLATE, REPHRASE_TEMPLATE } from '../../chains/ConversationalRetrievalQAChain/prompts'
+import { RESPONSE_TEMPLATE } from '../../chains/ConversationalRetrievalQAChain/prompts'
 import { addImagesToMessages, llmSupportsVision } from '../../../src/multiModalUtils'
-import { StringOutputParser } from '@langchain/core/output_parsers'
-import { Tool } from '@langchain/core/tools'

 class ConversationalRetrievalToolAgent_Agents implements INode {
     label: string
@@ -44,7 +42,7 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
     constructor(fields?: { sessionId?: string }) {
         this.label = 'Conversational Retrieval Tool Agent'
         this.name = 'conversationalRetrievalToolAgent'
-        this.author = 'niztal(falkor) and nikitas-novatix'
+        this.author = 'niztal(falkor)'
         this.version = 1.0
         this.type = 'AgentExecutor'
         this.category = 'Agents'
@@ -81,26 +79,6 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
                 optional: true,
                 default: RESPONSE_TEMPLATE
             },
-            {
-                label: 'Rephrase Prompt',
-                name: 'rephrasePrompt',
-                type: 'string',
-                description: 'Using previous chat history, rephrase question into a standalone question',
-                warning: 'Prompt must include input variables: {chat_history} and {question}',
-                rows: 4,
-                additionalParams: true,
-                optional: true,
-                default: REPHRASE_TEMPLATE
-            },
-            {
-                label: 'Rephrase Model',
-                name: 'rephraseModel',
-                type: 'BaseChatModel',
-                description:
-                    'Optional: Use a different (faster/cheaper) model for rephrasing. If not specified, uses the main Tool Calling Chat Model.',
-                optional: true,
-                additionalParams: true
-            },
             {
                 label: 'Input Moderation',
                 description: 'Detect text that could generate harmful output and prevent it from being sent to the language model',
@@ -125,9 +103,8 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
         this.sessionId = fields?.sessionId
     }

-    // The agent will be prepared in run() with the correct user message - it needs the actual runtime input for rephrasing
-    async init(_nodeData: INodeData, _input: string, _options: ICommonObject): Promise<any> {
-        return null
+    async init(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
+        return prepareAgent(nodeData, options, { sessionId: this.sessionId, chatId: options.chatId, input })
     }

     async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | ICommonObject> {
@@ -171,23 +148,6 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
                     sseStreamer.streamUsedToolsEvent(chatId, res.usedTools)
                     usedTools = res.usedTools
                 }
-
-                // If the tool is set to returnDirect, stream the output to the client
-                if (res.usedTools && res.usedTools.length) {
-                    let inputTools = nodeData.inputs?.tools
-                    inputTools = flatten(inputTools)
-                    for (const tool of res.usedTools) {
-                        const inputTool = inputTools.find((inputTool: Tool) => inputTool.name === tool.tool)
-                        if (inputTool && (inputTool as any).returnDirect && shouldStreamResponse) {
-                            sseStreamer.streamTokenEvent(chatId, tool.toolOutput)
-                            // Prevent CustomChainHandler from streaming the same output again
-                            if (res.output === tool.toolOutput) {
-                                res.output = ''
-                            }
-                        }
-                    }
-                }
-                // The CustomChainHandler will send the stream end event
             } else {
                 res = await executor.invoke({ input }, { callbacks: [loggerHandler, ...callbacks] })
                 if (res.sourceDocuments) {
@@ -250,11 +210,9 @@ const prepareAgent = async (
     flowObj: { sessionId?: string; chatId?: string; input?: string }
 ) => {
     const model = nodeData.inputs?.model as BaseChatModel
-    const rephraseModel = (nodeData.inputs?.rephraseModel as BaseChatModel) || model // Use main model if not specified
     const maxIterations = nodeData.inputs?.maxIterations as string
     const memory = nodeData.inputs?.memory as FlowiseMemory
     let systemMessage = nodeData.inputs?.systemMessage as string
-    let rephrasePrompt = nodeData.inputs?.rephrasePrompt as string
     let tools = nodeData.inputs?.tools
     tools = flatten(tools)
     const memoryKey = memory.memoryKey ? memory.memoryKey : 'chat_history'
@@ -262,9 +220,6 @@ const prepareAgent = async (
     const vectorStoreRetriever = nodeData.inputs?.vectorStoreRetriever as BaseRetriever

     systemMessage = transformBracesWithColon(systemMessage)
-    if (rephrasePrompt) {
-        rephrasePrompt = transformBracesWithColon(rephrasePrompt)
-    }

     const prompt = ChatPromptTemplate.fromMessages([
         ['system', systemMessage ? systemMessage : `You are a helpful AI assistant.`],
@@ -308,37 +263,6 @@ const prepareAgent = async (

     const modelWithTools = model.bindTools(tools)

-    // Function to get standalone question (either rephrased or original)
-    const getStandaloneQuestion = async (input: string): Promise<string> => {
-        // If no rephrase prompt, return the original input
-        if (!rephrasePrompt) {
-            return input
-        }
-
-        // Get chat history (use empty string if none)
-        const messages = (await memory.getChatMessages(flowObj?.sessionId, true)) as BaseMessage[]
-        const iMessages = convertBaseMessagetoIMessage(messages)
-        const chatHistoryString = convertChatHistoryToText(iMessages)
-
-        // Always rephrase to normalize/expand user queries for better retrieval
-        try {
-            const CONDENSE_QUESTION_PROMPT = PromptTemplate.fromTemplate(rephrasePrompt)
-            const condenseQuestionChain = RunnableSequence.from([CONDENSE_QUESTION_PROMPT, rephraseModel, new StringOutputParser()])
-            const res = await condenseQuestionChain.invoke({
-                question: input,
-                chat_history: chatHistoryString
-            })
-            return res
-        } catch (error) {
-            console.error('Error rephrasing question:', error)
-            // On error, fall back to original input
-            return input
-        }
-    }
-
-    // Get standalone question before creating runnable
-    const standaloneQuestion = await getStandaloneQuestion(flowObj?.input || '')
-
     const runnableAgent = RunnableSequence.from([
         {
             [inputKey]: (i: { input: string; steps: ToolsAgentStep[] }) => i.input,
@@ -348,9 +272,7 @@ const prepareAgent = async (
                 return messages ?? []
             },
             context: async (i: { input: string; chatHistory?: string }) => {
-                // Use the standalone question (rephrased or original) for retrieval
-                const retrievalQuery = standaloneQuestion || i.input
-                const relevantDocs = await vectorStoreRetriever.invoke(retrievalQuery)
+                const relevantDocs = await vectorStoreRetriever.invoke(i.input)
                 const formattedDocs = formatDocs(relevantDocs)
                 return formattedDocs
             }
@@ -373,6 +295,4 @@ const prepareAgent = async (
     return executor
 }

-module.exports = {
-    nodeClass: ConversationalRetrievalToolAgent_Agents
-}
+module.exports = { nodeClass: ConversationalRetrievalToolAgent_Agents }
@@ -578,7 +578,7 @@ class OpenAIAssistant_Agents implements INode {
                                     toolOutput
                                 })
                             } catch (e) {
-                                await analyticHandlers.onToolError(toolIds, e)
+                                await analyticHandlers.onToolEnd(toolIds, e)
                                 console.error('Error executing tool', e)
                                 throw new Error(
                                     `Error executing tool. Tool: ${tool.name}. Thread ID: ${threadId}. Run ID: ${runThreadId}`
@@ -703,7 +703,7 @@ class OpenAIAssistant_Agents implements INode {
                                     toolOutput
                                 })
                             } catch (e) {
-                                await analyticHandlers.onToolError(toolIds, e)
+                                await analyticHandlers.onToolEnd(toolIds, e)
                                 console.error('Error executing tool', e)
                                 clearInterval(timeout)
                                 reject(
@@ -1096,7 +1096,7 @@ async function handleToolSubmission(params: ToolSubmissionParams): Promise<ToolS
                 toolOutput
             })
         } catch (e) {
-            await analyticHandlers.onToolError(toolIds, e)
+            await analyticHandlers.onToolEnd(toolIds, e)
             console.error('Error executing tool', e)
             throw new Error(`Error executing tool. Tool: ${tool.name}. Thread ID: ${threadId}. Run ID: ${runThreadId}`)
         }
@@ -607,12 +607,7 @@ export class LangchainChatGoogleGenerativeAI
     private client: GenerativeModel

     get _isMultimodalModel() {
-        return (
-            this.model.includes('vision') ||
-            this.model.startsWith('gemini-1.5') ||
-            this.model.startsWith('gemini-2') ||
-            this.model.startsWith('gemini-3')
-        )
+        return this.model.includes('vision') || this.model.startsWith('gemini-1.5') || this.model.startsWith('gemini-2')
     }

     constructor(fields: GoogleGenerativeAIChatInput) {
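The `_isMultimodalModel` getter above gates multimodal input on simple model-name checks. The same test as a standalone predicate (matching the single-line variant in the hunk; the model list is illustrative, not exhaustive):

```typescript
// Standalone sketch of the model-name gate from _isMultimodalModel:
// any 'vision' model, or a gemini-1.5 / gemini-2 family name, counts as multimodal.
function isMultimodalModel(model: string): boolean {
    return model.includes('vision') || model.startsWith('gemini-1.5') || model.startsWith('gemini-2')
}
```

Because `startsWith('gemini-2')` is a plain prefix test, it matches `gemini-2.0-flash`, `gemini-2.5-pro`, and any future `gemini-2*` name alike.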
@ -452,7 +452,6 @@ export function mapGenerateContentResultToChatResult(
|
|||
const [candidate] = response.candidates
|
||||
const { content: candidateContent, ...generationInfo } = candidate
|
||||
let content: MessageContent | undefined
|
||||
const inlineDataItems: any[] = []
|
||||
|
||||
if (Array.isArray(candidateContent?.parts) && candidateContent.parts.length === 1 && candidateContent.parts[0].text) {
|
||||
content = candidateContent.parts[0].text
|
||||
|
|
@ -473,18 +472,6 @@ export function mapGenerateContentResultToChatResult(
|
|||
type: 'codeExecutionResult',
|
||||
codeExecutionResult: p.codeExecutionResult
|
||||
}
|
||||
} else if ('inlineData' in p && p.inlineData) {
|
||||
// Extract inline image data for processing by Agent
|
||||
inlineDataItems.push({
|
||||
type: 'gemini_inline_data',
|
||||
mimeType: p.inlineData.mimeType,
|
||||
data: p.inlineData.data
|
||||
})
|
||||
// Return the inline data as part of the content structure
|
||||
return {
|
||||
type: 'inlineData',
|
||||
inlineData: p.inlineData
|
||||
}
|
||||
}
|
||||
return p
|
||||
})
|
||||
|
|
@ -501,12 +488,6 @@ export function mapGenerateContentResultToChatResult(
|
|||
text = block?.text ?? text
|
||||
}
|
||||
|
||||
// Build response_metadata with inline data if present
|
||||
const response_metadata: any = {}
|
||||
if (inlineDataItems.length > 0) {
|
||||
response_metadata.inlineData = inlineDataItems
|
||||
}
|
||||
|
||||
const generation: ChatGeneration = {
|
||||
text,
|
||||
message: new AIMessage({
|
||||
|
|
@@ -521,8 +502,7 @@ export function mapGenerateContentResultToChatResult(
             additional_kwargs: {
                 ...generationInfo
             },
-            usage_metadata: extra?.usageMetadata,
-            response_metadata: Object.keys(response_metadata).length > 0 ? response_metadata : undefined
+            usage_metadata: extra?.usageMetadata
         }),
         generationInfo
     }
@@ -553,8 +533,6 @@ export function convertResponseContentToChatGenerationChunk(
     const [candidate] = response.candidates
     const { content: candidateContent, ...generationInfo } = candidate
     let content: MessageContent | undefined
-    const inlineDataItems: any[] = []
-
     // Checks if some parts do not have text. If false, it means that the content is a string.
     if (Array.isArray(candidateContent?.parts) && candidateContent.parts.every((p) => 'text' in p)) {
         content = candidateContent.parts.map((p) => p.text).join('')
@@ -575,18 +553,6 @@ export function convertResponseContentToChatGenerationChunk(
                         type: 'codeExecutionResult',
                         codeExecutionResult: p.codeExecutionResult
                     }
-                } else if ('inlineData' in p && p.inlineData) {
-                    // Extract inline image data for processing by Agent
-                    inlineDataItems.push({
-                        type: 'gemini_inline_data',
-                        mimeType: p.inlineData.mimeType,
-                        data: p.inlineData.data
-                    })
-                    // Return the inline data as part of the content structure
-                    return {
-                        type: 'inlineData',
-                        inlineData: p.inlineData
-                    }
                 }
                 return p
             })
@@ -616,12 +582,6 @@ export function convertResponseContentToChatGenerationChunk(
         )
     }

-    // Build response_metadata with inline data if present
-    const response_metadata: any = {}
-    if (inlineDataItems.length > 0) {
-        response_metadata.inlineData = inlineDataItems
-    }
-
    return new ChatGenerationChunk({
        text,
        message: new AIMessageChunk({
@@ -631,8 +591,7 @@ export function convertResponseContentToChatGenerationChunk(
            // Each chunk can have unique "generationInfo", and merging strategy is unclear,
            // so leave blank for now.
            additional_kwargs: {},
-           usage_metadata: extra.usageMetadata,
-           response_metadata: Object.keys(response_metadata).length > 0 ? response_metadata : undefined
+           usage_metadata: extra.usageMetadata
        }),
        generationInfo
    })
@@ -41,17 +41,15 @@ class ChatHuggingFace_ChatModels implements INode {
                label: 'Model',
                name: 'model',
                type: 'string',
-               description:
-                   'Model name (e.g., deepseek-ai/DeepSeek-V3.2-Exp:novita). If model includes provider (:) or using router endpoint, leave Endpoint blank.',
-               placeholder: 'deepseek-ai/DeepSeek-V3.2-Exp:novita'
+               description: 'If using own inference endpoint, leave this blank',
+               placeholder: 'gpt2'
            },
            {
                label: 'Endpoint',
                name: 'endpoint',
                type: 'string',
                placeholder: 'https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2',
-               description:
-                   'Custom inference endpoint (optional). Not needed for models with providers (:) or router endpoints. Leave blank to use Inference Providers.',
+               description: 'Using your own inference endpoint',
                optional: true
            },
            {
@@ -126,15 +124,6 @@ class ChatHuggingFace_ChatModels implements INode {
        const credentialData = await getCredentialData(nodeData.credential ?? '', options)
        const huggingFaceApiKey = getCredentialParam('huggingFaceApiKey', credentialData, nodeData)

-       if (!huggingFaceApiKey) {
-           console.error('[ChatHuggingFace] API key validation failed: No API key found')
-           throw new Error('HuggingFace API key is required. Please configure it in the credential settings.')
-       }
-
-       if (!huggingFaceApiKey.startsWith('hf_')) {
-           console.warn('[ChatHuggingFace] API key format warning: Key does not start with "hf_"')
-       }
-
        const obj: Partial<HFInput> = {
            model,
            apiKey: huggingFaceApiKey
@@ -56,9 +56,9 @@ export class HuggingFaceInference extends LLM implements HFInput {
        this.apiKey = fields?.apiKey ?? getEnvironmentVariable('HUGGINGFACEHUB_API_KEY')
        this.endpointUrl = fields?.endpointUrl
        this.includeCredentials = fields?.includeCredentials
-       if (!this.apiKey || this.apiKey.trim() === '') {
+       if (!this.apiKey) {
            throw new Error(
-               'Please set an API key for HuggingFace Hub. Either configure it in the credential settings in the UI, or set the environment variable HUGGINGFACEHUB_API_KEY.'
+               'Please set an API key for HuggingFace Hub in the environment variable HUGGINGFACEHUB_API_KEY or in the apiKey field of the HuggingFaceInference constructor.'
            )
        }
    }
@@ -68,21 +68,19 @@ export class HuggingFaceInference extends LLM implements HFInput {
    }

    invocationParams(options?: this['ParsedCallOptions']) {
-       // Return parameters compatible with chatCompletion API (OpenAI-compatible format)
-       const params: any = {
-           temperature: this.temperature,
-           max_tokens: this.maxTokens,
-           stop: options?.stop ?? this.stopSequences,
-           top_p: this.topP
+       return {
+           model: this.model,
+           parameters: {
+               // make it behave similar to openai, returning only the generated text
+               return_full_text: false,
+               temperature: this.temperature,
+               max_new_tokens: this.maxTokens,
+               stop: options?.stop ?? this.stopSequences,
+               top_p: this.topP,
+               top_k: this.topK,
+               repetition_penalty: this.frequencyPenalty
+           }
        }
-       // Include optional parameters if they are defined
-       if (this.topK !== undefined) {
-           params.top_k = this.topK
-       }
-       if (this.frequencyPenalty !== undefined) {
-           params.frequency_penalty = this.frequencyPenalty
-       }
-       return params
    }

    async *_streamResponseChunks(
@@ -90,109 +88,51 @@ export class HuggingFaceInference extends LLM implements HFInput {
        options: this['ParsedCallOptions'],
        runManager?: CallbackManagerForLLMRun
    ): AsyncGenerator<GenerationChunk> {
-       try {
-           const client = await this._prepareHFInference()
-           const stream = await this.caller.call(async () =>
-               client.chatCompletionStream({
-                   model: this.model,
-                   messages: [{ role: 'user', content: prompt }],
-                   ...this.invocationParams(options)
-               })
-           )
-           for await (const chunk of stream) {
-               const token = chunk.choices[0]?.delta?.content || ''
-               if (token) {
-                   yield new GenerationChunk({ text: token, generationInfo: chunk })
-                   await runManager?.handleLLMNewToken(token)
-               }
-               // stream is done when finish_reason is set
-               if (chunk.choices[0]?.finish_reason) {
-                   yield new GenerationChunk({
-                       text: '',
-                       generationInfo: { finished: true }
-                   })
-                   break
-               }
-           }
-       } catch (error: any) {
-           console.error('[ChatHuggingFace] Error in _streamResponseChunks:', error)
-           // Provide more helpful error messages
-           if (error?.message?.includes('endpointUrl') || error?.message?.includes('third-party provider')) {
-               throw new Error(
-                   `Cannot use custom endpoint with model "${this.model}" that includes a provider. Please leave the Endpoint field blank in the UI. Original error: ${error.message}`
-               )
-           }
-           throw error
-       }
+       const hfi = await this._prepareHFInference()
+       const stream = await this.caller.call(async () =>
+           hfi.textGenerationStream({
+               ...this.invocationParams(options),
+               inputs: prompt
+           })
+       )
+       for await (const chunk of stream) {
+           const token = chunk.token.text
+           yield new GenerationChunk({ text: token, generationInfo: chunk })
+           await runManager?.handleLLMNewToken(token ?? '')
+
+           // stream is done
+           if (chunk.generated_text)
+               yield new GenerationChunk({
+                   text: '',
+                   generationInfo: { finished: true }
+               })
+       }
    }

    /** @ignore */
    async _call(prompt: string, options: this['ParsedCallOptions']): Promise<string> {
-       try {
-           const client = await this._prepareHFInference()
-           // Use chatCompletion for chat models (v4 supports conversational models via Inference Providers)
-           const args = {
-               model: this.model,
-               messages: [{ role: 'user', content: prompt }],
-               ...this.invocationParams(options)
-           }
-           const res = await this.caller.callWithOptions({ signal: options.signal }, client.chatCompletion.bind(client), args)
-           const content = res.choices[0]?.message?.content || ''
-           if (!content) {
-               console.error('[ChatHuggingFace] No content in response:', JSON.stringify(res))
-               throw new Error(`No content received from HuggingFace API. Response: ${JSON.stringify(res)}`)
-           }
-           return content
-       } catch (error: any) {
-           console.error('[ChatHuggingFace] Error in _call:', error.message)
-           // Provide more helpful error messages
-           if (error?.message?.includes('endpointUrl') || error?.message?.includes('third-party provider')) {
-               throw new Error(
-                   `Cannot use custom endpoint with model "${this.model}" that includes a provider. Please leave the Endpoint field blank in the UI. Original error: ${error.message}`
-               )
-           }
-           if (error?.message?.includes('Invalid username or password') || error?.message?.includes('authentication')) {
-               throw new Error(
-                   `HuggingFace API authentication failed. Please verify your API key is correct and starts with "hf_". Original error: ${error.message}`
-               )
-           }
-           throw error
-       }
+       const hfi = await this._prepareHFInference()
+       const args = { ...this.invocationParams(options), inputs: prompt }
+       const res = await this.caller.callWithOptions({ signal: options.signal }, hfi.textGeneration.bind(hfi), args)
+       return res.generated_text
    }

    /** @ignore */
    private async _prepareHFInference() {
-       if (!this.apiKey || this.apiKey.trim() === '') {
-           console.error('[ChatHuggingFace] API key validation failed: Empty or undefined')
-           throw new Error('HuggingFace API key is required. Please configure it in the credential settings.')
-       }
-
-       const { InferenceClient } = await HuggingFaceInference.imports()
-       // Use InferenceClient for chat models (works better with Inference Providers)
-       const client = new InferenceClient(this.apiKey)
-
-       // Don't override endpoint if model uses a provider (contains ':') or if endpoint is router-based
-       // When using Inference Providers, endpoint should be left blank - InferenceClient handles routing automatically
-       if (
-           this.endpointUrl &&
-           !this.model.includes(':') &&
-           !this.endpointUrl.includes('/v1/chat/completions') &&
-           !this.endpointUrl.includes('router.huggingface.co')
-       ) {
-           return client.endpoint(this.endpointUrl)
-       }
-
-       // Return client without endpoint override - InferenceClient will use Inference Providers automatically
-       return client
+       const { HfInference } = await HuggingFaceInference.imports()
+       const hfi = new HfInference(this.apiKey, {
+           includeCredentials: this.includeCredentials
+       })
+       return this.endpointUrl ? hfi.endpoint(this.endpointUrl) : hfi
    }

    /** @ignore */
    static async imports(): Promise<{
-       InferenceClient: typeof import('@huggingface/inference').InferenceClient
+       HfInference: typeof import('@huggingface/inference').HfInference
    }> {
        try {
-           const { InferenceClient } = await import('@huggingface/inference')
-           return { InferenceClient }
+           const { HfInference } = await import('@huggingface/inference')
+           return { HfInference }
        } catch (e) {
            throw new Error('Please install huggingface as a dependency with, e.g. `pnpm install @huggingface/inference`')
        }
@@ -1,8 +1,7 @@
-import { ChatOpenAI as LangchainChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
+import { ChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
 import { BaseCache } from '@langchain/core/caches'
-import { ICommonObject, IMultiModalOption, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
 import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
-import { ChatOpenRouter } from './FlowiseChatOpenRouter'

 class ChatOpenRouter_ChatModels implements INode {
     label: string
@@ -24,7 +23,7 @@ class ChatOpenRouter_ChatModels implements INode {
        this.icon = 'openRouter.svg'
        this.category = 'Chat Models'
        this.description = 'Wrapper around Open Router Inference API'
-       this.baseClasses = [this.type, ...getBaseClasses(LangchainChatOpenAI)]
+       this.baseClasses = [this.type, ...getBaseClasses(ChatOpenAI)]
        this.credential = {
            label: 'Connect Credential',
            name: 'credential',
@@ -115,40 +114,6 @@ class ChatOpenRouter_ChatModels implements INode {
                type: 'json',
                optional: true,
                additionalParams: true
-           },
-           {
-               label: 'Allow Image Uploads',
-               name: 'allowImageUploads',
-               type: 'boolean',
-               description:
-                   'Allow image input. Refer to the <a href="https://docs.flowiseai.com/using-flowise/uploads#image" target="_blank">docs</a> for more details.',
-               default: false,
-               optional: true
-           },
-           {
-               label: 'Image Resolution',
-               description: 'This parameter controls the resolution in which the model views the image.',
-               name: 'imageResolution',
-               type: 'options',
-               options: [
-                   {
-                       label: 'Low',
-                       name: 'low'
-                   },
-                   {
-                       label: 'High',
-                       name: 'high'
-                   },
-                   {
-                       label: 'Auto',
-                       name: 'auto'
-                   }
-               ],
-               default: 'low',
-               optional: false,
-               show: {
-                   allowImageUploads: true
-               }
            }
        ]
    }
@@ -165,8 +130,6 @@ class ChatOpenRouter_ChatModels implements INode {
        const basePath = (nodeData.inputs?.basepath as string) || 'https://openrouter.ai/api/v1'
        const baseOptions = nodeData.inputs?.baseOptions
        const cache = nodeData.inputs?.cache as BaseCache
-       const allowImageUploads = nodeData.inputs?.allowImageUploads as boolean
-       const imageResolution = nodeData.inputs?.imageResolution as string

        const credentialData = await getCredentialData(nodeData.credential ?? '', options)
        const openRouterApiKey = getCredentialParam('openRouterApiKey', credentialData, nodeData)
@@ -192,7 +155,7 @@ class ChatOpenRouter_ChatModels implements INode {
            try {
                parsedBaseOptions = typeof baseOptions === 'object' ? baseOptions : JSON.parse(baseOptions)
            } catch (exception) {
-               throw new Error("Invalid JSON in the ChatOpenRouter's BaseOptions: " + exception)
+               throw new Error("Invalid JSON in the ChatCerebras's BaseOptions: " + exception)
            }
        }
@@ -203,15 +166,7 @@ class ChatOpenRouter_ChatModels implements INode {
            }
        }

-       const multiModalOption: IMultiModalOption = {
-           image: {
-               allowImageUploads: allowImageUploads ?? false,
-               imageResolution
-           }
-       }
-
-       const model = new ChatOpenRouter(nodeData.id, obj)
-       model.setMultiModalOption(multiModalOption)
+       const model = new ChatOpenAI(obj)
        return model
    }
 }
@@ -1,29 +0,0 @@
-import { ChatOpenAI as LangchainChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
-import { IMultiModalOption, IVisionChatModal } from '../../../src'
-
-export class ChatOpenRouter extends LangchainChatOpenAI implements IVisionChatModal {
-    configuredModel: string
-    configuredMaxToken?: number
-    multiModalOption: IMultiModalOption
-    id: string
-
-    constructor(id: string, fields?: ChatOpenAIFields) {
-        super(fields)
-        this.id = id
-        this.configuredModel = fields?.modelName ?? ''
-        this.configuredMaxToken = fields?.maxTokens
-    }
-
-    revertToOriginalModel(): void {
-        this.model = this.configuredModel
-        this.maxTokens = this.configuredMaxToken
-    }
-
-    setMultiModalOption(multiModalOption: IMultiModalOption): void {
-        this.multiModalOption = multiModalOption
-    }
-
-    setVisionModel(): void {
-        // pass - OpenRouter models don't need model switching
-    }
-}
@@ -27,6 +27,8 @@ type Element = {
 }

 export class UnstructuredLoader extends BaseDocumentLoader {
+    public filePath: string
+
     private apiUrl = process.env.UNSTRUCTURED_API_URL || 'https://api.unstructuredapp.io/general/v0/general'

     private apiKey: string | undefined = process.env.UNSTRUCTURED_API_KEY
@@ -136,7 +138,7 @@ export class UnstructuredLoader extends BaseDocumentLoader {
        })

        if (!response.ok) {
-           throw new Error(`Failed to partition file with error ${response.status} and message ${await response.text()}`)
+           throw new Error(`Failed to partition file ${this.filePath} with error ${response.status} and message ${await response.text()}`)
        }

        const elements = await response.json()
@@ -1,6 +1,3 @@
-/*
- * Uncomment this if you want to use the UnstructuredFolder to load a folder from the file system
-
 import { omit } from 'lodash'
 import { ICommonObject, INode, INodeData, INodeOutputsValue, INodeParams } from '../../../src/Interface'
 import {
@@ -519,4 +516,3 @@ class UnstructuredFolder_DocumentLoaders implements INode {
 }

 module.exports = { nodeClass: UnstructuredFolder_DocumentLoaders }
-*/
@@ -23,22 +23,24 @@ export class HuggingFaceInferenceEmbeddings extends Embeddings implements Huggin
        this.model = fields?.model ?? 'sentence-transformers/distilbert-base-nli-mean-tokens'
        this.apiKey = fields?.apiKey ?? getEnvironmentVariable('HUGGINGFACEHUB_API_KEY')
        this.endpoint = fields?.endpoint ?? ''
-       const hf = new HfInference(this.apiKey)
-       // v4 uses Inference Providers by default; only override if custom endpoint provided
-       this.client = this.endpoint ? hf.endpoint(this.endpoint) : hf
+       this.client = new HfInference(this.apiKey)
+       if (this.endpoint) this.client.endpoint(this.endpoint)
    }

    async _embed(texts: string[]): Promise<number[][]> {
        // replace newlines, which can negatively affect performance.
        const clean = texts.map((text) => text.replace(/\n/g, ' '))
+       const hf = new HfInference(this.apiKey)
        const obj: any = {
            inputs: clean
        }
-       if (!this.endpoint) {
+       if (this.endpoint) {
+           hf.endpoint(this.endpoint)
+       } else {
            obj.model = this.model
        }

-       const res = await this.caller.callWithOptions({}, this.client.featureExtraction.bind(this.client), obj)
+       const res = await this.caller.callWithOptions({}, hf.featureExtraction.bind(hf), obj)
        return res as number[][]
    }
@@ -78,8 +78,6 @@ export class HuggingFaceInference extends LLM implements HFInput {
    async _call(prompt: string, options: this['ParsedCallOptions']): Promise<string> {
        const { HfInference } = await HuggingFaceInference.imports()
        const hf = new HfInference(this.apiKey)
-       // v4 uses Inference Providers by default; only override if custom endpoint provided
-       const hfClient = this.endpoint ? hf.endpoint(this.endpoint) : hf
        const obj: any = {
            parameters: {
                // make it behave similar to openai, returning only the generated text
@@ -92,10 +90,12 @@ export class HuggingFaceInference extends LLM implements HFInput {
            },
            inputs: prompt
        }
-       if (!this.endpoint) {
+       if (this.endpoint) {
+           hf.endpoint(this.endpoint)
+       } else {
            obj.model = this.model
        }
-       const res = await this.caller.callWithOptions({ signal: options.signal }, hfClient.textGeneration.bind(hfClient), obj)
+       const res = await this.caller.callWithOptions({ signal: options.signal }, hf.textGeneration.bind(hf), obj)
        return res.generated_text
    }
@@ -62,6 +62,7 @@ class MySQLRecordManager_RecordManager implements INode {
                label: 'Namespace',
                name: 'namespace',
                type: 'string',
+               description: 'If not specified, chatflowid will be used',
                additionalParams: true,
                optional: true
            },
@@ -218,16 +219,7 @@ class MySQLRecordManager implements RecordManagerInterface {
                unique key \`unique_key_namespace\` (\`key\`,
                \`namespace\`));`)

-           // Add doc_id column if it doesn't exist (migration for existing tables)
-           const checkColumn = await queryRunner.manager.query(
-               `SELECT COUNT(1) ColumnExists FROM INFORMATION_SCHEMA.COLUMNS
-               WHERE table_schema=DATABASE() AND table_name='${tableName}' AND column_name='doc_id';`
-           )
-           if (checkColumn[0].ColumnExists === 0) {
-               await queryRunner.manager.query(`ALTER TABLE \`${tableName}\` ADD COLUMN \`doc_id\` longtext;`)
-           }
-
-           const columns = [`updated_at`, `key`, `namespace`, `group_id`, `doc_id`]
+           const columns = [`updated_at`, `key`, `namespace`, `group_id`]
            for (const column of columns) {
                // MySQL does not support 'IF NOT EXISTS' function for Index
                const Check = await queryRunner.manager.query(
@@ -269,7 +261,7 @@ class MySQLRecordManager implements RecordManagerInterface {
        }
    }

-   async update(keys: Array<{ uid: string; docId: string }> | string[], updateOptions?: UpdateOptions): Promise<void> {
+   async update(keys: string[], updateOptions?: UpdateOptions): Promise<void> {
        if (keys.length === 0) {
            return
        }
@@ -285,23 +277,23 @@ class MySQLRecordManager implements RecordManagerInterface {
            throw new Error(`Time sync issue with database ${updatedAt} < ${timeAtLeast}`)
        }

-       // Handle both new format (objects with uid and docId) and old format (strings)
-       const isNewFormat = keys.length > 0 && typeof keys[0] === 'object' && 'uid' in keys[0]
-       const keyStrings = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.uid) : (keys as string[])
-       const docIds = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.docId) : keys.map(() => null)
-       const groupIds = _groupIds ?? keyStrings.map(() => null)
+       const groupIds = _groupIds ?? keys.map(() => null)

-       if (groupIds.length !== keyStrings.length) {
-           throw new Error(`Number of keys (${keyStrings.length}) does not match number of group_ids (${groupIds.length})`)
+       if (groupIds.length !== keys.length) {
+           throw new Error(`Number of keys (${keys.length}) does not match number of group_ids (${groupIds.length})`)
        }

-       const recordsToUpsert = keyStrings.map((key, i) => [key, this.namespace, updatedAt, groupIds[i] ?? null, docIds[i] ?? null])
+       const recordsToUpsert = keys.map((key, i) => [
+           key,
+           this.namespace,
+           updatedAt,
+           groupIds[i] ?? null // Ensure groupIds[i] is null if undefined
+       ])

        const query = `
-           INSERT INTO \`${tableName}\` (\`key\`, \`namespace\`, \`updated_at\`, \`group_id\`, \`doc_id\`)
-           VALUES (?, ?, ?, ?, ?)
-           ON DUPLICATE KEY UPDATE \`updated_at\` = VALUES(\`updated_at\`), \`doc_id\` = VALUES(\`doc_id\`)`
+           INSERT INTO \`${tableName}\` (\`key\`, \`namespace\`, \`updated_at\`, \`group_id\`)
+           VALUES (?, ?, ?, ?)
+           ON DUPLICATE KEY UPDATE \`updated_at\` = VALUES(\`updated_at\`)`

        // To handle multiple files upsert
        try {
@@ -357,13 +349,13 @@ class MySQLRecordManager implements RecordManagerInterface {
        }
    }

-   async listKeys(options?: ListKeyOptions & { docId?: string }): Promise<string[]> {
+   async listKeys(options?: ListKeyOptions): Promise<string[]> {
        const dataSource = await this.getDataSource()
        const queryRunner = dataSource.createQueryRunner()
        const tableName = this.sanitizeTableName(this.tableName)

        try {
-           const { before, after, limit, groupIds, docId } = options ?? {}
+           const { before, after, limit, groupIds } = options ?? {}
            let query = `SELECT \`key\` FROM \`${tableName}\` WHERE \`namespace\` = ?`
            const values: (string | number | string[])[] = [this.namespace]
@@ -390,11 +382,6 @@ class MySQLRecordManager implements RecordManagerInterface {
                values.push(...groupIds.filter((gid): gid is string => gid !== null))
            }

-           if (docId) {
-               query += ` AND \`doc_id\` = ?`
-               values.push(docId)
-           }
-
            query += ';'

            // Directly using try/catch with async/await for cleaner flow
@@ -78,6 +78,7 @@ class PostgresRecordManager_RecordManager implements INode {
                label: 'Namespace',
                name: 'namespace',
                type: 'string',
+               description: 'If not specified, chatflowid will be used',
                additionalParams: true,
                optional: true
            },
@@ -240,19 +241,6 @@ class PostgresRecordManager implements RecordManagerInterface {
                CREATE INDEX IF NOT EXISTS namespace_index ON "${tableName}" (namespace);
                CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)

-           // Add doc_id column if it doesn't exist (migration for existing tables)
-           await queryRunner.manager.query(`
-               DO $$
-               BEGIN
-                   IF NOT EXISTS (
-                       SELECT 1 FROM information_schema.columns
-                       WHERE table_name = '${tableName}' AND column_name = 'doc_id'
-                   ) THEN
-                       ALTER TABLE "${tableName}" ADD COLUMN doc_id TEXT;
-                       CREATE INDEX IF NOT EXISTS doc_id_index ON "${tableName}" (doc_id);
-                   END IF;
-               END $$;`)
-
            await queryRunner.release()
        } catch (e: any) {
            // This error indicates that the table already exists
@@ -298,7 +286,7 @@ class PostgresRecordManager implements RecordManagerInterface {
        return `(${placeholders.join(', ')})`
    }

-   async update(keys: Array<{ uid: string; docId: string }> | string[], updateOptions?: UpdateOptions): Promise<void> {
+   async update(keys: string[], updateOptions?: UpdateOptions): Promise<void> {
        if (keys.length === 0) {
            return
        }
@@ -314,22 +302,17 @@ class PostgresRecordManager implements RecordManagerInterface {
            throw new Error(`Time sync issue with database ${updatedAt} < ${timeAtLeast}`)
        }

-       // Handle both new format (objects with uid and docId) and old format (strings)
-       const isNewFormat = keys.length > 0 && typeof keys[0] === 'object' && 'uid' in keys[0]
-       const keyStrings = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.uid) : (keys as string[])
-       const docIds = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.docId) : keys.map(() => null)
-       const groupIds = _groupIds ?? keyStrings.map(() => null)
+       const groupIds = _groupIds ?? keys.map(() => null)

-       if (groupIds.length !== keyStrings.length) {
-           throw new Error(`Number of keys (${keyStrings.length}) does not match number of group_ids ${groupIds.length})`)
+       if (groupIds.length !== keys.length) {
+           throw new Error(`Number of keys (${keys.length}) does not match number of group_ids ${groupIds.length})`)
        }

-       const recordsToUpsert = keyStrings.map((key, i) => [key, this.namespace, updatedAt, groupIds[i], docIds[i]])
+       const recordsToUpsert = keys.map((key, i) => [key, this.namespace, updatedAt, groupIds[i]])

        const valuesPlaceholders = recordsToUpsert.map((_, j) => this.generatePlaceholderForRowAt(j, recordsToUpsert[0].length)).join(', ')

-       const query = `INSERT INTO "${tableName}" (key, namespace, updated_at, group_id, doc_id) VALUES ${valuesPlaceholders} ON CONFLICT (key, namespace) DO UPDATE SET updated_at = EXCLUDED.updated_at, doc_id = EXCLUDED.doc_id;`
+       const query = `INSERT INTO "${tableName}" (key, namespace, updated_at, group_id) VALUES ${valuesPlaceholders} ON CONFLICT (key, namespace) DO UPDATE SET updated_at = EXCLUDED.updated_at;`
        try {
            await queryRunner.manager.query(query, recordsToUpsert.flat())
            await queryRunner.release()
@@ -368,8 +351,8 @@ class PostgresRecordManager implements RecordManagerInterface {
        }
    }

-   async listKeys(options?: ListKeyOptions & { docId?: string }): Promise<string[]> {
-       const { before, after, limit, groupIds, docId } = options ?? {}
+   async listKeys(options?: ListKeyOptions): Promise<string[]> {
+       const { before, after, limit, groupIds } = options ?? {}
        const tableName = this.sanitizeTableName(this.tableName)

        let query = `SELECT key FROM "${tableName}" WHERE namespace = $1`
@@ -400,12 +383,6 @@ class PostgresRecordManager implements RecordManagerInterface {
            index += 1
        }

-       if (docId) {
-           values.push(docId)
-           query += ` AND doc_id = $${index}`
-           index += 1
-       }
-
        query += ';'

        const dataSource = await this.getDataSource()
@@ -51,6 +51,7 @@ class SQLiteRecordManager_RecordManager implements INode {
                label: 'Namespace',
                name: 'namespace',
                type: 'string',
+               description: 'If not specified, chatflowid will be used',
                additionalParams: true,
                optional: true
            },
@@ -197,15 +198,6 @@ CREATE INDEX IF NOT EXISTS key_index ON "${tableName}" (key);
 CREATE INDEX IF NOT EXISTS namespace_index ON "${tableName}" (namespace);
 CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)

-           // Add doc_id column if it doesn't exist (migration for existing tables)
-           const checkColumn = await queryRunner.manager.query(
-               `SELECT COUNT(*) as count FROM pragma_table_info('${tableName}') WHERE name='doc_id';`
-           )
-           if (checkColumn[0].count === 0) {
-               await queryRunner.manager.query(`ALTER TABLE "${tableName}" ADD COLUMN doc_id TEXT;`)
-               await queryRunner.manager.query(`CREATE INDEX IF NOT EXISTS doc_id_index ON "${tableName}" (doc_id);`)
-           }
-
            await queryRunner.release()
        } catch (e: any) {
            // This error indicates that the table already exists
@@ -236,7 +228,7 @@ CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
        }
    }

-   async update(keys: Array<{ uid: string; docId: string }> | string[], updateOptions?: UpdateOptions): Promise<void> {
+   async update(keys: string[], updateOptions?: UpdateOptions): Promise<void> {
        if (keys.length === 0) {
            return
        }
@ -251,23 +243,23 @@ CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
|
|||
throw new Error(`Time sync issue with database ${updatedAt} < ${timeAtLeast}`)
|
||||
}
|
||||
|
||||
// Handle both new format (objects with uid and docId) and old format (strings)
|
||||
const isNewFormat = keys.length > 0 && typeof keys[0] === 'object' && 'uid' in keys[0]
|
||||
const keyStrings = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.uid) : (keys as string[])
|
||||
const docIds = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.docId) : keys.map(() => null)
|
||||
const groupIds = _groupIds ?? keys.map(() => null)
|
||||
|
||||
const groupIds = _groupIds ?? keyStrings.map(() => null)
|
||||
|
||||
if (groupIds.length !== keyStrings.length) {
|
||||
throw new Error(`Number of keys (${keyStrings.length}) does not match number of group_ids (${groupIds.length})`)
|
||||
if (groupIds.length !== keys.length) {
|
||||
throw new Error(`Number of keys (${keys.length}) does not match number of group_ids (${groupIds.length})`)
|
||||
}
|
||||
|
||||
const recordsToUpsert = keyStrings.map((key, i) => [key, this.namespace, updatedAt, groupIds[i] ?? null, docIds[i] ?? null])
|
||||
const recordsToUpsert = keys.map((key, i) => [
|
||||
key,
|
||||
this.namespace,
|
||||
updatedAt,
|
||||
groupIds[i] ?? null // Ensure groupIds[i] is null if undefined
|
||||
])
|
||||
|
||||
const query = `
|
||||
INSERT INTO "${tableName}" (key, namespace, updated_at, group_id, doc_id)
|
||||
VALUES (?, ?, ?, ?, ?)
|
||||
ON CONFLICT (key, namespace) DO UPDATE SET updated_at = excluded.updated_at, doc_id = excluded.doc_id`
|
||||
INSERT INTO "${tableName}" (key, namespace, updated_at, group_id)
|
||||
VALUES (?, ?, ?, ?)
|
||||
ON CONFLICT (key, namespace) DO UPDATE SET updated_at = excluded.updated_at`
|
||||
|
||||
try {
|
||||
// To handle multiple files upsert
|
||||
|
|
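The `update()` hunk above shows a variant that accepts either legacy `string[]` keys or `{ uid, docId }` objects. The normalization it performs can be sketched in isolation (names and types here are illustrative, not the actual module exports):

```typescript
// Hypothetical standalone sketch of the dual-format key handling in update().
type KeyInput = Array<{ uid: string; docId: string }> | string[]

function normalizeUpdateKeys(keys: KeyInput): { keyStrings: string[]; docIds: (string | null)[] } {
    // New format: objects carrying both the record uid and its source docId
    const isNewFormat = keys.length > 0 && typeof keys[0] === 'object' && 'uid' in (keys[0] as object)
    const keyStrings = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.uid) : (keys as string[])
    const docIds: (string | null)[] = isNewFormat
        ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.docId)
        : (keys as string[]).map(() => null)
    return { keyStrings, docIds }
}
```

With legacy input, `docIds` is padded with `null` so both shapes feed the same upsert record layout.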
@@ -322,8 +314,8 @@ CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
}
}

async listKeys(options?: ListKeyOptions & { docId?: string }): Promise<string[]> {
const { before, after, limit, groupIds, docId } = options ?? {}
async listKeys(options?: ListKeyOptions): Promise<string[]> {
const { before, after, limit, groupIds } = options ?? {}
const tableName = this.sanitizeTableName(this.tableName)

let query = `SELECT key FROM "${tableName}" WHERE namespace = ?`

@@ -352,11 +344,6 @@ CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
values.push(...groupIds.filter((gid): gid is string => gid !== null))
}

if (docId) {
query += ` AND doc_id = ?`
values.push(docId)
}

query += ';'

const dataSource = await this.getDataSource()
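The `listKeys()` lines in the hunk above append an optional `doc_id` filter to a parameterized query, keeping the placeholder count and the values array in lockstep. A minimal sketch of that pattern (the helper name is hypothetical):

```typescript
// Illustrative sketch of the optional doc_id filter in listKeys():
// each appended `?` placeholder is paired with a push onto `values`.
function buildListKeysQuery(tableName: string, namespace: string, docId?: string): { query: string; values: unknown[] } {
    let query = `SELECT key FROM "${tableName}" WHERE namespace = ?`
    const values: unknown[] = [namespace]
    if (docId) {
        query += ` AND doc_id = ?`
        values.push(docId)
    }
    return { query: query + ';', values }
}
```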
@@ -136,17 +136,17 @@ class Custom_MCP implements INode {
}

let sandbox: ICommonObject = {}
const workspaceId = options?.searchOptions?.workspaceId?._value || options?.workspaceId

if (mcpServerConfig.includes('$vars')) {
const appDataSource = options.appDataSource as DataSource
const databaseEntities = options.databaseEntities as IDatabaseEntity
// If options.workspaceId is not set, create a new options object with the workspaceId for getVars.
const optionsWithWorkspaceId = options.workspaceId ? options : { ...options, workspaceId }
const variables = await getVars(appDataSource, databaseEntities, nodeData, optionsWithWorkspaceId)

const variables = await getVars(appDataSource, databaseEntities, nodeData, options)
sandbox['$vars'] = prepareSandboxVars(variables)
}

const workspaceId = options?.searchOptions?.workspaceId?._value || options?.workspaceId

let canonicalConfig
try {
canonicalConfig = JSON.parse(mcpServerConfig)
@@ -84,16 +84,11 @@ class CustomFunction_Utilities implements INode {

const variables = await getVars(appDataSource, databaseEntities, nodeData, options)
const flow = {
input,
chatflowId: options.chatflowid,
sessionId: options.sessionId,
chatId: options.chatId,
rawOutput: options.postProcessing?.rawOutput || '',
chatHistory: options.postProcessing?.chatHistory || [],
sourceDocuments: options.postProcessing?.sourceDocuments,
usedTools: options.postProcessing?.usedTools,
artifacts: options.postProcessing?.artifacts,
fileAnnotations: options.postProcessing?.fileAnnotations
rawOutput: options.rawOutput || '',
input
}

let inputVars: ICommonObject = {}
@@ -186,11 +186,7 @@ class Chroma_VectorStores implements INode {
const vectorStoreName = collectionName
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
const keys: string[] = await recordManager.listKeys({})

const chromaStore = new ChromaExtended(embeddings, obj)
@@ -198,11 +198,7 @@ class Elasticsearch_VectorStores implements INode {
const vectorStoreName = indexName
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
const keys: string[] = await recordManager.listKeys({})

await vectorStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)
@@ -212,11 +212,7 @@ class Pinecone_VectorStores implements INode {
const vectorStoreName = pineconeNamespace
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
const keys: string[] = await recordManager.listKeys({})

await pineconeStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)
@@ -49,7 +49,7 @@ class Postgres_VectorStores implements INode {
constructor() {
this.label = 'Postgres'
this.name = 'postgres'
this.version = 7.1
this.version = 7.0
this.type = 'Postgres'
this.icon = 'postgres.svg'
this.category = 'Vector Stores'

@@ -173,15 +173,6 @@ class Postgres_VectorStores implements INode {
additionalParams: true,
optional: true
},
{
label: 'Upsert Batch Size',
name: 'batchSize',
type: 'number',
step: 1,
description: 'Upsert in batches of size N',
additionalParams: true,
optional: true
},
{
label: 'Additional Configuration',
name: 'additionalConfig',

@@ -241,7 +232,6 @@ class Postgres_VectorStores implements INode {
const docs = nodeData.inputs?.document as Document[]
const recordManager = nodeData.inputs?.recordManager
const isFileUploadEnabled = nodeData.inputs?.fileUpload as boolean
const _batchSize = nodeData.inputs?.batchSize
const vectorStoreDriver: VectorStoreDriver = Postgres_VectorStores.getDriverFromConfig(nodeData, options)

const flattenDocs = docs && docs.length ? flatten(docs) : []

@@ -275,15 +265,7 @@ class Postgres_VectorStores implements INode {

return res
} else {
if (_batchSize) {
const batchSize = parseInt(_batchSize, 10)
for (let i = 0; i < finalDocs.length; i += batchSize) {
const batch = finalDocs.slice(i, i + batchSize)
await vectorStoreDriver.fromDocuments(batch)
}
} else {
await vectorStoreDriver.fromDocuments(finalDocs)
}
await vectorStoreDriver.fromDocuments(finalDocs)

return { numAdded: finalDocs.length, addedDocs: finalDocs }
}

@@ -303,11 +285,7 @@ class Postgres_VectorStores implements INode {
const vectorStoreName = tableName
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
const keys: string[] = await recordManager.listKeys({})

await vectorStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)
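The 'Upsert Batch Size' branch in the hunks above slices `finalDocs` into fixed-size batches before each `fromDocuments` call. A self-contained sketch of that batching loop (helper names are hypothetical):

```typescript
// Illustrative sketch of the batchSize upsert behaviour: split into slices,
// then process each slice sequentially.
function chunk<T>(arr: T[], size: number): T[][] {
    const out: T[][] = []
    for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size))
    return out
}

async function upsertInBatches<T>(docs: T[], batchSize: number, processor: (batch: T[]) => Promise<void>): Promise<void> {
    // processor stands in for vectorStoreDriver.fromDocuments(batch)
    for (const batch of chunk(docs, batchSize)) await processor(batch)
}
```

Sequential awaiting keeps memory and connection pressure bounded at roughly one batch at a time.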
@@ -5,11 +5,6 @@ import { TypeORMVectorStore, TypeORMVectorStoreArgs, TypeORMVectorStoreDocument
import { VectorStore } from '@langchain/core/vectorstores'
import { Document } from '@langchain/core/documents'
import { Pool } from 'pg'
import { v4 as uuid } from 'uuid'

type TypeORMAddDocumentOptions = {
ids?: string[]
}

export class TypeORMDriver extends VectorStoreDriver {
protected _postgresConnectionOptions: DataSourceOptions

@@ -100,45 +95,15 @@ export class TypeORMDriver extends VectorStoreDriver {
try {
instance.appDataSource.getRepository(instance.documentEntity).delete(ids)
} catch (e) {
console.error('Failed to delete', e)
console.error('Failed to delete')
}
}
}

instance.addVectors = async (
vectors: number[][],
documents: Document[],
documentOptions?: TypeORMAddDocumentOptions
): Promise<void> => {
const rows = vectors.map((embedding, idx) => {
const embeddingString = `[${embedding.join(',')}]`
const documentRow = {
id: documentOptions?.ids?.length ? documentOptions.ids[idx] : uuid(),
pageContent: documents[idx].pageContent,
embedding: embeddingString,
metadata: documents[idx].metadata
}
return documentRow
})
const baseAddVectorsFn = instance.addVectors.bind(instance)

const documentRepository = instance.appDataSource.getRepository(instance.documentEntity)
const _batchSize = this.nodeData.inputs?.batchSize
const chunkSize = _batchSize ? parseInt(_batchSize, 10) : 500

for (let i = 0; i < rows.length; i += chunkSize) {
const chunk = rows.slice(i, i + chunkSize)
try {
await documentRepository.save(chunk)
} catch (e) {
console.error(e)
throw new Error(`Error inserting: ${chunk[0].pageContent}`)
}
}
}

instance.addDocuments = async (documents: Document[], options?: { ids?: string[] }): Promise<void> => {
const texts = documents.map(({ pageContent }) => pageContent)
return (instance.addVectors as any)(await this.getEmbeddings().embedDocuments(texts), documents, options)
instance.addVectors = async (vectors, documents) => {
return baseAddVectorsFn(vectors, this.sanitizeDocuments(documents))
}

return instance
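The `addVectors` override in the hunk above serializes each embedding as a `[v1,v2,...]` string before saving rows through the repository, which is the textual representation pgvector accepts for vector columns. A minimal sketch of that row construction (helper names are hypothetical):

```typescript
// Sketch of the row construction from the addVectors override:
// pgvector parses embeddings written as '[v1,v2,...]' strings.
function toEmbeddingString(embedding: number[]): string {
    return `[${embedding.join(',')}]`
}

function buildDocumentRow(id: string, pageContent: string, embedding: number[], metadata: Record<string, unknown>) {
    return { id, pageContent, embedding: toEmbeddingString(embedding), metadata }
}
```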
@@ -385,11 +385,7 @@ class Qdrant_VectorStores implements INode {
const vectorStoreName = collectionName
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
const keys: string[] = await recordManager.listKeys({})

await vectorStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)
@@ -197,11 +197,7 @@ class Supabase_VectorStores implements INode {
const vectorStoreName = tableName + '_' + queryName
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
const keys: string[] = await recordManager.listKeys({})

await supabaseStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)
@@ -187,11 +187,7 @@ class Upstash_VectorStores implements INode {
const vectorStoreName = UPSTASH_VECTOR_REST_URL
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
const keys: string[] = await recordManager.listKeys({})

await upstashStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)
@@ -252,11 +252,7 @@ class Weaviate_VectorStores implements INode {
const vectorStoreName = weaviateTextKey ? weaviateIndex + '_' + weaviateTextKey : weaviateIndex
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
const keys: string[] = await recordManager.listKeys({})

await weaviateStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)
@@ -42,8 +42,7 @@
"@google-ai/generativelanguage": "^2.5.0",
"@google-cloud/storage": "^7.15.2",
"@google/generative-ai": "^0.24.0",
"@grpc/grpc-js": "^1.10.10",
"@huggingface/inference": "^4.13.2",
"@huggingface/inference": "^2.6.1",
"@langchain/anthropic": "0.3.33",
"@langchain/aws": "^0.1.11",
"@langchain/baidu-qianfan": "^0.1.0",

@@ -74,20 +73,6 @@
"@modelcontextprotocol/server-slack": "^2025.1.17",
"@notionhq/client": "^2.2.8",
"@opensearch-project/opensearch": "^1.2.0",
"@opentelemetry/api": "1.9.0",
"@opentelemetry/auto-instrumentations-node": "^0.52.0",
"@opentelemetry/core": "1.27.0",
"@opentelemetry/exporter-metrics-otlp-grpc": "0.54.0",
"@opentelemetry/exporter-metrics-otlp-http": "0.54.0",
"@opentelemetry/exporter-metrics-otlp-proto": "0.54.0",
"@opentelemetry/exporter-trace-otlp-grpc": "0.54.0",
"@opentelemetry/exporter-trace-otlp-http": "0.54.0",
"@opentelemetry/exporter-trace-otlp-proto": "0.54.0",
"@opentelemetry/resources": "1.27.0",
"@opentelemetry/sdk-metrics": "1.27.0",
"@opentelemetry/sdk-node": "^0.54.0",
"@opentelemetry/sdk-trace-base": "1.27.0",
"@opentelemetry/semantic-conventions": "1.27.0",
"@pinecone-database/pinecone": "4.0.0",
"@qdrant/js-client-rest": "^1.9.0",
"@stripe/agent-toolkit": "^0.1.20",
@@ -1774,7 +1774,7 @@ export class AnalyticHandler {
}

if (Object.prototype.hasOwnProperty.call(this.handlers, 'lunary')) {
const toolEventId: string = this.handlers['lunary'].toolEvent[returnIds['lunary'].toolEvent]
const toolEventId: string = this.handlers['lunary'].llmEvent[returnIds['lunary'].toolEvent]
const monitor = this.handlers['lunary'].client

if (monitor && toolEventId) {
@@ -8,10 +8,6 @@ import { IndexingResult } from './Interface'

type Metadata = Record<string, unknown>

export interface ExtendedRecordManagerInterface extends RecordManagerInterface {
update(keys: Array<{ uid: string; docId: string }> | string[], updateOptions?: Record<string, any>): Promise<void>
}

type StringOrDocFunc = string | ((doc: DocumentInterface) => string)

export interface HashedDocumentInterface extends DocumentInterface {

@@ -211,7 +207,7 @@ export const _isBaseDocumentLoader = (arg: any): arg is BaseDocumentLoader => {

interface IndexArgs {
docsSource: BaseDocumentLoader | DocumentInterface[]
recordManager: ExtendedRecordManagerInterface
recordManager: RecordManagerInterface
vectorStore: VectorStore
options?: IndexOptions
}

@@ -279,7 +275,7 @@ export async function index(args: IndexArgs): Promise<IndexingResult> {

const uids: string[] = []
const docsToIndex: DocumentInterface[] = []
const docsToUpdate: Array<{ uid: string; docId: string }> = []
const docsToUpdate: string[] = []
const seenDocs = new Set<string>()
hashedDocs.forEach((hashedDoc, i) => {
const docExists = batchExists[i]

@@ -287,7 +283,7 @@ export async function index(args: IndexArgs): Promise<IndexingResult> {
if (forceUpdate) {
seenDocs.add(hashedDoc.uid)
} else {
docsToUpdate.push({ uid: hashedDoc.uid, docId: hashedDoc.metadata.docId as string })
docsToUpdate.push(hashedDoc.uid)
return
}
}

@@ -312,7 +308,7 @@ export async function index(args: IndexArgs): Promise<IndexingResult> {
}

await recordManager.update(
hashedDocs.map((doc) => ({ uid: doc.uid, docId: doc.metadata.docId as string })),
hashedDocs.map((doc) => doc.uid),
{ timeAtLeast: indexStartDt, groupIds: sourceIds }
)
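The `index()` hunks above split hashed documents into a to-index set and a to-update set: an existing document is only re-indexed when `forceUpdate` is set, otherwise its key goes to `docsToUpdate`. A simplified, dependency-free sketch of that split (shapes and names are illustrative):

```typescript
// Simplified stand-in for the split inside index(): `exists[i]` plays the role
// of batchExists, and only uids are tracked instead of full documents.
function splitDocs(hashed: Array<{ uid: string }>, exists: boolean[], forceUpdate: boolean) {
    const docsToIndex: string[] = []
    const docsToUpdate: string[] = []
    const seenDocs = new Set<string>()
    hashed.forEach((doc, i) => {
        if (exists[i] && !forceUpdate) {
            // Already indexed and no forced refresh: just bump its record timestamp
            docsToUpdate.push(doc.uid)
            return
        }
        seenDocs.add(doc.uid)
        docsToIndex.push(doc.uid)
    })
    return { docsToIndex, docsToUpdate, seenDocs }
}
```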
@@ -8,7 +8,6 @@ import { cloneDeep, omit, get } from 'lodash'
import TurndownService from 'turndown'
import { DataSource, Equal } from 'typeorm'
import { ICommonObject, IDatabaseEntity, IFileUpload, IMessage, INodeData, IVariable, MessageContentImageUrl } from './Interface'
import { BaseChatModel } from '@langchain/core/language_models/chat_models'
import { AES, enc } from 'crypto-js'
import { AIMessage, HumanMessage, BaseMessage } from '@langchain/core/messages'
import { Document } from '@langchain/core/documents'

@@ -1942,160 +1941,3 @@ export async function parseWithTypeConversion<T extends z.ZodTypeAny>(schema: T,
throw e
}
}

/**
 * Configures structured output for the LLM using Zod schema
 * @param {BaseChatModel} llmNodeInstance - The LLM instance to configure
 * @param {any[]} structuredOutput - Array of structured output schema definitions
 * @returns {BaseChatModel} - The configured LLM instance
 */
export const configureStructuredOutput = (llmNodeInstance: BaseChatModel, structuredOutput: any[]): BaseChatModel => {
try {
const zodObj: ICommonObject = {}
for (const sch of structuredOutput) {
if (sch.type === 'string') {
zodObj[sch.key] = z.string().describe(sch.description || '')
} else if (sch.type === 'stringArray') {
zodObj[sch.key] = z.array(z.string()).describe(sch.description || '')
} else if (sch.type === 'number') {
zodObj[sch.key] = z.number().describe(sch.description || '')
} else if (sch.type === 'boolean') {
zodObj[sch.key] = z.boolean().describe(sch.description || '')
} else if (sch.type === 'enum') {
const enumValues = sch.enumValues?.split(',').map((item: string) => item.trim()) || []
zodObj[sch.key] = z
.enum(enumValues.length ? (enumValues as [string, ...string[]]) : ['default'])
.describe(sch.description || '')
} else if (sch.type === 'jsonArray') {
const jsonSchema = sch.jsonSchema
if (jsonSchema) {
try {
// Parse the JSON schema
const schemaObj = JSON.parse(jsonSchema)

// Create a Zod schema from the JSON schema
const itemSchema = createZodSchemaFromJSON(schemaObj)

// Create an array schema of the item schema
zodObj[sch.key] = z.array(itemSchema).describe(sch.description || '')
} catch (err) {
console.error(`Error parsing JSON schema for ${sch.key}:`, err)
// Fallback to generic array of records
zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
}
} else {
// If no schema provided, use generic array of records
zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
}
}
}
const structuredOutputSchema = z.object(zodObj)

// @ts-ignore
return llmNodeInstance.withStructuredOutput(structuredOutputSchema)
} catch (exception) {
console.error(exception)
return llmNodeInstance
}
}

/**
 * Creates a Zod schema from a JSON schema object
 * @param {any} jsonSchema - The JSON schema object
 * @returns {z.ZodTypeAny} - A Zod schema
 */
export const createZodSchemaFromJSON = (jsonSchema: any): z.ZodTypeAny => {
// If the schema is an object with properties, create an object schema
if (typeof jsonSchema === 'object' && jsonSchema !== null) {
const schemaObj: Record<string, z.ZodTypeAny> = {}

// Process each property in the schema
for (const [key, value] of Object.entries(jsonSchema)) {
if (value === null) {
// Handle null values
schemaObj[key] = z.null()
} else if (typeof value === 'object' && !Array.isArray(value)) {
// Check if the property has a type definition
if ('type' in value) {
const type = value.type as string
const description = ('description' in value ? (value.description as string) : '') || ''

// Create the appropriate Zod type based on the type property
if (type === 'string') {
schemaObj[key] = z.string().describe(description)
} else if (type === 'number') {
schemaObj[key] = z.number().describe(description)
} else if (type === 'boolean') {
schemaObj[key] = z.boolean().describe(description)
} else if (type === 'array') {
// If it's an array type, check if items is defined
if ('items' in value && value.items) {
const itemSchema = createZodSchemaFromJSON(value.items)
schemaObj[key] = z.array(itemSchema).describe(description)
} else {
// Default to array of any if items not specified
schemaObj[key] = z.array(z.any()).describe(description)
}
} else if (type === 'object') {
// If it's an object type, check if properties is defined
if ('properties' in value && value.properties) {
const nestedSchema = createZodSchemaFromJSON(value.properties)
schemaObj[key] = nestedSchema.describe(description)
} else {
// Default to record of any if properties not specified
schemaObj[key] = z.record(z.any()).describe(description)
}
} else {
// Default to any for unknown types
schemaObj[key] = z.any().describe(description)
}

// Check if the property is optional
if ('optional' in value && value.optional === true) {
schemaObj[key] = schemaObj[key].optional()
}
} else if (Array.isArray(value)) {
// Array values without a type property
if (value.length > 0) {
// If the array has items, recursively create a schema for the first item
const itemSchema = createZodSchemaFromJSON(value[0])
schemaObj[key] = z.array(itemSchema)
} else {
// Empty array, allow any array
schemaObj[key] = z.array(z.any())
}
} else {
// It's a nested object without a type property, recursively create schema
schemaObj[key] = createZodSchemaFromJSON(value)
}
} else if (Array.isArray(value)) {
// Array values
if (value.length > 0) {
// If the array has items, recursively create a schema for the first item
const itemSchema = createZodSchemaFromJSON(value[0])
schemaObj[key] = z.array(itemSchema)
} else {
// Empty array, allow any array
schemaObj[key] = z.array(z.any())
}
} else {
// For primitive values (which shouldn't be in the schema directly)
// Use the corresponding Zod type
if (typeof value === 'string') {
schemaObj[key] = z.string()
} else if (typeof value === 'number') {
schemaObj[key] = z.number()
} else if (typeof value === 'boolean') {
schemaObj[key] = z.boolean()
} else {
schemaObj[key] = z.any()
}
}
}

return z.object(schemaObj)
}

// Fallback to any for unknown types
return z.any()
}
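`createZodSchemaFromJSON` above recursively dispatches on `type`, `items`, and `properties` keys to build zod validators. The same dispatch can be illustrated without the zod dependency by emitting descriptor strings instead of `z.*` schemas; this is a simplified stand-in, not the actual function:

```typescript
// Zod-free sketch of the recursive dispatch in createZodSchemaFromJSON:
// returns a type-descriptor string per node instead of a validator.
function describeSchema(node: any): string {
    if (node === null) return 'null'
    if (Array.isArray(node)) return node.length ? `array<${describeSchema(node[0])}>` : 'array<any>'
    if (typeof node === 'object') {
        if ('type' in node) {
            const t = node.type as string
            if (t === 'array') return node.items ? `array<${describeSchema(node.items)}>` : 'array<any>'
            if (t === 'object') return node.properties ? describeSchema(node.properties) : 'record<any>'
            return t // string / number / boolean / anything else
        }
        // Plain object without a `type` key: recurse into its properties
        const fields = Object.entries(node).map(([k, v]) => `${k}: ${describeSchema(v)}`)
        return `{ ${fields.join(', ')} }`
    }
    return typeof node
}
```

In the real function each branch produces the corresponding `z.string()`, `z.array(...)`, or `z.object(...)` schema instead of a string.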
@@ -66,7 +66,7 @@
"@google-cloud/logging-winston": "^6.0.0",
"@keyv/redis": "^4.2.0",
"@oclif/core": "4.0.7",
"@opentelemetry/api": "1.9.0",
"@opentelemetry/api": "^1.3.0",
"@opentelemetry/auto-instrumentations-node": "^0.52.0",
"@opentelemetry/core": "1.27.0",
"@opentelemetry/exporter-metrics-otlp-grpc": "0.54.0",

@@ -119,12 +119,12 @@
"lodash": "^4.17.21",
"moment": "^2.29.3",
"moment-timezone": "^0.5.34",
"multer": "^2.0.2",
"multer": "^1.4.5-lts.1",
"multer-cloud-storage": "^4.0.0",
"multer-s3": "^3.0.1",
"mysql2": "^3.11.3",
"nanoid": "3",
"nodemailer": "^7.0.7",
"nodemailer": "^6.9.14",
"openai": "^4.96.0",
"passport": "^0.7.0",
"passport-auth0": "^1.4.4",
@@ -37,19 +37,7 @@ export class UsageCacheManager {
if (process.env.MODE === MODE.QUEUE) {
let redisConfig: string | Record<string, any>
if (process.env.REDIS_URL) {
redisConfig = {
url: process.env.REDIS_URL,
socket: {
keepAlive:
process.env.REDIS_KEEP_ALIVE && !isNaN(parseInt(process.env.REDIS_KEEP_ALIVE, 10))
? parseInt(process.env.REDIS_KEEP_ALIVE, 10)
: undefined
},
pingInterval:
process.env.REDIS_KEEP_ALIVE && !isNaN(parseInt(process.env.REDIS_KEEP_ALIVE, 10))
? parseInt(process.env.REDIS_KEEP_ALIVE, 10)
: undefined
}
redisConfig = process.env.REDIS_URL
} else {
redisConfig = {
username: process.env.REDIS_USERNAME || undefined,

@@ -60,16 +48,8 @@ export class UsageCacheManager {
tls: process.env.REDIS_TLS === 'true',
cert: process.env.REDIS_CERT ? Buffer.from(process.env.REDIS_CERT, 'base64') : undefined,
key: process.env.REDIS_KEY ? Buffer.from(process.env.REDIS_KEY, 'base64') : undefined,
ca: process.env.REDIS_CA ? Buffer.from(process.env.REDIS_CA, 'base64') : undefined,
keepAlive:
process.env.REDIS_KEEP_ALIVE && !isNaN(parseInt(process.env.REDIS_KEEP_ALIVE, 10))
? parseInt(process.env.REDIS_KEEP_ALIVE, 10)
: undefined
},
pingInterval:
process.env.REDIS_KEEP_ALIVE && !isNaN(parseInt(process.env.REDIS_KEEP_ALIVE, 10))
? parseInt(process.env.REDIS_KEEP_ALIVE, 10)
: undefined
ca: process.env.REDIS_CA ? Buffer.from(process.env.REDIS_CA, 'base64') : undefined
}
}
}
this.cache = createCache({
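The Redis config in the hunks above repeats the same guarded `parseInt` of `REDIS_KEEP_ALIVE` four times (for `keepAlive` and `pingInterval` in both branches). Extracted as a helper purely for illustration (not an actual export):

```typescript
// Sketch of the REDIS_KEEP_ALIVE handling: return the parsed integer,
// or undefined when the variable is unset or not numeric.
function parseKeepAlive(raw: string | undefined): number | undefined {
    return raw && !isNaN(parseInt(raw, 10)) ? parseInt(raw, 10) : undefined
}
```

Returning `undefined` rather than `NaN` lets the Redis client fall back to its own defaults for unset or malformed values.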
@@ -465,10 +465,9 @@ const insertIntoVectorStore = async (req: Request, res: Response, next: NextFunc
}
const subscriptionId = req.user?.activeOrganizationSubscriptionId || ''
const body = req.body
const isStrictSave = body.isStrictSave ?? false
const apiResponse = await documentStoreService.insertIntoVectorStoreMiddleware(
body,
isStrictSave,
false,
orgId,
workspaceId,
subscriptionId,

@@ -514,11 +513,7 @@ const deleteVectorStoreFromStore = async (req: Request, res: Response, next: Nex
`Error: documentStoreController.deleteVectorStoreFromStore - workspaceId not provided!`
)
}
const apiResponse = await documentStoreService.deleteVectorStoreFromStore(
req.params.storeId,
workspaceId,
(req.query.docId as string) || undefined
)
const apiResponse = await documentStoreService.deleteVectorStoreFromStore(req.params.storeId, workspaceId)
return res.json(apiResponse)
} catch (error) {
next(error)
@@ -1,14 +0,0 @@
import { MigrationInterface, QueryRunner } from 'typeorm'

export class FixDocumentStoreFileChunkLongText1765000000000 implements MigrationInterface {
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`pageContent\` LONGTEXT NOT NULL;`)
await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`metadata\` LONGTEXT NULL;`)
}

public async down(queryRunner: QueryRunner): Promise<void> {
// WARNING: Reverting to TEXT may cause data loss if content exceeds the 64KB limit.
await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`pageContent\` TEXT NOT NULL;`)
await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`metadata\` TEXT NULL;`)
}
}

@@ -40,7 +40,6 @@ import { AddTextToSpeechToChatFlow1754986457485 } from './1754986457485-AddTextT
import { ModifyChatflowType1755066758601 } from './1755066758601-ModifyChatflowType'
import { AddTextToSpeechToChatFlow1759419231100 } from './1759419231100-AddTextToSpeechToChatFlow'
import { AddChatFlowNameIndex1759424809984 } from './1759424809984-AddChatFlowNameIndex'
import { FixDocumentStoreFileChunkLongText1765000000000 } from './1765000000000-FixDocumentStoreFileChunkLongText'

import { AddAuthTables1720230151482 } from '../../../enterprise/database/migrations/mariadb/1720230151482-AddAuthTables'
import { AddWorkspace1725437498242 } from '../../../enterprise/database/migrations/mariadb/1725437498242-AddWorkspace'

@@ -107,6 +106,5 @@ export const mariadbMigrations = [
AddTextToSpeechToChatFlow1754986457485,
ModifyChatflowType1755066758601,
AddTextToSpeechToChatFlow1759419231100,
AddChatFlowNameIndex1759424809984,
FixDocumentStoreFileChunkLongText1765000000000
AddChatFlowNameIndex1759424809984
]
@ -1,14 +0,0 @@
|
|||
import { MigrationInterface, QueryRunner } from 'typeorm'
|
||||
|
||||
export class FixDocumentStoreFileChunkLongText1765000000000 implements MigrationInterface {
|
||||
public async up(queryRunner: QueryRunner): Promise<void> {
|
||||
await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`pageContent\` LONGTEXT NOT NULL;`)
|
||||
await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`metadata\` LONGTEXT NULL;`)
|
||||
}
|
||||
|
||||
public async down(queryRunner: QueryRunner): Promise<void> {
|
||||
// WARNING: Reverting to TEXT may cause data loss if content exceeds the 64KB limit.
|
||||
await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`pageContent\` TEXT NOT NULL;`)
|
||||
await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`metadata\` TEXT NULL;`)
|
||||
}
|
||||
}
|
||||
|
|
@@ -41,7 +41,6 @@ import { AddTextToSpeechToChatFlow1754986468397 } from './1754986468397-AddTextT
 import { ModifyChatflowType1755066758601 } from './1755066758601-ModifyChatflowType'
 import { AddTextToSpeechToChatFlow1759419216034 } from './1759419216034-AddTextToSpeechToChatFlow'
 import { AddChatFlowNameIndex1759424828558 } from './1759424828558-AddChatFlowNameIndex'
-import { FixDocumentStoreFileChunkLongText1765000000000 } from './1765000000000-FixDocumentStoreFileChunkLongText'
 
 import { AddAuthTables1720230151482 } from '../../../enterprise/database/migrations/mysql/1720230151482-AddAuthTables'
 import { AddWorkspace1720230151484 } from '../../../enterprise/database/migrations/mysql/1720230151484-AddWorkspace'
@@ -109,6 +108,5 @@ export const mysqlMigrations = [
     AddTextToSpeechToChatFlow1754986468397,
     ModifyChatflowType1755066758601,
     AddTextToSpeechToChatFlow1759419216034,
-    AddChatFlowNameIndex1759424828558,
-    FixDocumentStoreFileChunkLongText1765000000000
+    AddChatFlowNameIndex1759424828558
 ]
@@ -391,7 +391,7 @@ const deleteDocumentStoreFileChunk = async (storeId: string, docId: string, chun
     }
 }
 
-const deleteVectorStoreFromStore = async (storeId: string, workspaceId: string, docId?: string) => {
+const deleteVectorStoreFromStore = async (storeId: string, workspaceId: string) => {
     try {
         const appServer = getRunningExpressApp()
         const componentNodes = appServer.nodesPool.componentNodes
@@ -461,7 +461,7 @@ const deleteVectorStoreFromStore = async (storeId: string, workspaceId: string,
 
         // Call the delete method of the vector store
         if (vectorStoreObj.vectorStoreMethods.delete) {
-            await vectorStoreObj.vectorStoreMethods.delete(vStoreNodeData, idsToDelete, { ...options, docId })
+            await vectorStoreObj.vectorStoreMethods.delete(vStoreNodeData, idsToDelete, options)
         }
     } catch (error) {
         throw new InternalFlowiseError(
@@ -1157,18 +1157,6 @@ const updateVectorStoreConfigOnly = async (data: ICommonObject, workspaceId: str
         )
     }
 }
-/**
- * Saves vector store configuration to the document store entity.
- * Handles embedding, vector store, and record manager configurations.
- *
- * @example
- * // Strict mode: Only save what's provided, clear the rest
- * await saveVectorStoreConfig(ds, { storeId, embeddingName, embeddingConfig }, true, wsId)
- *
- * @example
- * // Lenient mode: Reuse existing configs if not provided
- * await saveVectorStoreConfig(ds, { storeId, vectorStoreName, vectorStoreConfig }, false, wsId)
- */
 const saveVectorStoreConfig = async (appDataSource: DataSource, data: ICommonObject, isStrictSave = true, workspaceId: string) => {
     try {
         const entity = await appDataSource.getRepository(DocumentStore).findOneBy({
@@ -1233,15 +1221,6 @@ const saveVectorStoreConfig = async (appDataSource: DataSource, data: ICommonObj
     }
 }
 
-/**
- * Inserts documents from document store into the configured vector store.
- *
- * Process:
- * 1. Saves vector store configuration (embedding, vector store, record manager)
- * 2. Sets document store status to UPSERTING
- * 3. Performs the actual vector store upsert operation
- * 4. Updates status to UPSERTED upon completion
- */
 export const insertIntoVectorStore = async ({
     appDataSource,
     componentNodes,
@@ -1252,16 +1231,19 @@ export const insertIntoVectorStore = async ({
     workspaceId
 }: IExecuteVectorStoreInsert) => {
     try {
-        // Step 1: Save configuration based on isStrictSave mode
         const entity = await saveVectorStoreConfig(appDataSource, data, isStrictSave, workspaceId)
-
-        // Step 2: Mark as UPSERTING before starting the operation
         entity.status = DocumentStoreStatus.UPSERTING
         await appDataSource.getRepository(DocumentStore).save(entity)
 
-        // Step 3: Perform the actual vector store upsert
-        // Note: Configuration already saved above, worker thread just retrieves and uses it
-        const indexResult = await _insertIntoVectorStoreWorkerThread(appDataSource, componentNodes, telemetry, data, orgId, workspaceId)
+        const indexResult = await _insertIntoVectorStoreWorkerThread(
+            appDataSource,
+            componentNodes,
+            telemetry,
+            data,
+            isStrictSave,
+            orgId,
+            workspaceId
+        )
         return indexResult
     } catch (error) {
         throw new InternalFlowiseError(
@@ -1326,18 +1308,12 @@ const _insertIntoVectorStoreWorkerThread = async (
     componentNodes: IComponentNodes,
     telemetry: Telemetry,
     data: ICommonObject,
+    isStrictSave = true,
     orgId: string,
     workspaceId: string
 ) => {
     try {
-        // Configuration already saved by insertIntoVectorStore, just retrieve the entity
-        const entity = await appDataSource.getRepository(DocumentStore).findOneBy({
-            id: data.storeId,
-            workspaceId: workspaceId
-        })
-        if (!entity) {
-            throw new InternalFlowiseError(StatusCodes.NOT_FOUND, `Document store ${data.storeId} not found`)
-        }
+        const entity = await saveVectorStoreConfig(appDataSource, data, isStrictSave, workspaceId)
         let upsertHistory: Record<string, any> = {}
         const chatflowid = data.storeId // fake chatflowid because this is not tied to any chatflow
 
@@ -1374,10 +1350,7 @@ const _insertIntoVectorStoreWorkerThread = async (
         const docs: Document[] = chunks.map((chunk: DocumentStoreFileChunk) => {
             return new Document({
                 pageContent: chunk.pageContent,
-                metadata: {
-                    ...JSON.parse(chunk.metadata),
-                    docId: chunk.docId
-                }
+                metadata: JSON.parse(chunk.metadata)
             })
         })
         vStoreNodeData.inputs.document = docs
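The lines removed in the hunk above merge each chunk's `docId` into its parsed metadata, which is what lets the vector store later delete vectors belonging to a single document. A standalone sketch of that merge (the helper name is hypothetical; `chunk` mirrors the `DocumentStoreFileChunk` shape used in the diff):

```javascript
// Merge the chunk's docId into its parsed metadata so vectors can be
// filtered or deleted per source document later.
function buildDocMetadata(chunk) {
    return { ...JSON.parse(chunk.metadata), docId: chunk.docId }
}

const chunk = { metadata: '{"source":"file.pdf"}', docId: 'doc-1' }
console.log(buildDocMetadata(chunk)) // { source: 'file.pdf', docId: 'doc-1' }
```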
@@ -1938,8 +1911,6 @@ const upsertDocStore = async (
         recordManagerConfig
     }
 
-    // Use isStrictSave: false to preserve existing configurations during upsert
-    // This allows the operation to reuse existing embedding/vector store/record manager configs
     const res = await insertIntoVectorStore({
         appDataSource,
         componentNodes,
@@ -2122,62 +2122,7 @@ export const executeAgentFlow = async ({
 
     // check if last agentFlowExecutedData.data.output contains the key "content"
     const lastNodeOutput = agentFlowExecutedData[agentFlowExecutedData.length - 1].data?.output as ICommonObject | undefined
-    let content = (lastNodeOutput?.content as string) ?? ' '
-
-    /* Check for post-processing settings */
-    let chatflowConfig: ICommonObject = {}
-    try {
-        if (chatflow.chatbotConfig) {
-            chatflowConfig = typeof chatflow.chatbotConfig === 'string' ? JSON.parse(chatflow.chatbotConfig) : chatflow.chatbotConfig
-        }
-    } catch (e) {
-        logger.error('[server]: Error parsing chatflow config:', e)
-    }
-
-    if (chatflowConfig?.postProcessing?.enabled === true && content) {
-        try {
-            const postProcessingFunction = JSON.parse(chatflowConfig?.postProcessing?.customFunction)
-            const nodeInstanceFilePath = componentNodes['customFunctionAgentflow'].filePath as string
-            const nodeModule = await import(nodeInstanceFilePath)
-            //set the outputs.output to EndingNode to prevent json escaping of content...
-            const nodeData = {
-                inputs: { customFunctionJavascriptFunction: postProcessingFunction }
-            }
-            const runtimeChatHistory = agentflowRuntime.chatHistory || []
-            const chatHistory = [...pastChatHistory, ...runtimeChatHistory]
-            const options: ICommonObject = {
-                chatflowid: chatflow.id,
-                sessionId,
-                chatId,
-                input: question || form,
-                postProcessing: {
-                    rawOutput: content,
-                    chatHistory: cloneDeep(chatHistory),
-                    sourceDocuments: lastNodeOutput?.sourceDocuments ? cloneDeep(lastNodeOutput.sourceDocuments) : undefined,
-                    usedTools: lastNodeOutput?.usedTools ? cloneDeep(lastNodeOutput.usedTools) : undefined,
-                    artifacts: lastNodeOutput?.artifacts ? cloneDeep(lastNodeOutput.artifacts) : undefined,
-                    fileAnnotations: lastNodeOutput?.fileAnnotations ? cloneDeep(lastNodeOutput.fileAnnotations) : undefined
-                },
-                appDataSource,
-                databaseEntities,
-                workspaceId,
-                orgId,
-                logger
-            }
-            const customFuncNodeInstance = new nodeModule.nodeClass()
-            const customFunctionResponse = await customFuncNodeInstance.run(nodeData, question || form, options)
-            const moderatedResponse = customFunctionResponse.output.content
-            if (typeof moderatedResponse === 'string') {
-                content = moderatedResponse
-            } else if (typeof moderatedResponse === 'object') {
-                content = '```json\n' + JSON.stringify(moderatedResponse, null, 2) + '\n```'
-            } else {
-                content = moderatedResponse
-            }
-        } catch (e) {
-            logger.error('[server]: Post Processing Error:', e)
-        }
-    }
+    const content = (lastNodeOutput?.content as string) ?? ' '
 
     // remove credentialId from agentFlowExecutedData
     agentFlowExecutedData = agentFlowExecutedData.map((data) => _removeCredentialId(data))
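Inside the post-processing block removed above, the custom function's result is normalized before being returned to the chat: strings pass through, while objects are wrapped in a fenced JSON block for display. A standalone sketch of that normalization (the function name is hypothetical; the fence is built with `repeat` only to avoid a literal triple backtick inside this example):

```javascript
const FENCE = '`'.repeat(3) // ``` , built indirectly to keep this example well-formed

// Mirror of the string/object branching in the removed block:
// objects become a pretty-printed, fenced JSON snippet; anything
// else is returned unchanged. (The original also routes null through
// the object branch via typeof; this sketch passes null through.)
function normalizeModeratedResponse(moderatedResponse) {
    if (typeof moderatedResponse === 'object' && moderatedResponse !== null) {
        return FENCE + 'json\n' + JSON.stringify(moderatedResponse, null, 2) + '\n' + FENCE
    }
    return moderatedResponse
}

console.log(normalizeModeratedResponse('hello')) // hello
```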
@@ -2,7 +2,7 @@ import { Request } from 'express'
 import * as path from 'path'
 import { DataSource } from 'typeorm'
 import { v4 as uuidv4 } from 'uuid'
-import { omit, cloneDeep } from 'lodash'
+import { omit } from 'lodash'
 import {
     IFileUpload,
     convertSpeechToText,
@@ -817,14 +817,7 @@ export const executeFlow = async ({
         sessionId,
         chatId,
         input: question,
-        postProcessing: {
-            rawOutput: resultText,
-            chatHistory: cloneDeep(chatHistory),
-            sourceDocuments: result?.sourceDocuments ? cloneDeep(result.sourceDocuments) : undefined,
-            usedTools: result?.usedTools ? cloneDeep(result.usedTools) : undefined,
-            artifacts: result?.artifacts ? cloneDeep(result.artifacts) : undefined,
-            fileAnnotations: result?.fileAnnotations ? cloneDeep(result.fileAnnotations) : undefined
-        },
+        rawOutput: resultText,
         appDataSource,
         databaseEntities,
         workspaceId,
@@ -70,7 +70,7 @@ export const checkUsageLimit = async (
     if (limit === -1) return
 
     if (currentUsage > limit) {
-        throw new InternalFlowiseError(StatusCodes.PAYMENT_REQUIRED, `Limit exceeded: ${type}`)
+        throw new InternalFlowiseError(StatusCodes.TOO_MANY_REQUESTS, `Limit exceeded: ${type}`)
     }
 }
 
@@ -135,7 +135,7 @@ export const checkPredictions = async (orgId: string, subscriptionId: string, us
     if (predictionsLimit === -1) return
 
     if (currentPredictions >= predictionsLimit) {
-        throw new InternalFlowiseError(StatusCodes.PAYMENT_REQUIRED, 'Predictions limit exceeded')
+        throw new InternalFlowiseError(StatusCodes.TOO_MANY_REQUESTS, 'Predictions limit exceeded')
     }
 
     return {
@@ -161,7 +161,7 @@ export const checkStorage = async (orgId: string, subscriptionId: string, usageC
     if (storageLimit === -1) return
 
     if (currentStorageUsage >= storageLimit) {
-        throw new InternalFlowiseError(StatusCodes.PAYMENT_REQUIRED, 'Storage limit exceeded')
+        throw new InternalFlowiseError(StatusCodes.TOO_MANY_REQUESTS, 'Storage limit exceeded')
    }
 
     return {
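The three quota hunks above all make the same change: a breached usage limit now returns HTTP 429 (Too Many Requests, a retryable condition) instead of 402 (Payment Required, a billing signal), and `-1` continues to mean "unlimited". A minimal sketch of that check, extracted from the diff (the helper name is hypothetical):

```javascript
// Numeric values of the http-status-codes constants used in the hunks.
const TOO_MANY_REQUESTS = 429

// Mirrors checkUsageLimit's guard: -1 disables the limit entirely,
// otherwise exceeding it maps to a 429 response.
function quotaExceededStatus(currentUsage, limit) {
    if (limit === -1) return null // unlimited plan
    return currentUsage > limit ? TOO_MANY_REQUESTS : null
}

console.log(quotaExceededStatus(10, 5)) // 429
console.log(quotaExceededStatus(10, -1)) // null
```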
@@ -22,10 +22,7 @@ const refreshLoader = (storeId) => client.post(`/document-store/refresh/${storeI
 const insertIntoVectorStore = (body) => client.post(`/document-store/vectorstore/insert`, body)
 const saveVectorStoreConfig = (body) => client.post(`/document-store/vectorstore/save`, body)
 const updateVectorStoreConfig = (body) => client.post(`/document-store/vectorstore/update`, body)
-const deleteVectorStoreDataFromStore = (storeId, docId) => {
-    const url = docId ? `/document-store/vectorstore/${storeId}?docId=${docId}` : `/document-store/vectorstore/${storeId}`
-    return client.delete(url)
-}
+const deleteVectorStoreDataFromStore = (storeId) => client.delete(`/document-store/vectorstore/${storeId}`)
 const queryVectorStore = (body) => client.post(`/document-store/vectorstore/query`, body)
 const getVectorStoreProviders = () => client.get('/document-store/components/vectorstore')
 const getEmbeddingProviders = () => client.get('/document-store/components/embeddings')
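The client helper removed above builds its delete URL with an optional `docId` query parameter, matching the per-document vector-store delete dropped on the server side. A standalone sketch of that URL construction (helper name hypothetical):

```javascript
// Build the delete endpoint URL; docId is optional and, when present,
// scopes the deletion to a single document within the store.
function buildDeleteVectorStoreUrl(storeId, docId) {
    const base = `/document-store/vectorstore/${storeId}`
    return docId ? `${base}?docId=${docId}` : base
}

console.log(buildDeleteVectorStoreUrl('s1', 'd1')) // /document-store/vectorstore/s1?docId=d1
console.log(buildDeleteVectorStoreUrl('s1')) // /document-store/vectorstore/s1
```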
@@ -10,7 +10,6 @@ const VerifyEmailPage = Loadable(lazy(() => import('@/views/auth/verify-email'))
 const ForgotPasswordPage = Loadable(lazy(() => import('@/views/auth/forgotPassword')))
 const ResetPasswordPage = Loadable(lazy(() => import('@/views/auth/resetPassword')))
 const UnauthorizedPage = Loadable(lazy(() => import('@/views/auth/unauthorized')))
-const RateLimitedPage = Loadable(lazy(() => import('@/views/auth/rateLimited')))
 const OrganizationSetupPage = Loadable(lazy(() => import('@/views/organization/index')))
 const LicenseExpiredPage = Loadable(lazy(() => import('@/views/auth/expired')))
 
@@ -46,10 +45,6 @@ const AuthRoutes = {
             path: '/unauthorized',
             element: <UnauthorizedPage />
         },
-        {
-            path: '/rate-limited',
-            element: <RateLimitedPage />
-        },
         {
             path: '/organization-setup',
             element: <OrganizationSetupPage />
@@ -10,29 +10,11 @@ const ErrorContext = createContext()
 
 export const ErrorProvider = ({ children }) => {
     const [error, setError] = useState(null)
-    const [authRateLimitError, setAuthRateLimitError] = useState(null)
     const navigate = useNavigate()
 
     const handleError = async (err) => {
         console.error(err)
-        if (err?.response?.status === 429 && err?.response?.data?.type === 'authentication_rate_limit') {
-            setAuthRateLimitError("You're making a lot of requests. Please wait and try again later.")
-        } else if (err?.response?.status === 429 && err?.response?.data?.type !== 'authentication_rate_limit') {
-            const retryAfterHeader = err?.response?.headers?.['retry-after']
-            let retryAfter = 60 // Default in seconds
-            if (retryAfterHeader) {
-                const parsedSeconds = parseInt(retryAfterHeader, 10)
-                if (Number.isNaN(parsedSeconds)) {
-                    const retryDate = new Date(retryAfterHeader)
-                    if (!Number.isNaN(retryDate.getTime())) {
-                        retryAfter = Math.max(0, Math.ceil((retryDate.getTime() - Date.now()) / 1000))
-                    }
-                } else {
-                    retryAfter = parsedSeconds
-                }
-            }
-            navigate('/rate-limited', { state: { retryAfter } })
-        } else if (err?.response?.status === 403) {
+        if (err?.response?.status === 403) {
             navigate('/unauthorized')
         } else if (err?.response?.status === 401) {
             if (ErrorMessage.INVALID_MISSING_TOKEN === err?.response?.data?.message) {
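The removed 429 branch above parses the HTTP `Retry-After` header, which per the HTTP spec can carry either delta-seconds (`"120"`) or an HTTP-date. A standalone sketch of that parsing logic, lifted from the deleted lines (the function name and the injectable `now` parameter are additions for testability):

```javascript
// Parse a Retry-After header value into whole seconds to wait.
// Accepts delta-seconds ("120") or an HTTP-date; falls back to a
// default when the header is absent or unparseable.
function parseRetryAfter(headerValue, fallbackSeconds = 60, now = Date.now()) {
    if (!headerValue) return fallbackSeconds
    const parsedSeconds = parseInt(headerValue, 10)
    if (!Number.isNaN(parsedSeconds)) return parsedSeconds
    const retryDate = new Date(headerValue)
    if (!Number.isNaN(retryDate.getTime())) {
        return Math.max(0, Math.ceil((retryDate.getTime() - now) / 1000))
    }
    return fallbackSeconds
}

console.log(parseRetryAfter('120')) // 120
```

`Math.max(0, ...)` guards against dates already in the past, so the UI never shows a negative countdown.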
@@ -62,9 +44,7 @@ export const ErrorProvider = ({ children }) => {
             value={{
                 error,
                 setError,
-                handleError,
-                authRateLimitError,
-                setAuthRateLimitError
+                handleError
             }}
         >
             {children}
@@ -74,7 +74,7 @@ const StyledMenu = styled((props) => (
     }
 }))
 
-export default function FlowListMenu({ chatflow, isAgentCanvas, isAgentflowV2, setError, updateFlowsApi, currentPage, pageLimit }) {
+export default function FlowListMenu({ chatflow, isAgentCanvas, isAgentflowV2, setError, updateFlowsApi }) {
     const { confirm } = useConfirm()
     const dispatch = useDispatch()
     const updateChatflowApi = useApi(chatflowsApi.updateChatflow)
@@ -166,16 +166,10 @@ export default function FlowListMenu({ chatflow, isAgentCanvas, isAgentflowV2, s
         }
         try {
             await updateChatflowApi.request(chatflow.id, updateBody)
-            const params = {
-                page: currentPage,
-                limit: pageLimit
-            }
             if (isAgentCanvas && isAgentflowV2) {
-                await updateFlowsApi.request('AGENTFLOW', params)
-            } else if (isAgentCanvas) {
-                await updateFlowsApi.request('MULTIAGENT', params)
+                await updateFlowsApi.request('AGENTFLOW')
             } else {
-                await updateFlowsApi.request(params)
+                await updateFlowsApi.request(isAgentCanvas ? 'MULTIAGENT' : undefined)
             }
         } catch (error) {
             if (setError) setError(error)
@@ -215,15 +209,7 @@ export default function FlowListMenu({ chatflow, isAgentCanvas, isAgentflowV2, s
         }
         try {
             await updateChatflowApi.request(chatflow.id, updateBody)
-            const params = {
-                page: currentPage,
-                limit: pageLimit
-            }
-            if (isAgentCanvas) {
-                await updateFlowsApi.request('AGENTFLOW', params)
-            } else {
-                await updateFlowsApi.request(params)
-            }
+            await updateFlowsApi.request(isAgentCanvas ? 'AGENTFLOW' : undefined)
         } catch (error) {
             if (setError) setError(error)
             enqueueSnackbar({
|
@ -255,16 +241,10 @@ export default function FlowListMenu({ chatflow, isAgentCanvas, isAgentflowV2, s
|
|||
if (isConfirmed) {
|
||||
try {
|
||||
await chatflowsApi.deleteChatflow(chatflow.id)
|
||||
const params = {
|
||||
page: currentPage,
|
||||
limit: pageLimit
|
||||
}
|
||||
if (isAgentCanvas && isAgentflowV2) {
|
||||
await updateFlowsApi.request('AGENTFLOW', params)
|
||||
} else if (isAgentCanvas) {
|
||||
await updateFlowsApi.request('MULTIAGENT', params)
|
||||
await updateFlowsApi.request('AGENTFLOW')
|
||||
} else {
|
||||
await updateFlowsApi.request(params)
|
||||
await updateFlowsApi.request(isAgentCanvas ? 'MULTIAGENT' : undefined)
|
||||
}
|
||||
} catch (error) {
|
||||
if (setError) setError(error)
|
||||
|
|
@@ -474,7 +454,5 @@ FlowListMenu.propTypes = {
     isAgentCanvas: PropTypes.bool,
     isAgentflowV2: PropTypes.bool,
     setError: PropTypes.func,
-    updateFlowsApi: PropTypes.object,
-    currentPage: PropTypes.number,
-    pageLimit: PropTypes.number
+    updateFlowsApi: PropTypes.object
 }
@@ -53,7 +53,8 @@ const CHATFLOW_CONFIGURATION_TABS = [
     },
     {
         label: 'Post Processing',
-        id: 'postProcessing'
+        id: 'postProcessing',
+        hideInAgentFlow: true
     }
 ]
 
@@ -16,11 +16,11 @@ import { useEditor, EditorContent } from '@tiptap/react'
 import Placeholder from '@tiptap/extension-placeholder'
 import { mergeAttributes } from '@tiptap/core'
 import StarterKit from '@tiptap/starter-kit'
+import Mention from '@tiptap/extension-mention'
 import CodeBlockLowlight from '@tiptap/extension-code-block-lowlight'
 import { common, createLowlight } from 'lowlight'
 import { suggestionOptions } from '@/ui-component/input/suggestionOption'
 import { getAvailableNodesForVariable } from '@/utils/genericHelper'
-import { CustomMention } from '@/utils/customMention'
 
 const lowlight = createLowlight(common)
 
@@ -78,7 +78,7 @@ const extensions = (availableNodesForVariable, availableState, acceptNodeOutputA
     StarterKit.configure({
         codeBlock: false
     }),
-    CustomMention.configure({
+    Mention.configure({
         HTMLAttributes: {
             class: 'variable'
         },
|
|||
|
|
@ -4,25 +4,8 @@ import PropTypes from 'prop-types'
|
|||
import { useSelector } from 'react-redux'
|
||||
|
||||
// material-ui
|
||||
import {
|
||||
IconButton,
|
||||
Button,
|
||||
Box,
|
||||
Typography,
|
||||
TableContainer,
|
||||
Table,
|
||||
TableHead,
|
||||
TableBody,
|
||||
TableRow,
|
||||
TableCell,
|
||||
Paper,
|
||||
Accordion,
|
||||
AccordionSummary,
|
||||
AccordionDetails,
|
||||
Card
|
||||
} from '@mui/material'
|
||||
import { IconArrowsMaximize, IconX } from '@tabler/icons-react'
|
||||
import ExpandMoreIcon from '@mui/icons-material/ExpandMore'
|
||||
import { IconButton, Button, Box, Typography } from '@mui/material'
|
||||
import { IconArrowsMaximize, IconBulb, IconX } from '@tabler/icons-react'
|
||||
import { useTheme } from '@mui/material/styles'
|
||||
|
||||
// Project import
|
||||
|
|
@@ -38,11 +21,7 @@ import useNotifier from '@/utils/useNotifier'
 // API
 import chatflowsApi from '@/api/chatflows'
 
-const sampleFunction = `// Access chat history as a string
-const chatHistory = JSON.stringify($flow.chatHistory, null, 2);
-
-// Return a modified response
-return $flow.rawOutput + " This is a post processed response!";`
+const sampleFunction = `return $flow.rawOutput + " This is a post processed response!";`
 
 const PostProcessing = ({ dialogProps }) => {
     const dispatch = useDispatch()
@@ -196,105 +175,31 @@ const PostProcessing = ({ dialogProps }) => {
                         />
                     </div>
                 </Box>
-                <Card sx={{ borderColor: theme.palette.primary[200] + 75, mt: 2, mb: 2 }} variant='outlined'>
-                    <Accordion
-                        disableGutters
-                        sx={{
-                            '&:before': {
-                                display: 'none'
-                            }
+                <div
+                    style={{
+                        display: 'flex',
+                        flexDirection: 'column',
+                        borderRadius: 10,
+                        background: '#d8f3dc',
+                        padding: 10,
+                        marginTop: 10
                     }}
                 >
+                    <div
+                        style={{
+                            display: 'flex',
+                            flexDirection: 'row',
+                            alignItems: 'center',
+                            paddingTop: 10
+                        }}
+                    >
-                        <AccordionSummary expandIcon={<ExpandMoreIcon />}>
-                            <Typography>Available Variables</Typography>
-                        </AccordionSummary>
-                        <AccordionDetails sx={{ p: 0 }}>
-                            <TableContainer component={Paper}>
-                                <Table aria-label='available variables table'>
-                                    <TableHead>
-                                        <TableRow>
-                                            <TableCell sx={{ width: '30%' }}>Variable</TableCell>
-                                            <TableCell sx={{ width: '15%' }}>Type</TableCell>
-                                            <TableCell sx={{ width: '55%' }}>Description</TableCell>
-                                        </TableRow>
-                                    </TableHead>
-                                    <TableBody>
-                                        <TableRow>
-                                            <TableCell>
-                                                <code>$flow.rawOutput</code>
-                                            </TableCell>
-                                            <TableCell>string</TableCell>
-                                            <TableCell>The raw output response from the flow</TableCell>
-                                        </TableRow>
-                                        <TableRow>
-                                            <TableCell>
-                                                <code>$flow.input</code>
-                                            </TableCell>
-                                            <TableCell>string</TableCell>
-                                            <TableCell>The user input message</TableCell>
-                                        </TableRow>
-                                        <TableRow>
-                                            <TableCell>
-                                                <code>$flow.chatHistory</code>
-                                            </TableCell>
-                                            <TableCell>array</TableCell>
-                                            <TableCell>Array of previous messages in the conversation</TableCell>
-                                        </TableRow>
-                                        <TableRow>
-                                            <TableCell>
-                                                <code>$flow.chatflowId</code>
-                                            </TableCell>
-                                            <TableCell>string</TableCell>
-                                            <TableCell>Unique identifier for the chatflow</TableCell>
-                                        </TableRow>
-                                        <TableRow>
-                                            <TableCell>
-                                                <code>$flow.sessionId</code>
-                                            </TableCell>
-                                            <TableCell>string</TableCell>
-                                            <TableCell>Current session identifier</TableCell>
-                                        </TableRow>
-                                        <TableRow>
-                                            <TableCell>
-                                                <code>$flow.chatId</code>
-                                            </TableCell>
-                                            <TableCell>string</TableCell>
-                                            <TableCell>Current chat identifier</TableCell>
-                                        </TableRow>
-                                        <TableRow>
-                                            <TableCell>
-                                                <code>$flow.sourceDocuments</code>
-                                            </TableCell>
-                                            <TableCell>array</TableCell>
-                                            <TableCell>Source documents used in retrieval (if applicable)</TableCell>
-                                        </TableRow>
-                                        <TableRow>
-                                            <TableCell>
-                                                <code>$flow.usedTools</code>
-                                            </TableCell>
-                                            <TableCell>array</TableCell>
-                                            <TableCell>List of tools used during execution</TableCell>
-                                        </TableRow>
-                                        <TableRow>
-                                            <TableCell>
-                                                <code>$flow.artifacts</code>
-                                            </TableCell>
-                                            <TableCell>array</TableCell>
-                                            <TableCell>List of artifacts generated during execution</TableCell>
-                                        </TableRow>
-                                        <TableRow>
-                                            <TableCell sx={{ borderBottom: 'none' }}>
-                                                <code>$flow.fileAnnotations</code>
-                                            </TableCell>
-                                            <TableCell sx={{ borderBottom: 'none' }}>array</TableCell>
-                                            <TableCell sx={{ borderBottom: 'none' }}>File annotations associated with the response</TableCell>
-                                        </TableRow>
-                                    </TableBody>
-                                </Table>
-                            </TableContainer>
-                        </AccordionDetails>
-                    </Accordion>
-                </Card>
+                        <IconBulb size={30} color='#2d6a4f' />
+                        <span style={{ color: '#2d6a4f', marginLeft: 10, fontWeight: 500 }}>
+                            The following variables are available to use in the custom function:{' '}
+                            <pre>$flow.rawOutput, $flow.input, $flow.chatflowId, $flow.sessionId, $flow.chatId</pre>
+                        </span>
+                    </div>
+                </div>
                 <StyledButton
                     style={{ marginBottom: 10, marginTop: 10 }}
                     variant='contained'
@@ -7,11 +7,11 @@ import { mergeAttributes } from '@tiptap/core'
 import StarterKit from '@tiptap/starter-kit'
 import { styled } from '@mui/material/styles'
 import { Box } from '@mui/material'
+import Mention from '@tiptap/extension-mention'
 import CodeBlockLowlight from '@tiptap/extension-code-block-lowlight'
 import { common, createLowlight } from 'lowlight'
 import { suggestionOptions } from './suggestionOption'
 import { getAvailableNodesForVariable } from '@/utils/genericHelper'
-import { CustomMention } from '@/utils/customMention'
 
 const lowlight = createLowlight(common)
 
@@ -20,7 +20,7 @@ const extensions = (availableNodesForVariable, availableState, acceptNodeOutputA
     StarterKit.configure({
         codeBlock: false
     }),
-    CustomMention.configure({
+    Mention.configure({
         HTMLAttributes: {
             class: 'variable'
         },
@@ -112,7 +112,7 @@ export const suggestionOptions = (
         category: 'Node Outputs'
     })
 
-    const structuredOutputs = nodeData?.inputs?.llmStructuredOutput ?? nodeData?.inputs?.agentStructuredOutput ?? []
+    const structuredOutputs = nodeData?.inputs?.llmStructuredOutput ?? []
     if (structuredOutputs && structuredOutputs.length > 0) {
        structuredOutputs.forEach((item) => {
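The hunk above drops the `agentStructuredOutput` fallback from a nullish-coalescing chain. Worth noting how `??` behaves here: it only falls through on `null`/`undefined`, so an empty array on the first input short-circuits the chain. A standalone sketch (the function name is hypothetical):

```javascript
// Replicates the removed chain: prefer llmStructuredOutput, then
// agentStructuredOutput, then an empty array. An empty (but defined)
// llmStructuredOutput wins, because ?? only skips null/undefined.
function pickStructuredOutputs(inputs) {
    return inputs?.llmStructuredOutput ?? inputs?.agentStructuredOutput ?? []
}

console.log(pickStructuredOutputs({ agentStructuredOutput: [1] })) // [ 1 ]
console.log(pickStructuredOutputs({ llmStructuredOutput: [] })) // []
```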
@@ -59,9 +59,7 @@ export const FlowListTable = ({
     updateFlowsApi,
     setError,
     isAgentCanvas,
-    isAgentflowV2,
-    currentPage,
-    pageLimit
+    isAgentflowV2
 }) => {
     const { hasPermission } = useAuth()
     const isActionsAvailable = isAgentCanvas
@@ -333,8 +331,6 @@ export const FlowListTable = ({
                             chatflow={row}
                             setError={setError}
                             updateFlowsApi={updateFlowsApi}
-                            currentPage={currentPage}
-                            pageLimit={pageLimit}
                         />
                     </Stack>
                 </StyledTableCell>
@@ -359,7 +355,5 @@ FlowListTable.propTypes = {
     updateFlowsApi: PropTypes.object,
     setError: PropTypes.func,
     isAgentCanvas: PropTypes.bool,
-    isAgentflowV2: PropTypes.bool,
-    currentPage: PropTypes.number,
-    pageLimit: PropTypes.number
+    isAgentflowV2: PropTypes.bool
 }
@@ -1,26 +0,0 @@
-import Mention from '@tiptap/extension-mention'
-import { PasteRule } from '@tiptap/core'
-
-export const CustomMention = Mention.extend({
-    renderText({ node }) {
-        return `{{${node.attrs.label ?? node.attrs.id}}}`
-    },
-    addPasteRules() {
-        return [
-            new PasteRule({
-                find: /\{\{([^{}]+)\}\}/g,
-                handler: ({ match, chain, range }) => {
-                    const label = match[1].trim()
-                    if (label) {
-                        chain()
-                            .deleteRange(range)
-                            .insertContentAt(range.from, {
-                                type: this.name,
-                                attrs: { id: label, label: label }
-                            })
-                    }
-                }
-            })
-        ]
-    }
-})
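The paste rule in the deleted `CustomMention` extension keys off one regex: any non-empty, brace-free text wrapped in double curly braces, with surrounding whitespace trimmed. A standalone sketch of just the pattern-matching half (the helper name is hypothetical; the regex is copied verbatim from the deleted file):

```javascript
// Same pattern as the deleted paste rule: {{ ... }} with no nested braces.
const MENTION_PATTERN = /\{\{([^{}]+)\}\}/g

// Extract the trimmed labels of every {{variable}} mention in a string.
function extractMentionLabels(text) {
    return [...text.matchAll(MENTION_PATTERN)].map((m) => m[1].trim())
}

console.log(extractMentionLabels('Hi {{ userName }}, see {{nodeId.output}}'))
// [ 'userName', 'nodeId.output' ]
```

In the extension itself, each match is additionally deleted from the pasted range and reinserted as a mention node, so `{{x}}` pasted as plain text becomes a styled variable chip.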
@@ -325,8 +325,6 @@ const Agentflows = () => {
                         filterFunction={filterFlows}
                         updateFlowsApi={getAllAgentflows}
                         setError={setError}
-                        currentPage={currentPage}
-                        pageLimit={pageLimit}
                     />
                 )}
                 {/* Pagination and Page Size Controls */}
@@ -150,8 +150,6 @@ const AgentFlowNode = ({ data }) => {
             return <IconWorldWww size={14} color={'white'} />
         case 'googleSearch':
             return <IconBrandGoogle size={14} color={'white'} />
-        case 'codeExecution':
-            return <IconCode size={14} color={'white'} />
         default:
             return null
     }
@@ -16,7 +16,6 @@ import accountApi from '@/api/account.api'
 // Hooks
 import useApi from '@/hooks/useApi'
 import { useConfig } from '@/store/context/ConfigContext'
-import { useError } from '@/store/context/ErrorContext'
 
 // utils
 import useNotifier from '@/utils/useNotifier'
@@ -42,13 +41,10 @@ const ForgotPasswordPage = () => {
     const [isLoading, setLoading] = useState(false)
     const [responseMsg, setResponseMsg] = useState(undefined)
 
-    const { authRateLimitError, setAuthRateLimitError } = useError()
-
     const forgotPasswordApi = useApi(accountApi.forgotPassword)
 
     const sendResetRequest = async (event) => {
         event.preventDefault()
-        setAuthRateLimitError(null)
         const body = {
             user: {
                 email: usernameVal
@@ -58,11 +54,6 @@ const ForgotPasswordPage = () => {
         await forgotPasswordApi.request(body)
     }
 
-    useEffect(() => {
-        setAuthRateLimitError(null)
-        // eslint-disable-next-line react-hooks/exhaustive-deps
-    }, [setAuthRateLimitError])
-
     useEffect(() => {
         if (forgotPasswordApi.error) {
             const errMessage =
@@ -98,11 +89,6 @@ const ForgotPasswordPage = () => {
                     {responseMsg.msg}
                 </Alert>
             )}
-            {authRateLimitError && (
-                <Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
-                    {authRateLimitError}
-                </Alert>
-            )}
             {responseMsg && responseMsg?.type !== 'error' && (
                 <Alert icon={<IconCircleCheck />} variant='filled' severity='success'>
                     {responseMsg.msg}
@@ -1,51 +0,0 @@
-import { Box, Button, Stack, Typography } from '@mui/material'
-import { Link, useLocation } from 'react-router-dom'
-import unauthorizedSVG from '@/assets/images/unauthorized.svg'
-import MainCard from '@/ui-component/cards/MainCard'
-
-// ==============================|| RateLimitedPage ||============================== //
-
-const RateLimitedPage = () => {
-    const location = useLocation()
-
-    const retryAfter = location.state?.retryAfter || 60
-
-    return (
-        <MainCard>
-            <Box
-                sx={{
-                    display: 'flex',
-                    justifyContent: 'center',
-                    alignItems: 'center',
-                    height: 'calc(100vh - 210px)'
-                }}
-            >
-                <Stack
-                    sx={{
-                        alignItems: 'center',
-                        justifyContent: 'center',
-                        maxWidth: '500px'
-                    }}
-                    flexDirection='column'
-                >
-                    <Box sx={{ p: 2, height: 'auto' }}>
-                        <img style={{ objectFit: 'cover', height: '20vh', width: 'auto' }} src={unauthorizedSVG} alt='rateLimitedSVG' />
-                    </Box>
-                    <Typography sx={{ mb: 2 }} variant='h4' component='div' fontWeight='bold'>
-                        429 Too Many Requests
-                    </Typography>
-                    <Typography variant='body1' component='div' sx={{ mb: 2, textAlign: 'center' }}>
-                        {`You have made too many requests in a short period of time. Please wait ${retryAfter}s before trying again.`}
-                    </Typography>
-                    <Link to='/'>
-                        <Button variant='contained' color='primary'>
-                            Back to Home
-                        </Button>
-                    </Link>
-                </Stack>
-            </Box>
-        </MainCard>
-    )
-}
-
-export default RateLimitedPage
@@ -18,7 +18,6 @@ import ssoApi from '@/api/sso'
 // Hooks
 import useApi from '@/hooks/useApi'
 import { useConfig } from '@/store/context/ConfigContext'
-import { useError } from '@/store/context/ErrorContext'
 
 // utils
 import useNotifier from '@/utils/useNotifier'
@@ -112,9 +111,7 @@ const RegisterPage = () => {
 
     const [loading, setLoading] = useState(false)
     const [authError, setAuthError] = useState('')
-    const [successMsg, setSuccessMsg] = useState('')
-
-    const { authRateLimitError, setAuthRateLimitError } = useError()
+    const [successMsg, setSuccessMsg] = useState(undefined)
 
     const registerApi = useApi(accountApi.registerAccount)
     const ssoLoginApi = useApi(ssoApi.ssoLogin)
@ -123,7 +120,6 @@ const RegisterPage = () => {
|
|||
|
||||
const register = async (event) => {
|
||||
event.preventDefault()
|
||||
setAuthRateLimitError(null)
|
||||
if (isEnterpriseLicensed) {
|
||||
const result = RegisterEnterpriseUserSchema.safeParse({
|
||||
username,
|
||||
|
|
@ -196,7 +192,6 @@ const RegisterPage = () => {
|
|||
}, [registerApi.error])
|
||||
|
||||
useEffect(() => {
|
||||
setAuthRateLimitError(null)
|
||||
if (!isOpenSource) {
|
||||
getDefaultProvidersApi.request()
|
||||
}
|
||||
|
|
@ -279,11 +274,6 @@ const RegisterPage = () => {
|
|||
)}
|
||||
</Alert>
|
||||
)}
|
||||
{authRateLimitError && (
|
||||
<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
|
||||
{authRateLimitError}
|
||||
</Alert>
|
||||
)}
|
||||
{successMsg && (
|
||||
<Alert icon={<IconCircleCheck />} variant='filled' severity='success'>
|
||||
{successMsg}
|
||||
|
|
|
|||
|
|
@@ -1,4 +1,4 @@
import { useEffect, useState } from 'react'
import { useState } from 'react'
import { useDispatch } from 'react-redux'
import { Link, useNavigate, useSearchParams } from 'react-router-dom'

@@ -19,9 +19,6 @@ import accountApi from '@/api/account.api'
import useNotifier from '@/utils/useNotifier'
import { validatePassword } from '@/utils/validation'

// Hooks
import { useError } from '@/store/context/ErrorContext'

// Icons
import { IconExclamationCircle, IconX } from '@tabler/icons-react'

@@ -73,8 +70,6 @@ const ResetPasswordPage = () => {
const [loading, setLoading] = useState(false)
const [authErrors, setAuthErrors] = useState([])

const { authRateLimitError, setAuthRateLimitError } = useError()

const goLogin = () => {
navigate('/signin', { replace: true })
}

@@ -83,7 +78,6 @@ const ResetPasswordPage = () => {
event.preventDefault()
const validationErrors = []
setAuthErrors([])
setAuthRateLimitError(null)
if (!tokenVal) {
validationErrors.push('Token cannot be left blank!')
}

@@ -148,11 +142,6 @@ const ResetPasswordPage = () => {
}
}

useEffect(() => {
setAuthRateLimitError(null)
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [])

return (
<>
<MainCard>

@@ -166,11 +155,6 @@ const ResetPasswordPage = () => {
</ul>
</Alert>
)}
{authRateLimitError && (
<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
{authRateLimitError}
</Alert>
)}
<Stack sx={{ gap: 1 }}>
<Typography variant='h1'>Reset Password</Typography>
<Typography variant='body2' sx={{ color: theme.palette.grey[600] }}>

@@ -14,7 +14,6 @@ import { Input } from '@/ui-component/input/Input'
// Hooks
import useApi from '@/hooks/useApi'
import { useConfig } from '@/store/context/ConfigContext'
import { useError } from '@/store/context/ErrorContext'

// API
import authApi from '@/api/auth'

@@ -63,8 +62,6 @@ const SignInPage = () => {
const [showResendButton, setShowResendButton] = useState(false)
const [successMessage, setSuccessMessage] = useState('')

const { authRateLimitError, setAuthRateLimitError } = useError()

const loginApi = useApi(authApi.login)
const ssoLoginApi = useApi(ssoApi.ssoLogin)
const getDefaultProvidersApi = useApi(loginMethodApi.getDefaultLoginMethods)

@@ -74,7 +71,6 @@ const SignInPage = () => {

const doLogin = (event) => {
event.preventDefault()
setAuthRateLimitError(null)
setLoading(true)
const body = {
email: usernameVal,

@@ -96,12 +92,11 @@ const SignInPage = () => {

useEffect(() => {
store.dispatch(logoutSuccess())
setAuthRateLimitError(null)
if (!isOpenSource) {
getDefaultProvidersApi.request()
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [setAuthRateLimitError, isOpenSource])
}, [])

useEffect(() => {
// Parse the "user" query parameter from the URL

@@ -184,11 +179,6 @@ const SignInPage = () => {
{successMessage}
</Alert>
)}
{authRateLimitError && (
<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
{authRateLimitError}
</Alert>
)}
{authError && (
<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
{authError}

@@ -208,8 +208,6 @@ const Chatflows = () => {
filterFunction={filterFlows}
updateFlowsApi={getAllChatflowsApi}
setError={setError}
currentPage={currentPage}
pageLimit={pageLimit}
/>
)}
{/* Pagination and Page Size Controls */}

@@ -18,15 +18,11 @@ import {
TableContainer,
TableRow,
TableCell,
DialogActions,
Card,
Stack,
Link
Checkbox,
FormControlLabel,
DialogActions
} from '@mui/material'
import { useTheme } from '@mui/material/styles'
import ExpandMoreIcon from '@mui/icons-material/ExpandMore'
import SettingsIcon from '@mui/icons-material/Settings'
import { IconAlertTriangle } from '@tabler/icons-react'
import { TableViewOnly } from '@/ui-component/table/Table'
import { v4 as uuidv4 } from 'uuid'

@@ -40,13 +36,12 @@ import { initNode } from '@/utils/genericHelper'

const DeleteDocStoreDialog = ({ show, dialogProps, onCancel, onDelete }) => {
const portalElement = document.getElementById('portal')
const theme = useTheme()
const [nodeConfigExpanded, setNodeConfigExpanded] = useState({})
const [removeFromVS, setRemoveFromVS] = useState(false)
const [vsFlowData, setVSFlowData] = useState([])
const [rmFlowData, setRMFlowData] = useState([])

const getVectorStoreNodeApi = useApi(nodesApi.getSpecificNode)
const getRecordManagerNodeApi = useApi(nodesApi.getSpecificNode)
const getSpecificNodeApi = useApi(nodesApi.getSpecificNode)

const handleAccordionChange = (nodeName) => (event, isExpanded) => {
const accordianNodes = { ...nodeConfigExpanded }

@@ -57,37 +52,42 @@ const DeleteDocStoreDialog = ({ show, dialogProps, onCancel, onDelete }) => {
useEffect(() => {
if (dialogProps.recordManagerConfig) {
const nodeName = dialogProps.recordManagerConfig.name
if (nodeName) getRecordManagerNodeApi.request(nodeName)
}
if (nodeName) getSpecificNodeApi.request(nodeName)

if (dialogProps.vectorStoreConfig) {
const nodeName = dialogProps.vectorStoreConfig.name
if (nodeName) getVectorStoreNodeApi.request(nodeName)
if (dialogProps.vectorStoreConfig) {
const nodeName = dialogProps.vectorStoreConfig.name
if (nodeName) getSpecificNodeApi.request(nodeName)
}
}

return () => {
setNodeConfigExpanded({})
setRemoveFromVS(false)
setVSFlowData([])
setRMFlowData([])
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [dialogProps])

// Process Vector Store node data
useEffect(() => {
if (getVectorStoreNodeApi.data && dialogProps.vectorStoreConfig) {
const nodeData = cloneDeep(initNode(getVectorStoreNodeApi.data, uuidv4()))
if (getSpecificNodeApi.data) {
const nodeData = cloneDeep(initNode(getSpecificNodeApi.data, uuidv4()))

let config = 'vectorStoreConfig'
if (nodeData.category === 'Record Manager') config = 'recordManagerConfig'

const paramValues = []

for (const inputName in dialogProps.vectorStoreConfig.config) {
for (const inputName in dialogProps[config].config) {
const inputParam = nodeData.inputParams.find((inp) => inp.name === inputName)

if (!inputParam) continue

if (inputParam.type === 'credential') continue

const inputValue = dialogProps.vectorStoreConfig.config[inputName]
let paramValue = {}

const inputValue = dialogProps[config].config[inputName]

if (!inputValue) continue

@@ -95,71 +95,40 @@ const DeleteDocStoreDialog = ({ show, dialogProps, onCancel, onDelete }) => {
continue
}

paramValues.push({
paramValue = {
label: inputParam?.label,
name: inputParam?.name,
type: inputParam?.type,
value: inputValue
})
}
paramValues.push(paramValue)
}

setVSFlowData([
{
label: nodeData.label,
name: nodeData.name,
category: nodeData.category,
id: nodeData.id,
paramValues
}
])
if (config === 'vectorStoreConfig') {
setVSFlowData([
{
label: nodeData.label,
name: nodeData.name,
category: nodeData.category,
id: nodeData.id,
paramValues
}
])
} else if (config === 'recordManagerConfig') {
setRMFlowData([
{
label: nodeData.label,
name: nodeData.name,
category: nodeData.category,
id: nodeData.id,
paramValues
}
])
}
}

// eslint-disable-next-line react-hooks/exhaustive-deps
}, [getVectorStoreNodeApi.data])

// Process Record Manager node data
useEffect(() => {
if (getRecordManagerNodeApi.data && dialogProps.recordManagerConfig) {
const nodeData = cloneDeep(initNode(getRecordManagerNodeApi.data, uuidv4()))

const paramValues = []

for (const inputName in dialogProps.recordManagerConfig.config) {
const inputParam = nodeData.inputParams.find((inp) => inp.name === inputName)

if (!inputParam) continue

if (inputParam.type === 'credential') continue

const inputValue = dialogProps.recordManagerConfig.config[inputName]

if (!inputValue) continue

if (typeof inputValue === 'string' && inputValue.startsWith('{{') && inputValue.endsWith('}}')) {
continue
}

paramValues.push({
label: inputParam?.label,
name: inputParam?.name,
type: inputParam?.type,
value: inputValue
})
}

setRMFlowData([
{
label: nodeData.label,
name: nodeData.name,
category: nodeData.category,
id: nodeData.id,
paramValues
}
])
}

// eslint-disable-next-line react-hooks/exhaustive-deps
}, [getRecordManagerNodeApi.data])
}, [getSpecificNodeApi.data])

const component = show ? (
<Dialog

@@ -173,130 +142,91 @@ const DeleteDocStoreDialog = ({ show, dialogProps, onCancel, onDelete }) => {
<DialogTitle sx={{ fontSize: '1rem', p: 3, pb: 0 }} id='alert-dialog-title'>
{dialogProps.title}
</DialogTitle>
<DialogContent
sx={{
display: 'flex',
flexDirection: 'column',
gap: 2,
maxHeight: '75vh',
position: 'relative',
px: 3,
pb: 3,
overflow: 'auto'
}}
>
<DialogContent sx={{ display: 'flex', flexDirection: 'column', gap: 2, maxHeight: '75vh', position: 'relative', px: 3, pb: 3 }}>
<span style={{ marginTop: '20px' }}>{dialogProps.description}</span>
{dialogProps.vectorStoreConfig && !dialogProps.recordManagerConfig && (
<div
style={{
display: 'flex',
flexDirection: 'row',
alignItems: 'center',
borderRadius: 10,
background: 'rgb(254,252,191)',
padding: 10
}}
>
<IconAlertTriangle size={70} color='orange' />
<span style={{ color: 'rgb(116,66,16)', marginLeft: 10 }}>
<strong>Note:</strong> Without a Record Manager configured, only the document chunks will be removed from the
document store. The actual vector embeddings in your vector store database will remain unchanged. To enable
automatic cleanup of vector store data, please configure a Record Manager.{' '}
<Link
href='https://docs.flowiseai.com/integrations/langchain/record-managers'
target='_blank'
rel='noopener noreferrer'
sx={{ fontWeight: 500, color: 'rgb(116,66,16)', textDecoration: 'underline' }}
>
Learn more
</Link>
</span>
</div>
{dialogProps.type === 'STORE' && dialogProps.recordManagerConfig && (
<FormControlLabel
control={<Checkbox checked={removeFromVS} onChange={(event) => setRemoveFromVS(event.target.checked)} />}
label='Remove data from vector store and record manager'
/>
)}
{vsFlowData && vsFlowData.length > 0 && rmFlowData && rmFlowData.length > 0 && (
<Card sx={{ borderColor: theme.palette.primary[200] + 75, p: 2 }} variant='outlined'>
<Stack sx={{ mt: 1, mb: 2, ml: 1, alignItems: 'center' }} direction='row' spacing={2}>
<SettingsIcon />
<Typography variant='h4'>Configuration</Typography>
</Stack>
<Stack direction='column'>
<TableContainer component={Paper} sx={{ maxHeight: '400px', overflow: 'auto' }}>
<Table sx={{ minWidth: 650 }} aria-label='simple table'>
<TableBody>
<TableRow sx={{ '& td': { border: 0 } }}>
<TableCell sx={{ pb: 0, pt: 0 }} colSpan={6}>
<Box>
{([...vsFlowData, ...rmFlowData] || []).map((node, index) => {
return (
<Accordion
expanded={nodeConfigExpanded[node.name] || false}
onChange={handleAccordionChange(node.name)}
key={index}
disableGutters
{removeFromVS && (
<div>
<TableContainer component={Paper}>
<Table sx={{ minWidth: 650 }} aria-label='simple table'>
<TableBody>
<TableRow sx={{ '& td': { border: 0 } }}>
<TableCell sx={{ pb: 0, pt: 0 }} colSpan={6}>
<Box>
{([...vsFlowData, ...rmFlowData] || []).map((node, index) => {
return (
<Accordion
expanded={nodeConfigExpanded[node.name] || true}
onChange={handleAccordionChange(node.name)}
key={index}
disableGutters
>
<AccordionSummary
expandIcon={<ExpandMoreIcon />}
aria-controls={`nodes-accordian-${node.name}`}
id={`nodes-accordian-header-${node.name}`}
>
<AccordionSummary
expandIcon={<ExpandMoreIcon />}
aria-controls={`nodes-accordian-${node.name}`}
id={`nodes-accordian-header-${node.name}`}
<div
style={{ display: 'flex', flexDirection: 'row', alignItems: 'center' }}
>
<div
style={{
display: 'flex',
flexDirection: 'row',
alignItems: 'center'
width: 40,
height: 40,
marginRight: 10,
borderRadius: '50%',
backgroundColor: 'white'
}}
>
<div
<img
style={{
width: 40,
height: 40,
marginRight: 10,
width: '100%',
height: '100%',
padding: 7,
borderRadius: '50%',
backgroundColor: 'white'
objectFit: 'contain'
}}
>
<img
style={{
width: '100%',
height: '100%',
padding: 7,
borderRadius: '50%',
objectFit: 'contain'
}}
alt={node.name}
src={`${baseURL}/api/v1/node-icon/${node.name}`}
/>
</div>
<Typography variant='h5'>{node.label}</Typography>
</div>
</AccordionSummary>
<AccordionDetails sx={{ p: 0 }}>
{node.paramValues[0] && (
<TableViewOnly
sx={{ minWidth: 150 }}
rows={node.paramValues}
columns={Object.keys(node.paramValues[0])}
alt={node.name}
src={`${baseURL}/api/v1/node-icon/${node.name}`}
/>
)}
</AccordionDetails>
</Accordion>
)
})}
</Box>
</TableCell>
</TableRow>
</TableBody>
</Table>
</TableContainer>
</Stack>
</Card>
</div>
<Typography variant='h5'>{node.label}</Typography>
</div>
</AccordionSummary>
<AccordionDetails>
{node.paramValues[0] && (
<TableViewOnly
sx={{ minWidth: 150 }}
rows={node.paramValues}
columns={Object.keys(node.paramValues[0])}
/>
)}
</AccordionDetails>
</Accordion>
)
})}
</Box>
</TableCell>
</TableRow>
</TableBody>
</Table>
</TableContainer>
<span style={{ marginTop: '30px', fontStyle: 'italic', color: '#b35702' }}>
* Only data that were upserted with Record Manager will be deleted from vector store
</span>
</div>
)}
</DialogContent>
<DialogActions sx={{ pr: 3, pb: 3 }}>
<Button onClick={onCancel} color='primary'>
Cancel
</Button>
<Button variant='contained' onClick={() => onDelete(dialogProps.type, dialogProps.file)} color='error'>
<Button variant='contained' onClick={() => onDelete(dialogProps.type, dialogProps.file, removeFromVS)} color='error'>
Delete
</Button>
</DialogActions>

@@ -186,19 +186,19 @@ const DocumentStoreDetails = () => {
setShowDocumentLoaderListDialog(true)
}

const deleteVectorStoreDataFromStore = async (storeId, docId) => {
const deleteVectorStoreDataFromStore = async (storeId) => {
try {
await documentsApi.deleteVectorStoreDataFromStore(storeId, docId)
await documentsApi.deleteVectorStoreDataFromStore(storeId)
} catch (error) {
console.error(error)
}
}

const onDocStoreDelete = async (type, file) => {
const onDocStoreDelete = async (type, file, removeFromVectorStore) => {
setBackdropLoading(true)
setShowDeleteDocStoreDialog(false)
if (type === 'STORE') {
if (documentStore.recordManagerConfig) {
if (removeFromVectorStore) {
await deleteVectorStoreDataFromStore(storeId)
}
try {

@@ -239,9 +239,6 @@ const DocumentStoreDetails = () => {
})
}
} else if (type === 'LOADER') {
if (documentStore.recordManagerConfig) {
await deleteVectorStoreDataFromStore(storeId, file.id)
}
try {
const deleteResp = await documentsApi.deleteLoaderFromStore(storeId, file.id)
setBackdropLoading(false)

@@ -283,40 +280,9 @@ const DocumentStoreDetails = () => {
}

const onLoaderDelete = (file, vectorStoreConfig, recordManagerConfig) => {
// Get the display name in the format "LoaderName (sourceName)"
const loaderName = file.loaderName || 'Unknown'
let sourceName = ''

// Prefer files.name when files array exists and has items
if (file.files && Array.isArray(file.files) && file.files.length > 0) {
sourceName = file.files.map((f) => f.name).join(', ')
} else if (file.source) {
// Fallback to source logic
if (typeof file.source === 'string' && file.source.includes('base64')) {
sourceName = getFileName(file.source)
} else if (typeof file.source === 'string' && file.source.startsWith('[') && file.source.endsWith(']')) {
sourceName = JSON.parse(file.source).join(', ')
} else if (typeof file.source === 'string') {
sourceName = file.source
}
}

const displayName = sourceName ? `${loaderName} (${sourceName})` : loaderName

let description = `Delete "${displayName}"? This will delete all the associated document chunks from the document store.`

if (
recordManagerConfig &&
vectorStoreConfig &&
Object.keys(recordManagerConfig).length > 0 &&
Object.keys(vectorStoreConfig).length > 0
) {
description = `Delete "${displayName}"? This will delete all the associated document chunks from the document store and remove the actual data from the vector store database.`
}

const props = {
title: `Delete`,
description,
description: `Delete Loader ${file.loaderName} ? This will delete all the associated document chunks.`,
vectorStoreConfig,
recordManagerConfig,
type: 'LOADER',

@@ -328,20 +294,9 @@ const DocumentStoreDetails = () => {
}

const onStoreDelete = (vectorStoreConfig, recordManagerConfig) => {
let description = `Delete Store ${getSpecificDocumentStore.data?.name}? This will delete all the associated loaders and document chunks from the document store.`

if (
recordManagerConfig &&
vectorStoreConfig &&
Object.keys(recordManagerConfig).length > 0 &&
Object.keys(vectorStoreConfig).length > 0
) {
description = `Delete Store ${getSpecificDocumentStore.data?.name}? This will delete all the associated loaders and document chunks from the document store, and remove the actual data from the vector store database.`
}

const props = {
title: `Delete`,
description,
description: `Delete Store ${getSpecificDocumentStore.data?.name} ? This will delete all the associated loaders and document chunks.`,
vectorStoreConfig,
recordManagerConfig,
type: 'STORE'

@@ -526,10 +481,7 @@ const DocumentStoreDetails = () => {
>
<MenuItem
disabled={documentStore?.totalChunks <= 0 || documentStore?.status === 'UPSERTING'}
onClick={() => {
handleClose()
showStoredChunks('all')
}}
onClick={() => showStoredChunks('all')}
disableRipple
>
<FileChunksIcon />

@@ -538,10 +490,7 @@ const DocumentStoreDetails = () => {
<Available permission={'documentStores:upsert-config'}>
<MenuItem
disabled={documentStore?.totalChunks <= 0 || documentStore?.status === 'UPSERTING'}
onClick={() => {
handleClose()
showVectorStore(documentStore.id)
}}
onClick={() => showVectorStore(documentStore.id)}
disableRipple
>
<NoteAddIcon />

@@ -550,10 +499,7 @@ const DocumentStoreDetails = () => {
</Available>
<MenuItem
disabled={documentStore?.totalChunks <= 0 || documentStore?.status !== 'UPSERTED'}
onClick={() => {
handleClose()
showVectorStoreQuery(documentStore.id)
}}
onClick={() => showVectorStoreQuery(documentStore.id)}
disableRipple
>
<SearchIcon />

@@ -572,10 +518,7 @@ const DocumentStoreDetails = () => {
</Available>
<Divider sx={{ my: 0.5 }} />
<MenuItem
onClick={() => {
handleClose()
onStoreDelete(documentStore.vectorStoreConfig, documentStore.recordManagerConfig)
}}
onClick={() => onStoreDelete(documentStore.vectorStoreConfig, documentStore.recordManagerConfig)}
disableRipple
>
<FileDeleteIcon />

@@ -813,26 +756,20 @@ function LoaderRow(props) {
setAnchorEl(null)
}

const formatSources = (files, source, loaderName) => {
let sourceName = ''

const formatSources = (files, source) => {
// Prefer files.name when files array exists and has items
if (files && Array.isArray(files) && files.length > 0) {
sourceName = files.map((file) => file.name).join(', ')
} else if (source && typeof source === 'string' && source.includes('base64')) {
// Fallback to original source logic
sourceName = getFileName(source)
} else if (source && typeof source === 'string' && source.startsWith('[') && source.endsWith(']')) {
sourceName = JSON.parse(source).join(', ')
} else if (source) {
sourceName = source
return files.map((file) => file.name).join(', ')
}

// Return format: "LoaderName (sourceName)" or just "LoaderName" if no source
if (!sourceName) {
return loaderName || 'No source'
// Fallback to original source logic
if (source && typeof source === 'string' && source.includes('base64')) {
return getFileName(source)
}
return loaderName ? `${loaderName} (${sourceName})` : sourceName
if (source && typeof source === 'string' && source.startsWith('[') && source.endsWith(']')) {
return JSON.parse(source).join(', ')
}
return source || 'No source'
}

return (

@@ -886,62 +823,32 @@ function LoaderRow(props) {
onClose={handleClose}
>
<Available permission={'documentStores:preview-process'}>
<MenuItem
onClick={() => {
handleClose()
props.onEditClick()
}}
disableRipple
>
<MenuItem onClick={props.onEditClick} disableRipple>
<FileEditIcon />
Preview & Process
</MenuItem>
</Available>
<Available permission={'documentStores:preview-process'}>
<MenuItem
onClick={() => {
handleClose()
props.onViewChunksClick()
}}
disableRipple
>
<MenuItem onClick={props.onViewChunksClick} disableRipple>
<FileChunksIcon />
View & Edit Chunks
</MenuItem>
</Available>
<Available permission={'documentStores:preview-process'}>
<MenuItem
onClick={() => {
handleClose()
props.onChunkUpsert()
}}
disableRipple
>
<MenuItem onClick={props.onChunkUpsert} disableRipple>
<NoteAddIcon />
Upsert Chunks
</MenuItem>
</Available>
<Available permission={'documentStores:preview-process'}>
<MenuItem
onClick={() => {
handleClose()
props.onViewUpsertAPI()
}}
disableRipple
>
<MenuItem onClick={props.onViewUpsertAPI} disableRipple>
<CodeIcon />
View API
</MenuItem>
</Available>
<Divider sx={{ my: 0.5 }} />
<Available permission={'documentStores:delete-loader'}>
<MenuItem
onClick={() => {
handleClose()
props.onDeleteClick()
}}
disableRipple
>
<MenuItem onClick={props.onDeleteClick} disableRipple>
<FileDeleteIcon />
Delete
</MenuItem>

@@ -26,7 +26,6 @@ import useApi from '@/hooks/useApi'
import useConfirm from '@/hooks/useConfirm'
import useNotifier from '@/utils/useNotifier'
import { useAuth } from '@/hooks/useAuth'
import { getFileName } from '@/utils/genericHelper'

// store
import { closeSnackbar as closeSnackbarAction, enqueueSnackbar as enqueueSnackbarAction } from '@/store/actions'

@@ -77,7 +76,6 @@ const ShowStoredChunks = () => {
const [showExpandedChunkDialog, setShowExpandedChunkDialog] = useState(false)
const [expandedChunkDialogProps, setExpandedChunkDialogProps] = useState({})
const [fileNames, setFileNames] = useState([])
const [loaderDisplayName, setLoaderDisplayName] = useState('')

const chunkSelected = (chunkId) => {
const selectedChunk = documentChunks.find((chunk) => chunk.id === chunkId)

@@ -214,32 +212,13 @@ const ShowStoredChunks = () => {
setCurrentPage(data.currentPage)
setStart(data.currentPage * 50 - 49)
setEnd(data.currentPage * 50 > data.count ? data.count : data.currentPage * 50)

// Build the loader display name in format "LoaderName (sourceName)"
const loaderName = data.file?.loaderName || data.storeName || ''
let sourceName = ''

if (data.file?.files && data.file.files.length > 0) {
const fileNames = []
for (const attachedFile of data.file.files) {
fileNames.push(attachedFile.name)
}
setFileNames(fileNames)
sourceName = fileNames.join(', ')
} else if (data.file?.source) {
const source = data.file.source
if (typeof source === 'string' && source.includes('base64')) {
sourceName = getFileName(source)
} else if (typeof source === 'string' && source.startsWith('[') && source.endsWith(']')) {
sourceName = JSON.parse(source).join(', ')
} else if (typeof source === 'string') {
sourceName = source
}
}

// Set display name in format "LoaderName (sourceName)" or just "LoaderName"
const displayName = sourceName ? `${loaderName} (${sourceName})` : loaderName
setLoaderDisplayName(displayName)
}

// eslint-disable-next-line react-hooks/exhaustive-deps

@@ -255,7 +234,7 @@ const ShowStoredChunks = () => {
<ViewHeader
isBackButton={true}
search={false}
title={loaderDisplayName}
title={getChunksApi.data?.file?.loaderName || getChunksApi.data?.storeName}
description={getChunksApi.data?.file?.splitterName || getChunksApi.data?.description}
onBack={() => navigate(-1)}
></ViewHeader>

@ -40,7 +40,7 @@ import Storage from '@mui/icons-material/Storage'
|
|||
import DynamicFeed from '@mui/icons-material/Filter1'
|
||||
|
||||
// utils
|
||||
import { initNode, showHideInputParams, getFileName } from '@/utils/genericHelper'
|
||||
import { initNode, showHideInputParams } from '@/utils/genericHelper'
|
||||
import useNotifier from '@/utils/useNotifier'
|
||||
|
||||
// const
|
||||
|
|
@ -69,7 +69,6 @@ const VectorStoreConfigure = () => {
|
|||
const [loading, setLoading] = useState(true)
|
||||
const [documentStore, setDocumentStore] = useState({})
|
||||
const [dialogProps, setDialogProps] = useState({})
|
||||
const [currentLoader, setCurrentLoader] = useState(null)
|
||||
|
||||
const [showEmbeddingsListDialog, setShowEmbeddingsListDialog] = useState(false)
|
||||
const [selectedEmbeddingsProvider, setSelectedEmbeddingsProvider] = useState({})
|
||||
|
|
@ -246,8 +245,7 @@ const VectorStoreConfigure = () => {
|
|||
const prepareConfigData = () => {
|
||||
const data = {
|
||||
storeId: storeId,
|
||||
docId: docId,
|
||||
isStrictSave: true
|
||||
docId: docId
|
||||
}
|
||||
// Set embedding config
|
||||
if (selectedEmbeddingsProvider.inputs) {
|
||||
|
|
@@ -355,39 +353,6 @@ const VectorStoreConfigure = () => {
         return Object.keys(selectedEmbeddingsProvider).length === 0
     }
 
-    const getLoaderDisplayName = (loader) => {
-        if (!loader) return ''
-
-        const loaderName = loader.loaderName || 'Unknown'
-        let sourceName = ''
-
-        // Prefer files.name when files array exists and has items
-        if (loader.files && Array.isArray(loader.files) && loader.files.length > 0) {
-            sourceName = loader.files.map((file) => file.name).join(', ')
-        } else if (loader.source) {
-            // Fallback to source logic
-            if (typeof loader.source === 'string' && loader.source.includes('base64')) {
-                sourceName = getFileName(loader.source)
-            } else if (typeof loader.source === 'string' && loader.source.startsWith('[') && loader.source.endsWith(']')) {
-                sourceName = JSON.parse(loader.source).join(', ')
-            } else if (typeof loader.source === 'string') {
-                sourceName = loader.source
-            }
-        }
-
-        // Return format: "LoaderName (sourceName)" or just "LoaderName" if no source
-        return sourceName ? `${loaderName} (${sourceName})` : loaderName
-    }
-
-    const getViewHeaderTitle = () => {
-        const storeName = getSpecificDocumentStoreApi.data?.name || ''
-        if (docId && currentLoader) {
-            const loaderName = getLoaderDisplayName(currentLoader)
-            return `${storeName} / ${loaderName}`
-        }
-        return storeName
-    }
-
     useEffect(() => {
         if (saveVectorStoreConfigApi.data) {
             setLoading(false)
@@ -446,15 +411,6 @@ const VectorStoreConfigure = () => {
             return
         }
         setDocumentStore(docStore)
-
-        // Find the current loader if docId is provided
-        if (docId && docStore.loaders) {
-            const loader = docStore.loaders.find((l) => l.id === docId)
-            if (loader) {
-                setCurrentLoader(loader)
-            }
-        }
-
         if (docStore.embeddingConfig) {
             getEmbeddingNodeDetailsApi.request(docStore.embeddingConfig.name)
         }
@@ -517,7 +473,7 @@ const VectorStoreConfigure = () => {
                     <ViewHeader
                         isBackButton={true}
                         search={false}
-                        title={getViewHeaderTitle()}
+                        title={getSpecificDocumentStoreApi.data?.name}
                         description='Configure Embeddings, Vector Store and Record Manager'
                         onBack={() => navigate(-1)}
                     >
@@ -21,8 +21,7 @@ import {
     useTheme,
     Typography,
     Button,
-    Drawer,
-    TableSortLabel
+    Drawer
 } from '@mui/material'
 
 // project imports
@@ -186,8 +185,6 @@ function ShowRoleRow(props) {
     const [openViewPermissionsDrawer, setOpenViewPermissionsDrawer] = useState(false)
     const [selectedRoleId, setSelectedRoleId] = useState('')
    const [assignedUsers, setAssignedUsers] = useState([])
-    const [order, setOrder] = useState('asc')
-    const [orderBy, setOrderBy] = useState('workspace')
 
     const theme = useTheme()
     const customization = useSelector((state) => state.customization)
@@ -199,38 +196,6 @@ function ShowRoleRow(props) {
         setSelectedRoleId(roleId)
     }
 
-    const handleRequestSort = (property) => {
-        const isAsc = orderBy === property && order === 'asc'
-        setOrder(isAsc ? 'desc' : 'asc')
-        setOrderBy(property)
-    }
-
-    const sortedAssignedUsers = [...assignedUsers].sort((a, b) => {
-        let comparison = 0
-
-        if (orderBy === 'workspace') {
-            const workspaceA = (a.workspace?.name || '').toLowerCase()
-            const workspaceB = (b.workspace?.name || '').toLowerCase()
-            comparison = workspaceA.localeCompare(workspaceB)
-            if (comparison === 0) {
-                const userA = (a.user?.name || a.user?.email || '').toLowerCase()
-                const userB = (b.user?.name || b.user?.email || '').toLowerCase()
-                comparison = userA.localeCompare(userB)
-            }
-        } else if (orderBy === 'user') {
-            const userA = (a.user?.name || a.user?.email || '').toLowerCase()
-            const userB = (b.user?.name || b.user?.email || '').toLowerCase()
-            comparison = userA.localeCompare(userB)
-            if (comparison === 0) {
-                const workspaceA = (a.workspace?.name || '').toLowerCase()
-                const workspaceB = (b.workspace?.name || '').toLowerCase()
-                comparison = workspaceA.localeCompare(workspaceB)
-            }
-        }
-
-        return order === 'asc' ? comparison : -comparison
-    })
-
     useEffect(() => {
         if (getAllUsersByRoleIdApi.data) {
             setAssignedUsers(getAllUsersByRoleIdApi.data)
@@ -238,14 +203,12 @@ function ShowRoleRow(props) {
     }, [getAllUsersByRoleIdApi.data])
 
     useEffect(() => {
-        if (openAssignedUsersDrawer && selectedRoleId) {
+        if (open && selectedRoleId) {
             getAllUsersByRoleIdApi.request(selectedRoleId)
         } else {
             setOpenAssignedUsersDrawer(false)
             setSelectedRoleId('')
             setAssignedUsers([])
-            setOrder('asc')
-            setOrderBy('workspace')
         }
         // eslint-disable-next-line react-hooks/exhaustive-deps
     }, [openAssignedUsersDrawer])
@@ -338,28 +301,12 @@ function ShowRoleRow(props) {
                             }}
                         >
                             <TableRow>
-                                <StyledTableCell sx={{ width: '50%' }}>
-                                    <TableSortLabel
-                                        active={orderBy === 'user'}
-                                        direction={orderBy === 'user' ? order : 'asc'}
-                                        onClick={() => handleRequestSort('user')}
-                                    >
-                                        User
-                                    </TableSortLabel>
-                                </StyledTableCell>
-                                <StyledTableCell sx={{ width: '50%' }}>
-                                    <TableSortLabel
-                                        active={orderBy === 'workspace'}
-                                        direction={orderBy === 'workspace' ? order : 'asc'}
-                                        onClick={() => handleRequestSort('workspace')}
-                                    >
-                                        Workspace
-                                    </TableSortLabel>
-                                </StyledTableCell>
+                                <StyledTableCell sx={{ width: '50%' }}>User</StyledTableCell>
+                                <StyledTableCell sx={{ width: '50%' }}>Workspace</StyledTableCell>
                             </TableRow>
                         </TableHead>
                         <TableBody>
-                            {sortedAssignedUsers.map((item, index) => (
+                            {assignedUsers.map((item, index) => (
                                 <TableRow key={index}>
                                     <StyledTableCell>{item.user.name || item.user.email}</StyledTableCell>
                                     <StyledTableCell>{item.workspace.name}</StyledTableCell>
@@ -245,11 +245,7 @@ const Tools = () => {
                         ))}
                     </Box>
                 ) : (
-                    <ToolsTable
-                        data={getAllToolsApi.data?.data?.filter(filterTools) || []}
-                        isLoading={isLoading}
-                        onSelect={edit}
-                    />
+                    <ToolsTable data={getAllToolsApi.data.data} isLoading={isLoading} onSelect={edit} />
                 )}
                 {/* Pagination and Page Size Controls */}
                 <TablePagination currentPage={currentPage} limit={pageLimit} total={total} onChange={onChange} />
pnpm-lock.yaml (143 changes)
File diff suppressed because one or more lines are too long