Feature/agentflow v2 (#4298)

* agent flow v2

* chat message background

* condition agent flow

* add sticky note

* update human input dynamic prompt

* add HTTP node

* add default tool icon

* fix export duplicate agentflow v2

* add agentflow v2 marketplaces

* refactor memoization, add iteration nodes

* add agentflow v2 templates

* add agentflow generator

* add migration scripts for mysql, mariadb, postgres and fix date filters for executions

* update agentflow chat history config

* fix get all flows error after deletion and rename

* add previous nodes from parent node

* update generator prompt

* update run time state when using iteration nodes

* prevent looping connection, prevent duplication of start node, add executeflow node, add nodes agentflow, chat history variable

* update embed

* convert form input to string

* bump openai version

* add react rewards

* add prompt generator to prediction queue

* add array schema to overrideConfig

* UI touchup

* update embedded chat version

* fix node info dialog

* update start node and loop default iteration

* update UI fixes for agentflow v2

* fix async drop down

* add export import to agentflowsv2, executions, fix UI bugs

* add default empty object to FlowListTable

* add ability to share trace link publicly, allow MCP tool use for Agent and Assistant

* add runtime message length to variable, display conditions on UI

* fix array validation

* add ability to add knowledge from vector store and embeddings for agent

* add agent tool require human input

* add ephemeral memory to start node

* update agent flow node to show vs and embeddings icons

* feat: add import chat data functionality for AgentFlowV2

* feat: set chatMessage.executionId to null if not found in import JSON file or database

* fix: MariaDB execution migration script to utf8mb4_unicode_520_ci

---------

Co-authored-by: Ong Chung Yau <33013947+chungyau97@users.noreply.github.com>
Co-authored-by: chungyau97 <chungyau97@gmail.com>
Henry Heng 2025-05-10 10:21:26 +08:00 committed by GitHub
parent 82e6f43b5c
commit 7924fbce0d
216 changed files with 33304 additions and 5269 deletions

View File

@@ -1,8 +1,11 @@
 <!-- markdownlint-disable MD030 -->
-<img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.png?raw=true"></a>
-# Flowise - Build LLM Apps Easily
+<p align="center">
+    <a href="https://www.flowiseai.com">
+        <img src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_dark.svg#gh-dark-mode-only" alt="Flowise Logo" width="250">
+        <img src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_white.svg#gh-light-mode-only" alt="Flowise Logo" width="250">
+    </a>
+</p>
 [![Release Notes](https://img.shields.io/github/release/FlowiseAI/Flowise)](https://github.com/FlowiseAI/Flowise/releases)
 [![Discord](https://img.shields.io/discord/1087698854775881778?label=Discord&logo=discord)](https://discord.gg/jbaHfsRVBW)
@@ -12,9 +15,9 @@
 English | [繁體中文](./i18n/README-TW.md) | [简体中文](./i18n/README-ZH.md) | [日本語](./i18n/README-JA.md) | [한국어](./i18n/README-KR.md)
-<h3>Drag & drop UI to build your customized LLM flow</h3>
+<h3>Build AI Agents, Visually</h3>
 <a href="https://github.com/FlowiseAI/Flowise">
-    <img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.gif?raw=true"></a>
+    <img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_agentflow.gif?raw=true"></a>
 ## ⚡Quick Start

View File

@@ -1,8 +1,11 @@
 <!-- markdownlint-disable MD030 -->
-<img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.png?raw=true"></a>
-# Flowise - LLM アプリを簡単に構築
+<p align="center">
+    <a href="https://www.flowiseai.com">
+        <img src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_dark.svg#gh-dark-mode-only" alt="Flowise Logo" width="250">
+        <img src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_white.svg#gh-light-mode-only" alt="Flowise Logo" width="250">
+    </a>
+</p>
 [![Release Notes](https://img.shields.io/github/release/FlowiseAI/Flowise)](https://github.com/FlowiseAI/Flowise/releases)
 [![Discord](https://img.shields.io/discord/1087698854775881778?label=Discord&logo=discord)](https://discord.gg/jbaHfsRVBW)
@@ -12,9 +15,9 @@
 [English](../README.md) | [繁體中文](./README-TW.md) | [简体中文](./README-ZH.md) | 日本語 | [한국어](./README-KR.md)
-<h3>ドラッグ&ドロップでカスタマイズした LLM フローを構築できる UI</h3>
+<h3>AIエージェントをビジュアルに構築</h3>
 <a href="https://github.com/FlowiseAI/Flowise">
-    <img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.gif?raw=true"></a>
+    <img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_agentflow.gif?raw=true"></a>
 ## ⚡ クイックスタート

View File

@@ -1,8 +1,11 @@
 <!-- markdownlint-disable MD030 -->
-<img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.png?raw=true"></a>
-# Flowise - 간편한 LLM 애플리케이션 제작
+<p align="center">
+    <a href="https://www.flowiseai.com">
+        <img src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_dark.svg#gh-dark-mode-only" alt="Flowise Logo" width="250">
+        <img src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_white.svg#gh-light-mode-only" alt="Flowise Logo" width="250">
+    </a>
+</p>
 [![Release Notes](https://img.shields.io/github/release/FlowiseAI/Flowise)](https://github.com/FlowiseAI/Flowise/releases)
 [![Discord](https://img.shields.io/discord/1087698854775881778?label=Discord&logo=discord)](https://discord.gg/jbaHfsRVBW)
@@ -12,9 +15,9 @@
 [English](../README.md) | [繁體中文](./README-TW.md) | [简体中文](./README-ZH.md) | [日本語](./README-JA.md) | 한국어
-<h3>드래그 앤 드롭 UI로 맞춤형 LLM 플로우 구축하기</h3>
+<h3>AI 에이전트를 시각적으로 구축하세요</h3>
 <a href="https://github.com/FlowiseAI/Flowise">
-    <img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.gif?raw=true"></a>
+    <img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_agentflow.gif?raw=true"></a>
 ## ⚡빠른 시작 가이드

View File

@@ -1,8 +1,11 @@
 <!-- markdownlint-disable MD030 -->
-<img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.png?raw=true"></a>
-# Flowise - 輕鬆構建 LLM 應用
+<p align="center">
+    <a href="https://www.flowiseai.com">
+        <img src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_dark.svg#gh-dark-mode-only" alt="Flowise Logo" width="250">
+        <img src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_white.svg#gh-light-mode-only" alt="Flowise Logo" width="250">
+    </a>
+</p>
 [![Release Notes](https://img.shields.io/github/release/FlowiseAI/Flowise)](https://github.com/FlowiseAI/Flowise/releases)
 [![Discord](https://img.shields.io/discord/1087698854775881778?label=Discord&logo=discord)](https://discord.gg/jbaHfsRVBW)
@@ -12,9 +15,9 @@
 [English](../README.md) | 繁體中文 | [简体中文](./README-ZH.md) | [日本語](./README-JA.md) | [한국어](./README-KR.md)
-<h3>拖放 UI 以構建自定義的 LLM 流程</h3>
+<h3>可視化建構 AI/LLM 流程</h3>
 <a href="https://github.com/FlowiseAI/Flowise">
-    <img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.gif?raw=true"></a>
+    <img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_agentflow.gif?raw=true"></a>
 ## ⚡ 快速開始

View File

@@ -1,8 +1,11 @@
 <!-- markdownlint-disable MD030 -->
-<img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.png?raw=true"></a>
-# Flowise - 轻松构建 LLM 应用程序
+<p align="center">
+    <a href="https://www.flowiseai.com">
+        <img src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_dark.svg#gh-dark-mode-only" alt="Flowise Logo" width="250">
+        <img src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_white.svg#gh-light-mode-only" alt="Flowise Logo" width="250">
+    </a>
+</p>
 [![发布说明](https://img.shields.io/github/release/FlowiseAI/Flowise)](https://github.com/FlowiseAI/Flowise/releases)
 [![Discord](https://img.shields.io/discord/1087698854775881778?label=Discord&logo=discord)](https://discord.gg/jbaHfsRVBW)
@@ -12,9 +15,9 @@
 [English](../README.md) | [繁體中文](./README-TW.md) | 简体中文 | [日本語](./README-JA.md) | [한국어](./README-KR.md)
-<h3>拖放界面构建定制化的LLM流程</h3>
+<h3>可视化构建 AI/LLM 流程</h3>
 <a href="https://github.com/FlowiseAI/Flowise">
-    <img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.gif?raw=true"></a>
+    <img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_agentflow.gif?raw=true"></a>
 ## ⚡ 快速入门

Binary file not shown. After: 14 MiB.

images/flowise_dark.svg (new file, 1 line): diff suppressed because one or more lines are too long. After: 40 KiB.

images/flowise_white.svg (new file, 1 line): diff suppressed because one or more lines are too long. After: 40 KiB.

View File

@@ -87,7 +87,7 @@
         "@grpc/grpc-js": "^1.10.10",
         "@langchain/core": "0.3.37",
         "@qdrant/openapi-typescript-fetch": "1.2.6",
-        "openai": "4.82.0",
+        "openai": "4.96.0",
         "protobufjs": "7.4.0"
     },
     "eslintIgnore": [

View File

@@ -6,7 +6,7 @@
 Flowise 的应用集成。包含节点和凭据。
-![Flowise](https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.gif?raw=true)
+![Flowise](https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_agentflow.gif?raw=true)
 安装:

View File

@@ -6,7 +6,7 @@ English | [中文](./README-ZH.md)
 Apps integration for Flowise. Contain Nodes and Credentials.
-![Flowise](https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.gif?raw=true)
+![Flowise](https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_agentflow.gif?raw=true)
 Install:

View File

@@ -0,0 +1,28 @@
import { INodeParams, INodeCredential } from '../src/Interface'
class HTTPApiKeyCredential implements INodeCredential {
label: string
name: string
version: number
inputs: INodeParams[]
constructor() {
this.label = 'HTTP Api Key'
this.name = 'httpApiKey'
this.version = 1.0
this.inputs = [
{
label: 'Key',
name: 'key',
type: 'string'
},
{
label: 'Value',
name: 'value',
type: 'password'
}
]
}
}
module.exports = { credClass: HTTPApiKeyCredential }

View File

@@ -0,0 +1,28 @@
import { INodeParams, INodeCredential } from '../src/Interface'
class HttpBasicAuthCredential implements INodeCredential {
label: string
name: string
version: number
inputs: INodeParams[]
constructor() {
this.label = 'HTTP Basic Auth'
this.name = 'httpBasicAuth'
this.version = 1.0
this.inputs = [
{
label: 'Basic Auth Username',
name: 'basicAuthUsername',
type: 'string'
},
{
label: 'Basic Auth Password',
name: 'basicAuthPassword',
type: 'password'
}
]
}
}
module.exports = { credClass: HttpBasicAuthCredential }

View File

@@ -0,0 +1,23 @@
import { INodeParams, INodeCredential } from '../src/Interface'
class HTTPBearerTokenCredential implements INodeCredential {
label: string
name: string
version: number
inputs: INodeParams[]
constructor() {
this.label = 'HTTP Bearer Token'
this.name = 'httpBearerToken'
this.version = 1.0
this.inputs = [
{
label: 'Token',
name: 'token',
type: 'password'
}
]
}
}
module.exports = { credClass: HTTPBearerTokenCredential }
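Together, these three credential classes cover the common HTTP auth schemes: basic auth, bearer tokens, and header API keys. Below is a minimal sketch of how a consuming node might resolve whichever credential is attached, modeled on the auth handling in the HTTP node later in this commit; the helper function and its import paths are illustrative, not part of the commit.

import { getCredentialData, getCredentialParam } from '../../../src/utils'
import { ICommonObject, INodeData } from '../../../src/Interface'

// Illustrative sketch: build request headers from whichever of the three
// HTTP credentials (httpBasicAuth, httpBearerToken, httpApiKey) is attached.
async function resolveHttpAuthHeaders(nodeData: INodeData, options: ICommonObject): Promise<Record<string, string>> {
    const headers: Record<string, string> = {}
    const credentialData = await getCredentialData(nodeData.credential ?? '', options)
    if (!credentialData || Object.keys(credentialData).length === 0) return headers

    const username = getCredentialParam('basicAuthUsername', credentialData, nodeData)
    const password = getCredentialParam('basicAuthPassword', credentialData, nodeData)
    const bearerToken = getCredentialParam('token', credentialData, nodeData)
    const apiKeyName = getCredentialParam('key', credentialData, nodeData)
    const apiKeyValue = getCredentialParam('value', credentialData, nodeData)

    if (username && password) {
        // httpBasicAuth: base64-encode "username:password"
        headers['Authorization'] = `Basic ${Buffer.from(`${username}:${password}`).toString('base64')}`
    } else if (bearerToken) {
        // httpBearerToken
        headers['Authorization'] = `Bearer ${bearerToken}`
    } else if (apiKeyName && apiKeyValue) {
        // httpApiKey: the Key field names the header, the Value fills it
        headers[apiKeyName] = apiKeyValue
    }
    return headers
}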

File diff suppressed because it is too large.

View File

@@ -0,0 +1,350 @@
import { CommonType, ICommonObject, ICondition, INode, INodeData, INodeOutputsValue, INodeParams } from '../../../src/Interface'
class Condition_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
tags: string[]
baseClasses: string[]
inputs: INodeParams[]
outputs: INodeOutputsValue[]
constructor() {
this.label = 'Condition'
this.name = 'conditionAgentflow'
this.version = 1.0
this.type = 'Condition'
this.category = 'Agent Flows'
this.description = `Split flows based on If Else conditions`
this.baseClasses = [this.type]
this.color = '#FFB938'
this.inputs = [
{
label: 'Conditions',
name: 'conditions',
type: 'array',
description: 'Values to compare',
acceptVariable: true,
default: [
{
type: 'string',
value1: '',
operation: 'equal',
value2: ''
}
],
array: [
{
label: 'Type',
name: 'type',
type: 'options',
options: [
{
label: 'String',
name: 'string'
},
{
label: 'Number',
name: 'number'
},
{
label: 'Boolean',
name: 'boolean'
}
],
default: 'string'
},
/////////////////////////////////////// STRING ////////////////////////////////////////
{
label: 'Value 1',
name: 'value1',
type: 'string',
default: '',
description: 'First value to be compared with',
acceptVariable: true,
show: {
'conditions[$index].type': 'string'
}
},
{
label: 'Operation',
name: 'operation',
type: 'options',
options: [
{
label: 'Contains',
name: 'contains'
},
{
label: 'Ends With',
name: 'endsWith'
},
{
label: 'Equal',
name: 'equal'
},
{
label: 'Not Contains',
name: 'notContains'
},
{
label: 'Not Equal',
name: 'notEqual'
},
{
label: 'Regex',
name: 'regex'
},
{
label: 'Starts With',
name: 'startsWith'
},
{
label: 'Is Empty',
name: 'isEmpty'
},
{
label: 'Not Empty',
name: 'notEmpty'
}
],
default: 'equal',
description: 'Type of operation',
show: {
'conditions[$index].type': 'string'
}
},
{
label: 'Value 2',
name: 'value2',
type: 'string',
default: '',
description: 'Second value to be compared with',
acceptVariable: true,
show: {
'conditions[$index].type': 'string'
},
hide: {
'conditions[$index].operation': ['isEmpty', 'notEmpty']
}
},
/////////////////////////////////////// NUMBER ////////////////////////////////////////
{
label: 'Value 1',
name: 'value1',
type: 'number',
default: '',
description: 'First value to be compared with',
acceptVariable: true,
show: {
'conditions[$index].type': 'number'
}
},
{
label: 'Operation',
name: 'operation',
type: 'options',
options: [
{
label: 'Smaller',
name: 'smaller'
},
{
label: 'Smaller Equal',
name: 'smallerEqual'
},
{
label: 'Equal',
name: 'equal'
},
{
label: 'Not Equal',
name: 'notEqual'
},
{
label: 'Larger',
name: 'larger'
},
{
label: 'Larger Equal',
name: 'largerEqual'
},
{
label: 'Is Empty',
name: 'isEmpty'
},
{
label: 'Not Empty',
name: 'notEmpty'
}
],
default: 'equal',
description: 'Type of operation',
show: {
'conditions[$index].type': 'number'
}
},
{
label: 'Value 2',
name: 'value2',
type: 'number',
default: 0,
description: 'Second value to be compared with',
acceptVariable: true,
show: {
'conditions[$index].type': 'number'
}
},
/////////////////////////////////////// BOOLEAN ////////////////////////////////////////
{
label: 'Value 1',
name: 'value1',
type: 'boolean',
default: false,
description: 'First value to be compared with',
show: {
'conditions[$index].type': 'boolean'
}
},
{
label: 'Operation',
name: 'operation',
type: 'options',
options: [
{
label: 'Equal',
name: 'equal'
},
{
label: 'Not Equal',
name: 'notEqual'
}
],
default: 'equal',
description: 'Type of operation',
show: {
'conditions[$index].type': 'boolean'
}
},
{
label: 'Value 2',
name: 'value2',
type: 'boolean',
default: false,
description: 'Second value to be compared with',
show: {
'conditions[$index].type': 'boolean'
}
}
]
}
]
this.outputs = [
{
label: '0',
name: '0',
description: 'Condition 0'
},
{
label: '1',
name: '1',
description: 'Else'
}
]
}
async run(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const state = options.agentflowRuntime?.state as ICommonObject
const compareOperationFunctions: {
[key: string]: (value1: CommonType, value2: CommonType) => boolean
} = {
contains: (value1: CommonType, value2: CommonType) => (value1 || '').toString().includes((value2 || '').toString()),
notContains: (value1: CommonType, value2: CommonType) => !(value1 || '').toString().includes((value2 || '').toString()),
endsWith: (value1: CommonType, value2: CommonType) => (value1 as string).endsWith(value2 as string),
equal: (value1: CommonType, value2: CommonType) => value1 === value2,
notEqual: (value1: CommonType, value2: CommonType) => value1 !== value2,
larger: (value1: CommonType, value2: CommonType) => (Number(value1) || 0) > (Number(value2) || 0),
largerEqual: (value1: CommonType, value2: CommonType) => (Number(value1) || 0) >= (Number(value2) || 0),
smaller: (value1: CommonType, value2: CommonType) => (Number(value1) || 0) < (Number(value2) || 0),
smallerEqual: (value1: CommonType, value2: CommonType) => (Number(value1) || 0) <= (Number(value2) || 0),
startsWith: (value1: CommonType, value2: CommonType) => (value1 as string).startsWith(value2 as string),
isEmpty: (value1: CommonType) => [undefined, null, ''].includes(value1 as string),
notEmpty: (value1: CommonType) => ![undefined, null, ''].includes(value1 as string)
}
const _conditions = nodeData.inputs?.conditions
const conditions: ICondition[] = typeof _conditions === 'string' ? JSON.parse(_conditions) : _conditions
const initialConditions = { ...conditions }
for (const condition of conditions) {
const _value1 = condition.value1
const _value2 = condition.value2
const operation = condition.operation
let value1: CommonType
let value2: CommonType
switch (condition.type) {
case 'boolean':
value1 = _value1
value2 = _value2
break
case 'number':
value1 = parseFloat(_value1 as string) || 0
value2 = parseFloat(_value2 as string) || 0
break
default: // string
value1 = _value1 as string
value2 = _value2 as string
}
const compareOperationResult = compareOperationFunctions[operation](value1, value2)
if (compareOperationResult) {
// find the matching condition
const conditionIndex = conditions.findIndex((c) => JSON.stringify(c) === JSON.stringify(condition))
// add isFulfilled to the condition
if (conditionIndex > -1) {
conditions[conditionIndex] = { ...condition, isFulfilled: true }
}
break
}
}
// If no condition is fulfilled, add isFulfilled to the ELSE condition
const dummyElseConditionData = {
type: 'string',
value1: '',
operation: 'equal',
value2: ''
}
if (!conditions.some((c) => c.isFulfilled)) {
conditions.push({
...dummyElseConditionData,
isFulfilled: true
})
} else {
conditions.push({
...dummyElseConditionData,
isFulfilled: false
})
}
const returnOutput = {
id: nodeData.id,
name: this.name,
input: { conditions: initialConditions },
output: { conditions },
state
}
return returnOutput
}
}
module.exports = { nodeClass: Condition_Agentflow }
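For a concrete sense of the run() semantics: the first condition whose comparison succeeds is marked isFulfilled and routes the flow to its numbered output, and a synthetic Else entry is always appended last, fulfilled only when nothing else matched. A small illustration with made-up values:

// Illustrative Condition node input (one condition; Else is implicit)
const conditions = [
    { type: 'string', value1: 'I want a refund', operation: 'contains', value2: 'refund' }
]

// run() would return output.conditions as:
// [
//   { type: 'string', value1: 'I want a refund', operation: 'contains', value2: 'refund', isFulfilled: true },
//   { type: 'string', value1: '', operation: 'equal', value2: '', isFulfilled: false } // the Else entry, not taken
// ]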

View File

@@ -0,0 +1,600 @@
import { AnalyticHandler } from '../../../src/handler'
import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeOutputsValue, INodeParams } from '../../../src/Interface'
import { AIMessageChunk, BaseMessageLike } from '@langchain/core/messages'
import {
getPastChatHistoryImageMessages,
getUniqueImageMessages,
processMessagesWithImages,
replaceBase64ImagesWithFileReferences
} from '../utils'
import { CONDITION_AGENT_SYSTEM_PROMPT, DEFAULT_SUMMARIZER_TEMPLATE } from '../prompt'
import { BaseChatModel } from '@langchain/core/language_models/chat_models'
class ConditionAgent_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
tags: string[]
baseClasses: string[]
inputs: INodeParams[]
outputs: INodeOutputsValue[]
constructor() {
this.label = 'Condition Agent'
this.name = 'conditionAgentAgentflow'
this.version = 1.0
this.type = 'ConditionAgent'
this.category = 'Agent Flows'
this.description = `Utilize an agent to split flows based on dynamic conditions`
this.baseClasses = [this.type]
this.color = '#ff8fab'
this.inputs = [
{
label: 'Model',
name: 'conditionAgentModel',
type: 'asyncOptions',
loadMethod: 'listModels',
loadConfig: true
},
{
label: 'Instructions',
name: 'conditionAgentInstructions',
type: 'string',
description: 'General instructions for what the condition agent should do',
rows: 4,
acceptVariable: true,
placeholder: 'Determine if the user is interested in learning about AI'
},
{
label: 'Input',
name: 'conditionAgentInput',
type: 'string',
description: 'Input to be used for the condition agent',
rows: 4,
acceptVariable: true,
default: '<p><span class="variable" data-type="mention" data-id="question" data-label="question">{{ question }}</span> </p>'
},
{
label: 'Scenarios',
name: 'conditionAgentScenarios',
description: 'Define the scenarios that will be used as the conditions to split the flow',
type: 'array',
array: [
{
label: 'Scenario',
name: 'scenario',
type: 'string',
placeholder: 'User is asking for a pizza'
}
],
default: [
{
scenario: ''
},
{
scenario: ''
}
]
}
/*{
label: 'Enable Memory',
name: 'conditionAgentEnableMemory',
type: 'boolean',
description: 'Enable memory for the conversation thread',
default: true,
optional: true
},
{
label: 'Memory Type',
name: 'conditionAgentMemoryType',
type: 'options',
options: [
{
label: 'All Messages',
name: 'allMessages',
description: 'Retrieve all messages from the conversation'
},
{
label: 'Window Size',
name: 'windowSize',
description: 'Uses a fixed window size to surface the last N messages'
},
{
label: 'Conversation Summary',
name: 'conversationSummary',
description: 'Summarizes the whole conversation'
},
{
label: 'Conversation Summary Buffer',
name: 'conversationSummaryBuffer',
description: 'Summarize conversations once token limit is reached. Default to 2000'
}
],
optional: true,
default: 'allMessages',
show: {
conditionAgentEnableMemory: true
}
},
{
label: 'Window Size',
name: 'conditionAgentMemoryWindowSize',
type: 'number',
default: '20',
description: 'Uses a fixed window size to surface the last N messages',
show: {
conditionAgentMemoryType: 'windowSize'
}
},
{
label: 'Max Token Limit',
name: 'conditionAgentMemoryMaxTokenLimit',
type: 'number',
default: '2000',
description: 'Summarize conversations once token limit is reached. Default to 2000',
show: {
conditionAgentMemoryType: 'conversationSummaryBuffer'
}
}*/
]
this.outputs = [
{
label: '0',
name: '0',
description: 'Condition 0'
},
{
label: '1',
name: '1',
description: 'Else'
}
]
}
//@ts-ignore
loadMethods = {
async listModels(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const componentNodes = options.componentNodes as {
[key: string]: INode
}
const returnOptions: INodeOptionsValue[] = []
for (const nodeName in componentNodes) {
const componentNode = componentNodes[nodeName]
if (componentNode.category === 'Chat Models') {
if (componentNode.tags?.includes('LlamaIndex')) {
continue
}
returnOptions.push({
label: componentNode.label,
name: nodeName,
imageSrc: componentNode.icon
})
}
}
return returnOptions
}
}
private parseJsonMarkdown(jsonString: string): any {
// Strip whitespace
jsonString = jsonString.trim()
const starts = ['```json', '```', '``', '`', '{']
const ends = ['```', '``', '`', '}']
let startIndex = -1
let endIndex = -1
// Find start of JSON
for (const s of starts) {
startIndex = jsonString.indexOf(s)
if (startIndex !== -1) {
if (jsonString[startIndex] !== '{') {
startIndex += s.length
}
break
}
}
// Find end of JSON
if (startIndex !== -1) {
for (const e of ends) {
endIndex = jsonString.lastIndexOf(e, jsonString.length)
if (endIndex !== -1) {
if (jsonString[endIndex] === '}') {
endIndex += 1
}
break
}
}
}
if (startIndex !== -1 && endIndex !== -1 && startIndex < endIndex) {
const extractedContent = jsonString.slice(startIndex, endIndex).trim()
try {
return JSON.parse(extractedContent)
} catch (error) {
throw new Error(`Invalid JSON object. Error: ${error}`)
}
}
throw new Error('Could not find JSON block in the output.')
}
async run(nodeData: INodeData, question: string, options: ICommonObject): Promise<any> {
let llmIds: ICommonObject | undefined
let analyticHandlers = options.analyticHandlers as AnalyticHandler
try {
const abortController = options.abortController as AbortController
// Extract input parameters
const model = nodeData.inputs?.conditionAgentModel as string
const modelConfig = nodeData.inputs?.conditionAgentModelConfig as ICommonObject
if (!model) {
throw new Error('Model is required')
}
const conditionAgentInput = nodeData.inputs?.conditionAgentInput as string
let input = conditionAgentInput || question
const conditionAgentInstructions = nodeData.inputs?.conditionAgentInstructions as string
// Extract memory and configuration options
const enableMemory = nodeData.inputs?.conditionAgentEnableMemory as boolean
const memoryType = nodeData.inputs?.conditionAgentMemoryType as string
const _conditionAgentScenarios = nodeData.inputs?.conditionAgentScenarios as { scenario: string }[]
// Extract runtime state and history
const state = options.agentflowRuntime?.state as ICommonObject
const pastChatHistory = (options.pastChatHistory as BaseMessageLike[]) ?? []
const runtimeChatHistory = (options.agentflowRuntime?.chatHistory as BaseMessageLike[]) ?? []
// Initialize the LLM model instance
const nodeInstanceFilePath = options.componentNodes[model].filePath as string
const nodeModule = await import(nodeInstanceFilePath)
const newLLMNodeInstance = new nodeModule.nodeClass()
const newNodeData = {
...nodeData,
credential: modelConfig['FLOWISE_CREDENTIAL_ID'],
inputs: {
...nodeData.inputs,
...modelConfig
}
}
let llmNodeInstance = (await newLLMNodeInstance.init(newNodeData, '', options)) as BaseChatModel
const isStructuredOutput =
_conditionAgentScenarios && Array.isArray(_conditionAgentScenarios) && _conditionAgentScenarios.length > 0
if (!isStructuredOutput) {
throw new Error('Scenarios are required')
}
// Prepare messages array
const messages: BaseMessageLike[] = [
{
role: 'system',
content: CONDITION_AGENT_SYSTEM_PROMPT
},
{
role: 'user',
content: `{"input": "Hello", "scenarios": ["user is asking about AI", "default"], "instruction": "Your task is to check and see if user is asking topic about AI"}`
},
{
role: 'assistant',
content: `\`\`\`json\n{"output": "default"}\n\`\`\``
},
{
role: 'user',
content: `{"input": "What is AIGC?", "scenarios": ["user is asking about AI", "default"], "instruction": "Your task is to check and see if user is asking topic about AI"}`
},
{
role: 'assistant',
content: `\`\`\`json\n{"output": "user is asking about AI"}\n\`\`\``
},
{
role: 'user',
content: `{"input": "Can you explain deep learning?", "scenarios": ["user is interested in AI topics", "default"], "instruction": "Determine if the user is interested in learning about AI"}`
},
{
role: 'assistant',
content: `\`\`\`json\n{"output": "user is interested in AI topics"}\n\`\`\``
}
]
// Used to store messages with image file references, as we do not want to store the base64 data in the database
let runtimeImageMessagesWithFileRef: BaseMessageLike[] = []
// Used to keep track of past messages with image file references
let pastImageMessagesWithFileRef: BaseMessageLike[] = []
input = `{"input": ${input}, "scenarios": ${JSON.stringify(
_conditionAgentScenarios.map((scenario) => scenario.scenario)
)}, "instruction": ${conditionAgentInstructions}}`
// Handle memory management if enabled
if (enableMemory) {
await this.handleMemory({
messages,
memoryType,
pastChatHistory,
runtimeChatHistory,
llmNodeInstance,
nodeData,
input,
abortController,
options,
modelConfig,
runtimeImageMessagesWithFileRef,
pastImageMessagesWithFileRef
})
} else {
/*
* If this is the first node:
* - Add images to messages if exist
*/
if (!runtimeChatHistory.length && options.uploads) {
const imageContents = await getUniqueImageMessages(options, messages, modelConfig)
if (imageContents) {
const { imageMessageWithBase64, imageMessageWithFileRef } = imageContents
messages.push(imageMessageWithBase64)
runtimeImageMessagesWithFileRef.push(imageMessageWithFileRef)
}
}
messages.push({
role: 'user',
content: input
})
}
// Initialize response and determine if streaming is possible
let response: AIMessageChunk = new AIMessageChunk('')
// Start analytics
if (analyticHandlers && options.parentTraceIds) {
const llmLabel = options?.componentNodes?.[model]?.label || model
llmIds = await analyticHandlers.onLLMStart(llmLabel, messages, options.parentTraceIds)
}
// Track execution time
const startTime = Date.now()
response = await llmNodeInstance.invoke(messages, { signal: abortController?.signal })
// Calculate execution time
const endTime = Date.now()
const timeDelta = endTime - startTime
// End analytics tracking
if (analyticHandlers && llmIds) {
await analyticHandlers.onLLMEnd(
llmIds,
typeof response.content === 'string' ? response.content : JSON.stringify(response.content)
)
}
let calledOutputName = 'default'
try {
const parsedResponse = this.parseJsonMarkdown(response.content as string)
if (!parsedResponse.output) {
throw new Error('Missing "output" key in response')
}
calledOutputName = parsedResponse.output
} catch (error) {
console.warn(`Failed to parse LLM response: ${error}. Using default output.`)
}
// Clean up empty inputs
for (const key in nodeData.inputs) {
if (nodeData.inputs[key] === '') {
delete nodeData.inputs[key]
}
}
// Find the first exact match
const matchedScenarioIndex = _conditionAgentScenarios.findIndex(
(scenario) => calledOutputName.toLowerCase() === scenario.scenario.toLowerCase()
)
const conditions = _conditionAgentScenarios.map((scenario, index) => {
return {
output: scenario.scenario,
isFulfilled: index === matchedScenarioIndex
}
})
// Replace the actual messages array with one that includes the file references for images instead of base64 data
const messagesWithFileReferences = replaceBase64ImagesWithFileReferences(
messages,
runtimeImageMessagesWithFileRef,
pastImageMessagesWithFileRef
)
// Only add to runtime chat history if this is the first node
const inputMessages = []
if (!runtimeChatHistory.length) {
if (runtimeImageMessagesWithFileRef.length) {
inputMessages.push(...runtimeImageMessagesWithFileRef)
}
if (input && typeof input === 'string') {
inputMessages.push({ role: 'user', content: question })
}
}
const returnOutput = {
id: nodeData.id,
name: this.name,
input: { messages: messagesWithFileReferences },
output: {
conditions,
content: typeof response.content === 'string' ? response.content : JSON.stringify(response.content),
timeMetadata: {
start: startTime,
end: endTime,
delta: timeDelta
}
},
state,
chatHistory: [...inputMessages]
}
return returnOutput
} catch (error) {
if (options.analyticHandlers && llmIds) {
await options.analyticHandlers.onLLMError(llmIds, error instanceof Error ? error.message : String(error))
}
if (error instanceof Error && error.message === 'Aborted') {
throw error
}
throw new Error(`Error in Condition Agent node: ${error instanceof Error ? error.message : String(error)}`)
}
}
/**
* Handles memory management based on the specified memory type
*/
private async handleMemory({
messages,
memoryType,
pastChatHistory,
runtimeChatHistory,
llmNodeInstance,
nodeData,
input,
abortController,
options,
modelConfig,
runtimeImageMessagesWithFileRef,
pastImageMessagesWithFileRef
}: {
messages: BaseMessageLike[]
memoryType: string
pastChatHistory: BaseMessageLike[]
runtimeChatHistory: BaseMessageLike[]
llmNodeInstance: BaseChatModel
nodeData: INodeData
input: string
abortController: AbortController
options: ICommonObject
modelConfig: ICommonObject
runtimeImageMessagesWithFileRef: BaseMessageLike[]
pastImageMessagesWithFileRef: BaseMessageLike[]
}): Promise<void> {
const { updatedPastMessages, transformedPastMessages } = await getPastChatHistoryImageMessages(pastChatHistory, options)
pastChatHistory = updatedPastMessages
pastImageMessagesWithFileRef.push(...transformedPastMessages)
let pastMessages = [...pastChatHistory, ...runtimeChatHistory]
if (!runtimeChatHistory.length) {
/*
* If this is the first node:
* - Add images to messages if exist
*/
if (options.uploads) {
const imageContents = await getUniqueImageMessages(options, messages, modelConfig)
if (imageContents) {
const { imageMessageWithBase64, imageMessageWithFileRef } = imageContents
pastMessages.push(imageMessageWithBase64)
runtimeImageMessagesWithFileRef.push(imageMessageWithFileRef)
}
}
}
const { updatedMessages, transformedMessages } = await processMessagesWithImages(pastMessages, options)
pastMessages = updatedMessages
pastImageMessagesWithFileRef.push(...transformedMessages)
if (pastMessages.length > 0) {
if (memoryType === 'windowSize') {
// Window memory: Keep the last N messages
const windowSize = nodeData.inputs?.conditionAgentMemoryWindowSize as number
const windowedMessages = pastMessages.slice(-windowSize * 2)
messages.push(...windowedMessages)
} else if (memoryType === 'conversationSummary') {
// Summary memory: Summarize all past messages
const summary = await llmNodeInstance.invoke(
[
{
role: 'user',
content: DEFAULT_SUMMARIZER_TEMPLATE.replace(
'{conversation}',
pastMessages.map((msg: any) => `${msg.role}: ${msg.content}`).join('\n')
)
}
],
{ signal: abortController?.signal }
)
messages.push({ role: 'assistant', content: summary.content as string })
} else if (memoryType === 'conversationSummaryBuffer') {
// Summary buffer: Summarize messages that exceed token limit
await this.handleSummaryBuffer(messages, pastMessages, llmNodeInstance, nodeData, abortController)
} else {
// Default: Use all messages
messages.push(...pastMessages)
}
}
messages.push({
role: 'user',
content: input
})
}
/**
* Handles conversation summary buffer memory type
*/
private async handleSummaryBuffer(
messages: BaseMessageLike[],
pastMessages: BaseMessageLike[],
llmNodeInstance: BaseChatModel,
nodeData: INodeData,
abortController: AbortController
): Promise<void> {
const maxTokenLimit = (nodeData.inputs?.conditionAgentMemoryMaxTokenLimit as number) || 2000
// Convert past messages to a format suitable for token counting
const messagesString = pastMessages.map((msg: any) => `${msg.role}: ${msg.content}`).join('\n')
const tokenCount = await llmNodeInstance.getNumTokens(messagesString)
if (tokenCount > maxTokenLimit) {
// Calculate how many messages to summarize (messages that exceed the token limit)
let currBufferLength = tokenCount
const messagesToSummarize = []
const remainingMessages = [...pastMessages]
// Remove messages from the beginning until we're under the token limit
while (currBufferLength > maxTokenLimit && remainingMessages.length > 0) {
const poppedMessage = remainingMessages.shift()
if (poppedMessage) {
messagesToSummarize.push(poppedMessage)
// Recalculate token count for remaining messages
const remainingMessagesString = remainingMessages.map((msg: any) => `${msg.role}: ${msg.content}`).join('\n')
currBufferLength = await llmNodeInstance.getNumTokens(remainingMessagesString)
}
}
// Summarize the messages that were removed
const messagesToSummarizeString = messagesToSummarize.map((msg: any) => `${msg.role}: ${msg.content}`).join('\n')
const summary = await llmNodeInstance.invoke(
[
{
role: 'user',
content: DEFAULT_SUMMARIZER_TEMPLATE.replace('{conversation}', messagesToSummarizeString)
}
],
{ signal: abortController?.signal }
)
// Add summary as a system message at the beginning, then add remaining messages
messages.push({ role: 'system', content: `Previous conversation summary: ${summary.content}` })
messages.push(...remainingMessages)
} else {
// If under token limit, use all messages
messages.push(...pastMessages)
}
}
}
module.exports = { nodeClass: ConditionAgent_Agentflow }
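The few-shot messages above steer the model to answer with a fenced JSON object, which parseJsonMarkdown then unwraps before the scenario matching step. A quick illustration of the expected round trip (values are made up):

// What the model is prompted to return:
const reply = '```json\n{"output": "user is asking about AI"}\n```'

// parseJsonMarkdown(reply) strips the fence and parses the object:
// => { output: 'user is asking about AI' }
// The output value is then compared case-insensitively against the configured
// scenarios; the first exact match becomes the fulfilled condition, and an
// unparseable reply falls back to 'default'.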

View File

@@ -0,0 +1,241 @@
import { DataSource } from 'typeorm'
import {
ICommonObject,
IDatabaseEntity,
INode,
INodeData,
INodeOptionsValue,
INodeParams,
IServerSideEventStreamer
} from '../../../src/Interface'
import { availableDependencies, defaultAllowBuiltInDep, getVars, prepareSandboxVars } from '../../../src/utils'
import { NodeVM } from '@flowiseai/nodevm'
import { updateFlowState } from '../utils'
interface ICustomFunctionInputVariables {
variableName: string
variableValue: string
}
const exampleFunc = `/*
* You can use any libraries imported in Flowise
* You can use properties specified in Input Schema as variables. Ex: Property = userid, Variable = $userid
* You can get default flow config: $flow.sessionId, $flow.chatId, $flow.chatflowId, $flow.input, $flow.state
* You can get custom variables: $vars.<variable-name>
* Must return a string value at the end of function
*/
const fetch = require('node-fetch');
const url = 'https://api.open-meteo.com/v1/forecast?latitude=52.52&longitude=13.41&current_weather=true';
const options = {
method: 'GET',
headers: {
'Content-Type': 'application/json'
}
};
try {
const response = await fetch(url, options);
const text = await response.text();
return text;
} catch (error) {
console.error(error);
return '';
}`
class CustomFunction_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
hideOutput: boolean
hint: string
baseClasses: string[]
documentation?: string
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Custom Function'
this.name = 'customFunctionAgentflow'
this.version = 1.0
this.type = 'CustomFunction'
this.category = 'Agent Flows'
this.description = 'Execute custom function'
this.baseClasses = [this.type]
this.color = '#E4B7FF'
this.inputs = [
{
label: 'Input Variables',
name: 'customFunctionInputVariables',
description: 'Input variables can be used in the function with prefix $. For example: $foo',
type: 'array',
optional: true,
acceptVariable: true,
array: [
{
label: 'Variable Name',
name: 'variableName',
type: 'string'
},
{
label: 'Variable Value',
name: 'variableValue',
type: 'string',
acceptVariable: true
}
]
},
{
label: 'Javascript Function',
name: 'customFunctionJavascriptFunction',
type: 'code',
codeExample: exampleFunc,
description: 'The function to execute. Must return a string or an object that can be converted to a string.'
},
{
label: 'Update Flow State',
name: 'customFunctionUpdateState',
description: 'Update runtime state during the execution of the workflow',
type: 'array',
optional: true,
acceptVariable: true,
array: [
{
label: 'Key',
name: 'key',
type: 'asyncOptions',
loadMethod: 'listRuntimeStateKeys',
freeSolo: true
},
{
label: 'Value',
name: 'value',
type: 'string',
acceptVariable: true,
acceptNodeOutputAsVariable: true
}
]
}
]
}
//@ts-ignore
loadMethods = {
async listRuntimeStateKeys(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const previousNodes = options.previousNodes as ICommonObject[]
const startAgentflowNode = previousNodes.find((node) => node.name === 'startAgentflow')
const state = startAgentflowNode?.inputs?.startState as ICommonObject[]
return state.map((item) => ({ label: item.key, name: item.key }))
}
}
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
const javascriptFunction = nodeData.inputs?.customFunctionJavascriptFunction as string
const functionInputVariables = nodeData.inputs?.customFunctionInputVariables as ICustomFunctionInputVariables[]
const _customFunctionUpdateState = nodeData.inputs?.customFunctionUpdateState
const state = options.agentflowRuntime?.state as ICommonObject
const chatId = options.chatId as string
const isLastNode = options.isLastNode as boolean
const isStreamable = isLastNode && options.sseStreamer !== undefined
const appDataSource = options.appDataSource as DataSource
const databaseEntities = options.databaseEntities as IDatabaseEntity
// Update flow state if needed
let newState = { ...state }
if (_customFunctionUpdateState && Array.isArray(_customFunctionUpdateState) && _customFunctionUpdateState.length > 0) {
newState = updateFlowState(state, _customFunctionUpdateState)
}
const variables = await getVars(appDataSource, databaseEntities, nodeData)
const flow = {
chatflowId: options.chatflowid,
sessionId: options.sessionId,
chatId: options.chatId,
input
}
let sandbox: any = {
$input: input,
util: undefined,
Symbol: undefined,
child_process: undefined,
fs: undefined,
process: undefined
}
sandbox['$vars'] = prepareSandboxVars(variables)
sandbox['$flow'] = flow
for (const item of functionInputVariables) {
const variableName = item.variableName
const variableValue = item.variableValue
sandbox[`$${variableName}`] = variableValue
}
const builtinDeps = process.env.TOOL_FUNCTION_BUILTIN_DEP
? defaultAllowBuiltInDep.concat(process.env.TOOL_FUNCTION_BUILTIN_DEP.split(','))
: defaultAllowBuiltInDep
const externalDeps = process.env.TOOL_FUNCTION_EXTERNAL_DEP ? process.env.TOOL_FUNCTION_EXTERNAL_DEP.split(',') : []
const deps = availableDependencies.concat(externalDeps)
const nodeVMOptions = {
console: 'inherit',
sandbox,
require: {
external: { modules: deps },
builtin: builtinDeps
},
eval: false,
wasm: false,
timeout: 10000
} as any
const vm = new NodeVM(nodeVMOptions)
try {
const response = await vm.run(`module.exports = async function() {${javascriptFunction}}()`, __dirname)
let finalOutput = response
if (typeof response === 'object') {
finalOutput = JSON.stringify(response, null, 2)
}
if (isStreamable) {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer
sseStreamer.streamTokenEvent(chatId, finalOutput)
}
// Process template variables in state
if (newState && Object.keys(newState).length > 0) {
for (const key in newState) {
if (newState[key].toString().includes('{{ output }}')) {
newState[key] = finalOutput
}
}
}
const returnOutput = {
id: nodeData.id,
name: this.name,
input: {
inputVariables: functionInputVariables,
code: javascriptFunction
},
output: {
content: finalOutput
},
state: newState
}
return returnOutput
} catch (e) {
throw new Error(e)
}
}
}
module.exports = { nodeClass: CustomFunction_Agentflow }
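Each Input Variables entry is injected into the sandbox with a $ prefix, alongside $flow and $vars, before the function body runs inside NodeVM. A small illustration of a function body using one such variable; the variable name and URL are made up:

// Input Variables: [{ variableName: 'userid', variableValue: '42' }]
// Inside the Javascript Function field, the entry is addressable as $userid:
const fetch = require('node-fetch');
const response = await fetch('https://example.com/users/' + $userid);
return await response.text(); // must resolve to a string (objects are JSON-stringified)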

View File

@@ -0,0 +1,67 @@
import { ICommonObject, INode, INodeData, INodeParams, IServerSideEventStreamer } from '../../../src/Interface'
class DirectReply_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
hideOutput: boolean
hint: string
baseClasses: string[]
documentation?: string
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Direct Reply'
this.name = 'directReplyAgentflow'
this.version = 1.0
this.type = 'DirectReply'
this.category = 'Agent Flows'
this.description = 'Directly reply to the user with a message'
this.baseClasses = [this.type]
this.color = '#4DDBBB'
this.hideOutput = true
this.inputs = [
{
label: 'Message',
name: 'directReplyMessage',
type: 'string',
rows: 4,
acceptVariable: true
}
]
}
async run(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const directReplyMessage = nodeData.inputs?.directReplyMessage as string
const state = options.agentflowRuntime?.state as ICommonObject
const chatId = options.chatId as string
const isLastNode = options.isLastNode as boolean
const isStreamable = isLastNode && options.sseStreamer !== undefined
if (isStreamable) {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer
sseStreamer.streamTokenEvent(chatId, directReplyMessage)
}
const returnOutput = {
id: nodeData.id,
name: this.name,
input: {},
output: {
content: directReplyMessage
},
state
}
return returnOutput
}
}
module.exports = { nodeClass: DirectReply_Agentflow }

View File

@@ -0,0 +1,297 @@
import {
ICommonObject,
IDatabaseEntity,
INode,
INodeData,
INodeOptionsValue,
INodeParams,
IServerSideEventStreamer
} from '../../../src/Interface'
import axios, { AxiosRequestConfig } from 'axios'
import { getCredentialData, getCredentialParam } from '../../../src/utils'
import { DataSource } from 'typeorm'
import { BaseMessageLike } from '@langchain/core/messages'
import { updateFlowState } from '../utils'
class ExecuteFlow_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
baseClasses: string[]
documentation?: string
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Execute Flow'
this.name = 'executeFlowAgentflow'
this.version = 1.0
this.type = 'ExecuteFlow'
this.category = 'Agent Flows'
this.description = 'Execute another flow'
this.baseClasses = [this.type]
this.color = '#a3b18a'
this.credential = {
label: 'Connect Credential',
name: 'credential',
type: 'credential',
credentialNames: ['chatflowApi'],
optional: true
}
this.inputs = [
{
label: 'Select Flow',
name: 'executeFlowSelectedFlow',
type: 'asyncOptions',
loadMethod: 'listFlows'
},
{
label: 'Input',
name: 'executeFlowInput',
type: 'string',
rows: 4,
acceptVariable: true
},
{
label: 'Override Config',
name: 'executeFlowOverrideConfig',
description: 'Override the config passed to the flow',
type: 'json',
optional: true
},
{
label: 'Base URL',
name: 'executeFlowBaseURL',
type: 'string',
description:
'Base URL to Flowise. By default, it is the URL of the incoming request. Useful when you need to execute the flow through an alternative route.',
placeholder: 'http://localhost:3000',
optional: true
},
{
label: 'Return Response As',
name: 'executeFlowReturnResponseAs',
type: 'options',
options: [
{
label: 'User Message',
name: 'userMessage'
},
{
label: 'Assistant Message',
name: 'assistantMessage'
}
],
default: 'userMessage'
},
{
label: 'Update Flow State',
name: 'executeFlowUpdateState',
description: 'Update runtime state during the execution of the workflow',
type: 'array',
optional: true,
acceptVariable: true,
array: [
{
label: 'Key',
name: 'key',
type: 'asyncOptions',
loadMethod: 'listRuntimeStateKeys',
freeSolo: true
},
{
label: 'Value',
name: 'value',
type: 'string',
acceptVariable: true,
acceptNodeOutputAsVariable: true
}
]
}
]
}
//@ts-ignore
loadMethods = {
async listFlows(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const returnData: INodeOptionsValue[] = []
const appDataSource = options.appDataSource as DataSource
const databaseEntities = options.databaseEntities as IDatabaseEntity
if (appDataSource === undefined || !appDataSource) {
return returnData
}
const chatflows = await appDataSource.getRepository(databaseEntities['ChatFlow']).find()
for (let i = 0; i < chatflows.length; i += 1) {
let cfType = 'Chatflow'
if (chatflows[i].type === 'AGENTFLOW') {
cfType = 'Agentflow V2'
} else if (chatflows[i].type === 'MULTIAGENT') {
cfType = 'Agentflow V1'
}
const data = {
label: chatflows[i].name,
name: chatflows[i].id,
description: cfType
} as INodeOptionsValue
returnData.push(data)
}
// order by label
return returnData.sort((a, b) => a.label.localeCompare(b.label))
},
async listRuntimeStateKeys(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const previousNodes = options.previousNodes as ICommonObject[]
const startAgentflowNode = previousNodes.find((node) => node.name === 'startAgentflow')
const state = startAgentflowNode?.inputs?.startState as ICommonObject[]
return state.map((item) => ({ label: item.key, name: item.key }))
}
}
async run(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const baseURL = (nodeData.inputs?.executeFlowBaseURL as string) || (options.baseURL as string)
const selectedFlowId = nodeData.inputs?.executeFlowSelectedFlow as string
const flowInput = nodeData.inputs?.executeFlowInput as string
const returnResponseAs = nodeData.inputs?.executeFlowReturnResponseAs as string
const _executeFlowUpdateState = nodeData.inputs?.executeFlowUpdateState
const overrideConfig =
typeof nodeData.inputs?.executeFlowOverrideConfig === 'string' &&
nodeData.inputs.executeFlowOverrideConfig.startsWith('{') &&
nodeData.inputs.executeFlowOverrideConfig.endsWith('}')
? JSON.parse(nodeData.inputs.executeFlowOverrideConfig)
: nodeData.inputs?.executeFlowOverrideConfig
const state = options.agentflowRuntime?.state as ICommonObject
const runtimeChatHistory = (options.agentflowRuntime?.chatHistory as BaseMessageLike[]) ?? []
const isLastNode = options.isLastNode as boolean
const sseStreamer: IServerSideEventStreamer | undefined = options.sseStreamer
try {
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const chatflowApiKey = getCredentialParam('chatflowApiKey', credentialData, nodeData)
if (selectedFlowId === options.chatflowid) throw new Error('Cannot call the same agentflow!')
let headers: Record<string, string> = {
'Content-Type': 'application/json'
}
if (chatflowApiKey) headers = { ...headers, Authorization: `Bearer ${chatflowApiKey}` }
const finalUrl = `${baseURL}/api/v1/prediction/${selectedFlowId}`
const requestConfig: AxiosRequestConfig = {
method: 'POST',
url: finalUrl,
headers,
data: {
question: flowInput,
chatId: options.chatId,
overrideConfig
}
}
const response = await axios(requestConfig)
let resultText = ''
if (response.data.text) resultText = response.data.text
else if (response.data.json) resultText = '```json\n' + JSON.stringify(response.data.json, null, 2)
else resultText = JSON.stringify(response.data, null, 2)
if (isLastNode && sseStreamer) {
sseStreamer.streamTokenEvent(options.chatId, resultText)
}
// Update flow state if needed
let newState = { ...state }
if (_executeFlowUpdateState && Array.isArray(_executeFlowUpdateState) && _executeFlowUpdateState.length > 0) {
newState = updateFlowState(state, _executeFlowUpdateState)
}
// Process template variables in state
if (newState && Object.keys(newState).length > 0) {
for (const key in newState) {
if (newState[key].toString().includes('{{ output }}')) {
newState[key] = resultText
}
}
}
// Only add to runtime chat history if this is the first node
const inputMessages = []
if (!runtimeChatHistory.length) {
inputMessages.push({ role: 'user', content: flowInput })
}
let returnRole = 'user'
if (returnResponseAs === 'assistantMessage') {
returnRole = 'assistant'
}
const returnOutput = {
id: nodeData.id,
name: this.name,
input: {
messages: [
{
role: 'user',
content: flowInput
}
]
},
output: {
content: resultText
},
state: newState,
chatHistory: [
...inputMessages,
{
role: returnRole,
content: resultText,
name: nodeData?.label ? nodeData?.label.toLowerCase().replace(/\s/g, '_').trim() : nodeData?.id
}
]
}
return returnOutput
} catch (error) {
console.error('ExecuteFlow Error:', error)
// Format error response
const errorResponse: any = {
id: nodeData.id,
name: this.name,
input: {
messages: [
{
role: 'user',
content: flowInput
}
]
},
error: {
name: error.name || 'Error',
message: error.message || 'An error occurred during the execution of the flow'
},
state
}
// Add more error details if available
if (error.response) {
errorResponse.error.status = error.response.status
errorResponse.error.statusText = error.response.statusText
errorResponse.error.data = error.response.data
errorResponse.error.headers = error.response.headers
}
throw new Error(error)
}
}
}
module.exports = { nodeClass: ExecuteFlow_Agentflow }
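Under the hood the node issues a single POST to the prediction endpoint of the selected flow. Roughly, with placeholder values:

// POST {baseURL}/api/v1/prediction/{selectedFlowId}
// Authorization: Bearer <chatflowApiKey> (only when a chatflowApi credential is attached)
const payload = {
    question: flowInput,    // the rendered Input field
    chatId: options.chatId, // reuses the caller's chat session
    overrideConfig          // parsed from Override Config when it is a JSON string
}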

View File

@@ -0,0 +1,368 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import axios, { AxiosRequestConfig, Method, ResponseType } from 'axios'
import FormData from 'form-data'
import * as querystring from 'querystring'
import { getCredentialData, getCredentialParam } from '../../../src/utils'
class HTTP_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
baseClasses: string[]
documentation?: string
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'HTTP'
this.name = 'httpAgentflow'
this.version = 1.0
this.type = 'HTTP'
this.category = 'Agent Flows'
this.description = 'Send an HTTP request'
this.baseClasses = [this.type]
this.color = '#FF7F7F'
this.credential = {
label: 'HTTP Credential',
name: 'credential',
type: 'credential',
credentialNames: ['httpBasicAuth', 'httpBearerToken', 'httpApiKey'],
optional: true
}
this.inputs = [
{
label: 'Method',
name: 'method',
type: 'options',
options: [
{
label: 'GET',
name: 'GET'
},
{
label: 'POST',
name: 'POST'
},
{
label: 'PUT',
name: 'PUT'
},
{
label: 'DELETE',
name: 'DELETE'
},
{
label: 'PATCH',
name: 'PATCH'
}
],
default: 'GET'
},
{
label: 'URL',
name: 'url',
type: 'string'
},
{
label: 'Headers',
name: 'headers',
type: 'array',
array: [
{
label: 'Key',
name: 'key',
type: 'string',
default: ''
},
{
label: 'Value',
name: 'value',
type: 'string',
default: ''
}
],
optional: true
},
{
label: 'Query Params',
name: 'queryParams',
type: 'array',
array: [
{
label: 'Key',
name: 'key',
type: 'string',
default: ''
},
{
label: 'Value',
name: 'value',
type: 'string',
default: ''
}
],
optional: true
},
{
label: 'Body Type',
name: 'bodyType',
type: 'options',
options: [
{
label: 'JSON',
name: 'json'
},
{
label: 'Raw',
name: 'raw'
},
{
label: 'Form Data',
name: 'formData'
},
{
label: 'x-www-form-urlencoded',
name: 'xWwwFormUrlencoded'
}
],
optional: true
},
{
label: 'Body',
name: 'body',
type: 'string',
acceptVariable: true,
rows: 4,
show: {
bodyType: ['raw', 'json']
},
optional: true
},
{
label: 'Body',
name: 'body',
type: 'array',
show: {
bodyType: ['xWwwFormUrlencoded', 'formData']
},
array: [
{
label: 'Key',
name: 'key',
type: 'string',
default: ''
},
{
label: 'Value',
name: 'value',
type: 'string',
default: ''
}
],
optional: true
},
{
label: 'Response Type',
name: 'responseType',
type: 'options',
options: [
{
label: 'JSON',
name: 'json'
},
{
label: 'Text',
name: 'text'
},
{
label: 'Array Buffer',
name: 'arraybuffer'
},
{
label: 'Raw (Base64)',
name: 'base64'
}
],
optional: true
}
]
}
async run(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const method = nodeData.inputs?.method as 'GET' | 'POST' | 'PUT' | 'DELETE' | 'PATCH'
const url = nodeData.inputs?.url as string
const headers = nodeData.inputs?.headers as ICommonObject
const queryParams = nodeData.inputs?.queryParams as ICommonObject
const bodyType = nodeData.inputs?.bodyType as 'json' | 'raw' | 'formData' | 'xWwwFormUrlencoded'
const body = nodeData.inputs?.body as ICommonObject | string | ICommonObject[]
const responseType = nodeData.inputs?.responseType as 'json' | 'text' | 'arraybuffer' | 'base64'
const state = options.agentflowRuntime?.state as ICommonObject
try {
// Prepare headers
const requestHeaders: Record<string, string> = {}
// Add headers from inputs
if (headers && Array.isArray(headers)) {
for (const header of headers) {
if (header.key && header.value) {
requestHeaders[header.key] = header.value
}
}
}
// Add credentials if provided
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
if (credentialData && Object.keys(credentialData).length !== 0) {
const basicAuthUsername = getCredentialParam('username', credentialData, nodeData)
const basicAuthPassword = getCredentialParam('password', credentialData, nodeData)
const bearerToken = getCredentialParam('token', credentialData, nodeData)
const apiKeyName = getCredentialParam('key', credentialData, nodeData)
const apiKeyValue = getCredentialParam('value', credentialData, nodeData)
// Determine which type of auth to use based on available credentials
if (basicAuthUsername && basicAuthPassword) {
// Basic Auth
const auth = Buffer.from(`${basicAuthUsername}:${basicAuthPassword}`).toString('base64')
requestHeaders['Authorization'] = `Basic ${auth}`
} else if (bearerToken) {
// Bearer Token
requestHeaders['Authorization'] = `Bearer ${bearerToken}`
} else if (apiKeyName && apiKeyValue) {
// API Key in header
requestHeaders[apiKeyName] = apiKeyValue
}
}
// Prepare query parameters
let queryString = ''
if (Array.isArray(queryParams)) {
const params = new URLSearchParams()
for (const param of queryParams) {
if (param.key && param.value) {
params.append(param.key, param.value)
}
}
queryString = params.toString()
}
// Build final URL with query parameters
const finalUrl = queryString ? `${url}${url.includes('?') ? '&' : '?'}${queryString}` : url
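// e.g. url 'https://api.example.com/items?page=1' with params 'q=apple' yields '...?page=1&q=apple' (illustrative URL)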
// Prepare request config
const requestConfig: AxiosRequestConfig = {
method: method as Method,
url: finalUrl,
headers: requestHeaders,
responseType: (responseType === 'base64' ? 'arraybuffer' : responseType || 'json') as ResponseType // axios has no 'base64' type; fetch raw bytes and encode below
}
// Handle request body based on body type
if (method !== 'GET' && body) {
switch (bodyType) {
case 'json':
requestConfig.data = typeof body === 'string' ? JSON.parse(body) : body
requestHeaders['Content-Type'] = 'application/json'
break
case 'raw':
requestConfig.data = body
break
case 'formData': {
const formData = new FormData()
if (Array.isArray(body) && body.length > 0) {
for (const item of body) {
formData.append(item.key, item.value)
}
}
requestConfig.data = formData
break
}
case 'xWwwFormUrlencoded':
requestConfig.data = querystring.stringify(typeof body === 'string' ? JSON.parse(body) : body)
requestHeaders['Content-Type'] = 'application/x-www-form-urlencoded'
break
}
}
// Make the HTTP request
const response = await axios(requestConfig)
// Process response based on response type
let responseData
if (responseType === 'base64' && response.data) {
// response.data is raw bytes here (requested as 'arraybuffer' above), so encode it directly
responseData = Buffer.from(response.data).toString('base64')
} else {
responseData = response.data
}
const returnOutput = {
id: nodeData.id,
name: this.name,
input: {
http: {
method,
url,
headers,
queryParams,
bodyType,
body,
responseType
}
},
output: {
http: {
data: responseData,
status: response.status,
statusText: response.statusText,
headers: response.headers
}
},
state
}
return returnOutput
} catch (error) {
console.error('HTTP Request Error:', error)
// Format error response
const errorResponse: any = {
id: nodeData.id,
name: this.name,
input: {
http: {
method,
url,
headers,
queryParams,
bodyType,
body,
responseType
}
},
error: {
name: error.name || 'Error',
message: error.message || 'An error occurred during the HTTP request'
},
state
}
// Add more error details if available
if (error.response) {
errorResponse.error.status = error.response.status
errorResponse.error.statusText = error.response.statusText
errorResponse.error.data = error.response.data
errorResponse.error.headers = error.response.headers
}
throw new Error(error instanceof Error ? error.message : String(error))
}
}
}
module.exports = { nodeClass: HTTP_Agentflow }


@ -0,0 +1,271 @@
import { BaseChatModel } from '@langchain/core/language_models/chat_models'
import {
ICommonObject,
ICondition,
IHumanInput,
INode,
INodeData,
INodeOptionsValue,
INodeOutputsValue,
INodeParams,
IServerSideEventStreamer
} from '../../../src/Interface'
import { AIMessageChunk, BaseMessageLike } from '@langchain/core/messages'
import { DEFAULT_HUMAN_INPUT_DESCRIPTION, DEFAULT_HUMAN_INPUT_DESCRIPTION_HTML } from '../prompt'
class HumanInput_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
baseClasses: string[]
documentation?: string
credential: INodeParams
inputs: INodeParams[]
outputs: INodeOutputsValue[]
constructor() {
this.label = 'Human Input'
this.name = 'humanInputAgentflow'
this.version = 1.0
this.type = 'HumanInput'
this.category = 'Agent Flows'
this.description = 'Request human input, approval or rejection during execution'
this.color = '#6E6EFD'
this.baseClasses = [this.type]
this.inputs = [
{
label: 'Description Type',
name: 'humanInputDescriptionType',
type: 'options',
options: [
{
label: 'Fixed',
name: 'fixed',
description: 'Specify a fixed description'
},
{
label: 'Dynamic',
name: 'dynamic',
description: 'Use LLM to generate a description'
}
]
},
{
label: 'Description',
name: 'humanInputDescription',
type: 'string',
placeholder: 'Are you sure you want to proceed?',
acceptVariable: true,
rows: 4,
show: {
humanInputDescriptionType: 'fixed'
}
},
{
label: 'Model',
name: 'humanInputModel',
type: 'asyncOptions',
loadMethod: 'listModels',
loadConfig: true,
show: {
humanInputDescriptionType: 'dynamic'
}
},
{
label: 'Prompt',
name: 'humanInputModelPrompt',
type: 'string',
default: DEFAULT_HUMAN_INPUT_DESCRIPTION_HTML,
acceptVariable: true,
generateInstruction: true,
rows: 4,
show: {
humanInputDescriptionType: 'dynamic'
}
},
{
label: 'Enable Feedback',
name: 'humanInputEnableFeedback',
type: 'boolean',
default: true
}
]
this.outputs = [
{
label: 'Proceed',
name: 'proceed'
},
{
label: 'Reject',
name: 'reject'
}
]
}
//@ts-ignore
loadMethods = {
async listModels(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const componentNodes = options.componentNodes as {
[key: string]: INode
}
const returnOptions: INodeOptionsValue[] = []
for (const nodeName in componentNodes) {
const componentNode = componentNodes[nodeName]
if (componentNode.category === 'Chat Models') {
if (componentNode.tags?.includes('LlamaIndex')) {
continue
}
returnOptions.push({
label: componentNode.label,
name: nodeName,
imageSrc: componentNode.icon
})
}
}
return returnOptions
}
}
async run(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const _humanInput = nodeData.inputs?.humanInput
const humanInput: IHumanInput = typeof _humanInput === 'string' ? JSON.parse(_humanInput) : _humanInput
const humanInputEnableFeedback = nodeData.inputs?.humanInputEnableFeedback as boolean
const humanInputDescriptionType = nodeData.inputs?.humanInputDescriptionType as string
const model = nodeData.inputs?.humanInputModel as string
const modelConfig = nodeData.inputs?.humanInputModelConfig as ICommonObject
const _humanInputModelPrompt = nodeData.inputs?.humanInputModelPrompt as string
const humanInputModelPrompt = _humanInputModelPrompt ? _humanInputModelPrompt : DEFAULT_HUMAN_INPUT_DESCRIPTION
// Extract runtime state and history
const state = options.agentflowRuntime?.state as ICommonObject
const pastChatHistory = (options.pastChatHistory as BaseMessageLike[]) ?? []
const runtimeChatHistory = (options.agentflowRuntime?.chatHistory as BaseMessageLike[]) ?? []
const chatId = options.chatId as string
const isStreamable = options.sseStreamer !== undefined
if (humanInput) {
const outcomes: Partial<ICondition>[] & Partial<IHumanInput>[] = [
{
type: 'proceed',
startNodeId: humanInput?.startNodeId,
feedback: humanInputEnableFeedback && humanInput?.feedback ? humanInput.feedback : undefined,
isFulfilled: false
},
{
type: 'reject',
startNodeId: humanInput?.startNodeId,
feedback: humanInputEnableFeedback && humanInput?.feedback ? humanInput.feedback : undefined,
isFulfilled: false
}
]
// Only one outcome can be fulfilled at a time
switch (humanInput?.type) {
case 'proceed':
outcomes[0].isFulfilled = true
break
case 'reject':
outcomes[1].isFulfilled = true
break
}
const messages = [
...pastChatHistory,
...runtimeChatHistory,
{
role: 'user',
content: humanInput.feedback || humanInput.type
}
]
const input = { ...humanInput, messages }
const output = { conditions: outcomes }
const nodeOutput = {
id: nodeData.id,
name: this.name,
input,
output,
state
}
if (humanInput.feedback) {
;(nodeOutput as any).chatHistory = [{ role: 'user', content: humanInput.feedback }]
}
return nodeOutput
} else {
let humanInputDescription = ''
if (humanInputDescriptionType === 'fixed') {
humanInputDescription = (nodeData.inputs?.humanInputDescription as string) || 'Do you want to proceed?'
const messages = [...pastChatHistory, ...runtimeChatHistory]
// Find the last message in the messages array
const lastMessage = (messages[messages.length - 1] as any)?.content || ''
humanInputDescription = `${lastMessage}\n\n${humanInputDescription}`
if (isStreamable) {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
sseStreamer.streamTokenEvent(chatId, humanInputDescription)
}
} else {
if (model && modelConfig) {
const nodeInstanceFilePath = options.componentNodes[model].filePath as string
const nodeModule = await import(nodeInstanceFilePath)
const newNodeInstance = new nodeModule.nodeClass()
const newNodeData = {
...nodeData,
credential: modelConfig['FLOWISE_CREDENTIAL_ID'],
inputs: {
...nodeData.inputs,
...modelConfig
}
}
const llmNodeInstance = (await newNodeInstance.init(newNodeData, '', options)) as BaseChatModel
const messages = [
...pastChatHistory,
...runtimeChatHistory,
{
role: 'user',
content: humanInputModelPrompt || DEFAULT_HUMAN_INPUT_DESCRIPTION
}
]
let response: AIMessageChunk = new AIMessageChunk('')
if (isStreamable) {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
for await (const chunk of await llmNodeInstance.stream(messages)) {
sseStreamer.streamTokenEvent(chatId, chunk.content.toString())
response = response.concat(chunk)
}
humanInputDescription = response.content as string
} else {
const llmResponse = await llmNodeInstance.invoke(messages)
humanInputDescription = llmResponse.content as string
}
}
}
const input = { messages: [...pastChatHistory, ...runtimeChatHistory], humanInputEnableFeedback }
const output = { content: humanInputDescription }
const nodeOutput = {
id: nodeData.id,
name: this.name,
input,
output,
state,
chatHistory: [{ role: 'assistant', content: humanInputDescription }]
}
return nodeOutput
}
}
}
module.exports = { nodeClass: HumanInput_Agentflow }


@ -0,0 +1,17 @@
export interface ILLMMessage {
role: 'system' | 'assistant' | 'user' | 'tool' | 'developer'
content: string
}
export interface IStructuredOutput {
key: string
type: 'string' | 'stringArray' | 'number' | 'boolean' | 'enum' | 'jsonArray'
enumValues?: string
description?: string
jsonSchema?: string
}
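// Illustrative example (not from the original file): an entry instructing the LLM to
// return an array of strings under the key 'tags'
// const tagsOutput: IStructuredOutput = { key: 'tags', type: 'stringArray', description: 'Topic tags' }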
export interface IFlowState {
key: string
value: string
}


@ -0,0 +1,69 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
class Iteration_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
baseClasses: string[]
documentation?: string
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Iteration'
this.name = 'iterationAgentflow'
this.version = 1.0
this.type = 'Iteration'
this.category = 'Agent Flows'
this.description = 'Execute the nodes within the iteration block through N iterations'
this.baseClasses = [this.type]
this.color = '#9C89B8'
this.inputs = [
{
label: 'Array Input',
name: 'iterationInput',
type: 'string',
description: 'The input array to iterate over',
acceptVariable: true,
rows: 4
}
]
}
async run(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const iterationInput = nodeData.inputs?.iterationInput
// Helper function to clean JSON strings with redundant backslashes
const cleanJsonString = (str: string): string => {
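// e.g. the string [\"apple\", \"banana\"] becomes ["apple", "banana"] so JSON.parse can handle over-escaped input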
return str.replace(/\\(["'[\]{}])/g, '$1')
}
const iterationInputArray =
typeof iterationInput === 'string' && iterationInput !== '' ? JSON.parse(cleanJsonString(iterationInput)) : iterationInput
if (!iterationInputArray || !Array.isArray(iterationInputArray)) {
throw new Error('Invalid input array')
}
const state = options.agentflowRuntime?.state as ICommonObject
const returnOutput = {
id: nodeData.id,
name: this.name,
input: {
iterationInput: iterationInputArray
},
output: {},
state
}
return returnOutput
}
}
module.exports = { nodeClass: Iteration_Agentflow }


@ -0,0 +1,981 @@
import { BaseChatModel } from '@langchain/core/language_models/chat_models'
import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams, IServerSideEventStreamer } from '../../../src/Interface'
import { AIMessageChunk, BaseMessageLike, MessageContentText } from '@langchain/core/messages'
import { DEFAULT_SUMMARIZER_TEMPLATE } from '../prompt'
import { z } from 'zod'
import { AnalyticHandler } from '../../../src/handler'
import { ILLMMessage, IStructuredOutput } from '../Interface.Agentflow'
import {
getPastChatHistoryImageMessages,
getUniqueImageMessages,
processMessagesWithImages,
replaceBase64ImagesWithFileReferences,
updateFlowState
} from '../utils'
import { get } from 'lodash'
class LLM_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
baseClasses: string[]
documentation?: string
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'LLM'
this.name = 'llmAgentflow'
this.version = 1.0
this.type = 'LLM'
this.category = 'Agent Flows'
this.description = 'Large language models to analyze user-provided inputs and generate responses'
this.color = '#64B5F6'
this.baseClasses = [this.type]
this.inputs = [
{
label: 'Model',
name: 'llmModel',
type: 'asyncOptions',
loadMethod: 'listModels',
loadConfig: true
},
{
label: 'Messages',
name: 'llmMessages',
type: 'array',
optional: true,
acceptVariable: true,
array: [
{
label: 'Role',
name: 'role',
type: 'options',
options: [
{
label: 'System',
name: 'system'
},
{
label: 'Assistant',
name: 'assistant'
},
{
label: 'Developer',
name: 'developer'
},
{
label: 'User',
name: 'user'
}
]
},
{
label: 'Content',
name: 'content',
type: 'string',
acceptVariable: true,
generateInstruction: true,
rows: 4
}
]
},
{
label: 'Enable Memory',
name: 'llmEnableMemory',
type: 'boolean',
description: 'Enable memory for the conversation thread',
default: true,
optional: true
},
{
label: 'Memory Type',
name: 'llmMemoryType',
type: 'options',
options: [
{
label: 'All Messages',
name: 'allMessages',
description: 'Retrieve all messages from the conversation'
},
{
label: 'Window Size',
name: 'windowSize',
description: 'Uses a fixed window size to surface the last N messages'
},
{
label: 'Conversation Summary',
name: 'conversationSummary',
description: 'Summarizes the whole conversation'
},
{
label: 'Conversation Summary Buffer',
name: 'conversationSummaryBuffer',
description: 'Summarize conversations once token limit is reached. Defaults to 2000'
}
],
optional: true,
default: 'allMessages',
show: {
llmEnableMemory: true
}
},
{
label: 'Window Size',
name: 'llmMemoryWindowSize',
type: 'number',
default: '20',
description: 'Uses a fixed window size to surface the last N messages',
show: {
llmMemoryType: 'windowSize'
}
},
{
label: 'Max Token Limit',
name: 'llmMemoryMaxTokenLimit',
type: 'number',
default: '2000',
description: 'Summarize conversations once token limit is reached. Defaults to 2000',
show: {
llmMemoryType: 'conversationSummaryBuffer'
}
},
{
label: 'Input Message',
name: 'llmUserMessage',
type: 'string',
description: 'Add an input message as a user message at the end of the conversation',
rows: 4,
optional: true,
acceptVariable: true,
show: {
llmEnableMemory: true
}
},
{
label: 'Return Response As',
name: 'llmReturnResponseAs',
type: 'options',
options: [
{
label: 'User Message',
name: 'userMessage'
},
{
label: 'Assistant Message',
name: 'assistantMessage'
}
],
default: 'userMessage'
},
{
label: 'JSON Structured Output',
name: 'llmStructuredOutput',
description: 'Instruct the LLM to give output in a JSON structured schema',
type: 'array',
optional: true,
acceptVariable: true,
array: [
{
label: 'Key',
name: 'key',
type: 'string'
},
{
label: 'Type',
name: 'type',
type: 'options',
options: [
{
label: 'String',
name: 'string'
},
{
label: 'String Array',
name: 'stringArray'
},
{
label: 'Number',
name: 'number'
},
{
label: 'Boolean',
name: 'boolean'
},
{
label: 'Enum',
name: 'enum'
},
{
label: 'JSON Array',
name: 'jsonArray'
}
]
},
{
label: 'Enum Values',
name: 'enumValues',
type: 'string',
placeholder: 'value1, value2, value3',
description: 'Enum values, separated by commas',
optional: true,
show: {
'llmStructuredOutput[$index].type': 'enum'
}
},
{
label: 'JSON Schema',
name: 'jsonSchema',
type: 'code',
placeholder: `{
"answer": {
"type": "string",
"description": "Value of the answer"
},
"reason": {
"type": "string",
"description": "Reason for the answer"
},
"optional": {
"type": "boolean"
},
"count": {
"type": "number"
},
"children": {
"type": "array",
"items": {
"type": "object",
"properties": {
"value": {
"type": "string",
"description": "Value of the children's answer"
}
}
}
}
}`,
description: 'JSON schema for the structured output',
optional: true,
show: {
'llmStructuredOutput[$index].type': 'jsonArray'
}
},
{
label: 'Description',
name: 'description',
type: 'string',
placeholder: 'Description of the key'
}
]
},
{
label: 'Update Flow State',
name: 'llmUpdateState',
description: 'Update runtime state during the execution of the workflow',
type: 'array',
optional: true,
acceptVariable: true,
array: [
{
label: 'Key',
name: 'key',
type: 'asyncOptions',
loadMethod: 'listRuntimeStateKeys',
freeSolo: true
},
{
label: 'Value',
name: 'value',
type: 'string',
acceptVariable: true,
acceptNodeOutputAsVariable: true
}
]
}
]
}
//@ts-ignore
loadMethods = {
async listModels(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const componentNodes = options.componentNodes as {
[key: string]: INode
}
const returnOptions: INodeOptionsValue[] = []
for (const nodeName in componentNodes) {
const componentNode = componentNodes[nodeName]
if (componentNode.category === 'Chat Models') {
if (componentNode.tags?.includes('LlamaIndex')) {
continue
}
returnOptions.push({
label: componentNode.label,
name: nodeName,
imageSrc: componentNode.icon
})
}
}
return returnOptions
},
async listRuntimeStateKeys(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const previousNodes = options.previousNodes as ICommonObject[]
const startAgentflowNode = previousNodes.find((node) => node.name === 'startAgentflow')
const state = startAgentflowNode?.inputs?.startState as ICommonObject[]
return (state || []).map((item) => ({ label: item.key, name: item.key }))
}
}
async run(nodeData: INodeData, input: string | Record<string, any>, options: ICommonObject): Promise<any> {
let llmIds: ICommonObject | undefined
let analyticHandlers = options.analyticHandlers as AnalyticHandler
try {
const abortController = options.abortController as AbortController
// Extract input parameters
const model = nodeData.inputs?.llmModel as string
const modelConfig = nodeData.inputs?.llmModelConfig as ICommonObject
if (!model) {
throw new Error('Model is required')
}
// Extract memory and configuration options
const enableMemory = nodeData.inputs?.llmEnableMemory as boolean
const memoryType = nodeData.inputs?.llmMemoryType as string
const userMessage = nodeData.inputs?.llmUserMessage as string
const _llmUpdateState = nodeData.inputs?.llmUpdateState
const _llmStructuredOutput = nodeData.inputs?.llmStructuredOutput
const llmMessages = (nodeData.inputs?.llmMessages as unknown as ILLMMessage[]) ?? []
// Extract runtime state and history
const state = options.agentflowRuntime?.state as ICommonObject
const pastChatHistory = (options.pastChatHistory as BaseMessageLike[]) ?? []
const runtimeChatHistory = (options.agentflowRuntime?.chatHistory as BaseMessageLike[]) ?? []
const chatId = options.chatId as string
// Initialize the LLM model instance
const nodeInstanceFilePath = options.componentNodes[model].filePath as string
const nodeModule = await import(nodeInstanceFilePath)
const newLLMNodeInstance = new nodeModule.nodeClass()
const newNodeData = {
...nodeData,
credential: modelConfig['FLOWISE_CREDENTIAL_ID'],
inputs: {
...nodeData.inputs,
...modelConfig
}
}
let llmNodeInstance = (await newLLMNodeInstance.init(newNodeData, '', options)) as BaseChatModel
// Prepare messages array
const messages: BaseMessageLike[] = []
// Use to store messages with image file references as we do not want to store the base64 data into database
let runtimeImageMessagesWithFileRef: BaseMessageLike[] = []
// Use to keep track of past messages with image file references
let pastImageMessagesWithFileRef: BaseMessageLike[] = []
for (const msg of llmMessages) {
const role = msg.role
const content = msg.content
if (role && content) {
messages.push({ role, content })
}
}
// Handle memory management if enabled
if (enableMemory) {
await this.handleMemory({
messages,
memoryType,
pastChatHistory,
runtimeChatHistory,
llmNodeInstance,
nodeData,
userMessage,
input,
abortController,
options,
modelConfig,
runtimeImageMessagesWithFileRef,
pastImageMessagesWithFileRef
})
} else if (!runtimeChatHistory.length) {
/*
* If this is the first node:
* - Add images to messages if exist
* - Add user message
*/
if (options.uploads) {
const imageContents = await getUniqueImageMessages(options, messages, modelConfig)
if (imageContents) {
const { imageMessageWithBase64, imageMessageWithFileRef } = imageContents
messages.push(imageMessageWithBase64)
runtimeImageMessagesWithFileRef.push(imageMessageWithFileRef)
}
}
if (input && typeof input === 'string') {
messages.push({
role: 'user',
content: input
})
}
}
delete nodeData.inputs?.llmMessages
// Configure structured output if specified
const isStructuredOutput = _llmStructuredOutput && Array.isArray(_llmStructuredOutput) && _llmStructuredOutput.length > 0
if (isStructuredOutput) {
llmNodeInstance = this.configureStructuredOutput(llmNodeInstance, _llmStructuredOutput)
}
// Initialize response and determine if streaming is possible
let response: AIMessageChunk = new AIMessageChunk('')
const isLastNode = options.isLastNode as boolean
const isStreamable = isLastNode && options.sseStreamer !== undefined && modelConfig?.streaming !== false && !isStructuredOutput
// Start analytics
if (analyticHandlers && options.parentTraceIds) {
const llmLabel = options?.componentNodes?.[model]?.label || model
llmIds = await analyticHandlers.onLLMStart(llmLabel, messages, options.parentTraceIds)
}
// Track execution time
const startTime = Date.now()
const sseStreamer: IServerSideEventStreamer | undefined = options.sseStreamer
if (isStreamable) {
response = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
} else {
response = await llmNodeInstance.invoke(messages, { signal: abortController?.signal })
// Stream whole response back to UI if this is the last node
if (isLastNode && options.sseStreamer) {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
sseStreamer.streamTokenEvent(chatId, JSON.stringify(response, null, 2))
}
}
// Calculate execution time
const endTime = Date.now()
const timeDelta = endTime - startTime
// Update flow state if needed
let newState = { ...state }
if (_llmUpdateState && Array.isArray(_llmUpdateState) && _llmUpdateState.length > 0) {
newState = updateFlowState(state, _llmUpdateState)
}
// Clean up empty inputs
for (const key in nodeData.inputs) {
if (nodeData.inputs[key] === '') {
delete nodeData.inputs[key]
}
}
// Prepare final response and output object
const finalResponse = (response.content as string) ?? JSON.stringify(response, null, 2)
const output = this.prepareOutputObject(response, finalResponse, startTime, endTime, timeDelta)
// End analytics tracking
if (analyticHandlers && llmIds) {
await analyticHandlers.onLLMEnd(llmIds, finalResponse)
}
// Send additional streaming events if needed
if (isStreamable) {
this.sendStreamingEvents(options, chatId, response)
}
// Process template variables in state
if (newState && Object.keys(newState).length > 0) {
for (const key in newState) {
const stateValue = newState[key] == null ? '' : newState[key].toString()
if (stateValue.includes('{{ output')) {
// Handle simple output replacement
if (stateValue === '{{ output }}') {
newState[key] = finalResponse
continue
}
// Handle JSON path expressions like {{ output.item1 }}
// eslint-disable-next-line
const match = stateValue.match(/{{[\s]*output\.([\w\.]+)[\s]*}}/)
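// e.g. a state value of '{{ output.user.name }}' resolves the path 'user.name' in the parsed JSON response (illustrative key)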
if (match) {
try {
// Parse the response if it's JSON
const jsonResponse = typeof finalResponse === 'string' ? JSON.parse(finalResponse) : finalResponse
// Get the value using lodash get
const path = match[1]
const value = get(jsonResponse, path)
newState[key] = value ?? stateValue // Fall back to original if path not found
} catch (e) {
// If JSON parsing fails, keep original template
console.warn(`Failed to parse JSON or find path in output: ${e}`)
newState[key] = stateValue
}
}
}
}
}
// Replace the actual messages array with one that includes the file references for images instead of base64 data
const messagesWithFileReferences = replaceBase64ImagesWithFileReferences(
messages,
runtimeImageMessagesWithFileRef,
pastImageMessagesWithFileRef
)
// Only add to runtime chat history if this is the first node
const inputMessages = []
if (!runtimeChatHistory.length) {
if (runtimeImageMessagesWithFileRef.length) {
inputMessages.push(...runtimeImageMessagesWithFileRef)
}
if (input && typeof input === 'string') {
inputMessages.push({ role: 'user', content: input })
}
}
const returnResponseAs = nodeData.inputs?.llmReturnResponseAs as string
let returnRole = 'user'
if (returnResponseAs === 'assistantMessage') {
returnRole = 'assistant'
}
// Prepare and return the final output
return {
id: nodeData.id,
name: this.name,
input: {
messages: messagesWithFileReferences,
...nodeData.inputs
},
output,
state: newState,
chatHistory: [
...inputMessages,
// LLM response
{
role: returnRole,
content: finalResponse,
name: nodeData?.label ? nodeData?.label.toLowerCase().replace(/\s/g, '_').trim() : nodeData?.id
}
]
}
} catch (error) {
if (options.analyticHandlers && llmIds) {
await options.analyticHandlers.onLLMError(llmIds, error instanceof Error ? error.message : String(error))
}
if (error instanceof Error && error.message === 'Aborted') {
throw error
}
throw new Error(`Error in LLM node: ${error instanceof Error ? error.message : String(error)}`)
}
}
/**
* Handles memory management based on the specified memory type
*/
private async handleMemory({
messages,
memoryType,
pastChatHistory,
runtimeChatHistory,
llmNodeInstance,
nodeData,
userMessage,
input,
abortController,
options,
modelConfig,
runtimeImageMessagesWithFileRef,
pastImageMessagesWithFileRef
}: {
messages: BaseMessageLike[]
memoryType: string
pastChatHistory: BaseMessageLike[]
runtimeChatHistory: BaseMessageLike[]
llmNodeInstance: BaseChatModel
nodeData: INodeData
userMessage: string
input: string | Record<string, any>
abortController: AbortController
options: ICommonObject
modelConfig: ICommonObject
runtimeImageMessagesWithFileRef: BaseMessageLike[]
pastImageMessagesWithFileRef: BaseMessageLike[]
}): Promise<void> {
const { updatedPastMessages, transformedPastMessages } = await getPastChatHistoryImageMessages(pastChatHistory, options)
pastChatHistory = updatedPastMessages
pastImageMessagesWithFileRef.push(...transformedPastMessages)
let pastMessages = [...pastChatHistory, ...runtimeChatHistory]
if (!runtimeChatHistory.length && input && typeof input === 'string') {
/*
* If this is the first node:
* - Add images to messages if exist
* - Add user message
*/
if (options.uploads) {
const imageContents = await getUniqueImageMessages(options, messages, modelConfig)
if (imageContents) {
const { imageMessageWithBase64, imageMessageWithFileRef } = imageContents
pastMessages.push(imageMessageWithBase64)
runtimeImageMessagesWithFileRef.push(imageMessageWithFileRef)
}
}
pastMessages.push({
role: 'user',
content: input
})
}
const { updatedMessages, transformedMessages } = await processMessagesWithImages(pastMessages, options)
pastMessages = updatedMessages
pastImageMessagesWithFileRef.push(...transformedMessages)
if (pastMessages.length > 0) {
if (memoryType === 'windowSize') {
// Window memory: Keep the last N messages
const windowSize = nodeData.inputs?.llmMemoryWindowSize as number
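// Multiplied by 2, presumably to keep the last N user/assistant exchanges (two messages per turn)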
const windowedMessages = pastMessages.slice(-windowSize * 2)
messages.push(...windowedMessages)
} else if (memoryType === 'conversationSummary') {
// Summary memory: Summarize all past messages
const summary = await llmNodeInstance.invoke(
[
{
role: 'user',
content: DEFAULT_SUMMARIZER_TEMPLATE.replace(
'{conversation}',
pastMessages.map((msg: any) => `${msg.role}: ${msg.content}`).join('\n')
)
}
],
{ signal: abortController?.signal }
)
messages.push({ role: 'assistant', content: summary.content as string })
} else if (memoryType === 'conversationSummaryBuffer') {
// Summary buffer: Summarize messages that exceed token limit
await this.handleSummaryBuffer(messages, pastMessages, llmNodeInstance, nodeData, abortController)
} else {
// Default: Use all messages
messages.push(...pastMessages)
}
}
// Add user message
if (userMessage) {
messages.push({
role: 'user',
content: userMessage
})
}
}
/**
* Handles conversation summary buffer memory type
*/
private async handleSummaryBuffer(
messages: BaseMessageLike[],
pastMessages: BaseMessageLike[],
llmNodeInstance: BaseChatModel,
nodeData: INodeData,
abortController: AbortController
): Promise<void> {
const maxTokenLimit = (nodeData.inputs?.llmMemoryMaxTokenLimit as number) || 2000
// Convert past messages to a format suitable for token counting
const messagesString = pastMessages.map((msg: any) => `${msg.role}: ${msg.content}`).join('\n')
const tokenCount = await llmNodeInstance.getNumTokens(messagesString)
if (tokenCount > maxTokenLimit) {
// Calculate how many messages to summarize (messages that exceed the token limit)
let currBufferLength = tokenCount
const messagesToSummarize = []
const remainingMessages = [...pastMessages]
// Remove messages from the beginning until we're under the token limit
while (currBufferLength > maxTokenLimit && remainingMessages.length > 0) {
const poppedMessage = remainingMessages.shift()
if (poppedMessage) {
messagesToSummarize.push(poppedMessage)
// Recalculate token count for remaining messages
const remainingMessagesString = remainingMessages.map((msg: any) => `${msg.role}: ${msg.content}`).join('\n')
currBufferLength = await llmNodeInstance.getNumTokens(remainingMessagesString)
}
}
// Summarize the messages that were removed
const messagesToSummarizeString = messagesToSummarize.map((msg: any) => `${msg.role}: ${msg.content}`).join('\n')
const summary = await llmNodeInstance.invoke(
[
{
role: 'user',
content: DEFAULT_SUMMARIZER_TEMPLATE.replace('{conversation}', messagesToSummarizeString)
}
],
{ signal: abortController?.signal }
)
// Add summary as a system message at the beginning, then add remaining messages
messages.push({ role: 'system', content: `Previous conversation summary: ${summary.content}` })
messages.push(...remainingMessages)
} else {
// If under token limit, use all messages
messages.push(...pastMessages)
}
}
/**
* Configures structured output for the LLM
*/
private configureStructuredOutput(llmNodeInstance: BaseChatModel, llmStructuredOutput: IStructuredOutput[]): BaseChatModel {
try {
const zodObj: ICommonObject = {}
for (const sch of llmStructuredOutput) {
if (sch.type === 'string') {
zodObj[sch.key] = z.string().describe(sch.description || '')
} else if (sch.type === 'stringArray') {
zodObj[sch.key] = z.array(z.string()).describe(sch.description || '')
} else if (sch.type === 'number') {
zodObj[sch.key] = z.number().describe(sch.description || '')
} else if (sch.type === 'boolean') {
zodObj[sch.key] = z.boolean().describe(sch.description || '')
} else if (sch.type === 'enum') {
const enumValues = sch.enumValues?.split(',').map((item: string) => item.trim()) || []
zodObj[sch.key] = z
.enum(enumValues.length ? (enumValues as [string, ...string[]]) : ['default'])
.describe(sch.description || '')
} else if (sch.type === 'jsonArray') {
const jsonSchema = sch.jsonSchema
if (jsonSchema) {
try {
// Parse the JSON schema
const schemaObj = JSON.parse(jsonSchema)
// Create a Zod schema from the JSON schema
const itemSchema = this.createZodSchemaFromJSON(schemaObj)
// Create an array schema of the item schema
zodObj[sch.key] = z.array(itemSchema).describe(sch.description || '')
} catch (err) {
console.error(`Error parsing JSON schema for ${sch.key}:`, err)
// Fallback to generic array of records
zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
}
} else {
// If no schema provided, use generic array of records
zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
}
}
}
const structuredOutput = z.object(zodObj)
// @ts-ignore
return llmNodeInstance.withStructuredOutput(structuredOutput)
} catch (exception) {
console.error(exception)
return llmNodeInstance
}
}
/**
* Handles streaming response from the LLM
*/
private async handleStreamingResponse(
sseStreamer: IServerSideEventStreamer | undefined,
llmNodeInstance: BaseChatModel,
messages: BaseMessageLike[],
chatId: string,
abortController: AbortController
): Promise<AIMessageChunk> {
let response = new AIMessageChunk('')
try {
for await (const chunk of await llmNodeInstance.stream(messages, { signal: abortController?.signal })) {
if (sseStreamer) {
let content = ''
if (Array.isArray(chunk.content) && chunk.content.length > 0) {
const contents = chunk.content as MessageContentText[]
content = contents.map((item) => item.text).join('')
} else {
content = chunk.content.toString()
}
sseStreamer.streamTokenEvent(chatId, content)
}
response = response.concat(chunk)
}
} catch (error) {
console.error('Error during streaming:', error)
throw error
}
if (Array.isArray(response.content) && response.content.length > 0) {
const responseContents = response.content as MessageContentText[]
response.content = responseContents.map((item) => item.text).join('')
}
return response
}
/**
* Prepares the output object with response and metadata
*/
private prepareOutputObject(
response: AIMessageChunk,
finalResponse: string,
startTime: number,
endTime: number,
timeDelta: number
): any {
const output: any = {
content: finalResponse,
timeMetadata: {
start: startTime,
end: endTime,
delta: timeDelta
}
}
if (response.tool_calls) {
output.calledTools = response.tool_calls
}
if (response.usage_metadata) {
output.usageMetadata = response.usage_metadata
}
return output
}
/**
* Sends additional streaming events for tool calls and metadata
*/
private sendStreamingEvents(options: ICommonObject, chatId: string, response: AIMessageChunk): void {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
if (response.tool_calls) {
sseStreamer.streamCalledToolsEvent(chatId, response.tool_calls)
}
if (response.usage_metadata) {
sseStreamer.streamUsageMetadataEvent(chatId, response.usage_metadata)
}
sseStreamer.streamEndEvent(chatId)
}
/**
* Creates a Zod schema from a JSON schema object
* @param jsonSchema The JSON schema object
* @returns A Zod schema
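* @example
* // { "name": { "type": "string" } } is turned into z.object({ name: z.string() })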
*/
private createZodSchemaFromJSON(jsonSchema: any): z.ZodTypeAny {
// If the schema is an object with properties, create an object schema
if (typeof jsonSchema === 'object' && jsonSchema !== null) {
const schemaObj: Record<string, z.ZodTypeAny> = {}
// Process each property in the schema
for (const [key, value] of Object.entries(jsonSchema)) {
if (value === null) {
// Handle null values
schemaObj[key] = z.null()
} else if (typeof value === 'object' && !Array.isArray(value)) {
// Check if the property has a type definition
if ('type' in value) {
const type = value.type as string
const description = ('description' in value ? (value.description as string) : '') || ''
// Create the appropriate Zod type based on the type property
if (type === 'string') {
schemaObj[key] = z.string().describe(description)
} else if (type === 'number') {
schemaObj[key] = z.number().describe(description)
} else if (type === 'boolean') {
schemaObj[key] = z.boolean().describe(description)
} else if (type === 'array') {
// If it's an array type, check if items is defined
if ('items' in value && value.items) {
const itemSchema = this.createZodSchemaFromJSON(value.items)
schemaObj[key] = z.array(itemSchema).describe(description)
} else {
// Default to array of any if items not specified
schemaObj[key] = z.array(z.any()).describe(description)
}
} else if (type === 'object') {
// If it's an object type, check if properties is defined
if ('properties' in value && value.properties) {
const nestedSchema = this.createZodSchemaFromJSON(value.properties)
schemaObj[key] = nestedSchema.describe(description)
} else {
// Default to record of any if properties not specified
schemaObj[key] = z.record(z.any()).describe(description)
}
} else {
// Default to any for unknown types
schemaObj[key] = z.any().describe(description)
}
// Check if the property is optional
if ('optional' in value && value.optional === true) {
schemaObj[key] = schemaObj[key].optional()
}
} else if (Array.isArray(value)) {
// Array values without a type property
if (value.length > 0) {
// If the array has items, recursively create a schema for the first item
const itemSchema = this.createZodSchemaFromJSON(value[0])
schemaObj[key] = z.array(itemSchema)
} else {
// Empty array, allow any array
schemaObj[key] = z.array(z.any())
}
} else {
// It's a nested object without a type property, recursively create schema
schemaObj[key] = this.createZodSchemaFromJSON(value)
}
} else if (Array.isArray(value)) {
// Array values
if (value.length > 0) {
// If the array has items, recursively create a schema for the first item
const itemSchema = this.createZodSchemaFromJSON(value[0])
schemaObj[key] = z.array(itemSchema)
} else {
// Empty array, allow any array
schemaObj[key] = z.array(z.any())
}
} else {
// For primitive values (which shouldn't be in the schema directly)
// Use the corresponding Zod type
if (typeof value === 'string') {
schemaObj[key] = z.string()
} else if (typeof value === 'number') {
schemaObj[key] = z.number()
} else if (typeof value === 'boolean') {
schemaObj[key] = z.boolean()
} else {
schemaObj[key] = z.any()
}
}
}
return z.object(schemaObj)
}
// Fallback to any for unknown types
return z.any()
}
}
module.exports = { nodeClass: LLM_Agentflow }


@ -0,0 +1,94 @@
import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams } from '../../../src/Interface'
class Loop_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
hideOutput: boolean
hint: string
baseClasses: string[]
documentation?: string
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Loop'
this.name = 'loopAgentflow'
this.version = 1.0
this.type = 'Loop'
this.category = 'Agent Flows'
this.description = 'Loop back to a previous node'
this.baseClasses = [this.type]
this.color = '#FFA07A'
this.hint = 'Make sure to have memory enabled in the LLM/Agent node to retain the chat history'
this.hideOutput = true
this.inputs = [
{
label: 'Loop Back To',
name: 'loopBackToNode',
type: 'asyncOptions',
loadMethod: 'listPreviousNodes',
freeSolo: true
},
{
label: 'Max Loop Count',
name: 'maxLoopCount',
type: 'number',
default: 5
}
]
}
//@ts-ignore
loadMethods = {
async listPreviousNodes(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const previousNodes = options.previousNodes as ICommonObject[]
const returnOptions: INodeOptionsValue[] = []
for (const node of previousNodes) {
returnOptions.push({
label: node.label,
name: `${node.id}-${node.label}`,
description: node.id
})
}
return returnOptions
}
}
async run(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const loopBackToNode = nodeData.inputs?.loopBackToNode as string
const _maxLoopCount = nodeData.inputs?.maxLoopCount as string
const state = options.agentflowRuntime?.state as ICommonObject
// Node labels may themselves contain '-', so only treat the first dash as the separator
const [loopBackToNodeId, ...labelParts] = loopBackToNode.split('-')
const loopBackToNodeLabel = labelParts.join('-')
const data = {
nodeID: loopBackToNodeId,
maxLoopCount: _maxLoopCount ? parseInt(_maxLoopCount) : 5
}
const returnOutput = {
id: nodeData.id,
name: this.name,
input: data,
output: {
content: 'Loop back to ' + `${loopBackToNodeLabel} (${loopBackToNodeId})`,
nodeID: loopBackToNodeId,
maxLoopCount: _maxLoopCount ? parseInt(_maxLoopCount) : 5
},
state
}
return returnOutput
}
}
module.exports = { nodeClass: Loop_Agentflow }


@ -0,0 +1,227 @@
import {
ICommonObject,
IDatabaseEntity,
INode,
INodeData,
INodeOptionsValue,
INodeParams,
IServerSideEventStreamer
} from '../../../src/Interface'
import { updateFlowState } from '../utils'
import { DataSource } from 'typeorm'
import { BaseRetriever } from '@langchain/core/retrievers'
import { Document } from '@langchain/core/documents'
interface IKnowledgeBase {
documentStore: string
}
class Retriever_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
hideOutput: boolean
hint: string
baseClasses: string[]
documentation?: string
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Retriever'
this.name = 'retrieverAgentflow'
this.version = 1.0
this.type = 'Retriever'
this.category = 'Agent Flows'
this.description = 'Retrieve information from a vector database'
this.baseClasses = [this.type]
this.color = '#b8bedd'
this.inputs = [
{
label: 'Knowledge (Document Stores)',
name: 'retrieverKnowledgeDocumentStores',
type: 'array',
description: 'Document stores to retrieve information from. Document stores must be upserted in advance.',
array: [
{
label: 'Document Store',
name: 'documentStore',
type: 'asyncOptions',
loadMethod: 'listStores'
}
]
},
{
label: 'Retriever Query',
name: 'retrieverQuery',
type: 'string',
placeholder: 'Enter your query here',
rows: 4,
acceptVariable: true
},
{
label: 'Output Format',
name: 'outputFormat',
type: 'options',
options: [
{ label: 'Text', name: 'text' },
{ label: 'Text with Metadata', name: 'textWithMetadata' }
],
default: 'text'
},
{
label: 'Update Flow State',
name: 'retrieverUpdateState',
description: 'Update runtime state during the execution of the workflow',
type: 'array',
optional: true,
acceptVariable: true,
array: [
{
label: 'Key',
name: 'key',
type: 'asyncOptions',
loadMethod: 'listRuntimeStateKeys',
freeSolo: true
},
{
label: 'Value',
name: 'value',
type: 'string',
acceptVariable: true,
acceptNodeOutputAsVariable: true
}
]
}
]
}
//@ts-ignore
loadMethods = {
async listRuntimeStateKeys(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const previousNodes = options.previousNodes as ICommonObject[]
const startAgentflowNode = previousNodes.find((node) => node.name === 'startAgentflow')
const state = startAgentflowNode?.inputs?.startState as ICommonObject[]
return (state || []).map((item) => ({ label: item.key, name: item.key }))
},
async listStores(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const returnData: INodeOptionsValue[] = []
const appDataSource = options.appDataSource as DataSource
const databaseEntities = options.databaseEntities as IDatabaseEntity
if (!appDataSource) {
return returnData
}
const stores = await appDataSource.getRepository(databaseEntities['DocumentStore']).find()
for (const store of stores) {
if (store.status === 'UPSERTED') {
const obj = {
name: `${store.id}:${store.name}`,
label: store.name,
description: store.description
}
returnData.push(obj)
}
}
return returnData
}
}
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
const retrieverQuery = nodeData.inputs?.retrieverQuery as string
const outputFormat = nodeData.inputs?.outputFormat as string
const _retrieverUpdateState = nodeData.inputs?.retrieverUpdateState
const state = options.agentflowRuntime?.state as ICommonObject
const chatId = options.chatId as string
const isLastNode = options.isLastNode as boolean
const isStreamable = isLastNode && options.sseStreamer !== undefined
const abortController = options.abortController as AbortController
// Extract knowledge
let docs: Document[] = []
const knowledgeBases = nodeData.inputs?.retrieverKnowledgeDocumentStores as IKnowledgeBase[]
if (knowledgeBases && knowledgeBases.length > 0) {
for (const knowledgeBase of knowledgeBases) {
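// documentStore values are stored as '<storeId>:<storeName>' (see listStores above)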
const [storeId] = knowledgeBase.documentStore.split(':')
const docStoreVectorInstanceFilePath = options.componentNodes['documentStoreVS'].filePath as string
const docStoreVectorModule = await import(docStoreVectorInstanceFilePath)
const newDocStoreVectorInstance = new docStoreVectorModule.nodeClass()
const docStoreVectorInstance = (await newDocStoreVectorInstance.init(
{
...nodeData,
inputs: {
...nodeData.inputs,
selectedStore: storeId
},
outputs: {
output: 'retriever'
}
},
'',
options
)) as BaseRetriever
const retrievedDocs = await docStoreVectorInstance.invoke(retrieverQuery || input, { signal: abortController?.signal })
// Accumulate results across all selected stores instead of overwriting docs on each iteration
docs = docs.concat(retrievedDocs)
}
}
const docsText = docs.map((doc) => doc.pageContent).join('\n')
// Update flow state if needed
let newState = { ...state }
if (_retrieverUpdateState && Array.isArray(_retrieverUpdateState) && _retrieverUpdateState.length > 0) {
newState = updateFlowState(state, _retrieverUpdateState)
}
try {
let finalOutput = ''
if (outputFormat === 'text') {
finalOutput = docsText
} else if (outputFormat === 'textWithMetadata') {
finalOutput = JSON.stringify(docs, null, 2)
}
if (isStreamable) {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer
sseStreamer.streamTokenEvent(chatId, finalOutput)
}
// Process template variables in state
if (newState && Object.keys(newState).length > 0) {
for (const key in newState) {
if (newState[key] != null && newState[key].toString().includes('{{ output }}')) {
newState[key] = finalOutput
}
}
}
const returnOutput = {
id: nodeData.id,
name: this.name,
input: {
question: retrieverQuery || input
},
output: {
content: finalOutput
},
state: newState
}
return returnOutput
} catch (e) {
throw new Error(e instanceof Error ? e.message : String(e))
}
}
}
module.exports = { nodeClass: Retriever_Agentflow }


@ -0,0 +1,217 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
class Start_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
hideInput: boolean
baseClasses: string[]
documentation?: string
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Start'
this.name = 'startAgentflow'
this.version = 1.0
this.type = 'Start'
this.category = 'Agent Flows'
this.description = 'Starting point of the agentflow'
this.baseClasses = [this.type]
this.color = '#7EE787'
this.hideInput = true
this.inputs = [
{
label: 'Input Type',
name: 'startInputType',
type: 'options',
options: [
{
label: 'Chat Input',
name: 'chatInput',
description: 'Start the conversation with chat input'
},
{
label: 'Form Input',
name: 'formInput',
description: 'Start the workflow with form inputs'
}
],
default: 'chatInput'
},
{
label: 'Form Title',
name: 'formTitle',
type: 'string',
placeholder: 'Please Fill Out The Form',
show: {
startInputType: 'formInput'
}
},
{
label: 'Form Description',
name: 'formDescription',
type: 'string',
placeholder: 'Complete all fields below to continue',
show: {
startInputType: 'formInput'
}
},
{
label: 'Form Input Types',
name: 'formInputTypes',
description: 'Specify the type of form input',
type: 'array',
show: {
startInputType: 'formInput'
},
array: [
{
label: 'Type',
name: 'type',
type: 'options',
options: [
{
label: 'String',
name: 'string'
},
{
label: 'Number',
name: 'number'
},
{
label: 'Boolean',
name: 'boolean'
},
{
label: 'Options',
name: 'options'
}
],
default: 'string'
},
{
label: 'Label',
name: 'label',
type: 'string',
placeholder: 'Label for the input'
},
{
label: 'Variable Name',
name: 'name',
type: 'string',
placeholder: 'Variable name for the input (must be camel case)',
description: 'Variable name must be camel case. For example: firstName, lastName, etc.'
},
{
label: 'Add Options',
name: 'addOptions',
type: 'array',
show: {
'formInputTypes[$index].type': 'options'
},
array: [
{
label: 'Option',
name: 'option',
type: 'string'
}
]
}
]
},
{
label: 'Ephemeral Memory',
name: 'startEphemeralMemory',
type: 'boolean',
description: 'Start fresh for every execution without past chat history',
optional: true
},
{
label: 'Flow State',
name: 'startState',
description: 'Runtime state during the execution of the workflow',
type: 'array',
optional: true,
array: [
{
label: 'Key',
name: 'key',
type: 'string',
placeholder: 'Foo'
},
{
label: 'Value',
name: 'value',
type: 'string',
placeholder: 'Bar',
optional: true
}
]
}
]
}
async run(nodeData: INodeData, input: string | Record<string, any>, options: ICommonObject): Promise<any> {
const _flowState = nodeData.inputs?.startState as string
const startInputType = nodeData.inputs?.startInputType as string
const startEphemeralMemory = nodeData.inputs?.startEphemeralMemory as boolean
let flowStateArray = []
if (_flowState) {
try {
flowStateArray = typeof _flowState === 'string' ? JSON.parse(_flowState) : _flowState
} catch (error) {
throw new Error('Invalid Flow State')
}
}
let flowState: Record<string, any> = {}
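// Flatten the key/value array into an object, e.g. [{ key: 'Foo', value: 'Bar' }] -> { Foo: 'Bar' }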
for (const state of flowStateArray) {
flowState[state.key] = state.value
}
const inputData: ICommonObject = {}
const outputData: ICommonObject = {}
if (startInputType === 'chatInput') {
inputData.question = input
outputData.question = input
}
if (startInputType === 'formInput') {
inputData.form = {
title: nodeData.inputs?.formTitle,
description: nodeData.inputs?.formDescription,
inputs: nodeData.inputs?.formInputTypes
}
let form = input
if (options.agentflowRuntime?.form && Object.keys(options.agentflowRuntime.form).length) {
form = options.agentflowRuntime.form
}
outputData.form = form
}
if (startEphemeralMemory) {
outputData.ephemeralMemory = true
}
const returnOutput = {
id: nodeData.id,
name: this.name,
input: inputData,
output: outputData,
state: flowState
}
return returnOutput
}
}
module.exports = { nodeClass: Start_Agentflow }


@ -0,0 +1,42 @@
import { INode, INodeParams } from '../../../src/Interface'
class StickyNote_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
tags: string[]
baseClasses: string[]
inputs: INodeParams[]
constructor() {
this.label = 'Sticky Note'
this.name = 'stickyNoteAgentflow'
this.version = 1.0
this.type = 'StickyNote'
this.color = '#fee440'
this.category = 'Agent Flows'
this.description = 'Add notes to the agent flow'
this.inputs = [
{
label: '',
name: 'note',
type: 'string',
rows: 1,
placeholder: 'Type something here',
optional: true
}
]
this.baseClasses = [this.type]
}
async run(): Promise<any> {
return undefined
}
}
module.exports = { nodeClass: StickyNote_Agentflow }


@ -0,0 +1,304 @@
import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams, IServerSideEventStreamer } from '../../../src/Interface'
import { updateFlowState } from '../utils'
import { Tool } from '@langchain/core/tools'
import { ARTIFACTS_PREFIX } from '../../../src/agents'
import zodToJsonSchema from 'zod-to-json-schema'
interface IToolInputArgs {
inputArgName: string
inputArgValue: string
}
class Tool_Agentflow implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
color: string
hideOutput: boolean
hint: string
baseClasses: string[]
documentation?: string
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Tool'
this.name = 'toolAgentflow'
this.version = 1.0
this.type = 'Tool'
this.category = 'Agent Flows'
this.description = 'Tools allow the LLM to interact with external systems'
this.baseClasses = [this.type]
this.color = '#d4a373'
this.inputs = [
{
label: 'Tool',
name: 'selectedTool',
type: 'asyncOptions',
loadMethod: 'listTools',
loadConfig: true
},
{
label: 'Tool Input Arguments',
name: 'toolInputArgs',
type: 'array',
acceptVariable: true,
refresh: true,
array: [
{
label: 'Input Argument Name',
name: 'inputArgName',
type: 'asyncOptions',
loadMethod: 'listToolInputArgs',
refresh: true
},
{
label: 'Input Argument Value',
name: 'inputArgValue',
type: 'string',
acceptVariable: true
}
],
show: {
selectedTool: '.+'
}
},
{
label: 'Update Flow State',
name: 'toolUpdateState',
description: 'Update runtime state during the execution of the workflow',
type: 'array',
optional: true,
acceptVariable: true,
array: [
{
label: 'Key',
name: 'key',
type: 'asyncOptions',
loadMethod: 'listRuntimeStateKeys',
freeSolo: true
},
{
label: 'Value',
name: 'value',
type: 'string',
acceptVariable: true,
acceptNodeOutputAsVariable: true
}
]
}
]
}
//@ts-ignore
loadMethods = {
async listTools(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const componentNodes = options.componentNodes as {
[key: string]: INode
}
const removeTools = ['chainTool', 'retrieverTool', 'webBrowser']
const returnOptions: INodeOptionsValue[] = []
for (const nodeName in componentNodes) {
const componentNode = componentNodes[nodeName]
if (componentNode.category === 'Tools' || componentNode.category === 'Tools (MCP)') {
if (componentNode.tags?.includes('LlamaIndex')) {
continue
}
if (removeTools.includes(nodeName)) {
continue
}
returnOptions.push({
label: componentNode.label,
name: nodeName,
imageSrc: componentNode.icon
})
}
}
return returnOptions
},
async listToolInputArgs(nodeData: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const currentNode = options.currentNode as ICommonObject
const selectedTool = currentNode?.inputs?.selectedTool as string
const selectedToolConfig = currentNode?.inputs?.selectedToolConfig as ICommonObject
const nodeInstanceFilePath = options.componentNodes[selectedTool].filePath as string
const nodeModule = await import(nodeInstanceFilePath)
const newToolNodeInstance = new nodeModule.nodeClass()
const newNodeData = {
...nodeData,
credential: selectedToolConfig['FLOWISE_CREDENTIAL_ID'],
inputs: {
...nodeData.inputs,
...selectedToolConfig
}
}
try {
const toolInstance = (await newToolNodeInstance.init(newNodeData, '', options)) as Tool
let toolInputArgs: ICommonObject = {}
if (Array.isArray(toolInstance)) {
// Combine schemas from all tools in the array
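// zodToJsonSchema converts a tool's Zod schema into JSON Schema, e.g. z.object({ q: z.string() }) -> { properties: { q: { type: 'string' } }, ... }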
const allProperties = toolInstance.reduce((acc, tool) => {
if (tool?.schema) {
const schema: Record<string, any> = zodToJsonSchema(tool.schema)
return { ...acc, ...(schema.properties || {}) }
}
return acc
}, {})
toolInputArgs = { properties: allProperties }
} else {
// Handle single tool instance
toolInputArgs = toolInstance.schema ? zodToJsonSchema(toolInstance.schema) : {}
}
if (toolInputArgs && Object.keys(toolInputArgs).length > 0) {
delete toolInputArgs.$schema
}
return Object.keys(toolInputArgs.properties || {}).map((item) => ({
label: item,
name: item,
description: toolInputArgs.properties[item].description
}))
} catch (e) {
return []
}
},
async listRuntimeStateKeys(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const previousNodes = options.previousNodes as ICommonObject[]
const startAgentflowNode = previousNodes.find((node) => node.name === 'startAgentflow')
const state = startAgentflowNode?.inputs?.startState as ICommonObject[]
return (state || []).map((item) => ({ label: item.key, name: item.key }))
}
}
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
const selectedTool = nodeData.inputs?.selectedTool as string
const selectedToolConfig = nodeData.inputs?.selectedToolConfig as ICommonObject
const toolInputArgs = nodeData.inputs?.toolInputArgs as IToolInputArgs[]
const _toolUpdateState = nodeData.inputs?.toolUpdateState
const state = options.agentflowRuntime?.state as ICommonObject
const chatId = options.chatId as string
const isLastNode = options.isLastNode as boolean
const isStreamable = isLastNode && options.sseStreamer !== undefined
const abortController = options.abortController as AbortController
// Update flow state if needed
let newState = { ...state }
if (_toolUpdateState && Array.isArray(_toolUpdateState) && _toolUpdateState.length > 0) {
newState = updateFlowState(state, _toolUpdateState)
}
if (!selectedTool) {
throw new Error('Tool not selected')
}
const nodeInstanceFilePath = options.componentNodes[selectedTool].filePath as string
const nodeModule = await import(nodeInstanceFilePath)
const newToolNodeInstance = new nodeModule.nodeClass()
const newNodeData = {
...nodeData,
credential: selectedToolConfig['FLOWISE_CREDENTIAL_ID'],
inputs: {
...nodeData.inputs,
...selectedToolConfig
}
}
const toolInstance = (await newToolNodeInstance.init(newNodeData, '', options)) as Tool | Tool[]
let toolCallArgs: Record<string, any> = {}
for (const item of toolInputArgs || []) {
const variableName = item.inputArgName
const variableValue = item.inputArgValue
toolCallArgs[variableName] = variableValue
}
const flowConfig = {
sessionId: options.sessionId,
chatId: options.chatId,
input: input,
state: options.agentflowRuntime?.state
}
try {
let toolOutput: string
if (Array.isArray(toolInstance)) {
// Execute all tools and combine their outputs
const outputs = await Promise.all(
toolInstance.map((tool) =>
//@ts-ignore
tool.call(toolCallArgs, { signal: abortController?.signal }, undefined, flowConfig)
)
)
toolOutput = outputs.join('\n')
} else {
//@ts-ignore
toolOutput = await toolInstance.call(toolCallArgs, { signal: abortController?.signal }, undefined, flowConfig)
}
let parsedArtifacts
// Extract artifacts if present
if (typeof toolOutput === 'string' && toolOutput.includes(ARTIFACTS_PREFIX)) {
const [output, artifact] = toolOutput.split(ARTIFACTS_PREFIX)
toolOutput = output
try {
parsedArtifacts = JSON.parse(artifact)
} catch (e) {
console.error('Error parsing artifacts from tool:', e)
}
}
if (typeof toolOutput === 'object') {
toolOutput = JSON.stringify(toolOutput, null, 2)
}
if (isStreamable) {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer
sseStreamer.streamTokenEvent(chatId, toolOutput)
}
// Process template variables in state
if (newState && Object.keys(newState).length > 0) {
for (const key in newState) {
if (newState[key].toString().includes('{{ output }}')) {
newState[key] = toolOutput
}
}
}
const returnOutput = {
id: nodeData.id,
name: this.name,
input: {
toolInputArgs: toolInputArgs,
selectedTool: selectedTool
},
output: {
content: toolOutput,
artifacts: parsedArtifacts
},
state: newState
}
return returnOutput
} catch (e) {
throw new Error(e)
}
}
}
module.exports = { nodeClass: Tool_Agentflow }
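
For orientation, a minimal sketch of how a host might drive this node's run method. The options fields mirror what the code above reads (componentNodes, agentflowRuntime, chatId, abortController); the tool name, arguments, and ids are placeholders, not part of this commit.

// Hypothetical invocation of the Tool node above (all values illustrative).
declare const componentNodes: Record<string, { filePath: string }> // host registry: name -> { filePath }
const toolNode = new Tool_Agentflow()
const nodeData: any = {
    id: 'toolAgentflow_0',
    inputs: {
        selectedTool: 'calculator', // assumed to exist in componentNodes
        selectedToolConfig: { FLOWISE_CREDENTIAL_ID: '' },
        toolInputArgs: [{ inputArgName: 'input', inputArgValue: '1 + 1' }]
    }
}
const options: any = {
    componentNodes,
    chatId: 'chat-1',
    sessionId: 'session-1',
    agentflowRuntime: { state: { lastResult: '{{ output }}' } },
    isLastNode: false,
    abortController: new AbortController()
}
const result = await toolNode.run(nodeData, '1 + 1', options)
// result.output.content holds the tool output; any state value containing
// '{{ output }}' (here lastResult) is replaced with that output.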

View File

@@ -0,0 +1,75 @@
export const DEFAULT_SUMMARIZER_TEMPLATE = `Progressively summarize the conversation provided and return a new summary.
EXAMPLE:
Human: Why do you think artificial intelligence is a force for good?
AI: Because artificial intelligence will help humans reach their full potential.
New summary:
The human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.
END OF EXAMPLE
Conversation:
{conversation}
New summary:`
export const DEFAULT_HUMAN_INPUT_DESCRIPTION = `Summarize the conversation between the user and the assistant, reiterate the last message from the assistant, and ask if the user would like to proceed or if they have any feedback.
- Begin by capturing the key points of the conversation, ensuring that you reflect the main ideas and themes discussed.
- Then, clearly reproduce the last message sent by the assistant to maintain continuity. Make sure the whole message is reproduced.
- Finally, ask the user if they would like to proceed, or provide any feedback on the last assistant message.
## Output Format
The output should be structured in three parts in text:
- A summary of the conversation (1-3 sentences).
- The last assistant message (exactly as it appeared).
- Ask the user if they would like to proceed, or provide any feedback on the last assistant message. No other explanation or elaboration is needed.
`
export const DEFAULT_HUMAN_INPUT_DESCRIPTION_HTML = `<p>Summarize the conversation between the user and the assistant, reiterate the last message from the assistant, and ask if the user would like to proceed or if they have any feedback.</p>
<ul>
<li>Begin by capturing the key points of the conversation, ensuring that you reflect the main ideas and themes discussed.</li>
<li>Then, clearly reproduce the last message sent by the assistant to maintain continuity. Make sure the whole message is reproduced.</li>
<li>Finally, ask the user if they would like to proceed, or provide any feedback on the last assistant message.</li>
</ul>
<h2 id="output-format">Output Format</h2>
<p>The output should be structured in three parts in text:</p>
<ul>
<li>A summary of the conversation (1-3 sentences).</li>
<li>The last assistant message (exactly as it appeared).</li>
<li>Ask the user if they would like to proceed, or provide any feedback on the last assistant message. No other explanation or elaboration is needed.</li>
</ul>
`
export const CONDITION_AGENT_SYSTEM_PROMPT = `You are part of a multi-agent system designed to make agent coordination and execution easy. Your task is to analyze the given input and select one matching scenario from a provided set of scenarios. If none of the scenarios match the input, you should return "default."
- **Input**: A string representing the user's query or message.
- **Scenarios**: A list of predefined scenarios that relate to the input.
- **Instruction**: Determine if the input fits any of the scenarios.
## Steps
1. **Read the input string** and the list of scenarios.
2. **Analyze the content of the input** to identify its main topic or intention.
3. **Compare the input with each scenario**:
- If a scenario matches the main topic of the input, select that scenario.
- If no scenarios match, prepare to output "\`\`\`json\n{"output": "default"}\`\`\`"
4. **Output the result**: If a match is found, return the corresponding scenario in JSON; otherwise, return "\`\`\`json\n{"output": "default"}\`\`\`"
## Output Format
Output should be a JSON object that either names the matching scenario or returns "\`\`\`json\n{"output": "default"}\`\`\`" if no scenarios match. No explanation is needed.
## Examples
1. **Input**: {"input": "Hello", "scenarios": ["user is asking about AI", "default"], "instruction": "Your task is to check and see if user is asking topic about AI"}
**Output**: "\`\`\`json\n{"output": "default"}\`\`\`"
2. **Input**: {"input": "What is AIGC?", "scenarios": ["user is asking about AI", "default"], "instruction": "Your task is to check and see if user is asking topic about AI"}
**Output**: "\`\`\`json\n{"output": "user is asking about AI"}\`\`\`"
3. **Input**: {"input": "Can you explain deep learning?", "scenarios": ["user is interested in AI topics", "default"], "instruction": "Determine if the user is interested in learning about AI"}
**Output**: "\`\`\`json\n{"output": "user is interested in AI topics"}\`\`\`"
## Note
- Ensure that the input scenarios align well with potential user queries for accurate matching
- DO NOT include anything other than the JSON in your response.
`
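
The prompt above asks the model to wrap its verdict in a fenced ```json block, so a consumer has to strip the fence before parsing. A hedged sketch of that parsing step; this helper is not part of the commit, and the fallback to 'default' simply follows the prompt's contract.

// Illustrative parser for the condition agent's output (not from this commit).
const extractConditionOutput = (responseContent: string): string => {
    const jsonMatch = responseContent.match(/```json\n([\s\S]*?)```/) || responseContent.match(/{[\s\S]*?}/)
    if (!jsonMatch) return 'default' // the prompt names 'default' as the fallback scenario
    try {
        const parsed = JSON.parse((jsonMatch[1] ?? jsonMatch[0]).trim())
        return typeof parsed.output === 'string' ? parsed.output : 'default'
    } catch {
        return 'default'
    }
}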

View File

@@ -0,0 +1,342 @@
import { BaseMessage, MessageContentImageUrl } from '@langchain/core/messages'
import { getImageUploads } from '../../src/multiModalUtils'
import { getFileFromStorage } from '../../src/storageUtils'
import { ICommonObject, IFileUpload } from '../../src/Interface'
import { BaseMessageLike } from '@langchain/core/messages'
import { IFlowState } from './Interface.Agentflow'
import { mapMimeTypeToInputField } from '../../src/utils'
export const addImagesToMessages = async (
options: ICommonObject,
allowImageUploads: boolean,
imageResolution?: 'auto' | 'low' | 'high'
): Promise<MessageContentImageUrl[]> => {
const imageContent: MessageContentImageUrl[] = []
if (allowImageUploads && options?.uploads && options?.uploads.length > 0) {
const imageUploads = getImageUploads(options.uploads)
for (const upload of imageUploads) {
let bf = upload.data
if (upload.type === 'stored-file') {
const contents = await getFileFromStorage(upload.name, options.chatflowid, options.chatId)
// as the image is stored in the server, read the file and convert it to base64
bf = 'data:' + upload.mime + ';base64,' + contents.toString('base64')
imageContent.push({
type: 'image_url',
image_url: {
url: bf,
detail: imageResolution ?? 'low'
}
})
} else if (upload.type === 'url' && bf) {
imageContent.push({
type: 'image_url',
image_url: {
url: bf,
detail: imageResolution ?? 'low'
}
})
}
}
}
return imageContent
}
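
A usage sketch for addImagesToMessages, assuming an options object that carries server-stored uploads; the field names follow the IFileUpload handling above, and the concrete ids and file name are placeholders.

// Hypothetical call (ids and file name illustrative).
const imageParts = await addImagesToMessages(
    {
        chatflowid: 'flow-1',
        chatId: 'chat-1',
        uploads: [{ type: 'stored-file', name: '0_photo.jpg', mime: 'image/jpeg', data: '' }]
    },
    true, // allowImageUploads
    'auto' // imageResolution
)
// imageParts is a MessageContentImageUrl[] ready to be used as a user message's content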
/**
* Process message array to replace stored file references with base64 image data
* @param messages Array of messages that may contain image references
* @param options Common options object containing chatflowid and chatId
* @returns Object containing updated messages array and transformed original messages
*/
export const processMessagesWithImages = async (
messages: BaseMessageLike[],
options: ICommonObject
): Promise<{
updatedMessages: BaseMessageLike[]
transformedMessages: BaseMessageLike[]
}> => {
if (!messages || !options.chatflowid || !options.chatId) {
return {
updatedMessages: messages,
transformedMessages: []
}
}
// Create a deep copy of the messages to avoid mutating the original
const updatedMessages = JSON.parse(JSON.stringify(messages))
// Track which messages were transformed
const transformedMessages: BaseMessageLike[] = []
// Scan through all messages looking for stored-file references
for (let i = 0; i < updatedMessages.length; i++) {
const message = updatedMessages[i]
// Skip non-user messages or messages without content
if (message.role !== 'user' || !message.content) {
continue
}
// Handle array content (typically containing file references)
if (Array.isArray(message.content)) {
const imageContents: MessageContentImageUrl[] = []
let hasImageReferences = false
// Process each content item
for (const item of message.content) {
// Look for stored-file type items
if (item.type === 'stored-file' && item.name && item.mime.startsWith('image/')) {
hasImageReferences = true
try {
// Get file contents from storage
const contents = await getFileFromStorage(item.name, options.chatflowid, options.chatId)
// Create base64 data URL
const base64Data = 'data:' + item.mime + ';base64,' + contents.toString('base64')
// Add to image content array
imageContents.push({
type: 'image_url',
image_url: {
url: base64Data,
detail: item.imageResolution ?? 'low'
}
})
} catch (error) {
console.error(`Failed to load image ${item.name}:`, error)
}
}
}
// Replace the content with the image content array
if (imageContents.length > 0) {
// Store the original message before modifying
if (hasImageReferences) {
transformedMessages.push(JSON.parse(JSON.stringify(messages[i])))
}
updatedMessages[i].content = imageContents
}
}
}
return {
updatedMessages,
transformedMessages
}
}
/**
* Replace base64 image data in messages with file references
* @param messages Array of messages that may contain base64 image data
* @param uniqueImageMessages Array of messages with file references for new images
* @param pastImageMessages Array of messages with file references for previous images
* @returns Updated messages array with file references instead of base64 data
*/
export const replaceBase64ImagesWithFileReferences = (
messages: BaseMessageLike[],
uniqueImageMessages: BaseMessageLike[] = [],
pastImageMessages: BaseMessageLike[] = []
): BaseMessageLike[] => {
// Create a deep copy to avoid mutating the original
const updatedMessages = JSON.parse(JSON.stringify(messages))
let imageMessagesIndex = 0
for (let i = 0; i < updatedMessages.length; i++) {
const message = updatedMessages[i]
if (message.content && Array.isArray(message.content)) {
for (let j = 0; j < message.content.length; j++) {
const item = message.content[j]
if (item.type === 'image_url') {
// Look for matching file reference in uniqueImageMessages or pastImageMessages
const imageMessage =
(uniqueImageMessages[imageMessagesIndex] as BaseMessage | undefined) ||
(pastImageMessages[imageMessagesIndex] as BaseMessage | undefined)
if (imageMessage && Array.isArray(imageMessage.content) && imageMessage.content[j]) {
const replaceContent = imageMessage.content[j]
message.content[j] = {
...replaceContent
}
imageMessagesIndex++
}
}
}
}
}
return updatedMessages
}
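
These two helpers are meant to work as a round trip: base64 payloads go to the model, lightweight file references go back into storage. One plausible wiring, under the assumption that the transformed originals are reused as the reference source; this exact call sequence is not shown in the commit.

// Sketch of the round trip (names illustrative).
const { updatedMessages, transformedMessages } = await processMessagesWithImages(messages, options)
// ...invoke the model with updatedMessages (base64 data URLs)...
// Before persisting, swap heavy base64 content back to file references:
const messagesToPersist = replaceBase64ImagesWithFileReferences(updatedMessages, transformedMessages)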
/**
* Get unique image messages from uploads
* @param options Common options object containing uploads
* @param messages Array of messages to check for existing images
* @param modelConfig Model configuration object containing allowImageUploads and imageResolution
* @returns Object containing imageMessageWithFileRef and imageMessageWithBase64
*/
export const getUniqueImageMessages = async (
options: ICommonObject,
messages: BaseMessageLike[],
modelConfig?: ICommonObject
): Promise<{ imageMessageWithFileRef: BaseMessageLike; imageMessageWithBase64: BaseMessageLike } | undefined> => {
if (!options.uploads) return undefined
// Get images from uploads
const images = await addImagesToMessages(options, modelConfig?.allowImageUploads, modelConfig?.imageResolution)
// Filter out images that are already in previous messages
const uniqueImages = images.filter((image) => {
// Check if this image is already in any existing message
return !messages.some((msg: any) => {
// For multimodal content (arrays with image objects)
if (Array.isArray(msg.content)) {
return msg.content.some(
(item: any) =>
// Compare by image URL/content for image objects
item.type === 'image_url' && image.type === 'image_url' && JSON.stringify(item) === JSON.stringify(image)
)
}
// For direct comparison of simple content
return JSON.stringify(msg.content) === JSON.stringify(image)
})
})
if (uniqueImages.length === 0) {
return undefined
}
// Create messages with the original file references for storage/display
const imageMessageWithFileRef = {
role: 'user',
content: options.uploads.map((upload: IFileUpload) => ({
type: upload.type,
name: upload.name,
mime: upload.mime,
imageResolution: modelConfig?.imageResolution
}))
}
// Create messages with base64 data for the LLM
const imageMessageWithBase64 = {
role: 'user',
content: uniqueImages
}
return {
imageMessageWithFileRef,
imageMessageWithBase64
}
}
/**
* Get past chat history image messages
* @param pastChatHistory Array of past chat history messages
* @param options Common options object
* @returns Object containing updatedPastMessages and transformedPastMessages
*/
export const getPastChatHistoryImageMessages = async (
pastChatHistory: BaseMessageLike[],
options: ICommonObject
): Promise<{ updatedPastMessages: BaseMessageLike[]; transformedPastMessages: BaseMessageLike[] }> => {
const chatHistory = []
const transformedPastMessages = []
for (let i = 0; i < pastChatHistory.length; i++) {
const message = pastChatHistory[i] as BaseMessage & { role: string }
const messageRole = message.role || 'user'
if (message.additional_kwargs && message.additional_kwargs.fileUploads) {
// example: [{"type":"stored-file","name":"0_DiXc4ZklSTo3M8J4.jpg","mime":"image/jpeg"}]
const fileUploads = message.additional_kwargs.fileUploads
try {
let messageWithFileUploads = ''
const uploads: IFileUpload[] = typeof fileUploads === 'string' ? JSON.parse(fileUploads) : fileUploads
const imageContents: MessageContentImageUrl[] = []
for (const upload of uploads) {
if (upload.type === 'stored-file' && upload.mime.startsWith('image/')) {
const fileData = await getFileFromStorage(upload.name, options.chatflowid, options.chatId)
// as the image is stored in the server, read the file and convert it to base64
const bf = 'data:' + upload.mime + ';base64,' + fileData.toString('base64')
imageContents.push({
type: 'image_url',
image_url: {
url: bf
}
})
} else if (upload.type === 'url' && upload.mime.startsWith('image') && upload.data) {
imageContents.push({
type: 'image_url',
image_url: {
url: upload.data
}
})
} else if (upload.type === 'stored-file:full') {
const fileLoaderNodeModule = await import('../../nodes/documentloaders/File/File')
// @ts-ignore
const fileLoaderNodeInstance = new fileLoaderNodeModule.nodeClass()
const nodeOptions = {
retrieveAttachmentChatId: true,
chatflowid: options.chatflowid,
chatId: options.chatId
}
const fileInputFieldFromMimeType = mapMimeTypeToInputField(upload.mime)
const nodeData = {
inputs: {
[fileInputFieldFromMimeType]: `FILE-STORAGE::${JSON.stringify([upload.name])}`
}
}
const documents: string = await fileLoaderNodeInstance.init(nodeData, '', nodeOptions)
messageWithFileUploads += `<doc name='${upload.name}'>${documents}</doc>\n\n`
}
}
const messageContent = messageWithFileUploads ? `${messageWithFileUploads}\n\n${message.content}` : message.content
if (imageContents.length > 0) {
chatHistory.push({
role: messageRole,
content: imageContents
})
transformedPastMessages.push({
    role: messageRole,
    content: [...uploads]
})
}
chatHistory.push({
role: messageRole,
content: messageContent
})
} catch (e) {
// failed to parse fileUploads, continue with text only
chatHistory.push({
role: messageRole,
content: message.content
})
}
} else {
chatHistory.push({
role: messageRole,
content: message.content
})
}
}
return {
updatedPastMessages: chatHistory,
transformedPastMessages
}
}
/**
* Updates the flow state with new values
*/
export const updateFlowState = (state: ICommonObject, llmUpdateState: IFlowState[]): ICommonObject => {
const newFlowState: Record<string, any> = {}
for (const update of llmUpdateState) {
    newFlowState[update.key] = update.value
}
return {
...state,
...newFlowState
}
}
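
A worked example of updateFlowState: the spread order means updated keys win over the existing state.

// updateFlowState({ topic: 'ai', answer: '' }, [{ key: 'answer', value: '42' }])
// => { topic: 'ai', answer: '42' }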

View File

@@ -224,7 +224,7 @@ class OpenAIAssistant_Agents implements INode {
        const openai = new OpenAI({ apiKey: openAIApiKey })

        // Start analytics
-       const analyticHandlers = new AnalyticHandler(nodeData, options)
+       const analyticHandlers = AnalyticHandler.getInstance(nodeData, options)
        await analyticHandlers.init()

        const parentIds = await analyticHandlers.onChainStart('OpenAIAssistant', input)
@@ -743,7 +743,7 @@ class OpenAIAssistant_Agents implements INode {
                    state = await promise(threadId, newRunThread.id)
                } else {
                    const errMsg = `Error processing thread: ${state}, Thread ID: ${threadId}`
-                   await analyticHandlers.onChainError(parentIds, errMsg)
+                   await analyticHandlers.onChainError(parentIds, errMsg, true)
                    throw new Error(errMsg)
                }
            }

View File

@@ -62,7 +62,8 @@ class GoogleGenerativeAI_ChatModels implements INode {
                type: 'string',
                placeholder: 'gemini-1.5-pro-exp-0801',
                description: 'Custom model name to use. If provided, it will override the model selected',
-               additionalParams: true
+               additionalParams: true,
+               optional: true
            },
            {
                label: 'Temperature',

View File

@@ -21,7 +21,7 @@ class ChatOpenAI_ChatModels implements INode {
    constructor() {
        this.label = 'ChatOpenAI'
        this.name = 'chatOpenAI'
-       this.version = 8.1
+       this.version = 8.2
        this.type = 'ChatOpenAI'
        this.icon = 'openai.svg'
        this.category = 'Chat Models'
@@ -172,7 +172,9 @@ class ChatOpenAI_ChatModels implements INode {
                ],
                default: 'low',
                optional: false,
-               additionalParams: true
+               show: {
+                   allowImageUploads: true
+               }
            },
            {
                label: 'Reasoning Effort',

View File

@@ -1,2 +1,2 @@
<?xml version="1.0" encoding="utf-8"?><!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg width="800px" height="800px" viewBox="0 0 48 48" xmlns="http://www.w3.org/2000/svg"><defs><style>.a{fill:none;stroke:#000000;stroke-linecap:round;stroke-linejoin:round;}</style></defs><path class="a" d="M5.5,22.9722h0a8.7361,8.7361,0,0,0,8.7361,8.7361h2.0556v2.0556A8.7361,8.7361,0,0,0,25.0278,42.5h0V22.9722Z"/><path class="a" d="M14.2361,14.2361h0a8.7361,8.7361,0,0,0,8.7361,8.7361h2.0556v2.0556a8.7361,8.7361,0,0,0,8.7361,8.7361h0V14.2361Z"/><path class="a" d="M22.9722,5.5h0a8.7361,8.7361,0,0,0,8.7361,8.7361h2.0556v2.0556A8.7361,8.7361,0,0,0,42.5,25.0278h0V5.5Z"/></svg>

(SVG icon content unchanged apart from whitespace: 700 B before, 699 B after.)

View File

@@ -313,6 +313,7 @@ class ChatflowTool extends StructuredTool {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
+               'flowise-tool': 'true',
                ...this.headers
            },
            body: JSON.stringify(body)

View File

@@ -6,7 +6,6 @@
    "types": "dist/src/index.d.ts",
    "scripts": {
        "build": "tsc && gulp",
-       "dev:gulp": "gulp",
        "lint": "eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0",
        "clean": "rimraf dist",
        "nuke": "rimraf dist node_modules .turbo"
@@ -26,7 +25,7 @@
        "@aws-sdk/client-s3": "^3.427.0",
        "@aws-sdk/client-secrets-manager": "^3.699.0",
        "@datastax/astra-db-ts": "1.5.0",
-       "@dqbd/tiktoken": "^1.0.7",
+       "@dqbd/tiktoken": "^1.0.21",
        "@e2b/code-interpreter": "^0.0.5",
        "@elastic/elasticsearch": "^8.9.0",
        "@flowiseai/nodevm": "^3.9.25",
@@ -52,7 +51,7 @@
        "@langchain/mistralai": "^0.2.0",
        "@langchain/mongodb": "^0.0.1",
        "@langchain/ollama": "0.2.0",
-       "@langchain/openai": "0.4.4",
+       "@langchain/openai": "0.5.6",
        "@langchain/pinecone": "^0.1.3",
        "@langchain/qdrant": "^0.0.5",
        "@langchain/weaviate": "^0.0.1",
@@ -122,7 +121,7 @@
        "notion-to-md": "^3.1.1",
        "object-hash": "^3.0.0",
        "ollama": "^0.5.11",
-       "openai": "^4.82.0",
+       "openai": "^4.96.0",
        "papaparse": "^5.4.1",
        "pdf-parse": "^1.1.1",
        "pdfjs-dist": "^3.7.107",

View File

@@ -8,6 +8,7 @@ import { Moderation } from '../nodes/moderation/Moderation'
export type NodeParamsType =
    | 'asyncOptions'
+   | 'asyncMultiOptions'
    | 'options'
    | 'multiOptions'
    | 'datagrid'
@@ -57,12 +58,13 @@ export interface INodeOptionsValue {
    label: string
    name: string
    description?: string
+   imageSrc?: string
}

export interface INodeOutputsValue {
    label: string
    name: string
-   baseClasses: string[]
+   baseClasses?: string[]
    description?: string
    hidden?: boolean
    isAnchor?: boolean
@@ -83,10 +85,12 @@ export interface INodeParams {
    rows?: number
    list?: boolean
    acceptVariable?: boolean
+   acceptNodeOutputAsVariable?: boolean
    placeholder?: string
    fileType?: string
    additionalParams?: boolean
    loadMethod?: string
+   loadConfig?: boolean
    hidden?: boolean
    hideCodeExecute?: boolean
    codeExample?: string
@@ -96,6 +100,11 @@ export interface INodeParams {
    refresh?: boolean
    freeSolo?: boolean
    loadPreviousNodes?: boolean
+   array?: Array<INodeParams>
+   show?: INodeDisplay
+   hide?: INodeDisplay
+   generateDocStoreDescription?: boolean
+   generateInstruction?: boolean
}

export interface INodeExecutionData {
@@ -103,7 +112,7 @@ export interface INodeExecutionData {
}

export interface INodeDisplay {
-   [key: string]: string[] | string
+   [key: string]: string[] | string | boolean | number | ICommonObject
}

export interface INodeProperties {
@@ -120,11 +129,15 @@ export interface INodeProperties {
    badge?: string
    deprecateMessage?: string
    hideOutput?: boolean
+   hideInput?: boolean
    author?: string
    documentation?: string
+   color?: string
+   hint?: string
}

export interface INode extends INodeProperties {
+   credential?: INodeParams
    inputs?: INodeParams[]
    output?: INodeOutputsValue[]
    loadMethods?: {
@@ -412,14 +425,19 @@ export interface IServerSideEventStreamer {
    streamCustomEvent(chatId: string, eventType: string, data: any): void
    streamSourceDocumentsEvent(chatId: string, data: any): void
    streamUsedToolsEvent(chatId: string, data: any): void
+   streamCalledToolsEvent(chatId: string, data: any): void
    streamFileAnnotationsEvent(chatId: string, data: any): void
    streamToolEvent(chatId: string, data: any): void
    streamAgentReasoningEvent(chatId: string, data: any): void
+   streamAgentFlowExecutedDataEvent(chatId: string, data: any): void
+   streamAgentFlowEvent(chatId: string, data: any): void
    streamNextAgentEvent(chatId: string, data: any): void
+   streamNextAgentFlowEvent(chatId: string, data: any): void
    streamActionEvent(chatId: string, data: any): void
    streamArtifactsEvent(chatId: string, data: any): void
    streamAbortEvent(chatId: string): void
    streamEndEvent(chatId: string): void
+   streamUsageMetadataEvent(chatId: string, data: any): void
}

export enum FollowUpPromptProvider {
@@ -446,3 +464,17 @@ export type FollowUpPromptConfig = {
    status: boolean
    selectedProvider: FollowUpPromptProvider
} & FollowUpPromptProviderConfig
+
+export interface ICondition {
+    type: string
+    value1: CommonType
+    operation: string
+    value2: CommonType
+    isFulfilled?: boolean
+}
+
+export interface IHumanInput {
+    type: 'proceed' | 'reject'
+    startNodeId: string
+    feedback?: string
+}
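
For illustration, value literals matching the two new interfaces; the operation name and node id are hypothetical, not values this commit defines.

const condition: ICondition = {
    type: 'string',
    value1: 'paris',
    operation: 'equal', // assumed operation identifier
    value2: 'paris',
    isFulfilled: true
}
const humanInput: IHumanInput = {
    type: 'proceed',
    startNodeId: 'humanInputAgentflow_0',
    feedback: 'Looks good, continue.'
}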

View File

@@ -0,0 +1,655 @@
import { ICommonObject } from './Interface'
import { z } from 'zod'
import { StructuredOutputParser } from '@langchain/core/output_parsers'
import { isEqual, get, cloneDeep } from 'lodash'
import { BaseChatModel } from '@langchain/core/language_models/chat_models'
const ToolType = z.array(z.string()).describe('List of tools')
// Define a more specific NodePosition schema
const NodePositionType = z.object({
x: z.number().describe('X coordinate of the node position'),
y: z.number().describe('Y coordinate of the node position')
})
// Define a more specific EdgeData schema
const EdgeDataType = z.object({
edgeLabel: z.string().optional().describe('Label for the edge')
})
// Define a basic NodeData schema to avoid using .passthrough() which might cause issues
const NodeDataType = z
.object({
label: z.string().optional().describe('Label for the node'),
name: z.string().optional().describe('Name of the node')
})
.optional()
const NodeType = z.object({
id: z.string().describe('Unique identifier for the node'),
type: z.enum(['agentFlow']).describe('Type of the node'),
position: NodePositionType.describe('Position of the node in the UI'),
width: z.number().describe('Width of the node'),
height: z.number().describe('Height of the node'),
selected: z.boolean().optional().describe('Whether the node is selected'),
positionAbsolute: NodePositionType.optional().describe('Absolute position of the node'),
data: NodeDataType
})
const EdgeType = z.object({
id: z.string().describe('Unique identifier for the edge'),
type: z.enum(['agentFlow']).describe('Type of the edge'),
source: z.string().describe('ID of the source node'),
sourceHandle: z.string().describe('ID of the source handle'),
target: z.string().describe('ID of the target node'),
targetHandle: z.string().describe('ID of the target handle'),
data: EdgeDataType.optional().describe('Data associated with the edge')
})
const NodesEdgesType = z
.object({
description: z.string().optional().describe('Description of the workflow'),
usecases: z.array(z.string()).optional().describe('Use cases for this workflow'),
nodes: z.array(NodeType).describe('Array of nodes in the workflow'),
edges: z.array(EdgeType).describe('Array of edges connecting the nodes')
})
.describe('Generate Agentflowv2 nodes and edges')
interface NodePosition {
x: number
y: number
}
interface EdgeData {
edgeLabel?: string
sourceColor?: string
targetColor?: string
isHumanInput?: boolean
}
interface AgentToolConfig {
agentSelectedTool: string
agentSelectedToolConfig: {
agentSelectedTool: string
}
}
interface NodeInputs {
agentTools?: AgentToolConfig[]
selectedTool?: string
toolInputArgs?: Record<string, any>[]
selectedToolConfig?: {
selectedTool: string
}
[key: string]: any
}
interface NodeData {
label?: string
name?: string
id?: string
inputs?: NodeInputs
inputAnchors?: InputAnchor[]
inputParams?: InputParam[]
outputs?: Record<string, any>
outputAnchors?: OutputAnchor[]
credential?: string
color?: string
[key: string]: any
}
interface Node {
id: string
type: 'agentFlow' | 'iteration'
position: NodePosition
width: number
height: number
selected?: boolean
positionAbsolute?: NodePosition
data: NodeData
parentNode?: string
extent?: string
}
interface Edge {
id: string
type: 'agentFlow'
source: string
sourceHandle: string
target: string
targetHandle: string
data?: EdgeData
label?: string
}
interface InputAnchor {
id: string
label: string
name: string
type?: string
[key: string]: any
}
interface InputParam {
id: string
name: string
label?: string
type?: string
display?: boolean
show?: Record<string, any>
hide?: Record<string, any>
[key: string]: any
}
interface OutputAnchor {
id: string
label: string
name: string
}
export const generateAgentflowv2 = async (config: Record<string, any>, question: string, options: ICommonObject) => {
try {
const result = await generateNodesEdges(config, question, options)
const { nodes, edges } = generateNodesData(result, config)
const updatedNodes = await generateSelectedTools(nodes, config, question, options)
const updatedEdges = updateEdges(edges, nodes)
return { nodes: updatedNodes, edges: updatedEdges }
} catch (error) {
console.error('Error generating AgentflowV2:', error)
return { error: error.message || 'Unknown error occurred' }
}
}
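
A hedged usage sketch of the entry point above. The config fields mirror what the helpers below read (componentNodes, selectedChatModel, toolNodes, prompt); the prompt constant and all concrete values are placeholders.

declare const componentNodes: Record<string, { filePath: string }> // host-provided registry
declare const AGENTFLOW_GENERATION_PROMPT: string // assumed system prompt constant
const result = await generateAgentflowv2(
    {
        componentNodes,
        selectedChatModel: { name: 'chatOpenAI', inputs: {} },
        toolNodes: ['googleCustomSearch', 'slackMCP'], // catalogue shown to the selector prompts
        prompt: AGENTFLOW_GENERATION_PROMPT
    },
    'Research a topic and post a summary to Slack',
    { chatId: 'chat-1' }
)
// result is either { nodes, edges } or { error }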
const updateEdges = (edges: Edge[], nodes: Node[]): Edge[] => {
const isMultiOutput = (source: string) => {
return source.includes('conditionAgentflow') || source.includes('conditionAgentAgentflow') || source.includes('humanInputAgentflow')
}
const findNodeColor = (nodeId: string) => {
const node = nodes.find((node) => node.id === nodeId)
return node?.data?.color
}
// filter out edges that do not exist in nodes
edges = edges.filter((edge) => {
return nodes.some((node) => node.id === edge.source || node.id === edge.target)
})
// filter out the edge that has hideInput/hideOutput on the source/target node
const indexToDelete = []
for (let i = 0; i < edges.length; i += 1) {
const edge = edges[i]
const sourceNode = nodes.find((node) => node.id === edge.source)
if (sourceNode?.data?.hideOutput) {
indexToDelete.push(i)
}
const targetNode = nodes.find((node) => node.id === edge.target)
if (targetNode?.data?.hideInput) {
indexToDelete.push(i)
}
}
// delete the edges at the index in indexToDelete
for (let i = indexToDelete.length - 1; i >= 0; i -= 1) {
edges.splice(indexToDelete[i], 1)
}
const updatedEdges = edges.map((edge) => {
return {
...edge,
data: {
...edge.data,
sourceColor: findNodeColor(edge.source),
targetColor: findNodeColor(edge.target),
edgeLabel: isMultiOutput(edge.source) && edge.label && edge.label.trim() !== '' ? edge.label.trim() : undefined,
isHumanInput: edge.source.includes('humanInputAgentflow')
},
type: 'agentFlow',
id: `${edge.source}-${edge.sourceHandle}-${edge.target}-${edge.targetHandle}`
}
}) as Edge[]
if (updatedEdges.length > 0) {
updatedEdges.forEach((edge) => {
if (isMultiOutput(edge.source)) {
if (edge.sourceHandle.includes('true')) {
edge.sourceHandle = edge.sourceHandle.replace('true', '0')
} else if (edge.sourceHandle.includes('false')) {
edge.sourceHandle = edge.sourceHandle.replace('false', '1')
}
}
})
}
return updatedEdges
}
const generateSelectedTools = async (nodes: Node[], config: Record<string, any>, question: string, options: ICommonObject) => {
const selectedTools: string[] = []
for (let i = 0; i < nodes.length; i += 1) {
const node = nodes[i]
if (!node.data.inputs) {
node.data.inputs = {}
}
if (node.data.name === 'agentAgentflow') {
const sysPrompt = `You are a workflow orchestrator that is designed to make agent coordination and execution easy. Your goal is to select the tools that are needed to achieve the given task.
Here are the tools to choose from:
${config.toolNodes}
Here are the selected tools:
${JSON.stringify(selectedTools, null, 2)}
Output Format should be a list of tool names:
For example: ["googleCustomSearch", "slackMCP"]
Now, select the tools that are needed to achieve the given task. You must only select tools that are in the list of tools above. You must NOT select tools that are already in the list of selected tools.
`
const tools = await _generateSelectedTools({ ...config, prompt: sysPrompt }, question, options)
if (Array.isArray(tools) && tools.length > 0) {
selectedTools.push(...tools)
const existingTools = node.data.inputs.agentTools || []
node.data.inputs.agentTools = [
...existingTools,
...tools.map((tool) => ({
agentSelectedTool: tool,
agentSelectedToolConfig: {
agentSelectedTool: tool
}
}))
]
}
} else if (node.data.name === 'toolAgentflow') {
const sysPrompt = `You are a workflow orchestrator that is designed to make agent coordination and execution easy. Your goal is to select ONE tool that is needed to achieve the given task.
Here are the tools to choose from:
${config.toolNodes}
Here are the selected tools:
${JSON.stringify(selectedTools, null, 2)}
Output Format should be ONLY one tool name inside a list:
For example: ["googleCustomSearch"]
Now, select the ONE tool that is needed to achieve the given task. You must only select a tool that is in the list of tools above. You must NOT select a tool that is already in the list of selected tools.
`
const tools = await _generateSelectedTools({ ...config, prompt: sysPrompt }, question, options)
if (Array.isArray(tools) && tools.length > 0) {
selectedTools.push(...tools)
node.data.inputs.selectedTool = tools[0]
node.data.inputs.toolInputArgs = []
node.data.inputs.selectedToolConfig = {
selectedTool: tools[0]
}
}
}
}
return nodes
}
const _generateSelectedTools = async (config: Record<string, any>, question: string, options: ICommonObject) => {
try {
const chatModelComponent = config.componentNodes[config.selectedChatModel?.name]
if (!chatModelComponent) {
throw new Error('Chat model component not found')
}
const nodeInstanceFilePath = chatModelComponent.filePath as string
const nodeModule = await import(nodeInstanceFilePath)
const newToolNodeInstance = new nodeModule.nodeClass()
const model = (await newToolNodeInstance.init(config.selectedChatModel, '', options)) as BaseChatModel
// Create a parser to validate the output
const parser = StructuredOutputParser.fromZodSchema(ToolType)
// Generate JSON schema from our Zod schema
const formatInstructions = parser.getFormatInstructions()
// Full conversation with system prompt and instructions
const messages = [
{
role: 'system',
content: `${config.prompt}\n\n${formatInstructions}\n\nMake sure to follow the exact JSON schema structure.`
},
{
role: 'user',
content: question
}
]
// Standard completion without structured output
const response = await model.invoke(messages)
// Try to extract JSON from the response
const responseContent = response.content.toString()
const jsonMatch = responseContent.match(/```json\n([\s\S]*?)\n```/) || responseContent.match(/{[\s\S]*?}/)
if (jsonMatch) {
const jsonStr = jsonMatch[1] || jsonMatch[0]
try {
const parsedJSON = JSON.parse(jsonStr)
// Validate with our schema
return ToolType.parse(parsedJSON)
} catch (parseError) {
console.error('Error parsing JSON from response:', parseError)
return { error: 'Failed to parse JSON from response', content: responseContent }
}
} else {
console.error('No JSON found in response:', responseContent)
return { error: 'No JSON found in response', content: responseContent }
}
} catch (error) {
console.error('Error generating AgentflowV2:', error)
return { error: error.message || 'Unknown error occurred' }
}
}
const generateNodesEdges = async (config: Record<string, any>, question: string, options?: ICommonObject) => {
try {
const chatModelComponent = config.componentNodes[config.selectedChatModel?.name]
if (!chatModelComponent) {
throw new Error('Chat model component not found')
}
const nodeInstanceFilePath = chatModelComponent.filePath as string
const nodeModule = await import(nodeInstanceFilePath)
const newToolNodeInstance = new nodeModule.nodeClass()
const model = (await newToolNodeInstance.init(config.selectedChatModel, '', options)) as BaseChatModel
// Create a parser to validate the output
const parser = StructuredOutputParser.fromZodSchema(NodesEdgesType)
// Generate JSON schema from our Zod schema
const formatInstructions = parser.getFormatInstructions()
// Full conversation with system prompt and instructions
const messages = [
{
role: 'system',
content: `${config.prompt}\n\n${formatInstructions}\n\nMake sure to follow the exact JSON schema structure.`
},
{
role: 'user',
content: question
}
]
// Standard completion without structured output
const response = await model.invoke(messages)
// Try to extract JSON from the response
const responseContent = response.content.toString()
const jsonMatch = responseContent.match(/```json\n([\s\S]*?)\n```/) || responseContent.match(/{[\s\S]*?}/)
if (jsonMatch) {
const jsonStr = jsonMatch[1] || jsonMatch[0]
try {
const parsedJSON = JSON.parse(jsonStr)
// Validate with our schema
return NodesEdgesType.parse(parsedJSON)
} catch (parseError) {
console.error('Error parsing JSON from response:', parseError)
return { error: 'Failed to parse JSON from response', content: responseContent }
}
} else {
console.error('No JSON found in response:', responseContent)
return { error: 'No JSON found in response', content: responseContent }
}
} catch (error) {
console.error('Error generating AgentflowV2:', error)
return { error: error.message || 'Unknown error occurred' }
}
}
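
_generateSelectedTools and generateNodesEdges share the same invoke-and-parse scaffolding; a possible extraction, not part of this commit, would be a schema-generic helper along these lines (it throws instead of returning an error object, a deliberate difference).

// Hypothetical shared helper for the repeated invoke-and-parse pattern.
const invokeWithSchema = async <T extends z.ZodTypeAny>(
    model: BaseChatModel,
    schema: T,
    systemPrompt: string,
    question: string
): Promise<z.infer<T>> => {
    const parser = StructuredOutputParser.fromZodSchema(schema)
    const messages = [
        {
            role: 'system',
            content: `${systemPrompt}\n\n${parser.getFormatInstructions()}\n\nMake sure to follow the exact JSON schema structure.`
        },
        { role: 'user', content: question }
    ]
    const responseContent = (await model.invoke(messages)).content.toString()
    const jsonMatch = responseContent.match(/```json\n([\s\S]*?)\n```/) || responseContent.match(/{[\s\S]*?}/)
    if (!jsonMatch) throw new Error('No JSON found in response')
    return schema.parse(JSON.parse(jsonMatch[1] || jsonMatch[0]))
}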
const generateNodesData = (result: Record<string, any>, config: Record<string, any>) => {
try {
if (result.error) {
return result
}
let nodes = result.nodes
for (let i = 0; i < nodes.length; i += 1) {
const node = nodes[i]
let nodeName = node.data.name
// If nodeName is not found in data.name, try extracting from node.id
if (!nodeName || !config.componentNodes[nodeName]) {
nodeName = node.id.split('_')[0]
}
const componentNode = config.componentNodes[nodeName]
if (!componentNode) {
continue
}
const initializedNodeData = initNode(cloneDeep(componentNode), node.id)
nodes[i].data = {
...initializedNodeData,
label: node.data?.label
}
if (nodes[i].data.name === 'iterationAgentflow') {
nodes[i].type = 'iteration'
}
if (nodes[i].parentNode) {
nodes[i].extent = 'parent'
}
}
return { nodes, edges: result.edges }
} catch (error) {
console.error('Error generating AgentflowV2:', error)
return { error: error.message || 'Unknown error occurred' }
}
}
const initNode = (nodeData: Record<string, any>, newNodeId: string): NodeData => {
const inputParams = []
const incoming = nodeData.inputs ? nodeData.inputs.length : 0
// Inputs
for (let i = 0; i < incoming; i += 1) {
const newInput = {
...nodeData.inputs[i],
id: `${newNodeId}-input-${nodeData.inputs[i].name}-${nodeData.inputs[i].type}`
}
inputParams.push(newInput)
}
// Credential
if (nodeData.credential) {
const newInput = {
...nodeData.credential,
id: `${newNodeId}-input-${nodeData.credential.name}-${nodeData.credential.type}`
}
inputParams.unshift(newInput)
}
// Outputs
let outputAnchors = initializeOutputAnchors(nodeData, newNodeId)
/* Initial
inputs = [
{
label: 'field_label_1',
name: 'string'
},
{
label: 'field_label_2',
name: 'CustomType'
}
]
=> Convert to inputs, inputParams, inputAnchors
=> inputs = { 'field': 'defaultvalue' } // Turn into inputs object with default values
=> // For inputs that are part of whitelistTypes
inputParams = [
{
label: 'field_label_1',
name: 'string'
}
]
=> // For inputs that are not part of whitelistTypes
inputAnchors = [
{
label: 'field_label_2',
name: 'CustomType'
}
]
*/
// Inputs
if (nodeData.inputs) {
const defaultInputs = initializeDefaultNodeData(nodeData.inputs)
nodeData.inputAnchors = showHideInputAnchors({ ...nodeData, inputAnchors: [], inputs: defaultInputs })
nodeData.inputParams = showHideInputParams({ ...nodeData, inputParams, inputs: defaultInputs })
nodeData.inputs = defaultInputs
} else {
nodeData.inputAnchors = []
nodeData.inputParams = []
nodeData.inputs = {}
}
// Outputs
if (nodeData.outputs) {
nodeData.outputs = initializeDefaultNodeData(outputAnchors)
} else {
nodeData.outputs = {}
}
nodeData.outputAnchors = outputAnchors
// Credential
if (nodeData.credential) nodeData.credential = ''
nodeData.id = newNodeId
return nodeData
}
const initializeDefaultNodeData = (nodeParams: Record<string, any>[]) => {
const initialValues: Record<string, any> = {}
for (let i = 0; i < nodeParams.length; i += 1) {
const input = nodeParams[i]
initialValues[input.name] = input.default || ''
}
return initialValues
}
const createAgentFlowOutputs = (nodeData: Record<string, any>, newNodeId: string) => {
if (nodeData.hideOutput) return []
if (nodeData.outputs?.length) {
return nodeData.outputs.map((_: any, index: number) => ({
id: `${newNodeId}-output-${index}`,
label: nodeData.label,
name: nodeData.name
}))
}
return [
{
id: `${newNodeId}-output-${nodeData.name}`,
label: nodeData.label,
name: nodeData.name
}
]
}
const initializeOutputAnchors = (nodeData: Record<string, any>, newNodeId: string): OutputAnchor[] => {
return createAgentFlowOutputs(nodeData, newNodeId)
}
const _showHideOperation = (nodeData: Record<string, any>, inputParam: Record<string, any>, displayType: string, index?: number) => {
const displayOptions = inputParam[displayType]
/* For example:
show: {
enableMemory: true
}
*/
Object.keys(displayOptions).forEach((path) => {
const comparisonValue = displayOptions[path]
if (path.includes('$index') && index !== undefined) {
path = path.replace('$index', index.toString())
}
const groundValue = get(nodeData.inputs, path, '')
if (Array.isArray(comparisonValue)) {
if (displayType === 'show' && !comparisonValue.includes(groundValue)) {
inputParam.display = false
}
if (displayType === 'hide' && comparisonValue.includes(groundValue)) {
inputParam.display = false
}
} else if (typeof comparisonValue === 'string') {
if (displayType === 'show' && !(comparisonValue === groundValue || new RegExp(comparisonValue).test(groundValue))) {
inputParam.display = false
}
if (displayType === 'hide' && (comparisonValue === groundValue || new RegExp(comparisonValue).test(groundValue))) {
inputParam.display = false
}
} else if (typeof comparisonValue === 'boolean') {
if (displayType === 'show' && comparisonValue !== groundValue) {
inputParam.display = false
}
if (displayType === 'hide' && comparisonValue === groundValue) {
inputParam.display = false
}
} else if (typeof comparisonValue === 'object') {
if (displayType === 'show' && !isEqual(comparisonValue, groundValue)) {
inputParam.display = false
}
if (displayType === 'hide' && isEqual(comparisonValue, groundValue)) {
inputParam.display = false
}
} else if (typeof comparisonValue === 'number') {
if (displayType === 'show' && comparisonValue !== groundValue) {
inputParam.display = false
}
if (displayType === 'hide' && comparisonValue === groundValue) {
inputParam.display = false
}
}
})
}
const showHideInputs = (nodeData: Record<string, any>, inputType: string, overrideParams?: Record<string, any>, arrayIndex?: number) => {
const params = overrideParams ?? nodeData[inputType] ?? []
for (let i = 0; i < params.length; i += 1) {
const inputParam = params[i]
// Reset display flag to true for each inputParam before evaluating show/hide
inputParam.display = true
if (inputParam.show) {
_showHideOperation(nodeData, inputParam, 'show', arrayIndex)
}
if (inputParam.hide) {
_showHideOperation(nodeData, inputParam, 'hide', arrayIndex)
}
}
return params
}
const showHideInputParams = (nodeData: Record<string, any>): InputParam[] => {
return showHideInputs(nodeData, 'inputParams')
}
const showHideInputAnchors = (nodeData: Record<string, any>): InputAnchor[] => {
return showHideInputs(nodeData, 'inputAnchors')
}
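
This show/hide machinery is what drives params such as the ChatOpenAI show: { allowImageUploads: true } change earlier in this diff. A small worked example against the module-local showHideInputs (param names hypothetical):

const nodeData = { inputs: { allowImageUploads: true } }
const params = [
    { name: 'imageResolution', show: { allowImageUploads: true } },
    { name: 'reasoningEffort', hide: { allowImageUploads: true } }
]
showHideInputs(nodeData, 'inputParams', params)
// => imageResolution.display === true, reasoningEffort.display === false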

View File

@ -29,7 +29,7 @@ import { ICommonObject, IDatabaseEntity, INodeData, IServerSideEventStreamer } f
import { LangWatch, LangWatchSpan, LangWatchTrace, autoconvertTypedValues } from 'langwatch' import { LangWatch, LangWatchSpan, LangWatchTrace, autoconvertTypedValues } from 'langwatch'
import { DataSource } from 'typeorm' import { DataSource } from 'typeorm'
import { ChatGenerationChunk } from '@langchain/core/outputs' import { ChatGenerationChunk } from '@langchain/core/outputs'
import { AIMessageChunk } from '@langchain/core/messages' import { AIMessageChunk, BaseMessageLike } from '@langchain/core/messages'
import { Serialized } from '@langchain/core/load/serializable' import { Serialized } from '@langchain/core/load/serializable'
interface AgentRun extends Run { interface AgentRun extends Run {
@ -635,137 +635,184 @@ export const additionalCallbacks = async (nodeData: INodeData, options: ICommonO
} }
export class AnalyticHandler { export class AnalyticHandler {
nodeData: INodeData private static instances: Map<string, AnalyticHandler> = new Map()
options: ICommonObject = {} private nodeData: INodeData
handlers: ICommonObject = {} private options: ICommonObject
private handlers: ICommonObject = {}
private initialized: boolean = false
private analyticsConfig: string | undefined
private chatId: string
private createdAt: number
constructor(nodeData: INodeData, options: ICommonObject) { private constructor(nodeData: INodeData, options: ICommonObject) {
this.options = options
this.nodeData = nodeData this.nodeData = nodeData
this.init() this.options = options
this.analyticsConfig = options.analytic
this.chatId = options.chatId
this.createdAt = Date.now()
}
static getInstance(nodeData: INodeData, options: ICommonObject): AnalyticHandler {
const chatId = options.chatId
if (!chatId) throw new Error('ChatId is required for analytics')
// Reset instance if analytics config changed for this chat
const instance = AnalyticHandler.instances.get(chatId)
if (instance?.analyticsConfig !== options.analytic) {
AnalyticHandler.resetInstance(chatId)
}
if (!AnalyticHandler.instances.get(chatId)) {
AnalyticHandler.instances.set(chatId, new AnalyticHandler(nodeData, options))
}
return AnalyticHandler.instances.get(chatId)!
}
static resetInstance(chatId: string): void {
AnalyticHandler.instances.delete(chatId)
}
// Keep this as backup for orphaned instances
static cleanup(maxAge: number = 3600000): void {
const now = Date.now()
for (const [chatId, instance] of AnalyticHandler.instances) {
if (now - instance.createdAt > maxAge) {
AnalyticHandler.resetInstance(chatId)
}
}
} }
async init() { async init() {
if (this.initialized) return
try { try {
if (!this.options.analytic) return if (!this.options.analytic) return
const analytic = JSON.parse(this.options.analytic) const analytic = JSON.parse(this.options.analytic)
for (const provider in analytic) { for (const provider in analytic) {
const providerStatus = analytic[provider].status as boolean const providerStatus = analytic[provider].status as boolean
if (providerStatus) { if (providerStatus) {
const credentialId = analytic[provider].credentialId as string const credentialId = analytic[provider].credentialId as string
const credentialData = await getCredentialData(credentialId ?? '', this.options) const credentialData = await getCredentialData(credentialId ?? '', this.options)
if (provider === 'langSmith') { await this.initializeProvider(provider, analytic[provider], credentialData)
const langSmithProject = analytic[provider].projectName as string
const langSmithApiKey = getCredentialParam('langSmithApiKey', credentialData, this.nodeData)
const langSmithEndpoint = getCredentialParam('langSmithEndpoint', credentialData, this.nodeData)
const client = new LangsmithClient({
apiUrl: langSmithEndpoint ?? 'https://api.smith.langchain.com',
apiKey: langSmithApiKey
})
this.handlers['langSmith'] = { client, langSmithProject }
} else if (provider === 'langFuse') {
const release = analytic[provider].release as string
const langFuseSecretKey = getCredentialParam('langFuseSecretKey', credentialData, this.nodeData)
const langFusePublicKey = getCredentialParam('langFusePublicKey', credentialData, this.nodeData)
const langFuseEndpoint = getCredentialParam('langFuseEndpoint', credentialData, this.nodeData)
const langfuse = new Langfuse({
secretKey: langFuseSecretKey,
publicKey: langFusePublicKey,
baseUrl: langFuseEndpoint ?? 'https://cloud.langfuse.com',
sdkIntegration: 'Flowise',
release
})
this.handlers['langFuse'] = { client: langfuse }
} else if (provider === 'lunary') {
const lunaryPublicKey = getCredentialParam('lunaryAppId', credentialData, this.nodeData)
const lunaryEndpoint = getCredentialParam('lunaryEndpoint', credentialData, this.nodeData)
lunary.init({
publicKey: lunaryPublicKey,
apiUrl: lunaryEndpoint,
runtime: 'flowise'
})
this.handlers['lunary'] = { client: lunary }
} else if (provider === 'langWatch') {
const langWatchApiKey = getCredentialParam('langWatchApiKey', credentialData, this.nodeData)
const langWatchEndpoint = getCredentialParam('langWatchEndpoint', credentialData, this.nodeData)
const langwatch = new LangWatch({
apiKey: langWatchApiKey,
endpoint: langWatchEndpoint
})
this.handlers['langWatch'] = { client: langwatch }
} else if (provider === 'arize') {
const arizeApiKey = getCredentialParam('arizeApiKey', credentialData, this.nodeData)
const arizeSpaceId = getCredentialParam('arizeSpaceId', credentialData, this.nodeData)
const arizeEndpoint = getCredentialParam('arizeEndpoint', credentialData, this.nodeData)
const arizeProject = analytic[provider].projectName as string
let arizeOptions: ArizeTracerOptions = {
apiKey: arizeApiKey,
spaceId: arizeSpaceId,
baseUrl: arizeEndpoint ?? 'https://otlp.arize.com',
projectName: arizeProject ?? 'default',
sdkIntegration: 'Flowise',
enableCallback: false
}
const arize: Tracer | undefined = getArizeTracer(arizeOptions)
const rootSpan: Span | undefined = undefined
this.handlers['arize'] = { client: arize, arizeProject, rootSpan }
} else if (provider === 'phoenix') {
const phoenixApiKey = getCredentialParam('phoenixApiKey', credentialData, this.nodeData)
const phoenixEndpoint = getCredentialParam('phoenixEndpoint', credentialData, this.nodeData)
const phoenixProject = analytic[provider].projectName as string
let phoenixOptions: PhoenixTracerOptions = {
apiKey: phoenixApiKey,
baseUrl: phoenixEndpoint ?? 'https://app.phoenix.arize.com',
projectName: phoenixProject ?? 'default',
sdkIntegration: 'Flowise',
enableCallback: false
}
const phoenix: Tracer | undefined = getPhoenixTracer(phoenixOptions)
const rootSpan: Span | undefined = undefined
this.handlers['phoenix'] = { client: phoenix, phoenixProject, rootSpan }
} else if (provider === 'opik') {
const opikApiKey = getCredentialParam('opikApiKey', credentialData, this.nodeData)
const opikEndpoint = getCredentialParam('opikUrl', credentialData, this.nodeData)
const opikWorkspace = getCredentialParam('opikWorkspace', credentialData, this.nodeData)
const opikProject = analytic[provider].opikProjectName as string
let opikOptions: OpikTracerOptions = {
apiKey: opikApiKey,
baseUrl: opikEndpoint ?? 'https://www.comet.com/opik/api',
projectName: opikProject ?? 'default',
workspace: opikWorkspace ?? 'default',
sdkIntegration: 'Flowise',
enableCallback: false
}
const opik: Tracer | undefined = getOpikTracer(opikOptions)
const rootSpan: Span | undefined = undefined
this.handlers['opik'] = { client: opik, opikProject, rootSpan }
}
} }
} }
this.initialized = true
} catch (e) { } catch (e) {
throw new Error(e) throw new Error(e)
} }
} }
// Add getter for handlers (useful for debugging)
getHandlers(): ICommonObject {
return this.handlers
}
async initializeProvider(provider: string, providerConfig: any, credentialData: any) {
if (provider === 'langSmith') {
const langSmithProject = providerConfig.projectName as string
const langSmithApiKey = getCredentialParam('langSmithApiKey', credentialData, this.nodeData)
const langSmithEndpoint = getCredentialParam('langSmithEndpoint', credentialData, this.nodeData)
const client = new LangsmithClient({
apiUrl: langSmithEndpoint ?? 'https://api.smith.langchain.com',
apiKey: langSmithApiKey
})
this.handlers['langSmith'] = { client, langSmithProject }
} else if (provider === 'langFuse') {
const release = providerConfig.release as string
const langFuseSecretKey = getCredentialParam('langFuseSecretKey', credentialData, this.nodeData)
const langFusePublicKey = getCredentialParam('langFusePublicKey', credentialData, this.nodeData)
const langFuseEndpoint = getCredentialParam('langFuseEndpoint', credentialData, this.nodeData)
const langfuse = new Langfuse({
secretKey: langFuseSecretKey,
publicKey: langFusePublicKey,
baseUrl: langFuseEndpoint ?? 'https://cloud.langfuse.com',
sdkIntegration: 'Flowise',
release
})
this.handlers['langFuse'] = { client: langfuse }
} else if (provider === 'lunary') {
const lunaryPublicKey = getCredentialParam('lunaryAppId', credentialData, this.nodeData)
const lunaryEndpoint = getCredentialParam('lunaryEndpoint', credentialData, this.nodeData)
lunary.init({
publicKey: lunaryPublicKey,
apiUrl: lunaryEndpoint,
runtime: 'flowise'
})
this.handlers['lunary'] = { client: lunary }
} else if (provider === 'langWatch') {
const langWatchApiKey = getCredentialParam('langWatchApiKey', credentialData, this.nodeData)
const langWatchEndpoint = getCredentialParam('langWatchEndpoint', credentialData, this.nodeData)
const langwatch = new LangWatch({
apiKey: langWatchApiKey,
endpoint: langWatchEndpoint
})
this.handlers['langWatch'] = { client: langwatch }
} else if (provider === 'arize') {
const arizeApiKey = getCredentialParam('arizeApiKey', credentialData, this.nodeData)
const arizeSpaceId = getCredentialParam('arizeSpaceId', credentialData, this.nodeData)
const arizeEndpoint = getCredentialParam('arizeEndpoint', credentialData, this.nodeData)
const arizeProject = providerConfig.projectName as string
let arizeOptions: ArizeTracerOptions = {
apiKey: arizeApiKey,
spaceId: arizeSpaceId,
baseUrl: arizeEndpoint ?? 'https://otlp.arize.com',
projectName: arizeProject ?? 'default',
sdkIntegration: 'Flowise',
enableCallback: false
}
const arize: Tracer | undefined = getArizeTracer(arizeOptions)
const rootSpan: Span | undefined = undefined
this.handlers['arize'] = { client: arize, arizeProject, rootSpan }
} else if (provider === 'phoenix') {
const phoenixApiKey = getCredentialParam('phoenixApiKey', credentialData, this.nodeData)
const phoenixEndpoint = getCredentialParam('phoenixEndpoint', credentialData, this.nodeData)
const phoenixProject = providerConfig.projectName as string
let phoenixOptions: PhoenixTracerOptions = {
apiKey: phoenixApiKey,
baseUrl: phoenixEndpoint ?? 'https://app.phoenix.arize.com',
projectName: phoenixProject ?? 'default',
sdkIntegration: 'Flowise',
enableCallback: false
}
const phoenix: Tracer | undefined = getPhoenixTracer(phoenixOptions)
const rootSpan: Span | undefined = undefined
this.handlers['phoenix'] = { client: phoenix, phoenixProject, rootSpan }
} else if (provider === 'opik') {
const opikApiKey = getCredentialParam('opikApiKey', credentialData, this.nodeData)
const opikEndpoint = getCredentialParam('opikUrl', credentialData, this.nodeData)
const opikWorkspace = getCredentialParam('opikWorkspace', credentialData, this.nodeData)
const opikProject = providerConfig.opikProjectName as string
let opikOptions: OpikTracerOptions = {
apiKey: opikApiKey,
baseUrl: opikEndpoint ?? 'https://www.comet.com/opik/api',
projectName: opikProject ?? 'default',
workspace: opikWorkspace ?? 'default',
sdkIntegration: 'Flowise',
enableCallback: false
}
const opik: Tracer | undefined = getOpikTracer(opikOptions)
const rootSpan: Span | undefined = undefined
this.handlers['opik'] = { client: opik, opikProject, rootSpan }
}
}
async onChainStart(name: string, input: string, parentIds?: ICommonObject) {
const returnIds: ICommonObject = {
langSmith: {},
@ -1077,6 +1124,11 @@ export class AnalyticHandler {
chainSpan.end()
}
}
if (shutdown) {
// Cleanup this instance when chain ends
AnalyticHandler.resetInstance(this.chatId)
}
}
async onChainError(returnIds: ICommonObject, error: string | object, shutdown = false) {
@ -1155,9 +1207,14 @@ export class AnalyticHandler {
chainSpan.end()
}
}
if (shutdown) {
// Cleanup this instance when chain ends
AnalyticHandler.resetInstance(this.chatId)
}
}
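// Sketch (not part of this diff): the shutdown cleanup above implies AnalyticHandler
// instances are cached per chatId. Assuming getInstance/resetInstance semantics that
// live outside this hunk, a minimal registry could look like this (hypothetical bodies):
private static instances: Map<string, AnalyticHandler> = new Map()
static getInstance(chatId: string, nodeData: INodeData, options: ICommonObject): AnalyticHandler {
let instance = AnalyticHandler.instances.get(chatId)
if (!instance) {
instance = new AnalyticHandler(nodeData, options)
AnalyticHandler.instances.set(chatId, instance)
}
return instance
}
static resetInstance(chatId: string): void {
AnalyticHandler.instances.delete(chatId)
}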
async onLLMStart(name: string, input: string | BaseMessageLike[], parentIds: ICommonObject) {
const returnIds: ICommonObject = {
langSmith: {},
langFuse: {},
@ -1169,13 +1226,18 @@ export class AnalyticHandler {
if (Object.prototype.hasOwnProperty.call(this.handlers, 'langSmith')) {
const parentRun: RunTree | undefined = this.handlers['langSmith'].chainRun[parentIds['langSmith'].chainRun]
if (parentRun) {
const inputs: any = {}
if (Array.isArray(input)) {
inputs.messages = input
} else {
inputs.prompts = [input]
}
const childLLMRun = await parentRun.createChild({
name,
run_type: 'llm',
inputs
})
await childLLMRun.postRun()
this.handlers['langSmith'].llmRun = { [childLLMRun.id]: childLLMRun }
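// Illustrative calls showing the widened input type (handler/parentIds names are hypothetical):
// await analyticHandler.onLLMStart('chatOpenAI', 'Hello', parentIds) // langSmith logs inputs = { prompts: ['Hello'] }
// await analyticHandler.onLLMStart('chatOpenAI', [{ role: 'user', content: 'Hello' }], parentIds) // logs inputs = { messages: [...] }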

View File

@ -11,3 +11,4 @@ export * from './storageUtils'
export * from './handler'
export * from './followUpPrompts'
export * from './validator'
export * from './agentflowv2Generator'

View File

@ -712,7 +712,7 @@ export const mapChatMessageToBaseMessage = async (chatmessages: any[] = []): Pro
for (const message of chatmessages) {
if (message.role === 'apiMessage' || message.type === 'apiMessage') {
chatHistory.push(new AIMessage(message.content || ''))
} else if (message.role === 'userMessage' || message.type === 'userMessage') {
// check for image/files uploads
if (message.fileUploads) {
// example: [{"type":"stored-file","name":"0_DiXc4ZklSTo3M8J4.jpg","mime":"image/jpeg"}]
@ -788,17 +788,23 @@ export const mapChatMessageToBaseMessage = async (chatmessages: any[] = []): Pro
 * @param {IMessage[]} chatHistory
 * @returns {string}
 */
export const convertChatHistoryToText = (chatHistory: IMessage[] | { content: string; role: string }[] = []): string => {
return chatHistory
.map((chatMessage) => {
if (!chatMessage) return ''
const messageContent = 'message' in chatMessage ? chatMessage.message : chatMessage.content
if (!messageContent || messageContent.trim() === '') return ''
const messageType = 'type' in chatMessage ? chatMessage.type : chatMessage.role
if (messageType === 'apiMessage' || messageType === 'assistant') {
return `Assistant: ${messageContent}`
} else if (messageType === 'userMessage' || messageType === 'user') {
return `Human: ${messageContent}`
} else {
return `${messageContent}`
}
})
.filter((message) => message !== '') // Remove empty messages
.join('\n')
}
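// Illustrative usage of the widened signature (sketch; values are hypothetical):
// convertChatHistoryToText([{ type: 'userMessage', message: 'Hi' }, { type: 'apiMessage', message: 'Hello!' }])
// convertChatHistoryToText([{ role: 'user', content: 'Hi' }, { role: 'assistant', content: 'Hello!' }])
// Both return "Human: Hi\nAssistant: Hello!"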

View File

@ -4,7 +4,7 @@
[English](./README.md) | 中文
![Flowise](https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_agentflow.gif?raw=true)
拖放界面来构建自定义的 LLM 流程

View File

@ -4,7 +4,7 @@
English | [中文](./README-ZH.md)
![Flowise](https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_agentflow.gif?raw=true)
Drag & drop UI to build your customized LLM flow

View File

@ -1,7 +1,7 @@
{
"description": "Customer support team consisting of Support Representative and Quality Assurance Specialist to handle support tickets",
"framework": ["Langchain"],
"usecases": ["Customer Support", "Hierarchical Agent Teams"],
"nodes": [
{
"id": "supervisor_0",

View File

@ -1,7 +1,7 @@
{
"description": "Research leads and create personalized email drafts for sales team",
"framework": ["Langchain"],
"usecases": ["Leads", "Hierarchical Agent Teams"],
"nodes": [
{
"id": "supervisor_0",

View File

@ -1,7 +1,7 @@
{
"description": "A team of portfolio manager, financial analyst, and risk manager working together to optimize an investment portfolio.",
"framework": ["Langchain"],
"usecases": ["Finance & Accounting", "Hierarchical Agent Teams"],
"nodes": [
{
"id": "supervisor_0",

View File

@ -1,7 +1,7 @@
{
"description": "Prompt engineering team working together to craft Worker Prompts for your AgentFlow.",
"framework": ["Langchain"],
"usecases": ["Engineering", "Hierarchical Agent Teams"],
"nodes": [
{
"id": "supervisor_0",

View File

@ -1,7 +1,7 @@
{
"description": "Software engineering team working together to build a feature, solve a problem, or complete a task.",
"framework": ["Langchain"],
"usecases": ["Engineering", "Hierarchical Agent Teams"],
"nodes": [
{
"id": "supervisor_0",

View File

@ -1,7 +1,7 @@
{
"description": "Text to SQL query process using team of 3 agents: SQL Expert, SQL Reviewer, and SQL Executor",
"framework": ["Langchain"],
"usecases": ["SQL", "Hierarchical Agent Teams"],
"nodes": [
{
"id": "supervisor_0",

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,847 @@
{
"description": "An email reply HITL (human in the loop) agent that can proceed or refine the email with user input",
"usecases": ["Human In Loop"],
"nodes": [
{
"id": "startAgentflow_0",
"type": "agentFlow",
"position": {
"x": -212.0817769699585,
"y": 95.2304753249555
},
"data": {
"id": "startAgentflow_0",
"label": "Start",
"version": 1,
"name": "startAgentflow",
"type": "Start",
"color": "#7EE787",
"hideInput": true,
"baseClasses": ["Start"],
"category": "Agent Flows",
"description": "Starting point of the agentflow",
"inputParams": [
{
"label": "Input Type",
"name": "startInputType",
"type": "options",
"options": [
{
"label": "Chat Input",
"name": "chatInput",
"description": "Start the conversation with chat input"
},
{
"label": "Form Input",
"name": "formInput",
"description": "Start the workflow with form inputs"
}
],
"default": "chatInput",
"id": "startAgentflow_0-input-startInputType-options",
"display": true
},
{
"label": "Form Title",
"name": "formTitle",
"type": "string",
"placeholder": "Please Fill Out The Form",
"show": {
"startInputType": "formInput"
},
"id": "startAgentflow_0-input-formTitle-string",
"display": false
},
{
"label": "Form Description",
"name": "formDescription",
"type": "string",
"placeholder": "Complete all fields below to continue",
"show": {
"startInputType": "formInput"
},
"id": "startAgentflow_0-input-formDescription-string",
"display": false
},
{
"label": "Form Input Types",
"name": "formInputTypes",
"description": "Specify the type of form input",
"type": "array",
"show": {
"startInputType": "formInput"
},
"array": [
{
"label": "Type",
"name": "type",
"type": "options",
"options": [
{
"label": "String",
"name": "string"
},
{
"label": "Number",
"name": "number"
},
{
"label": "Boolean",
"name": "boolean"
},
{
"label": "Options",
"name": "options"
}
],
"default": "string"
},
{
"label": "Label",
"name": "label",
"type": "string",
"placeholder": "Label for the input"
},
{
"label": "Variable Name",
"name": "name",
"type": "string",
"placeholder": "Variable name for the input (must be camel case)",
"description": "Variable name must be camel case. For example: firstName, lastName, etc."
},
{
"label": "Add Options",
"name": "addOptions",
"type": "array",
"show": {
"formInputTypes[$index].type": "options"
},
"array": [
{
"label": "Option",
"name": "option",
"type": "string"
}
]
}
],
"id": "startAgentflow_0-input-formInputTypes-array",
"display": false
},
{
"label": "Ephemeral Memory",
"name": "startEphemeralMemory",
"type": "boolean",
"description": "Start fresh for every execution without past chat history",
"optional": true
},
{
"label": "Flow State",
"name": "startState",
"description": "Runtime state during the execution of the workflow",
"type": "array",
"optional": true,
"array": [
{
"label": "Key",
"name": "key",
"type": "string",
"placeholder": "Foo"
},
{
"label": "Value",
"name": "value",
"type": "string",
"placeholder": "Bar"
}
],
"id": "startAgentflow_0-input-startState-array",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"startInputType": "chatInput",
"formTitle": "",
"formDescription": "",
"formInputTypes": "",
"startState": ""
},
"outputAnchors": [
{
"id": "startAgentflow_0-output-startAgentflow",
"label": "Start",
"name": "startAgentflow"
}
],
"outputs": {},
"selected": false
},
"width": 101,
"height": 65,
"selected": false,
"positionAbsolute": {
"x": -212.0817769699585,
"y": 95.2304753249555
},
"dragging": false
},
{
"id": "agentAgentflow_0",
"position": {
"x": -62.25,
"y": 76
},
"data": {
"id": "agentAgentflow_0",
"label": "Email Reply Agent",
"version": 1,
"name": "agentAgentflow",
"type": "Agent",
"color": "#4DD0E1",
"baseClasses": ["Agent"],
"category": "Agent Flows",
"description": "Dynamically choose and utilize tools during runtime, enabling multi-step reasoning",
"inputParams": [
{
"label": "Model",
"name": "agentModel",
"type": "asyncOptions",
"loadMethod": "listModels",
"loadConfig": true,
"id": "agentAgentflow_0-input-agentModel-asyncOptions",
"display": true
},
{
"label": "Messages",
"name": "agentMessages",
"type": "array",
"optional": true,
"acceptVariable": true,
"array": [
{
"label": "Role",
"name": "role",
"type": "options",
"options": [
{
"label": "System",
"name": "system"
},
{
"label": "Assistant",
"name": "assistant"
},
{
"label": "Developer",
"name": "developer"
},
{
"label": "User",
"name": "user"
}
]
},
{
"label": "Content",
"name": "content",
"type": "string",
"acceptVariable": true,
"generateInstruction": true,
"rows": 4
}
],
"id": "agentAgentflow_0-input-agentMessages-array",
"display": true
},
{
"label": "Tools",
"name": "agentTools",
"type": "array",
"optional": true,
"array": [
{
"label": "Tool",
"name": "agentSelectedTool",
"type": "asyncOptions",
"loadMethod": "listTools",
"loadConfig": true
},
{
"label": "Require Human Input",
"name": "agentSelectedToolRequiresHumanInput",
"type": "boolean",
"optional": true
}
],
"id": "agentAgentflow_0-input-agentTools-array",
"display": true
},
{
"label": "Knowledge (Document Stores)",
"name": "agentKnowledgeDocumentStores",
"type": "array",
"description": "Give your agent context about different document sources. Document stores must be upserted in advance.",
"array": [
{
"label": "Document Store",
"name": "documentStore",
"type": "asyncOptions",
"loadMethod": "listStores"
},
{
"label": "Describe Knowledge",
"name": "docStoreDescription",
"type": "string",
"generateDocStoreDescription": true,
"placeholder": "Describe what the knowledge base is about, this is useful for the AI to know when and how to search for correct information",
"rows": 4
},
{
"label": "Return Source Documents",
"name": "returnSourceDocuments",
"type": "boolean",
"optional": true
}
],
"optional": true,
"id": "agentAgentflow_0-input-agentKnowledgeDocumentStores-array",
"display": true
},
{
"label": "Knowledge (Vector Embeddings)",
"name": "agentKnowledgeVSEmbeddings",
"type": "array",
"description": "Give your agent context about different document sources from existing vector stores and embeddings",
"array": [
{
"label": "Vector Store",
"name": "vectorStore",
"type": "asyncOptions",
"loadMethod": "listVectorStores",
"loadConfig": true
},
{
"label": "Embedding Model",
"name": "embeddingModel",
"type": "asyncOptions",
"loadMethod": "listEmbeddings",
"loadConfig": true
},
{
"label": "Knowledge Name",
"name": "knowledgeName",
"type": "string",
"placeholder": "A short name for the knowledge base, this is useful for the AI to know when and how to search for correct information"
},
{
"label": "Describe Knowledge",
"name": "knowledgeDescription",
"type": "string",
"placeholder": "Describe what the knowledge base is about, this is useful for the AI to know when and how to search for correct information",
"rows": 4
},
{
"label": "Return Source Documents",
"name": "returnSourceDocuments",
"type": "boolean",
"optional": true
}
],
"optional": true,
"id": "agentAgentflow_0-input-agentKnowledgeVSEmbeddings-array",
"display": true
},
{
"label": "Enable Memory",
"name": "agentEnableMemory",
"type": "boolean",
"description": "Enable memory for the conversation thread",
"default": true,
"optional": true,
"id": "agentAgentflow_0-input-agentEnableMemory-boolean",
"display": true
},
{
"label": "Memory Type",
"name": "agentMemoryType",
"type": "options",
"options": [
{
"label": "All Messages",
"name": "allMessages",
"description": "Retrieve all messages from the conversation"
},
{
"label": "Window Size",
"name": "windowSize",
"description": "Uses a fixed window size to surface the last N messages"
},
{
"label": "Conversation Summary",
"name": "conversationSummary",
"description": "Summarizes the whole conversation"
},
{
"label": "Conversation Summary Buffer",
"name": "conversationSummaryBuffer",
"description": "Summarize conversations once token limit is reached. Default to 2000"
}
],
"optional": true,
"default": "allMessages",
"show": {
"agentEnableMemory": true
},
"id": "agentAgentflow_0-input-agentMemoryType-options",
"display": true
},
{
"label": "Window Size",
"name": "agentMemoryWindowSize",
"type": "number",
"default": "20",
"description": "Uses a fixed window size to surface the last N messages",
"show": {
"agentMemoryType": "windowSize"
},
"id": "agentAgentflow_0-input-agentMemoryWindowSize-number",
"display": false
},
{
"label": "Max Token Limit",
"name": "agentMemoryMaxTokenLimit",
"type": "number",
"default": "2000",
"description": "Summarize conversations once token limit is reached. Default to 2000",
"show": {
"agentMemoryType": "conversationSummaryBuffer"
},
"id": "agentAgentflow_0-input-agentMemoryMaxTokenLimit-number",
"display": false
},
{
"label": "Input Message",
"name": "agentUserMessage",
"type": "string",
"description": "Add an input message as user message at the end of the conversation",
"rows": 4,
"optional": true,
"acceptVariable": true,
"show": {
"agentEnableMemory": true
},
"id": "agentAgentflow_0-input-agentUserMessage-string",
"display": true
},
{
"label": "Return Response As",
"name": "agentReturnResponseAs",
"type": "options",
"options": [
{
"label": "User Message",
"name": "userMessage"
},
{
"label": "Assistant Message",
"name": "assistantMessage"
}
],
"default": "userMessage",
"id": "agentAgentflow_0-input-agentReturnResponseAs-options",
"display": true
},
{
"label": "Update Flow State",
"name": "agentUpdateState",
"description": "Update runtime state during the execution of the workflow",
"type": "array",
"optional": true,
"acceptVariable": true,
"array": [
{
"label": "Key",
"name": "key",
"type": "asyncOptions",
"loadMethod": "listRuntimeStateKeys",
"freeSolo": true
},
{
"label": "Value",
"name": "value",
"type": "string",
"acceptVariable": true,
"acceptNodeOutputAsVariable": true
}
],
"id": "agentAgentflow_0-input-agentUpdateState-array",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"agentModel": "chatOpenAI",
"agentMessages": [
{
"role": "system",
"content": "<p>You are a customer support agent working in Flowise Inc. Write a professional email reply to user's query. Use the web search tools to get more details about the prospect.</p>"
}
],
"agentTools": [
{
"agentSelectedTool": "googleCustomSearch",
"agentSelectedToolConfig": {
"agentSelectedTool": "googleCustomSearch"
}
},
{
"agentSelectedTool": "currentDateTime",
"agentSelectedToolConfig": {
"agentSelectedTool": "currentDateTime"
}
}
],
"agentKnowledgeDocumentStores": "",
"agentEnableMemory": true,
"agentMemoryType": "allMessages",
"agentUserMessage": "",
"agentReturnResponseAs": "userMessage",
"agentUpdateState": "",
"agentModelConfig": {
"cache": "",
"modelName": "gpt-4o-mini",
"temperature": 0.9,
"streaming": true,
"maxTokens": "",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"strictToolCalling": "",
"stopSequence": "",
"basepath": "",
"proxyUrl": "",
"baseOptions": "",
"allowImageUploads": "",
"imageResolution": "low",
"reasoningEffort": "medium",
"agentModel": "chatOpenAI"
}
},
"outputAnchors": [
{
"id": "agentAgentflow_0-output-agentAgentflow",
"label": "Agent",
"name": "agentAgentflow"
}
],
"outputs": {},
"selected": false
},
"type": "agentFlow",
"width": 182,
"height": 103,
"selected": false,
"positionAbsolute": {
"x": -62.25,
"y": 76
},
"dragging": false
},
{
"id": "humanInputAgentflow_0",
"position": {
"x": 156.05666363734434,
"y": 86.62266545493773
},
"data": {
"id": "humanInputAgentflow_0",
"label": "Human Input 0",
"version": 1,
"name": "humanInputAgentflow",
"type": "HumanInput",
"color": "#6E6EFD",
"baseClasses": ["HumanInput"],
"category": "Agent Flows",
"description": "Request human input, approval or rejection during execution",
"inputParams": [
{
"label": "Description Type",
"name": "humanInputDescriptionType",
"type": "options",
"options": [
{
"label": "Fixed",
"name": "fixed",
"description": "Specify a fixed description"
},
{
"label": "Dynamic",
"name": "dynamic",
"description": "Use LLM to generate a description"
}
],
"id": "humanInputAgentflow_0-input-humanInputDescriptionType-options",
"display": true
},
{
"label": "Description",
"name": "humanInputDescription",
"type": "string",
"placeholder": "Are you sure you want to proceed?",
"acceptVariable": true,
"rows": 4,
"show": {
"humanInputDescriptionType": "fixed"
},
"id": "humanInputAgentflow_0-input-humanInputDescription-string",
"display": true
},
{
"label": "Model",
"name": "humanInputModel",
"type": "asyncOptions",
"loadMethod": "listModels",
"loadConfig": true,
"show": {
"humanInputDescriptionType": "dynamic"
},
"id": "humanInputAgentflow_0-input-humanInputModel-asyncOptions",
"display": false
},
{
"label": "Prompt",
"name": "humanInputModelPrompt",
"type": "string",
"default": "<p>Summarize the conversation between the user and the assistant, reiterate the last message from the assistant, and ask if user would like to proceed or if they have any feedback. </p>\n<ul>\n<li>Begin by capturing the key points of the conversation, ensuring that you reflect the main ideas and themes discussed.</li>\n<li>Then, clearly reproduce the last message sent by the assistant to maintain continuity. Make sure the whole message is reproduced.</li>\n<li>Finally, ask the user if they would like to proceed, or provide any feedback on the last assistant message</li>\n</ul>\n<h2 id=\"output-format-the-output-should-be-structured-in-three-parts-\">Output Format The output should be structured in three parts in text:</h2>\n<ul>\n<li>A summary of the conversation (1-3 sentences).</li>\n<li>The last assistant message (exactly as it appeared).</li>\n<li>Ask the user if they would like to proceed, or provide any feedback on last assistant message. No other explanation and elaboration is needed.</li>\n</ul>\n",
"acceptVariable": true,
"generateInstruction": true,
"rows": 4,
"show": {
"humanInputDescriptionType": "dynamic"
},
"id": "humanInputAgentflow_0-input-humanInputModelPrompt-string",
"display": false
},
{
"label": "Enable Feedback",
"name": "humanInputEnableFeedback",
"type": "boolean",
"default": true,
"id": "humanInputAgentflow_0-input-humanInputEnableFeedback-boolean",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"humanInputDescriptionType": "fixed",
"humanInputEnableFeedback": true,
"humanInputModelConfig": {
"cache": "",
"modelName": "gpt-4o-mini",
"temperature": 0.9,
"streaming": true,
"maxTokens": "",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"strictToolCalling": "",
"stopSequence": "",
"basepath": "",
"proxyUrl": "",
"baseOptions": "",
"allowImageUploads": "",
"imageResolution": "low",
"reasoningEffort": "medium",
"humanInputModel": "chatOpenAI"
},
"humanInputDescription": "<p>Are you sure you want to proceed?</p>"
},
"outputAnchors": [
{
"id": "humanInputAgentflow_0-output-0",
"label": "Human Input",
"name": "humanInputAgentflow"
},
{
"id": "humanInputAgentflow_0-output-1",
"label": "Human Input",
"name": "humanInputAgentflow"
}
],
"outputs": {
"humanInputAgentflow": ""
},
"selected": false
},
"type": "agentFlow",
"width": 161,
"height": 80,
"selected": false,
"positionAbsolute": {
"x": 156.05666363734434,
"y": 86.62266545493773
},
"dragging": false
},
{
"id": "directReplyAgentflow_0",
"position": {
"x": 363.0101864947954,
"y": 35.15053748988734
},
"data": {
"id": "directReplyAgentflow_0",
"label": "Direct Reply 0",
"version": 1,
"name": "directReplyAgentflow",
"type": "DirectReply",
"color": "#4DDBBB",
"hideOutput": true,
"baseClasses": ["DirectReply"],
"category": "Agent Flows",
"description": "Directly reply to the user with a message",
"inputParams": [
{
"label": "Message",
"name": "directReplyMessage",
"type": "string",
"rows": 4,
"acceptVariable": true,
"id": "directReplyAgentflow_0-input-directReplyMessage-string",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"directReplyMessage": "<p><span class=\"variable\" data-type=\"mention\" data-id=\"agentAgentflow_0\" data-label=\"agentAgentflow_0\">{{ agentAgentflow_0 }}</span> </p>"
},
"outputAnchors": [],
"outputs": {},
"selected": false
},
"type": "agentFlow",
"width": 155,
"height": 65,
"selected": false,
"positionAbsolute": {
"x": 363.0101864947954,
"y": 35.15053748988734
},
"dragging": false
},
{
"id": "loopAgentflow_0",
"position": {
"x": 366.5975521223236,
"y": 130.12266545493773
},
"data": {
"id": "loopAgentflow_0",
"label": "Loop 0",
"version": 1,
"name": "loopAgentflow",
"type": "Loop",
"color": "#FFA07A",
"hideOutput": true,
"baseClasses": ["Loop"],
"category": "Agent Flows",
"description": "Loop back to a previous node",
"inputParams": [
{
"label": "Loop Back To",
"name": "loopBackToNode",
"type": "asyncOptions",
"loadMethod": "listPreviousNodes",
"freeSolo": true,
"id": "loopAgentflow_0-input-loopBackToNode-asyncOptions",
"display": true
},
{
"label": "Max Loop Count",
"name": "maxLoopCount",
"type": "number",
"default": 5,
"id": "loopAgentflow_0-input-maxLoopCount-number",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"loopBackToNode": "agentAgentflow_0-Email Reply Agent",
"maxLoopCount": 5
},
"outputAnchors": [],
"outputs": {},
"selected": false
},
"type": "agentFlow",
"width": 113,
"height": 65,
"selected": false,
"positionAbsolute": {
"x": 366.5975521223236,
"y": 130.12266545493773
},
"dragging": false
}
],
"edges": [
{
"source": "startAgentflow_0",
"sourceHandle": "startAgentflow_0-output-startAgentflow",
"target": "agentAgentflow_0",
"targetHandle": "agentAgentflow_0",
"data": {
"sourceColor": "#7EE787",
"targetColor": "#4DD0E1",
"isHumanInput": false
},
"type": "agentFlow",
"id": "startAgentflow_0-startAgentflow_0-output-startAgentflow-agentAgentflow_0-agentAgentflow_0"
},
{
"source": "agentAgentflow_0",
"sourceHandle": "agentAgentflow_0-output-agentAgentflow",
"target": "humanInputAgentflow_0",
"targetHandle": "humanInputAgentflow_0",
"data": {
"sourceColor": "#4DD0E1",
"targetColor": "#6E6EFD",
"isHumanInput": false
},
"type": "agentFlow",
"id": "agentAgentflow_0-agentAgentflow_0-output-agentAgentflow-humanInputAgentflow_0-humanInputAgentflow_0"
},
{
"source": "humanInputAgentflow_0",
"sourceHandle": "humanInputAgentflow_0-output-0",
"target": "directReplyAgentflow_0",
"targetHandle": "directReplyAgentflow_0",
"data": {
"sourceColor": "#6E6EFD",
"targetColor": "#4DDBBB",
"edgeLabel": "proceed",
"isHumanInput": true
},
"type": "agentFlow",
"id": "humanInputAgentflow_0-humanInputAgentflow_0-output-0-directReplyAgentflow_0-directReplyAgentflow_0"
},
{
"source": "humanInputAgentflow_0",
"sourceHandle": "humanInputAgentflow_0-output-1",
"target": "loopAgentflow_0",
"targetHandle": "loopAgentflow_0",
"data": {
"sourceColor": "#6E6EFD",
"targetColor": "#FFA07A",
"edgeLabel": "reject",
"isHumanInput": true
},
"type": "agentFlow",
"id": "humanInputAgentflow_0-humanInputAgentflow_0-output-1-loopAgentflow_0-loopAgentflow_0"
}
]
}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,718 @@
{
"description": "An agent that can post message to Slack channel",
"usecases": ["Agent"],
"nodes": [
{
"id": "startAgentflow_0",
"type": "agentFlow",
"position": {
"x": -192.5,
"y": 68
},
"data": {
"id": "startAgentflow_0",
"label": "Start",
"version": 1,
"name": "startAgentflow",
"type": "Start",
"color": "#7EE787",
"hideInput": true,
"baseClasses": ["Start"],
"category": "Agent Flows",
"description": "Starting point of the agentflow",
"inputParams": [
{
"label": "Input Type",
"name": "startInputType",
"type": "options",
"options": [
{
"label": "Chat Input",
"name": "chatInput",
"description": "Start the conversation with chat input"
},
{
"label": "Form Input",
"name": "formInput",
"description": "Start the workflow with form inputs"
}
],
"default": "chatInput",
"id": "startAgentflow_0-input-startInputType-options",
"display": true
},
{
"label": "Form Title",
"name": "formTitle",
"type": "string",
"placeholder": "Please Fill Out The Form",
"show": {
"startInputType": "formInput"
},
"id": "startAgentflow_0-input-formTitle-string",
"display": false
},
{
"label": "Form Description",
"name": "formDescription",
"type": "string",
"placeholder": "Complete all fields below to continue",
"show": {
"startInputType": "formInput"
},
"id": "startAgentflow_0-input-formDescription-string",
"display": false
},
{
"label": "Form Input Types",
"name": "formInputTypes",
"description": "Specify the type of form input",
"type": "array",
"show": {
"startInputType": "formInput"
},
"array": [
{
"label": "Type",
"name": "type",
"type": "options",
"options": [
{
"label": "String",
"name": "string"
},
{
"label": "Number",
"name": "number"
},
{
"label": "Boolean",
"name": "boolean"
},
{
"label": "Options",
"name": "options"
}
],
"default": "string"
},
{
"label": "Label",
"name": "label",
"type": "string",
"placeholder": "Label for the input"
},
{
"label": "Variable Name",
"name": "name",
"type": "string",
"placeholder": "Variable name for the input (must be camel case)",
"description": "Variable name must be camel case. For example: firstName, lastName, etc."
},
{
"label": "Add Options",
"name": "addOptions",
"type": "array",
"show": {
"formInputTypes[$index].type": "options"
},
"array": [
{
"label": "Option",
"name": "option",
"type": "string"
}
]
}
],
"id": "startAgentflow_0-input-formInputTypes-array",
"display": false
},
{
"label": "Ephemeral Memory",
"name": "startEphemeralMemory",
"type": "boolean",
"description": "Start fresh for every execution without past chat history",
"optional": true
},
{
"label": "Flow State",
"name": "startState",
"description": "Runtime state during the execution of the workflow",
"type": "array",
"optional": true,
"array": [
{
"label": "Key",
"name": "key",
"type": "string",
"placeholder": "Foo"
},
{
"label": "Value",
"name": "value",
"type": "string",
"placeholder": "Bar"
}
],
"id": "startAgentflow_0-input-startState-array",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"startInputType": "chatInput",
"formTitle": "",
"formDescription": "",
"formInputTypes": "",
"startState": ""
},
"outputAnchors": [
{
"id": "startAgentflow_0-output-startAgentflow",
"label": "Start",
"name": "startAgentflow"
}
],
"outputs": {},
"selected": false
},
"width": 101,
"height": 65,
"selected": false,
"positionAbsolute": {
"x": -192.5,
"y": 68
},
"dragging": false
},
{
"id": "llmAgentflow_0",
"position": {
"x": -31.25,
"y": 64.5
},
"data": {
"id": "llmAgentflow_0",
"label": "General Agent",
"version": 1,
"name": "llmAgentflow",
"type": "LLM",
"color": "#64B5F6",
"baseClasses": ["LLM"],
"category": "Agent Flows",
"description": "Large language models to analyze user-provided inputs and generate responses",
"inputParams": [
{
"label": "Model",
"name": "llmModel",
"type": "asyncOptions",
"loadMethod": "listModels",
"loadConfig": true,
"id": "llmAgentflow_0-input-llmModel-asyncOptions",
"display": true
},
{
"label": "Messages",
"name": "llmMessages",
"type": "array",
"optional": true,
"acceptVariable": true,
"array": [
{
"label": "Role",
"name": "role",
"type": "options",
"options": [
{
"label": "System",
"name": "system"
},
{
"label": "Assistant",
"name": "assistant"
},
{
"label": "Developer",
"name": "developer"
},
{
"label": "User",
"name": "user"
}
]
},
{
"label": "Content",
"name": "content",
"type": "string",
"acceptVariable": true,
"generateInstruction": true,
"rows": 4
}
],
"id": "llmAgentflow_0-input-llmMessages-array",
"display": true
},
{
"label": "Enable Memory",
"name": "llmEnableMemory",
"type": "boolean",
"description": "Enable memory for the conversation thread",
"default": true,
"optional": true,
"id": "llmAgentflow_0-input-llmEnableMemory-boolean",
"display": true
},
{
"label": "Memory Type",
"name": "llmMemoryType",
"type": "options",
"options": [
{
"label": "All Messages",
"name": "allMessages",
"description": "Retrieve all messages from the conversation"
},
{
"label": "Window Size",
"name": "windowSize",
"description": "Uses a fixed window size to surface the last N messages"
},
{
"label": "Conversation Summary",
"name": "conversationSummary",
"description": "Summarizes the whole conversation"
},
{
"label": "Conversation Summary Buffer",
"name": "conversationSummaryBuffer",
"description": "Summarize conversations once token limit is reached. Default to 2000"
}
],
"optional": true,
"default": "allMessages",
"show": {
"llmEnableMemory": true
},
"id": "llmAgentflow_0-input-llmMemoryType-options",
"display": true
},
{
"label": "Window Size",
"name": "llmMemoryWindowSize",
"type": "number",
"default": "20",
"description": "Uses a fixed window size to surface the last N messages",
"show": {
"llmMemoryType": "windowSize"
},
"id": "llmAgentflow_0-input-llmMemoryWindowSize-number",
"display": false
},
{
"label": "Max Token Limit",
"name": "llmMemoryMaxTokenLimit",
"type": "number",
"default": "2000",
"description": "Summarize conversations once token limit is reached. Default to 2000",
"show": {
"llmMemoryType": "conversationSummaryBuffer"
},
"id": "llmAgentflow_0-input-llmMemoryMaxTokenLimit-number",
"display": false
},
{
"label": "Input Message",
"name": "llmUserMessage",
"type": "string",
"description": "Add an input message as user message at the end of the conversation",
"rows": 4,
"optional": true,
"acceptVariable": true,
"show": {
"llmEnableMemory": true
},
"id": "llmAgentflow_0-input-llmUserMessage-string",
"display": true
},
{
"label": "Return Response As",
"name": "llmReturnResponseAs",
"type": "options",
"options": [
{
"label": "User Message",
"name": "userMessage"
},
{
"label": "Assistant Message",
"name": "assistantMessage"
}
],
"default": "userMessage",
"id": "llmAgentflow_0-input-llmReturnResponseAs-options",
"display": true
},
{
"label": "JSON Structured Output",
"name": "llmStructuredOutput",
"description": "Instruct the LLM to give output in a JSON structured schema",
"type": "array",
"optional": true,
"acceptVariable": true,
"array": [
{
"label": "Key",
"name": "key",
"type": "string"
},
{
"label": "Type",
"name": "type",
"type": "options",
"options": [
{
"label": "String",
"name": "string"
},
{
"label": "String Array",
"name": "stringArray"
},
{
"label": "Number",
"name": "number"
},
{
"label": "Boolean",
"name": "boolean"
},
{
"label": "Enum",
"name": "enum"
},
{
"label": "JSON Array",
"name": "jsonArray"
}
]
},
{
"label": "Enum Values",
"name": "enumValues",
"type": "string",
"placeholder": "value1, value2, value3",
"description": "Enum values. Separated by comma",
"optional": true,
"show": {
"llmStructuredOutput[$index].type": "enum"
}
},
{
"label": "JSON Schema",
"name": "jsonSchema",
"type": "code",
"placeholder": "{\n \"answer\": {\n \"type\": \"string\",\n \"description\": \"Value of the answer\"\n },\n \"reason\": {\n \"type\": \"string\",\n \"description\": \"Reason for the answer\"\n },\n \"optional\": {\n \"type\": \"boolean\"\n },\n \"count\": {\n \"type\": \"number\"\n },\n \"children\": {\n \"type\": \"array\",\n \"items\": {\n \"type\": \"object\",\n \"properties\": {\n \"value\": {\n \"type\": \"string\",\n \"description\": \"Value of the children's answer\"\n }\n }\n }\n }\n}",
"description": "JSON schema for the structured output",
"optional": true,
"show": {
"llmStructuredOutput[$index].type": "jsonArray"
}
},
{
"label": "Description",
"name": "description",
"type": "string",
"placeholder": "Description of the key"
}
],
"id": "llmAgentflow_0-input-llmStructuredOutput-array",
"display": true
},
{
"label": "Update Flow State",
"name": "llmUpdateState",
"description": "Update runtime state during the execution of the workflow",
"type": "array",
"optional": true,
"acceptVariable": true,
"array": [
{
"label": "Key",
"name": "key",
"type": "asyncOptions",
"loadMethod": "listRuntimeStateKeys",
"freeSolo": true
},
{
"label": "Value",
"name": "value",
"type": "string",
"acceptVariable": true,
"acceptNodeOutputAsVariable": true
}
],
"id": "llmAgentflow_0-input-llmUpdateState-array",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"llmModel": "chatOpenAI",
"llmMessages": "",
"llmEnableMemory": true,
"llmMemoryType": "allMessages",
"llmUserMessage": "",
"llmReturnResponseAs": "userMessage",
"llmStructuredOutput": "",
"llmUpdateState": "",
"llmModelConfig": {
"credential": "",
"modelName": "gpt-4o-mini",
"temperature": 0.9,
"streaming": true,
"maxTokens": "",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"strictToolCalling": "",
"stopSequence": "",
"basepath": "",
"proxyUrl": "",
"baseOptions": "",
"allowImageUploads": "",
"imageResolution": "low",
"reasoningEffort": "medium",
"llmModel": "chatOpenAI"
}
},
"outputAnchors": [
{
"id": "llmAgentflow_0-output-llmAgentflow",
"label": "LLM",
"name": "llmAgentflow"
}
],
"outputs": {},
"selected": false
},
"type": "agentFlow",
"width": 168,
"height": 71,
"selected": false,
"positionAbsolute": {
"x": -31.25,
"y": 64.5
},
"dragging": false
},
{
"id": "toolAgentflow_0",
"position": {
"x": 182.75,
"y": 64.5
},
"data": {
"id": "toolAgentflow_0",
"label": "Slack Reply",
"version": 1,
"name": "toolAgentflow",
"type": "Tool",
"color": "#d4a373",
"baseClasses": ["Tool"],
"category": "Agent Flows",
"description": "Tools allow LLM to interact with external systems",
"inputParams": [
{
"label": "Tool",
"name": "selectedTool",
"type": "asyncOptions",
"loadMethod": "listTools",
"loadConfig": true,
"id": "toolAgentflow_0-input-selectedTool-asyncOptions",
"display": true
},
{
"label": "Tool Input Arguments",
"name": "toolInputArgs",
"type": "array",
"acceptVariable": true,
"refresh": true,
"array": [
{
"label": "Input Argument Name",
"name": "inputArgName",
"type": "asyncOptions",
"loadMethod": "listToolInputArgs",
"refresh": true
},
{
"label": "Input Argument Value",
"name": "inputArgValue",
"type": "string",
"acceptVariable": true
}
],
"show": {
"selectedTool": ".+"
},
"id": "toolAgentflow_0-input-toolInputArgs-array",
"display": true
},
{
"label": "Update Flow State",
"name": "toolUpdateState",
"description": "Update runtime state during the execution of the workflow",
"type": "array",
"optional": true,
"acceptVariable": true,
"array": [
{
"label": "Key",
"name": "key",
"type": "asyncOptions",
"loadMethod": "listRuntimeStateKeys",
"freeSolo": true
},
{
"label": "Value",
"name": "value",
"type": "string",
"acceptVariable": true,
"acceptNodeOutputAsVariable": true
}
],
"id": "toolAgentflow_0-input-toolUpdateState-array",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"selectedTool": "slackMCP",
"toolInputArgs": [
{
"inputArgName": "channel_id",
"inputArgValue": "<p>ABCDEFG</p>"
},
{
"inputArgName": "text",
"inputArgValue": "<p><span class=\"variable\" data-type=\"mention\" data-id=\"llmAgentflow_0\" data-label=\"llmAgentflow_0\">{{ llmAgentflow_0 }}</span> </p>"
}
],
"toolUpdateState": "",
"selectedToolConfig": {
"mcpActions": "[\"slack_post_message\"]",
"selectedTool": "slackMCP"
}
},
"outputAnchors": [
{
"id": "toolAgentflow_0-output-toolAgentflow",
"label": "Tool",
"name": "toolAgentflow"
}
],
"outputs": {},
"selected": false
},
"type": "agentFlow",
"width": 142,
"height": 71,
"selected": false,
"positionAbsolute": {
"x": 182.75,
"y": 64.5
},
"dragging": false
},
{
"id": "directReplyAgentflow_0",
"position": {
"x": 366.75,
"y": 67.5
},
"data": {
"id": "directReplyAgentflow_0",
"label": "Direct Reply To Chat",
"version": 1,
"name": "directReplyAgentflow",
"type": "DirectReply",
"color": "#4DDBBB",
"hideOutput": true,
"baseClasses": ["DirectReply"],
"category": "Agent Flows",
"description": "Directly reply to the user with a message",
"inputParams": [
{
"label": "Message",
"name": "directReplyMessage",
"type": "string",
"rows": 4,
"acceptVariable": true,
"id": "directReplyAgentflow_0-input-directReplyMessage-string",
"display": true
}
],
"inputAnchors": [],
"inputs": {
"directReplyMessage": "<p><span class=\"variable\" data-type=\"mention\" data-id=\"llmAgentflow_0\" data-label=\"llmAgentflow_0\">{{ llmAgentflow_0 }}</span> </p>"
},
"outputAnchors": [],
"outputs": {},
"selected": false
},
"type": "agentFlow",
"width": 194,
"height": 65,
"selected": false,
"positionAbsolute": {
"x": 366.75,
"y": 67.5
},
"dragging": false
}
],
"edges": [
{
"source": "startAgentflow_0",
"sourceHandle": "startAgentflow_0-output-startAgentflow",
"target": "llmAgentflow_0",
"targetHandle": "llmAgentflow_0",
"data": {
"sourceColor": "#7EE787",
"targetColor": "#64B5F6",
"isHumanInput": false
},
"type": "agentFlow",
"id": "startAgentflow_0-startAgentflow_0-output-startAgentflow-llmAgentflow_0-llmAgentflow_0"
},
{
"source": "llmAgentflow_0",
"sourceHandle": "llmAgentflow_0-output-llmAgentflow",
"target": "toolAgentflow_0",
"targetHandle": "toolAgentflow_0",
"data": {
"sourceColor": "#64B5F6",
"targetColor": "#d4a373",
"isHumanInput": false
},
"type": "agentFlow",
"id": "llmAgentflow_0-llmAgentflow_0-output-llmAgentflow-toolAgentflow_0-toolAgentflow_0"
},
{
"source": "toolAgentflow_0",
"sourceHandle": "toolAgentflow_0-output-toolAgentflow",
"target": "directReplyAgentflow_0",
"targetHandle": "directReplyAgentflow_0",
"data": {
"sourceColor": "#d4a373",
"targetColor": "#4DDBBB",
"isHumanInput": false
},
"type": "agentFlow",
"id": "toolAgentflow_0-toolAgentflow_0-output-toolAgentflow-directReplyAgentflow_0-directReplyAgentflow_0"
}
]
}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,701 +0,0 @@
{
"description": "AutoGPT - Autonomous agent with chain of thoughts for self-guided task completion",
"framework": ["Langchain"],
"usecases": ["Reflective Agent"],
"nodes": [
{
"width": 300,
"height": 679,
"id": "autoGPT_0",
"position": {
"x": 1566.5228556278,
"y": 48.800017192230115
},
"type": "customNode",
"data": {
"id": "autoGPT_0",
"label": "AutoGPT",
"version": 2,
"name": "autoGPT",
"type": "AutoGPT",
"baseClasses": ["AutoGPT"],
"category": "Agents",
"description": "Autonomous agent with chain of thoughts by GPT4",
"inputParams": [
{
"label": "AutoGPT Name",
"name": "aiName",
"type": "string",
"placeholder": "Tom",
"optional": true,
"id": "autoGPT_0-input-aiName-string"
},
{
"label": "AutoGPT Role",
"name": "aiRole",
"type": "string",
"placeholder": "Assistant",
"optional": true,
"id": "autoGPT_0-input-aiRole-string"
},
{
"label": "Maximum Loop",
"name": "maxLoop",
"type": "number",
"default": 5,
"optional": true,
"id": "autoGPT_0-input-maxLoop-number"
}
],
"inputAnchors": [
{
"label": "Allowed Tools",
"name": "tools",
"type": "Tool",
"list": true,
"id": "autoGPT_0-input-tools-Tool"
},
{
"label": "Chat Model",
"name": "model",
"type": "BaseChatModel",
"id": "autoGPT_0-input-model-BaseChatModel"
},
{
"label": "Vector Store Retriever",
"name": "vectorStoreRetriever",
"type": "BaseRetriever",
"id": "autoGPT_0-input-vectorStoreRetriever-BaseRetriever"
},
{
"label": "Input Moderation",
"description": "Detect text that could generate harmful output and prevent it from being sent to the language model",
"name": "inputModeration",
"type": "Moderation",
"optional": true,
"list": true,
"id": "autoGPT_0-input-inputModeration-Moderation"
}
],
"inputs": {
"inputModeration": "",
"tools": ["{{serpAPI_0.data.instance}}"],
"model": "{{chatOpenAI_0.data.instance}}",
"vectorStoreRetriever": "{{pinecone_0.data.instance}}",
"aiName": "",
"aiRole": "",
"maxLoop": 5
},
"outputAnchors": [
{
"id": "autoGPT_0-output-autoGPT-AutoGPT",
"name": "autoGPT",
"label": "AutoGPT",
"type": "AutoGPT"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 1566.5228556278,
"y": 48.800017192230115
},
"dragging": false
},
{
"width": 300,
"height": 276,
"id": "serpAPI_0",
"position": {
"x": 1207.9685973743674,
"y": -216.77363417201138
},
"type": "customNode",
"data": {
"id": "serpAPI_0",
"label": "Serp API",
"version": 1,
"name": "serpAPI",
"type": "SerpAPI",
"baseClasses": ["SerpAPI", "Tool", "StructuredTool"],
"category": "Tools",
"description": "Wrapper around SerpAPI - a real-time API to access Google search results",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": ["serpApi"],
"id": "serpAPI_0-input-credential-credential"
}
],
"inputAnchors": [],
"inputs": {},
"outputAnchors": [
{
"id": "serpAPI_0-output-serpAPI-SerpAPI|Tool|StructuredTool",
"name": "serpAPI",
"label": "SerpAPI",
"type": "SerpAPI | Tool | StructuredTool"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 1207.9685973743674,
"y": -216.77363417201138
},
"dragging": false
},
{
"width": 300,
"height": 670,
"id": "chatOpenAI_0",
"position": {
"x": 861.5955028972123,
"y": -322.72984118549857
},
"type": "customNode",
"data": {
"id": "chatOpenAI_0",
"label": "ChatOpenAI",
"version": 6,
"name": "chatOpenAI",
"type": "ChatOpenAI",
"baseClasses": ["ChatOpenAI", "BaseChatModel", "BaseLanguageModel"],
"category": "Chat Models",
"description": "Wrapper around OpenAI large language models that use the Chat endpoint",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": ["openAIApi"],
"id": "chatOpenAI_0-input-credential-credential"
},
{
"label": "Model Name",
"name": "modelName",
"type": "asyncOptions",
"loadMethod": "listModels",
"default": "gpt-3.5-turbo",
"id": "chatOpenAI_0-input-modelName-options"
},
{
"label": "Temperature",
"name": "temperature",
"type": "number",
"default": 0.9,
"optional": true,
"id": "chatOpenAI_0-input-temperature-number"
},
{
"label": "Max Tokens",
"name": "maxTokens",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-maxTokens-number"
},
{
"label": "Top Probability",
"name": "topP",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-topP-number"
},
{
"label": "Frequency Penalty",
"name": "frequencyPenalty",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-frequencyPenalty-number"
},
{
"label": "Presence Penalty",
"name": "presencePenalty",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-presencePenalty-number"
},
{
"label": "Timeout",
"name": "timeout",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-timeout-number"
},
{
"label": "BasePath",
"name": "basepath",
"type": "string",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-basepath-string"
},
{
"label": "BaseOptions",
"name": "baseOptions",
"type": "json",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-baseOptions-json"
},
{
"label": "Allow Image Uploads",
"name": "allowImageUploads",
"type": "boolean",
"description": "Automatically uses gpt-4-vision-preview when image is being uploaded from chat. Only works with LLMChain, Conversation Chain, ReAct Agent, and Conversational Agent",
"default": false,
"optional": true,
"id": "chatOpenAI_0-input-allowImageUploads-boolean"
},
{
"label": "Image Resolution",
"description": "This parameter controls the resolution in which the model views the image.",
"name": "imageResolution",
"type": "options",
"options": [
{
"label": "Low",
"name": "low"
},
{
"label": "High",
"name": "high"
},
{
"label": "Auto",
"name": "auto"
}
],
"default": "low",
"optional": false,
"additionalParams": true,
"id": "chatOpenAI_0-input-imageResolution-options"
}
],
"inputAnchors": [
{
"label": "Cache",
"name": "cache",
"type": "BaseCache",
"optional": true,
"id": "chatOpenAI_0-input-cache-BaseCache"
}
],
"inputs": {
"modelName": "gpt-3.5-turbo",
"temperature": 0.9,
"maxTokens": "",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"basepath": "",
"baseOptions": "",
"allowImageUploads": true,
"imageResolution": "low"
},
"outputAnchors": [
{
"id": "chatOpenAI_0-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel",
"name": "chatOpenAI",
"label": "ChatOpenAI",
"type": "ChatOpenAI | BaseChatModel | BaseLanguageModel"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 861.5955028972123,
"y": -322.72984118549857
},
"dragging": false
},
{
"width": 300,
"height": 424,
"id": "openAIEmbeddings_0",
"position": {
"x": 116.62153412789377,
"y": 52.465581131402246
},
"type": "customNode",
"data": {
"id": "openAIEmbeddings_0",
"label": "OpenAI Embeddings",
"version": 4,
"name": "openAIEmbeddings",
"type": "OpenAIEmbeddings",
"baseClasses": ["OpenAIEmbeddings", "Embeddings"],
"category": "Embeddings",
"description": "OpenAI API to generate embeddings for a given text",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": ["openAIApi"],
"id": "openAIEmbeddings_0-input-credential-credential"
},
{
"label": "Model Name",
"name": "modelName",
"type": "asyncOptions",
"loadMethod": "listModels",
"default": "text-embedding-ada-002",
"id": "openAIEmbeddings_0-input-modelName-asyncOptions"
},
{
"label": "Strip New Lines",
"name": "stripNewLines",
"type": "boolean",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-stripNewLines-boolean"
},
{
"label": "Batch Size",
"name": "batchSize",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-batchSize-number"
},
{
"label": "Timeout",
"name": "timeout",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-timeout-number"
},
{
"label": "BasePath",
"name": "basepath",
"type": "string",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-basepath-string"
},
{
"label": "Dimensions",
"name": "dimensions",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-dimensions-number"
}
],
"inputAnchors": [],
"inputs": {
"modelName": "text-embedding-ada-002",
"stripNewLines": "",
"batchSize": "",
"timeout": "",
"basepath": "",
"dimensions": ""
},
"outputAnchors": [
{
"id": "openAIEmbeddings_0-output-openAIEmbeddings-OpenAIEmbeddings|Embeddings",
"name": "openAIEmbeddings",
"label": "OpenAIEmbeddings",
"description": "OpenAI API to generate embeddings for a given text",
"type": "OpenAIEmbeddings | Embeddings"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 116.62153412789377,
"y": 52.465581131402246
},
"dragging": false
},
{
"width": 300,
"height": 606,
"id": "pinecone_0",
"position": {
"x": 512.2389361920059,
"y": -36.80102752360557
},
"type": "customNode",
"data": {
"id": "pinecone_0",
"label": "Pinecone",
"version": 3,
"name": "pinecone",
"type": "Pinecone",
"baseClasses": ["Pinecone", "VectorStoreRetriever", "BaseRetriever"],
"category": "Vector Stores",
"description": "Upsert embedded data and perform similarity or mmr search using Pinecone, a leading fully managed hosted vector database",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": ["pineconeApi"],
"id": "pinecone_0-input-credential-credential"
},
{
"label": "Pinecone Index",
"name": "pineconeIndex",
"type": "string",
"id": "pinecone_0-input-pineconeIndex-string"
},
{
"label": "Pinecone Namespace",
"name": "pineconeNamespace",
"type": "string",
"placeholder": "my-first-namespace",
"additionalParams": true,
"optional": true,
"id": "pinecone_0-input-pineconeNamespace-string"
},
{
"label": "Pinecone Metadata Filter",
"name": "pineconeMetadataFilter",
"type": "json",
"optional": true,
"additionalParams": true,
"id": "pinecone_0-input-pineconeMetadataFilter-json"
},
{
"label": "Top K",
"name": "topK",
"description": "Number of top results to fetch. Default to 4",
"placeholder": "4",
"type": "number",
"additionalParams": true,
"optional": true,
"id": "pinecone_0-input-topK-number"
},
{
"label": "Search Type",
"name": "searchType",
"type": "options",
"default": "similarity",
"options": [
{
"label": "Similarity",
"name": "similarity"
},
{
"label": "Max Marginal Relevance",
"name": "mmr"
}
],
"additionalParams": true,
"optional": true,
"id": "pinecone_0-input-searchType-options"
},
{
"label": "Fetch K (for MMR Search)",
"name": "fetchK",
"description": "Number of initial documents to fetch for MMR reranking. Default to 20. Used only when the search type is MMR",
"placeholder": "20",
"type": "number",
"additionalParams": true,
"optional": true,
"id": "pinecone_0-input-fetchK-number"
},
{
"label": "Lambda (for MMR Search)",
"name": "lambda",
"description": "Number between 0 and 1 that determines the degree of diversity among the results, where 0 corresponds to maximum diversity and 1 to minimum diversity. Used only when the search type is MMR",
"placeholder": "0.5",
"type": "number",
"additionalParams": true,
"optional": true,
"id": "pinecone_0-input-lambda-number"
}
],
"inputAnchors": [
{
"label": "Document",
"name": "document",
"type": "Document",
"list": true,
"optional": true,
"id": "pinecone_0-input-document-Document"
},
{
"label": "Embeddings",
"name": "embeddings",
"type": "Embeddings",
"id": "pinecone_0-input-embeddings-Embeddings"
},
{
"label": "Record Manager",
"name": "recordManager",
"type": "RecordManager",
"description": "Keep track of the record to prevent duplication",
"optional": true,
"id": "pinecone_0-input-recordManager-RecordManager"
}
],
"inputs": {
"document": "",
"embeddings": "{{openAIEmbeddings_0.data.instance}}",
"recordManager": "",
"pineconeIndex": "",
"pineconeNamespace": "",
"pineconeMetadataFilter": "",
"topK": "",
"searchType": "similarity",
"fetchK": "",
"lambda": ""
},
"outputAnchors": [
{
"name": "output",
"label": "Output",
"type": "options",
"description": "",
"options": [
{
"id": "pinecone_0-output-retriever-Pinecone|VectorStoreRetriever|BaseRetriever",
"name": "retriever",
"label": "Pinecone Retriever",
"description": "",
"type": "Pinecone | VectorStoreRetriever | BaseRetriever"
},
{
"id": "pinecone_0-output-vectorStore-Pinecone|VectorStore",
"name": "vectorStore",
"label": "Pinecone Vector Store",
"description": "",
"type": "Pinecone | VectorStore"
}
],
"default": "retriever"
}
],
"outputs": {
"output": "retriever"
},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 512.2389361920059,
"y": -36.80102752360557
},
"dragging": false
},
{
"id": "stickyNote_0",
"position": {
"x": 1565.5672914362437,
"y": -138.9994972608436
},
"type": "stickyNote",
"data": {
"id": "stickyNote_0",
"label": "Sticky Note",
"version": 2,
"name": "stickyNote",
"type": "StickyNote",
"baseClasses": ["StickyNote"],
"tags": ["Utilities"],
"category": "Utilities",
"description": "Add a sticky note",
"inputParams": [
{
"label": "",
"name": "note",
"type": "string",
"rows": 1,
"placeholder": "Type something here",
"optional": true,
"id": "stickyNote_0-input-note-string"
}
],
"inputAnchors": [],
"inputs": {
"note": "An agent that uses long-term memory (Pinecone in this example) together with a prompt for self-guided task completion.\n\nAgent has access to Serp API tool to search the web, and store the continuous results to Pinecone"
},
"outputAnchors": [
{
"id": "stickyNote_0-output-stickyNote-StickyNote",
"name": "stickyNote",
"label": "StickyNote",
"description": "Add a sticky note",
"type": "StickyNote"
}
],
"outputs": {},
"selected": false
},
"width": 300,
"height": 163,
"selected": false,
"positionAbsolute": {
"x": 1565.5672914362437,
"y": -138.9994972608436
},
"dragging": false
}
],
"edges": [
{
"source": "chatOpenAI_0",
"sourceHandle": "chatOpenAI_0-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel",
"target": "autoGPT_0",
"targetHandle": "autoGPT_0-input-model-BaseChatModel",
"type": "buttonedge",
"id": "chatOpenAI_0-chatOpenAI_0-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel-autoGPT_0-autoGPT_0-input-model-BaseChatModel",
"data": {
"label": ""
}
},
{
"source": "serpAPI_0",
"sourceHandle": "serpAPI_0-output-serpAPI-SerpAPI|Tool|StructuredTool",
"target": "autoGPT_0",
"targetHandle": "autoGPT_0-input-tools-Tool",
"type": "buttonedge",
"id": "serpAPI_0-serpAPI_0-output-serpAPI-SerpAPI|Tool|StructuredTool-autoGPT_0-autoGPT_0-input-tools-Tool",
"data": {
"label": ""
}
},
{
"source": "openAIEmbeddings_0",
"sourceHandle": "openAIEmbeddings_0-output-openAIEmbeddings-OpenAIEmbeddings|Embeddings",
"target": "pinecone_0",
"targetHandle": "pinecone_0-input-embeddings-Embeddings",
"type": "buttonedge",
"id": "openAIEmbeddings_0-openAIEmbeddings_0-output-openAIEmbeddings-OpenAIEmbeddings|Embeddings-pinecone_0-pinecone_0-input-embeddings-Embeddings",
"data": {
"label": ""
}
},
{
"source": "pinecone_0",
"sourceHandle": "pinecone_0-output-retriever-Pinecone|VectorStoreRetriever|BaseRetriever",
"target": "autoGPT_0",
"targetHandle": "autoGPT_0-input-vectorStoreRetriever-BaseRetriever",
"type": "buttonedge",
"id": "pinecone_0-pinecone_0-output-retriever-Pinecone|VectorStoreRetriever|BaseRetriever-autoGPT_0-autoGPT_0-input-vectorStoreRetriever-BaseRetriever",
"data": {
"label": ""
}
}
]
}
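The node inputs above wire one node's output into another through `{{nodeId.data.instance}}` placeholders, e.g. the Pinecone node consumes `{{openAIEmbeddings_0.data.instance}}`. A minimal TypeScript sketch of how such references could be resolved at runtime (an illustration of the mechanism only, not Flowise's actual resolver):

// Hypothetical resolver for {{nodeId.data.instance}} placeholders in node inputs.
// `instances` maps a node id to its initialized runtime instance.
const REF_PATTERN = /^\{\{(\w+)\.data\.instance\}\}$/

function resolveNodeInputs(
    inputs: Record<string, unknown>,
    instances: Map<string, unknown>
): Record<string, unknown> {
    const resolved: Record<string, unknown> = {}
    for (const [key, value] of Object.entries(inputs)) {
        if (typeof value === 'string') {
            const match = value.match(REF_PATTERN)
            // Swap a placeholder for the upstream node's instance when one exists
            resolved[key] = match ? instances.get(match[1]) ?? value : value
        } else if (Array.isArray(value)) {
            // Inputs like "tools" hold arrays of placeholders
            resolved[key] = value.map((v) =>
                typeof v === 'string' && REF_PATTERN.test(v) ? instances.get(v.match(REF_PATTERN)![1]) : v
            )
        } else {
            resolved[key] = value
        }
    }
    return resolved
}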


@@ -1,623 +0,0 @@
{
"description": "Use BabyAGI to create tasks and reprioritize for a given objective",
"framework": ["Langchain"],
"usecases": ["Reflective Agent"],
"nodes": [
{
"width": 300,
"height": 431,
"id": "babyAGI_1",
"position": {
"x": 950.8042093214954,
"y": 66.00028106865324
},
"type": "customNode",
"data": {
"id": "babyAGI_1",
"label": "BabyAGI",
"version": 2,
"name": "babyAGI",
"type": "BabyAGI",
"baseClasses": ["BabyAGI"],
"category": "Agents",
"description": "Task Driven Autonomous Agent which creates new task and reprioritizes task list based on objective",
"inputParams": [
{
"label": "Task Loop",
"name": "taskLoop",
"type": "number",
"default": 3,
"id": "babyAGI_1-input-taskLoop-number"
}
],
"inputAnchors": [
{
"label": "Chat Model",
"name": "model",
"type": "BaseChatModel",
"id": "babyAGI_1-input-model-BaseChatModel"
},
{
"label": "Vector Store",
"name": "vectorStore",
"type": "VectorStore",
"id": "babyAGI_1-input-vectorStore-VectorStore"
},
{
"label": "Input Moderation",
"description": "Detect text that could generate harmful output and prevent it from being sent to the language model",
"name": "inputModeration",
"type": "Moderation",
"optional": true,
"list": true,
"id": "babyAGI_1-input-inputModeration-Moderation"
}
],
"inputs": {
"inputModeration": "",
"model": "{{chatOpenAI_0.data.instance}}",
"vectorStore": "{{pinecone_0.data.instance}}",
"taskLoop": 3
},
"outputAnchors": [
{
"id": "babyAGI_1-output-babyAGI-BabyAGI",
"name": "babyAGI",
"label": "BabyAGI",
"type": "BabyAGI"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"dragging": false,
"positionAbsolute": {
"x": 950.8042093214954,
"y": 66.00028106865324
}
},
{
"width": 300,
"height": 424,
"id": "openAIEmbeddings_0",
"position": {
"x": -111.82510263637522,
"y": -224.88655030419665
},
"type": "customNode",
"data": {
"id": "openAIEmbeddings_0",
"label": "OpenAI Embeddings",
"version": 4,
"name": "openAIEmbeddings",
"type": "OpenAIEmbeddings",
"baseClasses": ["OpenAIEmbeddings", "Embeddings"],
"category": "Embeddings",
"description": "OpenAI API to generate embeddings for a given text",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": ["openAIApi"],
"id": "openAIEmbeddings_0-input-credential-credential"
},
{
"label": "Model Name",
"name": "modelName",
"type": "asyncOptions",
"loadMethod": "listModels",
"default": "text-embedding-ada-002",
"id": "openAIEmbeddings_0-input-modelName-asyncOptions"
},
{
"label": "Strip New Lines",
"name": "stripNewLines",
"type": "boolean",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-stripNewLines-boolean"
},
{
"label": "Batch Size",
"name": "batchSize",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-batchSize-number"
},
{
"label": "Timeout",
"name": "timeout",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-timeout-number"
},
{
"label": "BasePath",
"name": "basepath",
"type": "string",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-basepath-string"
},
{
"label": "Dimensions",
"name": "dimensions",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-dimensions-number"
}
],
"inputAnchors": [],
"inputs": {
"modelName": "text-embedding-ada-002",
"stripNewLines": "",
"batchSize": "",
"timeout": "",
"basepath": "",
"dimensions": ""
},
"outputAnchors": [
{
"id": "openAIEmbeddings_0-output-openAIEmbeddings-OpenAIEmbeddings|Embeddings",
"name": "openAIEmbeddings",
"label": "OpenAIEmbeddings",
"description": "OpenAI API to generate embeddings for a given text",
"type": "OpenAIEmbeddings | Embeddings"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": -111.82510263637522,
"y": -224.88655030419665
},
"dragging": false
},
{
"width": 300,
"height": 606,
"id": "pinecone_0",
"position": {
"x": 245.707825551803,
"y": -176.9243551667388
},
"type": "customNode",
"data": {
"id": "pinecone_0",
"label": "Pinecone",
"version": 3,
"name": "pinecone",
"type": "Pinecone",
"baseClasses": ["Pinecone", "VectorStoreRetriever", "BaseRetriever"],
"category": "Vector Stores",
"description": "Upsert embedded data and perform similarity or mmr search using Pinecone, a leading fully managed hosted vector database",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": ["pineconeApi"],
"id": "pinecone_0-input-credential-credential"
},
{
"label": "Pinecone Index",
"name": "pineconeIndex",
"type": "string",
"id": "pinecone_0-input-pineconeIndex-string"
},
{
"label": "Pinecone Namespace",
"name": "pineconeNamespace",
"type": "string",
"placeholder": "my-first-namespace",
"additionalParams": true,
"optional": true,
"id": "pinecone_0-input-pineconeNamespace-string"
},
{
"label": "Pinecone Metadata Filter",
"name": "pineconeMetadataFilter",
"type": "json",
"optional": true,
"additionalParams": true,
"id": "pinecone_0-input-pineconeMetadataFilter-json"
},
{
"label": "Top K",
"name": "topK",
"description": "Number of top results to fetch. Default to 4",
"placeholder": "4",
"type": "number",
"additionalParams": true,
"optional": true,
"id": "pinecone_0-input-topK-number"
},
{
"label": "Search Type",
"name": "searchType",
"type": "options",
"default": "similarity",
"options": [
{
"label": "Similarity",
"name": "similarity"
},
{
"label": "Max Marginal Relevance",
"name": "mmr"
}
],
"additionalParams": true,
"optional": true,
"id": "pinecone_0-input-searchType-options"
},
{
"label": "Fetch K (for MMR Search)",
"name": "fetchK",
"description": "Number of initial documents to fetch for MMR reranking. Default to 20. Used only when the search type is MMR",
"placeholder": "20",
"type": "number",
"additionalParams": true,
"optional": true,
"id": "pinecone_0-input-fetchK-number"
},
{
"label": "Lambda (for MMR Search)",
"name": "lambda",
"description": "Number between 0 and 1 that determines the degree of diversity among the results, where 0 corresponds to maximum diversity and 1 to minimum diversity. Used only when the search type is MMR",
"placeholder": "0.5",
"type": "number",
"additionalParams": true,
"optional": true,
"id": "pinecone_0-input-lambda-number"
}
],
"inputAnchors": [
{
"label": "Document",
"name": "document",
"type": "Document",
"list": true,
"optional": true,
"id": "pinecone_0-input-document-Document"
},
{
"label": "Embeddings",
"name": "embeddings",
"type": "Embeddings",
"id": "pinecone_0-input-embeddings-Embeddings"
},
{
"label": "Record Manager",
"name": "recordManager",
"type": "RecordManager",
"description": "Keep track of the record to prevent duplication",
"optional": true,
"id": "pinecone_0-input-recordManager-RecordManager"
}
],
"inputs": {
"document": "",
"embeddings": "{{openAIEmbeddings_0.data.instance}}",
"recordManager": "",
"pineconeIndex": "",
"pineconeNamespace": "",
"pineconeMetadataFilter": "",
"topK": "",
"searchType": "similarity",
"fetchK": "",
"lambda": ""
},
"outputAnchors": [
{
"name": "output",
"label": "Output",
"type": "options",
"description": "",
"options": [
{
"id": "pinecone_0-output-retriever-Pinecone|VectorStoreRetriever|BaseRetriever",
"name": "retriever",
"label": "Pinecone Retriever",
"description": "",
"type": "Pinecone | VectorStoreRetriever | BaseRetriever"
},
{
"id": "pinecone_0-output-vectorStore-Pinecone|VectorStore",
"name": "vectorStore",
"label": "Pinecone Vector Store",
"description": "",
"type": "Pinecone | VectorStore"
}
],
"default": "retriever"
}
],
"outputs": {
"output": "vectorStore"
},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 245.707825551803,
"y": -176.9243551667388
},
"dragging": false
},
{
"width": 300,
"height": 670,
"id": "chatOpenAI_0",
"position": {
"x": 597.7565040390853,
"y": -381.01461408909825
},
"type": "customNode",
"data": {
"id": "chatOpenAI_0",
"label": "ChatOpenAI",
"version": 6,
"name": "chatOpenAI",
"type": "ChatOpenAI",
"baseClasses": ["ChatOpenAI", "BaseChatModel", "BaseLanguageModel", "Runnable"],
"category": "Chat Models",
"description": "Wrapper around OpenAI large language models that use the Chat endpoint",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": ["openAIApi"],
"id": "chatOpenAI_0-input-credential-credential"
},
{
"label": "Model Name",
"name": "modelName",
"type": "asyncOptions",
"loadMethod": "listModels",
"default": "gpt-3.5-turbo",
"id": "chatOpenAI_0-input-modelName-options"
},
{
"label": "Temperature",
"name": "temperature",
"type": "number",
"step": 0.1,
"default": 0.9,
"optional": true,
"id": "chatOpenAI_0-input-temperature-number"
},
{
"label": "Max Tokens",
"name": "maxTokens",
"type": "number",
"step": 1,
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-maxTokens-number"
},
{
"label": "Top Probability",
"name": "topP",
"type": "number",
"step": 0.1,
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-topP-number"
},
{
"label": "Frequency Penalty",
"name": "frequencyPenalty",
"type": "number",
"step": 0.1,
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-frequencyPenalty-number"
},
{
"label": "Presence Penalty",
"name": "presencePenalty",
"type": "number",
"step": 0.1,
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-presencePenalty-number"
},
{
"label": "Timeout",
"name": "timeout",
"type": "number",
"step": 1,
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-timeout-number"
},
{
"label": "BasePath",
"name": "basepath",
"type": "string",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-basepath-string"
},
{
"label": "BaseOptions",
"name": "baseOptions",
"type": "json",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-baseOptions-json"
},
{
"label": "Allow Image Uploads",
"name": "allowImageUploads",
"type": "boolean",
"description": "Automatically uses gpt-4-vision-preview when image is being uploaded from chat. Only works with LLMChain, Conversation Chain, ReAct Agent, and Conversational Agent",
"default": false,
"optional": true,
"id": "chatOpenAI_0-input-allowImageUploads-boolean"
},
{
"label": "Image Resolution",
"description": "This parameter controls the resolution in which the model views the image.",
"name": "imageResolution",
"type": "options",
"options": [
{
"label": "Low",
"name": "low"
},
{
"label": "High",
"name": "high"
},
{
"label": "Auto",
"name": "auto"
}
],
"default": "low",
"optional": false,
"additionalParams": true,
"id": "chatOpenAI_0-input-imageResolution-options"
}
],
"inputAnchors": [
{
"label": "Cache",
"name": "cache",
"type": "BaseCache",
"optional": true,
"id": "chatOpenAI_0-input-cache-BaseCache"
}
],
"inputs": {
"cache": "",
"modelName": "gpt-3.5-turbo",
"temperature": 0.9,
"maxTokens": "",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"basepath": "",
"baseOptions": "",
"allowImageUploads": true,
"imageResolution": "low"
},
"outputAnchors": [
{
"id": "chatOpenAI_0-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel|Runnable",
"name": "chatOpenAI",
"label": "ChatOpenAI",
"type": "ChatOpenAI | BaseChatModel | BaseLanguageModel | Runnable"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 597.7565040390853,
"y": -381.01461408909825
},
"dragging": false
},
{
"id": "stickyNote_0",
"position": {
"x": 949.0763123880214,
"y": -172.0310628893923
},
"type": "stickyNote",
"data": {
"id": "stickyNote_0",
"label": "Sticky Note",
"version": 2,
"name": "stickyNote",
"type": "StickyNote",
"baseClasses": ["StickyNote"],
"tags": ["Utilities"],
"category": "Utilities",
"description": "Add a sticky note",
"inputParams": [
{
"label": "",
"name": "note",
"type": "string",
"rows": 1,
"placeholder": "Type something here",
"optional": true,
"id": "stickyNote_0-input-note-string"
}
],
"inputAnchors": [],
"inputs": {
"note": "BabyAGI is made up of 3 components:\n\n- A chain responsible for creating tasks\n- A chain responsible for prioritising tasks\n- A chain responsible for executing tasks\n\nThese chains are executed in sequence until the task list is empty or the maximum number of iterations is reached"
},
"outputAnchors": [
{
"id": "stickyNote_0-output-stickyNote-StickyNote",
"name": "stickyNote",
"label": "StickyNote",
"description": "Add a sticky note",
"type": "StickyNote"
}
],
"outputs": {},
"selected": false
},
"width": 300,
"height": 203,
"selected": false,
"positionAbsolute": {
"x": 949.0763123880214,
"y": -172.0310628893923
},
"dragging": false
}
],
"edges": [
{
"source": "openAIEmbeddings_0",
"sourceHandle": "openAIEmbeddings_0-output-openAIEmbeddings-OpenAIEmbeddings|Embeddings",
"target": "pinecone_0",
"targetHandle": "pinecone_0-input-embeddings-Embeddings",
"type": "buttonedge",
"id": "openAIEmbeddings_0-openAIEmbeddings_0-output-openAIEmbeddings-OpenAIEmbeddings|Embeddings-pinecone_0-pinecone_0-input-embeddings-Embeddings",
"data": {
"label": ""
}
},
{
"source": "chatOpenAI_0",
"sourceHandle": "chatOpenAI_0-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel|Runnable",
"target": "babyAGI_1",
"targetHandle": "babyAGI_1-input-model-BaseChatModel",
"type": "buttonedge",
"id": "chatOpenAI_0-chatOpenAI_0-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel|Runnable-babyAGI_1-babyAGI_1-input-model-BaseChatModel",
"data": {
"label": ""
}
},
{
"source": "pinecone_0",
"sourceHandle": "pinecone_0-output-vectorStore-Pinecone|VectorStore",
"target": "babyAGI_1",
"targetHandle": "babyAGI_1-input-vectorStore-VectorStore",
"type": "buttonedge",
"id": "pinecone_0-pinecone_0-output-vectorStore-Pinecone|VectorStore-babyAGI_1-babyAGI_1-input-vectorStore-VectorStore",
"data": {
"label": ""
}
}
]
}
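The sticky note above summarizes BabyAGI's loop: execute the top task, create new tasks from the result, reprioritize, and repeat until the list is empty or the iteration cap (the Task Loop input, default 3) is reached. A minimal sketch of that control flow, with the three chains reduced to placeholder functions; this illustrates the pattern, not the BabyAGI node's implementation:

interface Task {
    id: number
    description: string
}

// Placeholders standing in for the three LLM chains described in the note
declare function executeTask(objective: string, task: Task): Promise<string>
declare function createTasks(objective: string, lastResult: string, tasks: Task[]): Promise<Task[]>
declare function prioritizeTasks(objective: string, tasks: Task[]): Promise<Task[]>

async function babyAgiLoop(objective: string, firstTask: Task, maxIterations = 3): Promise<void> {
    let tasks: Task[] = [firstTask]
    for (let i = 0; i < maxIterations && tasks.length > 0; i++) {
        const task = tasks.shift()! // take the highest-priority task
        const result = await executeTask(objective, task)
        const newTasks = await createTasks(objective, result, tasks)
        tasks = await prioritizeTasks(objective, [...tasks, ...newTasks])
    }
}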


@@ -1,767 +0,0 @@
{
"description": "Tool Agent using OpenAPI yaml to automatically decide which API to call, generating url and body request from conversation",
"framework": ["Langchain"],
"usecases": ["Interacting with API"],
"nodes": [
{
"width": 300,
"height": 544,
"id": "openApiChain_1",
"position": {
"x": 1203.1825726424859,
"y": 300.7226683414998
},
"type": "customNode",
"data": {
"id": "openApiChain_1",
"label": "OpenAPI Chain",
"version": 2,
"name": "openApiChain",
"type": "OpenAPIChain",
"baseClasses": ["OpenAPIChain", "BaseChain"],
"category": "Chains",
"description": "Chain that automatically select and call APIs based only on an OpenAPI spec",
"inputParams": [
{
"label": "YAML Link",
"name": "yamlLink",
"type": "string",
"placeholder": "https://api.speak.com/openapi.yaml",
"description": "If YAML link is provided, uploaded YAML File will be ignored and YAML link will be used instead",
"id": "openApiChain_1-input-yamlLink-string"
},
{
"label": "YAML File",
"name": "yamlFile",
"type": "file",
"fileType": ".yaml",
"description": "If YAML link is provided, uploaded YAML File will be ignored and YAML link will be used instead",
"id": "openApiChain_1-input-yamlFile-file"
},
{
"label": "Headers",
"name": "headers",
"type": "json",
"additionalParams": true,
"optional": true,
"id": "openApiChain_1-input-headers-json"
}
],
"inputAnchors": [
{
"label": "ChatOpenAI Model",
"name": "model",
"type": "ChatOpenAI",
"id": "openApiChain_1-input-model-ChatOpenAI"
},
{
"label": "Input Moderation",
"description": "Detect text that could generate harmful output and prevent it from being sent to the language model",
"name": "inputModeration",
"type": "Moderation",
"optional": true,
"list": true,
"id": "openApiChain_1-input-inputModeration-Moderation"
}
],
"inputs": {
"inputModeration": "",
"model": "{{chatOpenAI_1.data.instance}}",
"yamlLink": "https://gist.githubusercontent.com/HenryHengZJ/b60f416c42cb9bcd3160fe797421119a/raw/0ef05b3aaf142e0423f71c19dec866178487dc10/klarna.yml",
"headers": ""
},
"outputAnchors": [
{
"id": "openApiChain_1-output-openApiChain-OpenAPIChain|BaseChain",
"name": "openApiChain",
"label": "OpenAPIChain",
"type": "OpenAPIChain | BaseChain"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 1203.1825726424859,
"y": 300.7226683414998
},
"dragging": false
},
{
"width": 300,
"height": 670,
"id": "chatOpenAI_1",
"position": {
"x": 792.3201947594027,
"y": 293.61889966751846
},
"type": "customNode",
"data": {
"id": "chatOpenAI_1",
"label": "ChatOpenAI",
"version": 6,
"name": "chatOpenAI",
"type": "ChatOpenAI",
"baseClasses": ["ChatOpenAI", "BaseChatModel", "BaseLanguageModel"],
"category": "Chat Models",
"description": "Wrapper around OpenAI large language models that use the Chat endpoint",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": ["openAIApi"],
"id": "chatOpenAI_1-input-credential-credential"
},
{
"label": "Model Name",
"name": "modelName",
"type": "asyncOptions",
"loadMethod": "listModels",
"default": "gpt-3.5-turbo",
"id": "chatOpenAI_1-input-modelName-options"
},
{
"label": "Temperature",
"name": "temperature",
"type": "number",
"default": 0.9,
"optional": true,
"id": "chatOpenAI_1-input-temperature-number"
},
{
"label": "Max Tokens",
"name": "maxTokens",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-maxTokens-number"
},
{
"label": "Top Probability",
"name": "topP",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-topP-number"
},
{
"label": "Frequency Penalty",
"name": "frequencyPenalty",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-frequencyPenalty-number"
},
{
"label": "Presence Penalty",
"name": "presencePenalty",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-presencePenalty-number"
},
{
"label": "Timeout",
"name": "timeout",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-timeout-number"
},
{
"label": "BasePath",
"name": "basepath",
"type": "string",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-basepath-string"
},
{
"label": "BaseOptions",
"name": "baseOptions",
"type": "json",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-baseOptions-json"
},
{
"label": "Allow Image Uploads",
"name": "allowImageUploads",
"type": "boolean",
"description": "Automatically uses gpt-4-vision-preview when image is being uploaded from chat. Only works with LLMChain, Conversation Chain, ReAct Agent, and Conversational Agent",
"default": false,
"optional": true,
"id": "chatOpenAI_1-input-allowImageUploads-boolean"
},
{
"label": "Image Resolution",
"description": "This parameter controls the resolution in which the model views the image.",
"name": "imageResolution",
"type": "options",
"options": [
{
"label": "Low",
"name": "low"
},
{
"label": "High",
"name": "high"
},
{
"label": "Auto",
"name": "auto"
}
],
"default": "low",
"optional": false,
"additionalParams": true,
"id": "chatOpenAI_1-input-imageResolution-options"
}
],
"inputAnchors": [
{
"label": "Cache",
"name": "cache",
"type": "BaseCache",
"optional": true,
"id": "chatOpenAI_1-input-cache-BaseCache"
}
],
"inputs": {
"modelName": "gpt-3.5-turbo",
"temperature": 0.9,
"maxTokens": "",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"basepath": "",
"baseOptions": "",
"allowImageUploads": true,
"imageResolution": "low"
},
"outputAnchors": [
{
"id": "chatOpenAI_1-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel",
"name": "chatOpenAI",
"label": "ChatOpenAI",
"type": "ChatOpenAI | BaseChatModel | BaseLanguageModel"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 792.3201947594027,
"y": 293.61889966751846
},
"dragging": false
},
{
"width": 300,
"height": 603,
"id": "chainTool_0",
"position": {
"x": 1635.3466862861876,
"y": 272.3189405402944
},
"type": "customNode",
"data": {
"id": "chainTool_0",
"label": "Chain Tool",
"version": 1,
"name": "chainTool",
"type": "ChainTool",
"baseClasses": ["ChainTool", "DynamicTool", "Tool", "StructuredTool"],
"category": "Tools",
"description": "Use a chain as allowed tool for agent",
"inputParams": [
{
"label": "Chain Name",
"name": "name",
"type": "string",
"placeholder": "state-of-union-qa",
"id": "chainTool_0-input-name-string"
},
{
"label": "Chain Description",
"name": "description",
"type": "string",
"rows": 3,
"placeholder": "State of the Union QA - useful for when you need to ask questions about the most recent state of the union address.",
"id": "chainTool_0-input-description-string"
},
{
"label": "Return Direct",
"name": "returnDirect",
"type": "boolean",
"optional": true,
"id": "chainTool_0-input-returnDirect-boolean"
}
],
"inputAnchors": [
{
"label": "Base Chain",
"name": "baseChain",
"type": "BaseChain",
"id": "chainTool_0-input-baseChain-BaseChain"
}
],
"inputs": {
"name": "shopping-qa",
"description": "useful for when you need to search for e-commerce products like shirt, pants, dress, glasses, etc.",
"returnDirect": false,
"baseChain": "{{openApiChain_1.data.instance}}"
},
"outputAnchors": [
{
"id": "chainTool_0-output-chainTool-ChainTool|DynamicTool|Tool|StructuredTool",
"name": "chainTool",
"label": "ChainTool",
"type": "ChainTool | DynamicTool | Tool | StructuredTool"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 1635.3466862861876,
"y": 272.3189405402944
},
"dragging": false
},
{
"width": 300,
"height": 670,
"id": "chatOpenAI_2",
"position": {
"x": 1566.5049234393214,
"y": 920.3787183665902
},
"type": "customNode",
"data": {
"id": "chatOpenAI_2",
"label": "ChatOpenAI",
"version": 6,
"name": "chatOpenAI",
"type": "ChatOpenAI",
"baseClasses": ["ChatOpenAI", "BaseChatModel", "BaseLanguageModel"],
"category": "Chat Models",
"description": "Wrapper around OpenAI large language models that use the Chat endpoint",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": ["openAIApi"],
"id": "chatOpenAI_2-input-credential-credential"
},
{
"label": "Model Name",
"name": "modelName",
"type": "asyncOptions",
"loadMethod": "listModels",
"default": "gpt-3.5-turbo",
"id": "chatOpenAI_2-input-modelName-options"
},
{
"label": "Temperature",
"name": "temperature",
"type": "number",
"default": 0.9,
"optional": true,
"id": "chatOpenAI_2-input-temperature-number"
},
{
"label": "Max Tokens",
"name": "maxTokens",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_2-input-maxTokens-number"
},
{
"label": "Top Probability",
"name": "topP",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_2-input-topP-number"
},
{
"label": "Frequency Penalty",
"name": "frequencyPenalty",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_2-input-frequencyPenalty-number"
},
{
"label": "Presence Penalty",
"name": "presencePenalty",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_2-input-presencePenalty-number"
},
{
"label": "Timeout",
"name": "timeout",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_2-input-timeout-number"
},
{
"label": "BasePath",
"name": "basepath",
"type": "string",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_2-input-basepath-string"
},
{
"label": "BaseOptions",
"name": "baseOptions",
"type": "json",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_2-input-baseOptions-json"
},
{
"label": "Allow Image Uploads",
"name": "allowImageUploads",
"type": "boolean",
"description": "Automatically uses gpt-4-vision-preview when image is being uploaded from chat. Only works with LLMChain, Conversation Chain, ReAct Agent, and Conversational Agent",
"default": false,
"optional": true,
"id": "chatOpenAI_2-input-allowImageUploads-boolean"
},
{
"label": "Image Resolution",
"description": "This parameter controls the resolution in which the model views the image.",
"name": "imageResolution",
"type": "options",
"options": [
{
"label": "Low",
"name": "low"
},
{
"label": "High",
"name": "high"
},
{
"label": "Auto",
"name": "auto"
}
],
"default": "low",
"optional": false,
"additionalParams": true,
"id": "chatOpenAI_2-input-imageResolution-options"
}
],
"inputAnchors": [
{
"label": "Cache",
"name": "cache",
"type": "BaseCache",
"optional": true,
"id": "chatOpenAI_2-input-cache-BaseCache"
}
],
"inputs": {
"modelName": "gpt-3.5-turbo",
"temperature": 0.9,
"maxTokens": "",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"basepath": "",
"baseOptions": "",
"allowImageUploads": true,
"imageResolution": "low"
},
"outputAnchors": [
{
"id": "chatOpenAI_2-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel",
"name": "chatOpenAI",
"label": "ChatOpenAI",
"type": "ChatOpenAI | BaseChatModel | BaseLanguageModel"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 1566.5049234393214,
"y": 920.3787183665902
},
"dragging": false
},
{
"width": 300,
"height": 253,
"id": "bufferMemory_0",
"position": {
"x": 1148.8461056155377,
"y": 967.8215757228843
},
"type": "customNode",
"data": {
"id": "bufferMemory_0",
"label": "Buffer Memory",
"version": 2,
"name": "bufferMemory",
"type": "BufferMemory",
"baseClasses": ["BufferMemory", "BaseChatMemory", "BaseMemory"],
"category": "Memory",
"description": "Retrieve chat messages stored in database",
"inputParams": [
{
"label": "Session Id",
"name": "sessionId",
"type": "string",
"description": "If not specified, a random id will be used. Learn <a target=\"_blank\" href=\"https://docs.flowiseai.com/memory#ui-and-embedded-chat\">more</a>",
"default": "",
"additionalParams": true,
"optional": true,
"id": "bufferMemory_0-input-sessionId-string"
},
{
"label": "Memory Key",
"name": "memoryKey",
"type": "string",
"default": "chat_history",
"additionalParams": true,
"id": "bufferMemory_0-input-memoryKey-string"
}
],
"inputAnchors": [],
"inputs": {
"sessionId": "",
"memoryKey": "chat_history"
},
"outputAnchors": [
{
"id": "bufferMemory_0-output-bufferMemory-BufferMemory|BaseChatMemory|BaseMemory",
"name": "bufferMemory",
"label": "BufferMemory",
"type": "BufferMemory | BaseChatMemory | BaseMemory"
}
],
"outputs": {},
"selected": false
},
"positionAbsolute": {
"x": 1148.8461056155377,
"y": 967.8215757228843
},
"selected": false
},
{
"id": "toolAgent_0",
"position": {
"x": 2054.7555242376347,
"y": 710.4140533942601
},
"type": "customNode",
"data": {
"id": "toolAgent_0",
"label": "Tool Agent",
"version": 1,
"name": "toolAgent",
"type": "AgentExecutor",
"baseClasses": ["AgentExecutor", "BaseChain", "Runnable"],
"category": "Agents",
"description": "Agent that uses Function Calling to pick the tools and args to call",
"inputParams": [
{
"label": "System Message",
"name": "systemMessage",
"type": "string",
"default": "You are a helpful AI assistant.",
"rows": 4,
"optional": true,
"additionalParams": true,
"id": "toolAgent_0-input-systemMessage-string"
},
{
"label": "Max Iterations",
"name": "maxIterations",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "toolAgent_0-input-maxIterations-number"
}
],
"inputAnchors": [
{
"label": "Tools",
"name": "tools",
"type": "Tool",
"list": true,
"id": "toolAgent_0-input-tools-Tool"
},
{
"label": "Memory",
"name": "memory",
"type": "BaseChatMemory",
"id": "toolAgent_0-input-memory-BaseChatMemory"
},
{
"label": "Tool Calling Chat Model",
"name": "model",
"type": "BaseChatModel",
"description": "Only compatible with models that are capable of function calling: ChatOpenAI, ChatMistral, ChatAnthropic, ChatGoogleGenerativeAI, ChatVertexAI, GroqChat",
"id": "toolAgent_0-input-model-BaseChatModel"
},
{
"label": "Input Moderation",
"description": "Detect text that could generate harmful output and prevent it from being sent to the language model",
"name": "inputModeration",
"type": "Moderation",
"optional": true,
"list": true,
"id": "toolAgent_0-input-inputModeration-Moderation"
}
],
"inputs": {
"tools": ["{{chainTool_0.data.instance}}"],
"memory": "{{bufferMemory_0.data.instance}}",
"model": "{{chatOpenAI_2.data.instance}}",
"systemMessage": "You are a helpful AI assistant.",
"inputModeration": "",
"maxIterations": ""
},
"outputAnchors": [
{
"id": "toolAgent_0-output-toolAgent-AgentExecutor|BaseChain|Runnable",
"name": "toolAgent",
"label": "AgentExecutor",
"description": "Agent that uses Function Calling to pick the tools and args to call",
"type": "AgentExecutor | BaseChain | Runnable"
}
],
"outputs": {},
"selected": false
},
"width": 300,
"height": 435,
"selected": false,
"positionAbsolute": {
"x": 2054.7555242376347,
"y": 710.4140533942601
},
"dragging": false
},
{
"id": "stickyNote_0",
"position": {
"x": 2046.8203973748023,
"y": 399.1483966834255
},
"type": "stickyNote",
"data": {
"id": "stickyNote_0",
"label": "Sticky Note",
"version": 2,
"name": "stickyNote",
"type": "StickyNote",
"baseClasses": ["StickyNote"],
"tags": ["Utilities"],
"category": "Utilities",
"description": "Add a sticky note",
"inputParams": [
{
"label": "",
"name": "note",
"type": "string",
"rows": 1,
"placeholder": "Type something here",
"optional": true,
"id": "stickyNote_0-input-note-string"
}
],
"inputAnchors": [],
"inputs": {
"note": "Using agent, we give it a tool that is attached to an OpenAPI Chain.\n\nOpenAPI Chain uses a LLM to automatically figure out what is the correct URL and params to call given the YML spec file.\n\nResults are then fetched back to agent.\n\nExample question:\nI am looking for some blue tshirt, can u help me find some?"
},
"outputAnchors": [
{
"id": "stickyNote_0-output-stickyNote-StickyNote",
"name": "stickyNote",
"label": "StickyNote",
"description": "Add a sticky note",
"type": "StickyNote"
}
],
"outputs": {},
"selected": false
},
"width": 300,
"height": 284,
"selected": false,
"positionAbsolute": {
"x": 2046.8203973748023,
"y": 399.1483966834255
},
"dragging": false
}
],
"edges": [
{
"source": "chatOpenAI_1",
"sourceHandle": "chatOpenAI_1-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel",
"target": "openApiChain_1",
"targetHandle": "openApiChain_1-input-model-ChatOpenAI",
"type": "buttonedge",
"id": "chatOpenAI_1-chatOpenAI_1-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel-openApiChain_1-openApiChain_1-input-model-ChatOpenAI",
"data": {
"label": ""
}
},
{
"source": "openApiChain_1",
"sourceHandle": "openApiChain_1-output-openApiChain-OpenAPIChain|BaseChain",
"target": "chainTool_0",
"targetHandle": "chainTool_0-input-baseChain-BaseChain",
"type": "buttonedge",
"id": "openApiChain_1-openApiChain_1-output-openApiChain-OpenAPIChain|BaseChain-chainTool_0-chainTool_0-input-baseChain-BaseChain",
"data": {
"label": ""
}
},
{
"source": "chainTool_0",
"sourceHandle": "chainTool_0-output-chainTool-ChainTool|DynamicTool|Tool|StructuredTool",
"target": "toolAgent_0",
"targetHandle": "toolAgent_0-input-tools-Tool",
"type": "buttonedge",
"id": "chainTool_0-chainTool_0-output-chainTool-ChainTool|DynamicTool|Tool|StructuredTool-toolAgent_0-toolAgent_0-input-tools-Tool"
},
{
"source": "chatOpenAI_2",
"sourceHandle": "chatOpenAI_2-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel",
"target": "toolAgent_0",
"targetHandle": "toolAgent_0-input-model-BaseChatModel",
"type": "buttonedge",
"id": "chatOpenAI_2-chatOpenAI_2-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel-toolAgent_0-toolAgent_0-input-model-BaseChatModel"
},
{
"source": "bufferMemory_0",
"sourceHandle": "bufferMemory_0-output-bufferMemory-BufferMemory|BaseChatMemory|BaseMemory",
"target": "toolAgent_0",
"targetHandle": "toolAgent_0-input-memory-BaseChatMemory",
"type": "buttonedge",
"id": "bufferMemory_0-bufferMemory_0-output-bufferMemory-BufferMemory|BaseChatMemory|BaseMemory-toolAgent_0-toolAgent_0-input-memory-BaseChatMemory"
}
]
}
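The Chain Tool node exposes the OpenAPI Chain to the Tool Agent under a name and description, so the agent's function-calling model can decide when to invoke it. A rough sketch of that chain-as-tool wrapping (simplified; the real ChainTool class comes from LangChain):

// Minimal chain-as-tool wrapper: the agent sees only name/description,
// and calls are delegated to the underlying chain.
interface BaseChain {
    run(input: string): Promise<string>
}

class SimpleChainTool {
    constructor(
        public name: string, // e.g. 'shopping-qa'
        public description: string, // tells the agent when to pick this tool
        private chain: BaseChain,
        public returnDirect = false // if true, the chain's answer is returned to the user as-is
    ) {}

    async call(input: string): Promise<string> {
        return this.chain.run(input)
    }
}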


@@ -1,773 +0,0 @@
{
"description": "Conversational Agent with ability to visit a website and extract information",
"usecases": ["Agent"],
"framework": ["Langchain"],
"nodes": [
{
"width": 300,
"height": 253,
"id": "bufferMemory_0",
"position": {
"x": 457.04304716743604,
"y": 362.4048129799687
},
"type": "customNode",
"data": {
"id": "bufferMemory_0",
"label": "Buffer Memory",
"version": 2,
"name": "bufferMemory",
"type": "BufferMemory",
"baseClasses": ["BufferMemory", "BaseChatMemory", "BaseMemory"],
"category": "Memory",
"description": "Retrieve chat messages stored in database",
"inputParams": [
{
"label": "Session Id",
"name": "sessionId",
"type": "string",
"description": "If not specified, a random id will be used. Learn <a target=\"_blank\" href=\"https://docs.flowiseai.com/memory#ui-and-embedded-chat\">more</a>",
"default": "",
"additionalParams": true,
"optional": true,
"id": "bufferMemory_0-input-sessionId-string"
},
{
"label": "Memory Key",
"name": "memoryKey",
"type": "string",
"default": "chat_history",
"additionalParams": true,
"id": "bufferMemory_0-input-memoryKey-string"
}
],
"inputAnchors": [],
"inputs": {
"sessionId": "",
"memoryKey": "chat_history"
},
"outputAnchors": [
{
"id": "bufferMemory_0-output-bufferMemory-BufferMemory|BaseChatMemory|BaseMemory",
"name": "bufferMemory",
"label": "BufferMemory",
"type": "BufferMemory | BaseChatMemory | BaseMemory"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 457.04304716743604,
"y": 362.4048129799687
},
"dragging": false
},
{
"width": 300,
"height": 281,
"id": "webBrowser_0",
"position": {
"x": 1091.0866823400172,
"y": -16.43806989958216
},
"type": "customNode",
"data": {
"id": "webBrowser_0",
"label": "Web Browser",
"version": 1,
"name": "webBrowser",
"type": "WebBrowser",
"baseClasses": ["WebBrowser", "Tool", "StructuredTool", "BaseLangChain"],
"category": "Tools",
"description": "Gives agent the ability to visit a website and extract information",
"inputParams": [],
"inputAnchors": [
{
"label": "Language Model",
"name": "model",
"type": "BaseLanguageModel",
"id": "webBrowser_0-input-model-BaseLanguageModel"
},
{
"label": "Embeddings",
"name": "embeddings",
"type": "Embeddings",
"id": "webBrowser_0-input-embeddings-Embeddings"
}
],
"inputs": {
"model": "{{chatOpenAI_0.data.instance}}",
"embeddings": "{{openAIEmbeddings_0.data.instance}}"
},
"outputAnchors": [
{
"id": "webBrowser_0-output-webBrowser-WebBrowser|Tool|StructuredTool|BaseLangChain",
"name": "webBrowser",
"label": "WebBrowser",
"type": "WebBrowser | Tool | StructuredTool | BaseLangChain"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 1091.0866823400172,
"y": -16.43806989958216
},
"dragging": false
},
{
"width": 300,
"height": 670,
"id": "chatOpenAI_0",
"position": {
"x": 741.9540879250319,
"y": -534.6535148852278
},
"type": "customNode",
"data": {
"id": "chatOpenAI_0",
"label": "ChatOpenAI",
"version": 6,
"name": "chatOpenAI",
"type": "ChatOpenAI",
"baseClasses": ["ChatOpenAI", "BaseChatModel", "BaseLanguageModel"],
"category": "Chat Models",
"description": "Wrapper around OpenAI large language models that use the Chat endpoint",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": ["openAIApi"],
"id": "chatOpenAI_0-input-credential-credential"
},
{
"label": "Model Name",
"name": "modelName",
"type": "asyncOptions",
"loadMethod": "listModels",
"default": "gpt-3.5-turbo",
"id": "chatOpenAI_0-input-modelName-options"
},
{
"label": "Temperature",
"name": "temperature",
"type": "number",
"default": 0.9,
"optional": true,
"id": "chatOpenAI_0-input-temperature-number"
},
{
"label": "Max Tokens",
"name": "maxTokens",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-maxTokens-number"
},
{
"label": "Top Probability",
"name": "topP",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-topP-number"
},
{
"label": "Frequency Penalty",
"name": "frequencyPenalty",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-frequencyPenalty-number"
},
{
"label": "Presence Penalty",
"name": "presencePenalty",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-presencePenalty-number"
},
{
"label": "Timeout",
"name": "timeout",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-timeout-number"
},
{
"label": "BasePath",
"name": "basepath",
"type": "string",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-basepath-string"
},
{
"label": "BaseOptions",
"name": "baseOptions",
"type": "json",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_0-input-baseOptions-json"
},
{
"label": "Allow Image Uploads",
"name": "allowImageUploads",
"type": "boolean",
"description": "Automatically uses gpt-4-vision-preview when image is being uploaded from chat. Only works with LLMChain, Conversation Chain, ReAct Agent, and Conversational Agent",
"default": false,
"optional": true,
"id": "chatOpenAI_0-input-allowImageUploads-boolean"
},
{
"label": "Image Resolution",
"description": "This parameter controls the resolution in which the model views the image.",
"name": "imageResolution",
"type": "options",
"options": [
{
"label": "Low",
"name": "low"
},
{
"label": "High",
"name": "high"
},
{
"label": "Auto",
"name": "auto"
}
],
"default": "low",
"optional": false,
"additionalParams": true,
"id": "chatOpenAI_0-input-imageResolution-options"
}
],
"inputAnchors": [
{
"label": "Cache",
"name": "cache",
"type": "BaseCache",
"optional": true,
"id": "chatOpenAI_0-input-cache-BaseCache"
}
],
"inputs": {
"modelName": "gpt-3.5-turbo",
"temperature": 0.9,
"maxTokens": "",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"basepath": "",
"baseOptions": "",
"allowImageUploads": true,
"imageResolution": "low"
},
"outputAnchors": [
{
"id": "chatOpenAI_0-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel",
"name": "chatOpenAI",
"label": "ChatOpenAI",
"type": "ChatOpenAI | BaseChatModel | BaseLanguageModel"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 741.9540879250319,
"y": -534.6535148852278
},
"dragging": false
},
{
"width": 300,
"height": 424,
"id": "openAIEmbeddings_0",
"position": {
"x": 403.72014625628697,
"y": -103.82540449681527
},
"type": "customNode",
"data": {
"id": "openAIEmbeddings_0",
"label": "OpenAI Embeddings",
"version": 4,
"name": "openAIEmbeddings",
"type": "OpenAIEmbeddings",
"baseClasses": ["OpenAIEmbeddings", "Embeddings"],
"category": "Embeddings",
"description": "OpenAI API to generate embeddings for a given text",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": ["openAIApi"],
"id": "openAIEmbeddings_0-input-credential-credential"
},
{
"label": "Model Name",
"name": "modelName",
"type": "asyncOptions",
"loadMethod": "listModels",
"default": "text-embedding-ada-002",
"id": "openAIEmbeddings_0-input-modelName-asyncOptions"
},
{
"label": "Strip New Lines",
"name": "stripNewLines",
"type": "boolean",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-stripNewLines-boolean"
},
{
"label": "Batch Size",
"name": "batchSize",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-batchSize-number"
},
{
"label": "Timeout",
"name": "timeout",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-timeout-number"
},
{
"label": "BasePath",
"name": "basepath",
"type": "string",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-basepath-string"
},
{
"label": "Dimensions",
"name": "dimensions",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "openAIEmbeddings_0-input-dimensions-number"
}
],
"inputAnchors": [],
"inputs": {
"modelName": "text-embedding-ada-002",
"stripNewLines": "",
"batchSize": "",
"timeout": "",
"basepath": "",
"dimensions": ""
},
"outputAnchors": [
{
"id": "openAIEmbeddings_0-output-openAIEmbeddings-OpenAIEmbeddings|Embeddings",
"name": "openAIEmbeddings",
"label": "OpenAIEmbeddings",
"description": "OpenAI API to generate embeddings for a given text",
"type": "OpenAIEmbeddings | Embeddings"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 403.72014625628697,
"y": -103.82540449681527
},
"dragging": false
},
{
"width": 300,
"height": 670,
"id": "chatOpenAI_1",
"position": {
"x": 68.312124033115,
"y": -239.65476709991256
},
"type": "customNode",
"data": {
"id": "chatOpenAI_1",
"label": "ChatOpenAI",
"version": 6,
"name": "chatOpenAI",
"type": "ChatOpenAI",
"baseClasses": ["ChatOpenAI", "BaseChatModel", "BaseLanguageModel"],
"category": "Chat Models",
"description": "Wrapper around OpenAI large language models that use the Chat endpoint",
"inputParams": [
{
"label": "Connect Credential",
"name": "credential",
"type": "credential",
"credentialNames": ["openAIApi"],
"id": "chatOpenAI_1-input-credential-credential"
},
{
"label": "Model Name",
"name": "modelName",
"type": "asyncOptions",
"loadMethod": "listModels",
"default": "gpt-3.5-turbo",
"id": "chatOpenAI_1-input-modelName-options"
},
{
"label": "Temperature",
"name": "temperature",
"type": "number",
"default": 0.9,
"optional": true,
"id": "chatOpenAI_1-input-temperature-number"
},
{
"label": "Max Tokens",
"name": "maxTokens",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-maxTokens-number"
},
{
"label": "Top Probability",
"name": "topP",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-topP-number"
},
{
"label": "Frequency Penalty",
"name": "frequencyPenalty",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-frequencyPenalty-number"
},
{
"label": "Presence Penalty",
"name": "presencePenalty",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-presencePenalty-number"
},
{
"label": "Timeout",
"name": "timeout",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-timeout-number"
},
{
"label": "BasePath",
"name": "basepath",
"type": "string",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-basepath-string"
},
{
"label": "BaseOptions",
"name": "baseOptions",
"type": "json",
"optional": true,
"additionalParams": true,
"id": "chatOpenAI_1-input-baseOptions-json"
},
{
"label": "Allow Image Uploads",
"name": "allowImageUploads",
"type": "boolean",
"description": "Automatically uses gpt-4-vision-preview when image is being uploaded from chat. Only works with LLMChain, Conversation Chain, ReAct Agent, and Conversational Agent",
"default": false,
"optional": true,
"id": "chatOpenAI_1-input-allowImageUploads-boolean"
},
{
"label": "Image Resolution",
"description": "This parameter controls the resolution in which the model views the image.",
"name": "imageResolution",
"type": "options",
"options": [
{
"label": "Low",
"name": "low"
},
{
"label": "High",
"name": "high"
},
{
"label": "Auto",
"name": "auto"
}
],
"default": "low",
"optional": false,
"additionalParams": true,
"id": "chatOpenAI_1-input-imageResolution-options"
}
],
"inputAnchors": [
{
"label": "Cache",
"name": "cache",
"type": "BaseCache",
"optional": true,
"id": "chatOpenAI_1-input-cache-BaseCache"
}
],
"inputs": {
"modelName": "gpt-3.5-turbo-16k",
"temperature": 0.9,
"maxTokens": "",
"topP": "",
"frequencyPenalty": "",
"presencePenalty": "",
"timeout": "",
"basepath": "",
"baseOptions": "",
"allowImageUploads": true,
"imageResolution": "low"
},
"outputAnchors": [
{
"id": "chatOpenAI_1-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel",
"name": "chatOpenAI",
"label": "ChatOpenAI",
"type": "ChatOpenAI | BaseChatModel | BaseLanguageModel"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 68.312124033115,
"y": -239.65476709991256
},
"dragging": false
},
{
"width": 300,
"height": 435,
"id": "conversationalAgent_0",
"position": {
"x": 1518.944765840293,
"y": 212.2513364217197
},
"type": "customNode",
"data": {
"id": "conversationalAgent_0",
"label": "Conversational Agent",
"version": 3,
"name": "conversationalAgent",
"type": "AgentExecutor",
"baseClasses": ["AgentExecutor", "BaseChain", "Runnable"],
"category": "Agents",
"description": "Conversational agent for a chat model. It will utilize chat specific prompts",
"inputParams": [
{
"label": "System Message",
"name": "systemMessage",
"type": "string",
"rows": 4,
"default": "Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.",
"optional": true,
"additionalParams": true,
"id": "conversationalAgent_0-input-systemMessage-string"
},
{
"label": "Max Iterations",
"name": "maxIterations",
"type": "number",
"optional": true,
"additionalParams": true,
"id": "conversationalAgent_0-input-maxIterations-number"
}
],
"inputAnchors": [
{
"label": "Allowed Tools",
"name": "tools",
"type": "Tool",
"list": true,
"id": "conversationalAgent_0-input-tools-Tool"
},
{
"label": "Chat Model",
"name": "model",
"type": "BaseChatModel",
"id": "conversationalAgent_0-input-model-BaseChatModel"
},
{
"label": "Memory",
"name": "memory",
"type": "BaseChatMemory",
"id": "conversationalAgent_0-input-memory-BaseChatMemory"
},
{
"label": "Input Moderation",
"description": "Detect text that could generate harmful output and prevent it from being sent to the language model",
"name": "inputModeration",
"type": "Moderation",
"optional": true,
"list": true,
"id": "conversationalAgent_0-input-inputModeration-Moderation"
}
],
"inputs": {
"inputModeration": "",
"tools": ["{{webBrowser_0.data.instance}}"],
"model": "{{chatOpenAI_1.data.instance}}",
"memory": "{{bufferMemory_0.data.instance}}",
"systemMessage": "Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist."
},
"outputAnchors": [
{
"id": "conversationalAgent_0-output-conversationalAgent-AgentExecutor|BaseChain|Runnable",
"name": "conversationalAgent",
"label": "AgentExecutor",
"type": "AgentExecutor | BaseChain | Runnable"
}
],
"outputs": {},
"selected": false
},
"selected": false,
"positionAbsolute": {
"x": 1518.944765840293,
"y": 212.2513364217197
},
"dragging": false
},
{
"id": "stickyNote_0",
"position": {
"x": 1086.284843942572,
"y": -110.93321070573408
},
"type": "stickyNote",
"data": {
"id": "stickyNote_0",
"label": "Sticky Note",
"version": 2,
"name": "stickyNote",
"type": "StickyNote",
"baseClasses": ["StickyNote"],
"tags": ["Utilities"],
"category": "Utilities",
"description": "Add a sticky note",
"inputParams": [
{
"label": "",
"name": "note",
"type": "string",
"rows": 1,
"placeholder": "Type something here",
"optional": true,
"id": "stickyNote_0-input-note-string"
}
],
"inputAnchors": [],
"inputs": {
"note": "Web Browser Tool gives agent the ability to visit a website and extract information"
},
"outputAnchors": [
{
"id": "stickyNote_0-output-stickyNote-StickyNote",
"name": "stickyNote",
"label": "StickyNote",
"description": "Add a sticky note",
"type": "StickyNote"
}
],
"outputs": {},
"selected": false
},
"width": 300,
"height": 62,
"selected": false,
"positionAbsolute": {
"x": 1086.284843942572,
"y": -110.93321070573408
},
"dragging": false
}
],
"edges": [
{
"source": "openAIEmbeddings_0",
"sourceHandle": "openAIEmbeddings_0-output-openAIEmbeddings-OpenAIEmbeddings|Embeddings",
"target": "webBrowser_0",
"targetHandle": "webBrowser_0-input-embeddings-Embeddings",
"type": "buttonedge",
"id": "openAIEmbeddings_0-openAIEmbeddings_0-output-openAIEmbeddings-OpenAIEmbeddings|Embeddings-webBrowser_0-webBrowser_0-input-embeddings-Embeddings",
"data": {
"label": ""
}
},
{
"source": "chatOpenAI_0",
"sourceHandle": "chatOpenAI_0-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel",
"target": "webBrowser_0",
"targetHandle": "webBrowser_0-input-model-BaseLanguageModel",
"type": "buttonedge",
"id": "chatOpenAI_0-chatOpenAI_0-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel-webBrowser_0-webBrowser_0-input-model-BaseLanguageModel",
"data": {
"label": ""
}
},
{
"source": "webBrowser_0",
"sourceHandle": "webBrowser_0-output-webBrowser-WebBrowser|Tool|StructuredTool|BaseLangChain",
"target": "conversationalAgent_0",
"targetHandle": "conversationalAgent_0-input-tools-Tool",
"type": "buttonedge",
"id": "webBrowser_0-webBrowser_0-output-webBrowser-WebBrowser|Tool|StructuredTool|BaseLangChain-conversationalAgent_0-conversationalAgent_0-input-tools-Tool",
"data": {
"label": ""
}
},
{
"source": "chatOpenAI_1",
"sourceHandle": "chatOpenAI_1-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel",
"target": "conversationalAgent_0",
"targetHandle": "conversationalAgent_0-input-model-BaseChatModel",
"type": "buttonedge",
"id": "chatOpenAI_1-chatOpenAI_1-output-chatOpenAI-ChatOpenAI|BaseChatModel|BaseLanguageModel-conversationalAgent_0-conversationalAgent_0-input-model-BaseChatModel",
"data": {
"label": ""
}
},
{
"source": "bufferMemory_0",
"sourceHandle": "bufferMemory_0-output-bufferMemory-BufferMemory|BaseChatMemory|BaseMemory",
"target": "conversationalAgent_0",
"targetHandle": "conversationalAgent_0-input-memory-BaseChatMemory",
"type": "buttonedge",
"id": "bufferMemory_0-bufferMemory_0-output-bufferMemory-BufferMemory|BaseChatMemory|BaseMemory-conversationalAgent_0-conversationalAgent_0-input-memory-BaseChatMemory",
"data": {
"label": ""
}
}
]
}


@@ -100,7 +100,7 @@
         "multer-s3": "^3.0.1",
         "mysql2": "^3.11.3",
         "flowise-nim-container-manager": "^1.0.11",
-        "openai": "^4.82.0",
+        "openai": "^4.96.0",
         "pg": "^8.11.1",
         "posthog-node": "^3.5.0",
         "prom-client": "^15.1.3",
@@ -109,6 +109,7 @@
         "s3-streamlogger": "^1.11.0",
         "sanitize-html": "^2.11.0",
         "sqlite3": "^5.1.6",
+        "turndown": "^7.2.0",
         "typeorm": "^0.3.6",
         "uuid": "^9.0.1",
         "winston": "^3.9.0"
@@ -120,6 +121,7 @@
         "@types/multer": "^1.4.7",
         "@types/multer-s3": "^3.0.3",
         "@types/sanitize-html": "^2.9.5",
+        "@types/turndown": "^5.0.5",
         "concurrently": "^7.1.0",
         "cypress": "^13.13.0",
         "nodemon": "^2.0.22",


@@ -2,8 +2,10 @@ import {
     IAction,
     ICommonObject,
     IFileUpload,
+    IHumanInput,
     INode,
     INodeData as INodeDataFromComponent,
+    INodeExecutionData,
     INodeParams,
     IServerSideEventStreamer
 } from 'flowise-components'
@@ -13,10 +15,12 @@ import { Telemetry } from './utils/telemetry'

 export type MessageType = 'apiMessage' | 'userMessage'

-export type ChatflowType = 'CHATFLOW' | 'MULTIAGENT' | 'ASSISTANT'
+export type ChatflowType = 'CHATFLOW' | 'MULTIAGENT' | 'ASSISTANT' | 'AGENTFLOW'

 export type AssistantType = 'CUSTOM' | 'OPENAI' | 'AZURE'

+export type ExecutionState = 'INPROGRESS' | 'FINISHED' | 'ERROR' | 'TERMINATED' | 'TIMEOUT' | 'STOPPED'
+
 export enum MODE {
     QUEUE = 'queue',
     MAIN = 'main'
@@ -57,6 +61,7 @@ export interface IChatMessage {
     role: MessageType
     content: string
     chatflowid: string
+    executionId?: string
     sourceDocuments?: string
     usedTools?: string
     fileAnnotations?: string
@@ -140,6 +145,19 @@ export interface IUpsertHistory {
     date: Date
 }

+export interface IExecution {
+    id: string
+    executionData: string
+    state: ExecutionState
+    agentflowId: string
+    sessionId: string
+    isPublic?: boolean
+    action?: string
+    createdDate: Date
+    updatedDate: Date
+    stoppedDate: Date
+}
+
 export interface IComponentNodes {
     [key: string]: INode
 }
@@ -187,6 +205,8 @@ export interface IReactFlowNode {
     height: number
     selected: boolean
     dragging: boolean
+    parentNode?: string
+    extent?: string
 }

 export interface IReactFlowEdge {
@@ -227,6 +247,14 @@ export interface IDepthQueue {
     [key: string]: number
 }

+export interface IAgentflowExecutedData {
+    nodeLabel: string
+    nodeId: string
+    data: INodeExecutionData
+    previousNodeIds: string[]
+    status?: ExecutionState
+}
+
 export interface IMessage {
     message: string
     type: MessageType
@@ -238,6 +266,7 @@ export interface IncomingInput {
     question: string
     overrideConfig?: ICommonObject
     chatId?: string
+    sessionId?: string
     stopNodeId?: string
     uploads?: IFileUpload[]
     leadEmail?: string
@@ -246,6 +275,12 @@
     streaming?: boolean
 }

+export interface IncomingAgentflowInput extends Omit<IncomingInput, 'question'> {
+    question?: string
+    form?: Record<string, any>
+    humanInput?: IHumanInput
+}
+
 export interface IActiveChatflows {
     [key: string]: {
         startingNodes: IReactFlowNode[]
@@ -266,6 +301,7 @@ export interface IOverrideConfig {
     label: string
     name: string
     type: string
+    schema?: ICommonObject[]
 }

 export type ICredentialDataDecrypted = ICommonObject
@@ -315,6 +351,8 @@ export interface IFlowConfig {
     chatHistory: IMessage[]
     apiMessageId: string
     overrideConfig?: ICommonObject
+    state?: ICommonObject
+    runtimeChatHistoryLength?: number
 }

 export interface IPredictionQueueAppServer {
@@ -333,7 +371,13 @@ export interface IExecuteFlowParams extends IPredictionQueueAppServer {
     isInternal: boolean
     signal?: AbortController
     files?: Express.Multer.File[]
+    fileUploads?: IFileUpload[]
+    uploadedFilesContent?: string
     isUpsert?: boolean
+    isRecursive?: boolean
+    parentExecutionId?: string
+    iterationContext?: ICommonObject
+    isTool?: boolean
 }

 export interface INodeOverrides {
View File

@@ -0,0 +1,18 @@
import { Request, Response, NextFunction } from 'express'
import agentflowv2Service from '../../services/agentflowv2-generator'
const generateAgentflowv2 = async (req: Request, res: Response, next: NextFunction) => {
try {
if (!req.body.question || !req.body.selectedChatModel) {
throw new Error('Question and selectedChatModel are required')
}
const apiResponse = await agentflowv2Service.generateAgentflowv2(req.body.question, req.body.selectedChatModel)
return res.json(apiResponse)
} catch (error) {
next(error)
}
}
export default {
generateAgentflowv2
}

View File

@@ -0,0 +1,114 @@
import { Request, Response, NextFunction } from 'express'
import executionsService from '../../services/executions'
import { ExecutionState } from '../../Interface'
const getExecutionById = async (req: Request, res: Response, next: NextFunction) => {
try {
const executionId = req.params.id
const execution = await executionsService.getExecutionById(executionId)
return res.json(execution)
} catch (error) {
next(error)
}
}
const getPublicExecutionById = async (req: Request, res: Response, next: NextFunction) => {
try {
const executionId = req.params.id
const execution = await executionsService.getPublicExecutionById(executionId)
return res.json(execution)
} catch (error) {
next(error)
}
}
const updateExecution = async (req: Request, res: Response, next: NextFunction) => {
try {
const executionId = req.params.id
const execution = await executionsService.updateExecution(executionId, req.body)
return res.json(execution)
} catch (error) {
next(error)
}
}
const getAllExecutions = async (req: Request, res: Response, next: NextFunction) => {
try {
// Extract all possible filters from query params
const filters: any = {}
// ID filter
if (req.query.id) filters.id = req.query.id as string
// Flow and session filters
if (req.query.agentflowId) filters.agentflowId = req.query.agentflowId as string
if (req.query.sessionId) filters.sessionId = req.query.sessionId as string
// State filter
if (req.query.state) {
const stateValue = req.query.state as string
if (['INPROGRESS', 'FINISHED', 'ERROR', 'TERMINATED', 'TIMEOUT', 'STOPPED'].includes(stateValue)) {
filters.state = stateValue as ExecutionState
}
}
// Date filters
if (req.query.startDate) {
filters.startDate = new Date(req.query.startDate as string)
}
if (req.query.endDate) {
filters.endDate = new Date(req.query.endDate as string)
}
// Pagination
if (req.query.page) {
filters.page = parseInt(req.query.page as string, 10)
}
if (req.query.limit) {
filters.limit = parseInt(req.query.limit as string, 10)
}
const apiResponse = await executionsService.getAllExecutions(filters)
return res.json(apiResponse)
} catch (error) {
next(error)
}
}
/**
* Delete multiple executions by their IDs
* If a single ID is provided in the URL params, it will delete that execution
* If an array of IDs is provided in the request body, it will delete all those executions
*/
const deleteExecutions = async (req: Request, res: Response, next: NextFunction) => {
try {
let executionIds: string[] = []
// Check if we're deleting a single execution from URL param
if (req.params.id) {
executionIds = [req.params.id]
}
// Check if we're deleting multiple executions from request body
else if (req.body.executionIds && Array.isArray(req.body.executionIds)) {
executionIds = req.body.executionIds
} else {
return res.status(400).json({ success: false, message: 'No execution IDs provided' })
}
const result = await executionsService.deleteExecutions(executionIds)
return res.json(result)
} catch (error) {
next(error)
}
}
export default {
getAllExecutions,
deleteExecutions,
getExecutionById,
getPublicExecutionById,
updateExecution
}

View File

@@ -0,0 +1,24 @@
import { Request, Response, NextFunction } from 'express'
import validationService from '../../services/validation'
import { InternalFlowiseError } from '../../errors/internalFlowiseError'
import { StatusCodes } from 'http-status-codes'
const checkFlowValidation = async (req: Request, res: Response, next: NextFunction) => {
try {
const flowId = req.params?.id as string | undefined
if (!flowId) {
throw new InternalFlowiseError(
StatusCodes.PRECONDITION_FAILED,
`Error: validationController.checkFlowValidation - id not provided!`
)
}
const apiResponse = await validationService.checkFlowValidation(flowId)
return res.json(apiResponse)
} catch (error) {
next(error)
}
}
export default {
checkFlowValidation
}

View File

@@ -1,6 +1,7 @@
/* eslint-disable */
import { Entity, Column, CreateDateColumn, PrimaryGeneratedColumn, Index, JoinColumn, OneToOne } from 'typeorm'
import { IChatMessage, MessageType } from '../../Interface'
import { Execution } from './Execution'
@Entity()
export class ChatMessage implements IChatMessage {
@@ -14,6 +15,13 @@ export class ChatMessage implements IChatMessage {
@Column({ type: 'uuid' })
chatflowid: string
@Column({ nullable: true, type: 'uuid' })
executionId?: string
@OneToOne(() => Execution)
@JoinColumn({ name: 'executionId' })
execution: Execution
@Column({ type: 'text' })
content: string

View File

@@ -0,0 +1,44 @@
import { Entity, Column, Index, PrimaryGeneratedColumn, CreateDateColumn, UpdateDateColumn, ManyToOne, JoinColumn } from 'typeorm'
import { IExecution, ExecutionState } from '../../Interface'
import { ChatFlow } from './ChatFlow'
@Entity()
export class Execution implements IExecution {
@PrimaryGeneratedColumn('uuid')
id: string
@Column({ type: 'text' })
executionData: string
@Column()
state: ExecutionState
@Index()
@Column({ type: 'uuid' })
agentflowId: string
@Index()
@Column({ type: 'uuid' })
sessionId: string
@Column({ nullable: true, type: 'text' })
action?: string
@Column({ nullable: true })
isPublic?: boolean
@Column({ type: 'timestamp' })
@CreateDateColumn()
createdDate: Date
@Column({ type: 'timestamp' })
@UpdateDateColumn()
updatedDate: Date
@Column()
stoppedDate: Date
@ManyToOne(() => ChatFlow)
@JoinColumn({ name: 'agentflowId' })
agentflow: ChatFlow
}
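A minimal persistence sketch for the new entity, assuming an initialized TypeORM DataSource like the server's AppDataSource:
const repo = appDataSource.getRepository(Execution) // appDataSource assumed in scope
const execution = repo.create({
    executionData: JSON.stringify([]), // filled with IAgentflowExecutedData[] as the flow runs
    state: 'INPROGRESS',
    agentflowId: chatflowId, // id of the owning AGENTFLOW chatflow, assumed in scope
    sessionId,
    stoppedDate: new Date()
})
await repo.save(execution) // id, createdDate and updatedDate are generated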

View File

@@ -11,6 +11,7 @@ import { Lead } from './Lead'
import { UpsertHistory } from './UpsertHistory'
import { ApiKey } from './ApiKey'
import { CustomTemplate } from './CustomTemplate'
import { Execution } from './Execution'
export const entities = {
ChatFlow,
@@ -25,5 +26,6 @@ export const entities = {
Lead,
UpsertHistory,
ApiKey,
CustomTemplate,
Execution
}

View File

@@ -0,0 +1,31 @@
import { MigrationInterface, QueryRunner } from 'typeorm'
export class AddExecutionEntity1738090872625 implements MigrationInterface {
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`CREATE TABLE IF NOT EXISTS \`execution\` (
\`id\` varchar(36) NOT NULL,
\`executionData\` text NOT NULL,
\`action\` text,
\`state\` varchar(255) NOT NULL,
\`agentflowId\` varchar(255) NOT NULL,
\`sessionId\` varchar(255) NOT NULL,
\`isPublic\` boolean,
\`createdDate\` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
\`updatedDate\` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6),
\`stoppedDate\` datetime(6),
PRIMARY KEY (\`id\`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_520_ci;`
)
const columnExists = await queryRunner.hasColumn('chat_message', 'executionId')
if (!columnExists) {
await queryRunner.query(`ALTER TABLE \`chat_message\` ADD COLUMN \`executionId\` TEXT;`)
}
}
public async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(`DROP TABLE IF EXISTS \`execution\``)
await queryRunner.query(`ALTER TABLE \`chat_message\` DROP COLUMN \`executionId\`;`)
}
}

View File

@@ -28,6 +28,7 @@ import { AddCustomTemplate1725629836652 } from './1725629836652-AddCustomTemplat
import { AddArtifactsToChatMessage1726156258465 } from './1726156258465-AddArtifactsToChatMessage'
import { AddFollowUpPrompts1726666318346 } from './1726666318346-AddFollowUpPrompts'
import { AddTypeToAssistant1733011290987 } from './1733011290987-AddTypeToAssistant'
import { AddExecutionEntity1738090872625 } from './1738090872625-AddExecutionEntity'
export const mariadbMigrations = [
Init1693840429259,
@@ -59,5 +60,6 @@ export const mariadbMigrations = [
AddCustomTemplate1725629836652,
AddArtifactsToChatMessage1726156258465,
AddFollowUpPrompts1726666318346,
AddTypeToAssistant1733011290987,
AddExecutionEntity1738090872625
]

View File

@@ -0,0 +1,31 @@
import { MigrationInterface, QueryRunner } from 'typeorm'
export class AddExecutionEntity1738090872625 implements MigrationInterface {
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`CREATE TABLE IF NOT EXISTS \`execution\` (
\`id\` varchar(36) NOT NULL,
\`executionData\` text NOT NULL,
\`action\` text,
\`state\` varchar(255) NOT NULL,
\`agentflowId\` varchar(255) NOT NULL,
\`sessionId\` varchar(255) NOT NULL,
\`isPublic\` boolean,
\`createdDate\` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
\`updatedDate\` datetime(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6),
\`stoppedDate\` datetime(6),
PRIMARY KEY (\`id\`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;`
)
const columnExists = await queryRunner.hasColumn('chat_message', 'executionId')
if (!columnExists) {
await queryRunner.query(`ALTER TABLE \`chat_message\` ADD COLUMN \`executionId\` TEXT;`)
}
}
public async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(`DROP TABLE IF EXISTS \`execution\``)
await queryRunner.query(`ALTER TABLE \`chat_message\` DROP COLUMN \`executionId\`;`)
}
}

View File

@@ -28,6 +28,7 @@ import { AddCustomTemplate1725629836652 } from './1725629836652-AddCustomTemplat
import { AddArtifactsToChatMessage1726156258465 } from './1726156258465-AddArtifactsToChatMessage'
import { AddFollowUpPrompts1726666302024 } from './1726666302024-AddFollowUpPrompts'
import { AddTypeToAssistant1733011290987 } from './1733011290987-AddTypeToAssistant'
import { AddExecutionEntity1738090872625 } from './1738090872625-AddExecutionEntity'
export const mysqlMigrations = [
Init1693840429259,
@@ -59,5 +60,6 @@ export const mysqlMigrations = [
AddCustomTemplate1725629836652,
AddArtifactsToChatMessage1726156258465,
AddFollowUpPrompts1726666302024,
AddTypeToAssistant1733011290987,
AddExecutionEntity1738090872625
]

View File

@@ -0,0 +1,31 @@
import { MigrationInterface, QueryRunner } from 'typeorm'
export class AddExecutionEntity1738090872625 implements MigrationInterface {
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`CREATE TABLE IF NOT EXISTS execution (
id uuid NOT NULL DEFAULT uuid_generate_v4(),
"executionData" text NOT NULL,
"action" text,
"state" varchar NOT NULL,
"agentflowId" uuid NOT NULL,
"sessionId" uuid NOT NULL,
"isPublic" boolean,
"createdDate" timestamp NOT NULL DEFAULT now(),
"updatedDate" timestamp NOT NULL DEFAULT now(),
"stoppedDate" timestamp,
CONSTRAINT "PK_936a419c3b8044598d72d95da61" PRIMARY KEY (id)
);`
)
const columnExists = await queryRunner.hasColumn('chat_message', 'executionId')
if (!columnExists) {
await queryRunner.query(`ALTER TABLE "chat_message" ADD COLUMN "executionId" uuid;`)
}
}
public async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(`DROP TABLE execution`)
await queryRunner.query(`ALTER TABLE "chat_message" DROP COLUMN "executionId";`)
}
}

View File

@@ -28,6 +28,7 @@ import { AddCustomTemplate1725629836652 } from './1725629836652-AddCustomTemplat
import { AddArtifactsToChatMessage1726156258465 } from './1726156258465-AddArtifactsToChatMessage'
import { AddFollowUpPrompts1726666309552 } from './1726666309552-AddFollowUpPrompts'
import { AddTypeToAssistant1733011290987 } from './1733011290987-AddTypeToAssistant'
import { AddExecutionEntity1738090872625 } from './1738090872625-AddExecutionEntity'
export const postgresMigrations = [
Init1693891895163,
@@ -59,5 +60,6 @@ export const postgresMigrations = [
AddCustomTemplate1725629836652,
AddArtifactsToChatMessage1726156258465,
AddFollowUpPrompts1726666309552,
AddTypeToAssistant1733011290987,
AddExecutionEntity1738090872625
]

View File

@@ -0,0 +1,15 @@
import { MigrationInterface, QueryRunner } from 'typeorm'
export class AddExecutionEntity1738090872625 implements MigrationInterface {
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(
`CREATE TABLE IF NOT EXISTS "execution" ("id" varchar PRIMARY KEY NOT NULL, "executionData" text NOT NULL, "action" text, "state" varchar NOT NULL, "agentflowId" varchar NOT NULL, "sessionId" varchar NOT NULL, "isPublic" boolean, "createdDate" datetime NOT NULL DEFAULT (datetime('now')), "updatedDate" datetime NOT NULL DEFAULT (datetime('now')), "stoppedDate" datetime);`
)
await queryRunner.query(`ALTER TABLE "chat_message" ADD COLUMN "executionId" varchar;`)
}
public async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(`DROP TABLE execution`)
await queryRunner.query(`ALTER TABLE "chat_message" DROP COLUMN "executionId";`)
}
}

View File

@@ -27,6 +27,7 @@ import { AddArtifactsToChatMessage1726156258465 } from './1726156258465-AddArtif
import { AddCustomTemplate1725629836652 } from './1725629836652-AddCustomTemplate'
import { AddFollowUpPrompts1726666294213 } from './1726666294213-AddFollowUpPrompts'
import { AddTypeToAssistant1733011290987 } from './1733011290987-AddTypeToAssistant'
import { AddExecutionEntity1738090872625 } from './1738090872625-AddExecutionEntity'
export const sqliteMigrations = [
Init1693835579790,
@@ -57,5 +58,6 @@ export const sqliteMigrations = [
AddArtifactsToChatMessage1726156258465,
AddCustomTemplate1725629836652,
AddFollowUpPrompts1726666294213,
AddTypeToAssistant1733011290987,
AddExecutionEntity1738090872625
]

View File

@@ -7,6 +7,9 @@ import { RedisEventPublisher } from './RedisEventPublisher'
import { AbortControllerPool } from '../AbortControllerPool'
import { BaseQueue } from './BaseQueue'
import { RedisOptions } from 'bullmq'
import logger from '../utils/logger'
import { generateAgentflowv2 as generateAgentflowv2_json } from 'flowise-components'
import { databaseEntities } from '../utils'
interface PredictionQueueOptions {
appDataSource: DataSource
@@ -16,6 +19,15 @@ interface PredictionQueueOptions {
abortControllerPool: AbortControllerPool
}
interface IGenerateAgentflowv2Params extends IExecuteFlowParams {
prompt: string
componentNodes: IComponentNodes
toolNodes: IComponentNodes
selectedChatModel: Record<string, any>
question: string
isAgentFlowGenerator: boolean
}
export class PredictionQueue extends BaseQueue {
private componentNodes: IComponentNodes
private telemetry: Telemetry
@@ -45,13 +57,24 @@ export class PredictionQueue extends BaseQueue {
return this.queue
}
async processJob(data: IExecuteFlowParams | IGenerateAgentflowv2Params) {
if (this.appDataSource) data.appDataSource = this.appDataSource
if (this.telemetry) data.telemetry = this.telemetry
if (this.cachePool) data.cachePool = this.cachePool
if (this.componentNodes) data.componentNodes = this.componentNodes
if (this.redisPublisher) data.sseStreamer = this.redisPublisher
if (Object.prototype.hasOwnProperty.call(data, 'isAgentFlowGenerator')) {
logger.info('Generating Agentflow...')
const { prompt, componentNodes, toolNodes, selectedChatModel, question } = data as IGenerateAgentflowv2Params
const options: Record<string, any> = {
appDataSource: this.appDataSource,
databaseEntities: databaseEntities,
logger: logger
}
return await generateAgentflowv2_json({ prompt, componentNodes, toolNodes, selectedChatModel }, question, options)
}
if (this.abortControllerPool) {
const abortControllerId = `${data.chatflow.id}_${data.chatId}`
const signal = new AbortController()

View File

@@ -119,6 +119,21 @@ export class RedisEventPublisher implements IServerSideEventStreamer {
}
}
streamCalledToolsEvent(chatId: string, data: any) {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'calledTools',
data
})
)
} catch (error) {
console.error('Error streaming calledTools event:', error)
}
}
streamFileAnnotationsEvent(chatId: string, data: any) {
try {
this.redisPublisher.publish(
@@ -164,6 +179,36 @@ export class RedisEventPublisher implements IServerSideEventStreamer {
}
}
streamAgentFlowEvent(chatId: string, data: any): void {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'agentFlowEvent',
data
})
)
} catch (error) {
console.error('Error streaming agentFlow event:', error)
}
}
streamAgentFlowExecutedDataEvent(chatId: string, data: any): void {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'agentFlowExecutedData',
data
})
)
} catch (error) {
console.error('Error streaming agentFlowExecutedData event:', error)
}
}
streamNextAgentEvent(chatId: string, data: any): void {
try {
this.redisPublisher.publish(
@@ -179,6 +224,21 @@ export class RedisEventPublisher implements IServerSideEventStreamer {
}
}
streamNextAgentFlowEvent(chatId: string, data: any): void {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'nextAgentFlow',
data
})
)
} catch (error) {
console.error('Error streaming nextAgentFlow event:', error)
}
}
streamActionEvent(chatId: string, data: any): void {
try {
this.redisPublisher.publish(
@@ -254,6 +314,21 @@ export class RedisEventPublisher implements IServerSideEventStreamer {
}
}
streamUsageMetadataEvent(chatId: string, data: any): void {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'usageMetadata',
data
})
)
} catch (error) {
console.error('Error streaming usage metadata event:', error)
}
}
async disconnect() {
if (this.redisPublisher) {
await this.redisPublisher.quit()

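On the consuming side, a subscriber listening on the chat channel receives these payloads as JSON strings; a minimal dispatch sketch, where the subscriber client and relay function are assumed rather than taken from the diff:
// redisSubscriber: an ioredis client already subscribed to the chatId channel (assumed)
redisSubscriber.on('message', (_channel: string, message: string) => {
    const { chatId, eventType, data } = JSON.parse(message)
    // the new event types published above
    if (['agentFlowEvent', 'agentFlowExecutedData', 'nextAgentFlow', 'calledTools', 'usageMetadata'].includes(eventType)) {
        relayToSseClient(chatId, eventType, data) // hypothetical relay to the connected SSE client
    }
})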
View File

@@ -0,0 +1,7 @@
import express from 'express'
import agentflowv2GeneratorController from '../../controllers/agentflowv2-generator'
const router = express.Router()
router.post('/generate', agentflowv2GeneratorController.generateAgentflowv2)
export default router
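A usage sketch for this route, assuming the server's usual /api/v1 prefix; the exact selectedChatModel shape comes from the UI, so the one below is only an assumption:
const res = await fetch('/api/v1/agentflowv2-generator/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        question: 'Build a support agent that can search the web and escalate to a human',
        selectedChatModel: { name: 'chatOpenAI', inputs: { modelName: 'gpt-4o' } } // shape assumed
    })
})
const generatedFlow = await res.json() // { nodes: [...], edges: [...] } on success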

View File

@@ -0,0 +1,16 @@
import express from 'express'
import executionController from '../../controllers/executions'
const router = express.Router()
// READ
router.get('/', executionController.getAllExecutions)
router.get(['/', '/:id'], executionController.getExecutionById)
// PUT
router.put(['/', '/:id'], executionController.updateExecution)
// DELETE - single execution or multiple executions
router.delete('/:id', executionController.deleteExecutions)
router.delete('/', executionController.deleteExecutions)
export default router
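Combined with the controller above, a client might list and bulk-delete executions like this (again assuming the /api/v1 prefix):
// List finished executions for one agentflow, 10 per page
const res = await fetch('/api/v1/executions?agentflowId=<agentflow-uuid>&state=FINISHED&page=1&limit=10')
const { data, total } = await res.json()

// Bulk delete by ids in the request body; single deletes go through DELETE /executions/:id
await fetch('/api/v1/executions', {
    method: 'DELETE',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ executionIds: data.map((e: any) => e.id) })
})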

View File

@@ -35,6 +35,7 @@ import predictionRouter from './predictions'
import promptListsRouter from './prompts-lists'
import publicChatbotRouter from './public-chatbots'
import publicChatflowsRouter from './public-chatflows'
import publicExecutionsRouter from './public-executions'
import statsRouter from './stats'
import toolsRouter from './tools'
import upsertHistoryRouter from './upsert-history'
@@ -43,6 +44,9 @@ import vectorRouter from './vectors'
import verifyRouter from './verify'
import versionRouter from './versions'
import nvidiaNimRouter from './nvidia-nim'
import executionsRouter from './executions'
import validationRouter from './validation'
import agentflowv2GeneratorRouter from './agentflowv2-generator'
const router = express.Router()
@@ -82,6 +86,7 @@ router.use('/prediction', predictionRouter)
router.use('/prompts-list', promptListsRouter)
router.use('/public-chatbotConfig', publicChatbotRouter)
router.use('/public-chatflows', publicChatflowsRouter)
router.use('/public-executions', publicExecutionsRouter)
router.use('/stats', statsRouter)
router.use('/tools', toolsRouter)
router.use('/variables', variablesRouter)
@@ -90,5 +95,8 @@ router.use('/verify', verifyRouter)
router.use('/version', versionRouter)
router.use('/upsert-history', upsertHistoryRouter)
router.use('/nvidia-nim', nvidiaNimRouter)
router.use('/executions', executionsRouter)
router.use('/validation', validationRouter)
router.use('/agentflowv2-generator', agentflowv2GeneratorRouter)
export default router

View File

@@ -0,0 +1,14 @@
import express from 'express'
import executionController from '../../controllers/executions'
const router = express.Router()
// CREATE
// READ
router.get(['/', '/:id'], executionController.getPublicExecutionById)
// UPDATE
// DELETE
export default router

View File

@@ -0,0 +1,8 @@
import express from 'express'
import validationController from '../../controllers/validation'
const router = express.Router()
// READ
router.get('/:id', validationController.checkFlowValidation)
export default router
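Usage is a single GET with the flow id (the /api/v1 prefix assumed as above):
const res = await fetch(`/api/v1/validation/${flowId}`) // flowId assumed in scope
const validationResult = await res.json()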

View File

@@ -0,0 +1,248 @@
import { StatusCodes } from 'http-status-codes'
import { InternalFlowiseError } from '../../errors/internalFlowiseError'
import { getErrorMessage } from '../../errors/utils'
import { getRunningExpressApp } from '../../utils/getRunningExpressApp'
import path from 'path'
import * as fs from 'fs'
import { generateAgentflowv2 as generateAgentflowv2_json } from 'flowise-components'
import { z } from 'zod'
import { sysPrompt } from './prompt'
import { databaseEntities } from '../../utils'
import logger from '../../utils/logger'
import { MODE } from '../../Interface'
// Define the Zod schema for Agentflowv2 data structure
const NodeType = z.object({
id: z.string(),
type: z.string(),
position: z.object({
x: z.number(),
y: z.number()
}),
width: z.number(),
height: z.number(),
selected: z.boolean().optional(),
positionAbsolute: z
.object({
x: z.number(),
y: z.number()
})
.optional(),
dragging: z.boolean().optional(),
data: z.any().optional(),
parentNode: z.string().optional()
})
const EdgeType = z.object({
source: z.string(),
sourceHandle: z.string(),
target: z.string(),
targetHandle: z.string(),
data: z
.object({
sourceColor: z.string().optional(),
targetColor: z.string().optional(),
edgeLabel: z.string().optional(),
isHumanInput: z.boolean().optional()
})
.optional(),
type: z.string().optional(),
id: z.string()
})
const AgentFlowV2Type = z
.object({
description: z.string().optional(),
usecases: z.array(z.string()).optional(),
nodes: z.array(NodeType),
edges: z.array(EdgeType)
})
.describe('Generate Agentflowv2 nodes and edges')
// Type for the templates array
type AgentFlowV2Template = z.infer<typeof AgentFlowV2Type>
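As a quick illustration of what this schema accepts, the sketch below parses a two-node flow; AgentFlowV2Type.parse throws on anything that deviates, which is how both the marketplace templates and the model response are vetted later in this file. Node types and handle names here are illustrative, not taken from the diff.
const minimalFlow: AgentFlowV2Template = AgentFlowV2Type.parse({
    nodes: [
        { id: 'startAgentflow_0', type: 'agentFlow', position: { x: 0, y: 0 }, width: 103, height: 66 },
        { id: 'llmAgentflow_0', type: 'agentFlow', position: { x: 200, y: 0 }, width: 174, height: 72 }
    ],
    edges: [
        {
            id: 'startAgentflow_0-llmAgentflow_0',
            source: 'startAgentflow_0',
            sourceHandle: 'startAgentflow_0-output',
            target: 'llmAgentflow_0',
            targetHandle: 'llmAgentflow_0-input'
        }
    ]
})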
const getAllAgentFlow2Nodes = async () => {
const appServer = getRunningExpressApp()
const nodes = appServer.nodesPool.componentNodes
const agentFlow2Nodes = []
for (const node in nodes) {
if (nodes[node].category === 'Agent Flows') {
agentFlow2Nodes.push({
name: nodes[node].name,
label: nodes[node].label,
description: nodes[node].description
})
}
}
return JSON.stringify(agentFlow2Nodes, null, 2)
}
const getAllToolNodes = async () => {
const appServer = getRunningExpressApp()
const nodes = appServer.nodesPool.componentNodes
const toolNodes = []
const disabled_nodes = process.env.DISABLED_NODES ? process.env.DISABLED_NODES.split(',') : []
const removeTools = ['chainTool', 'retrieverTool', 'webBrowser', ...disabled_nodes]
for (const node in nodes) {
if (nodes[node].category.includes('Tools')) {
if (removeTools.includes(nodes[node].name)) {
continue
}
toolNodes.push({
name: nodes[node].name,
description: nodes[node].description
})
}
}
return JSON.stringify(toolNodes, null, 2)
}
const getAllAgentflowv2Marketplaces = async () => {
const templates: AgentFlowV2Template[] = []
let marketplaceDir = path.join(__dirname, '..', '..', '..', 'marketplaces', 'agentflowsv2')
let jsonsInDir = fs.readdirSync(marketplaceDir).filter((file) => path.extname(file) === '.json')
jsonsInDir.forEach((file) => {
try {
const filePath = path.join(__dirname, '..', '..', '..', 'marketplaces', 'agentflowsv2', file)
const fileData = fs.readFileSync(filePath)
const fileDataObj = JSON.parse(fileData.toString())
// strip node.data, keeping all other properties
const filteredNodes = fileDataObj.nodes.map((node: any) => {
return {
...node,
data: undefined
}
})
const template = {
title: file.split('.json')[0],
description: fileDataObj.description || `Template from ${file}`,
usecases: fileDataObj.usecases || [],
nodes: filteredNodes,
edges: fileDataObj.edges
}
// Validate template against schema (zod strips unknown keys such as title, so re-attach it for the formatter below)
const validatedTemplate = AgentFlowV2Type.parse(template)
templates.push({ ...validatedTemplate, title: template.title } as AgentFlowV2Template)
} catch (error) {
console.error(`Error processing template file ${file}:`, error)
// Continue with next file instead of failing completely
}
})
// Format templates into the requested string format
let formattedTemplates = ''
templates.forEach((template: AgentFlowV2Template, index: number) => {
formattedTemplates += `Example ${index + 1}: <<${(template as any).title}>> - ${template.description}\n`
formattedTemplates += `"nodes": [\n`
// Format nodes with proper indentation
const nodesJson = JSON.stringify(template.nodes, null, 3)
// Split by newlines and add 3 spaces to the beginning of each line except the first and last
const nodesLines = nodesJson.split('\n')
if (nodesLines.length > 2) {
formattedTemplates += ` ${nodesLines[0]}\n`
for (let i = 1; i < nodesLines.length - 1; i++) {
formattedTemplates += ` ${nodesLines[i]}\n`
}
formattedTemplates += ` ${nodesLines[nodesLines.length - 1]}\n`
} else {
formattedTemplates += ` ${nodesJson}\n`
}
formattedTemplates += `]\n`
formattedTemplates += `"edges": [\n`
// Format edges with proper indentation
const edgesJson = JSON.stringify(template.edges, null, 3)
// Split by newlines and add tab to the beginning of each line except the first and last
const edgesLines = edgesJson.split('\n')
if (edgesLines.length > 2) {
formattedTemplates += `\t${edgesLines[0]}\n`
for (let i = 1; i < edgesLines.length - 1; i++) {
formattedTemplates += `\t${edgesLines[i]}\n`
}
formattedTemplates += `\t${edgesLines[edgesLines.length - 1]}\n`
} else {
formattedTemplates += `\t${edgesJson}\n`
}
formattedTemplates += `]\n\n`
})
return formattedTemplates
}
const generateAgentflowv2 = async (question: string, selectedChatModel: Record<string, any>) => {
try {
const agentFlow2Nodes = await getAllAgentFlow2Nodes()
const toolNodes = await getAllToolNodes()
const marketplaceTemplates = await getAllAgentflowv2Marketplaces()
const prompt = sysPrompt
.replace('{agentFlow2Nodes}', agentFlow2Nodes)
.replace('{marketplaceTemplates}', marketplaceTemplates)
.replace('{userRequest}', question)
const options: Record<string, any> = {
appDataSource: getRunningExpressApp().AppDataSource,
databaseEntities: databaseEntities,
logger: logger
}
let response
if (process.env.MODE === MODE.QUEUE) {
const predictionQueue = getRunningExpressApp().queueManager.getQueue('prediction')
const job = await predictionQueue.addJob({
prompt,
question,
toolNodes,
selectedChatModel,
isAgentFlowGenerator: true
})
logger.debug(`[server]: Generated Agentflowv2 Job added to queue: ${job.id}`)
const queueEvents = predictionQueue.getQueueEvents()
response = await job.waitUntilFinished(queueEvents)
} else {
response = await generateAgentflowv2_json(
{ prompt, componentNodes: getRunningExpressApp().nodesPool.componentNodes, toolNodes, selectedChatModel },
question,
options
)
}
try {
// Try to parse and validate the response if it's a string
if (typeof response === 'string') {
const parsedResponse = JSON.parse(response)
const validatedResponse = AgentFlowV2Type.parse(parsedResponse)
return validatedResponse
}
// If response is already an object
else if (typeof response === 'object') {
const validatedResponse = AgentFlowV2Type.parse(response)
return validatedResponse
}
// Unexpected response type
else {
throw new Error(`Unexpected response type: ${typeof response}`)
}
} catch (parseError) {
console.error('Failed to parse or validate response:', parseError)
// If parsing fails, return an error object
return {
error: 'Failed to validate response format',
rawResponse: response
} as any // Type assertion to avoid type errors
}
} catch (error) {
throw new InternalFlowiseError(StatusCodes.INTERNAL_SERVER_ERROR, `Error: generateAgentflowv2 - ${getErrorMessage(error)}`)
}
}
export default {
generateAgentflowv2
}

View File

@@ -0,0 +1,66 @@
export const sysPromptBackup = `You are a workflow orchestrator that is designed to make agent coordination and execution easy. Workflow consists of nodes and edges. Your goal is to generate nodes and edges needed for the workflow to achieve the given task.
Here are the nodes to choose from:
{agentFlow2Nodes}
Here's some examples of workflows, take a look at which nodes are most relevant to the task and how the nodes and edges are connected:
{marketplaceTemplates}
Now, let's generate the nodes and edges for the user's request.
The response should be in JSON format with "nodes" and "edges" arrays, following the structure shown in the examples.
Think carefully, break down the task into smaller steps and think about which nodes are needed for each step.
1. First, take a look at the examples and use them as references to decide which nodes are needed to achieve the task. The workflow must always start with a startAgentflow node and have at least 2 nodes in total. You MUST only use nodes that are in the list of nodes above. Each node must have a unique incrementing id.
2. Then, think about the edges between the nodes.
3. An agentAgentflow is an AI agent that can use tools to accomplish goals: executing decisions, automating tasks, and interacting with the real world autonomously, such as searching the web, interacting with databases and APIs, sending messages, booking appointments, etc. Always give this node the highest priority and check whether the task can be accomplished with it. Use this node if you are asked to create an agent that can perform multiple tasks autonomously.
4. An llmAgentflow excels at processing, understanding, and generating human-like language. It can be used for generating text, summarizing, translating, returning JSON outputs, etc.
5. If you need to execute one tool sequentially after another, you can use the toolAgentflow node.
6. If you need to iterate over a set of data, you can use the iteration node. You must have at least 1 node inside the iteration node. The child nodes will be executed N times, where N is the number of items in the iterationInput array. Each child node must have the property "parentNode" whose value is the id of the iteration node.
7. If you can't find a node that fits the task, you can use the httpAgentflow node to execute an HTTP request, for example to retrieve data from 3rd-party APIs or to send data to a webhook.
8. If you need to choose dynamically between user intentions, for example classifying the user's intent, you can use the conditionAgentAgentflow node. For defined conditions, you can use the conditionAgentflow node.
`
export const sysPrompt = `You are an advanced workflow orchestrator designed to generate nodes and edges for complex tasks. Your goal is to create a workflow that accomplishes the given user request efficiently and effectively.
Your task is to generate a workflow for the following user request:
<user_request>
{userRequest}
</user_request>
First, review the available nodes for this system:
<available_nodes>
{agentFlow2Nodes}
</available_nodes>
Now, examine these workflow examples to understand how nodes are typically connected and which are most relevant for different tasks:
<workflow_examples>
{marketplaceTemplates}
</workflow_examples>
To create this workflow, follow these steps and wrap your thought process in <workflow_planning> tags inside your thinking block:
1. List out all the key components of the user request.
2. Analyze the user request and break it down into smaller steps.
3. For each step, consider which nodes are most appropriate and match each component with potential nodes. Remember:
- Always start with a startAgentflow node.
- Include at least 2 nodes in total.
- Only use nodes from the available nodes list.
- Assign each node a unique, incrementing ID.
4. Outline the overall structure of the workflow.
5. Determine the logical connections (edges) between the nodes.
6. Consider special cases:
- Use agentAgentflow for multiple autonomous tasks.
- Use llmAgentflow for language processing tasks.
- Use toolAgentflow for sequential tool execution.
- Use iteration node when you need to iterate over a set of data (must include at least one child node with a "parentNode" property).
- Use httpAgentflow for API requests or webhooks.
- Use conditionAgentAgentflow for dynamic choices or conditionAgentflow for defined conditions.
- Use humanInputAgentflow for human input and review.
- Use loopAgentflow for repetitive tasks, or when back and forth communication is needed such as hierarchical workflows.
After your analysis, provide the final workflow as a JSON object with "nodes" and "edges" arrays.
Begin your analysis and workflow creation process now. Your final output should consist only of the JSON object with the workflow and should not duplicate or rehash any of the work you did in the workflow planning section.`

View File

@@ -433,9 +433,10 @@ const getDocumentStores = async (): Promise<any> => {
const getTools = async (): Promise<any> => {
try {
const tools = await nodesService.getAllNodesForCategory('Tools')
const mcpTools = await nodesService.getAllNodesForCategory('Tools (MCP)')
// filter out tools whose input param types are not in the supported list
const filteredTools = [...tools, ...mcpTools].filter((tool) => {
const inputs = tool.inputs || []
return inputs.every((input) => INPUT_PARAMS_TYPE.includes(input.type))
})

View File

@@ -118,6 +118,7 @@ const removeAllChatMessages = async (
logger.error(`[server]: Error deleting file storage for chatflow ${chatflowid}, chatId ${chatId}: ${e}`)
}
}
const dbResponse = await appServer.AppDataSource.getRepository(ChatMessage).delete(deleteOptions)
return dbResponse
} catch (error) {
@@ -136,6 +137,10 @@ const removeChatMessagesByMessageIds = async (
try {
const appServer = getRunningExpressApp()
// Get messages before deletion to check for executionId
const messages = await appServer.AppDataSource.getRepository(ChatMessage).findByIds(messageIds)
const executionIds = messages.map((msg) => msg.executionId).filter(Boolean)
for (const [composite_key] of chatIdMap) {
const [chatId] = composite_key.split('_')
@@ -147,6 +152,11 @@ const removeChatMessagesByMessageIds = async (
await removeFilesFromStorage(chatflowid, chatId)
}
// Delete executions if they exist
if (executionIds.length > 0) {
await appServer.AppDataSource.getRepository('Execution').delete(executionIds)
}
const dbResponse = await appServer.AppDataSource.getRepository(ChatMessage).delete(messageIds)
return dbResponse
} catch (error) {

View File

@@ -38,6 +38,10 @@ const checkIfChatflowIsValidForStreaming = async (chatflowId: string): Promise<a
}
}
if (chatflow.type === 'AGENTFLOW') {
return { isStreaming: true }
}
/*** Get Ending Node with Directed Graph ***/
const flowData = chatflow.flowData
const parsedFlowData: IReactFlowObject = JSON.parse(flowData)
@@ -121,6 +125,8 @@ const getAllChatflows = async (type?: ChatflowType): Promise<ChatFlow[]> => {
const dbResponse = await appServer.AppDataSource.getRepository(ChatFlow).find()
if (type === 'MULTIAGENT') {
return dbResponse.filter((chatflow) => chatflow.type === 'MULTIAGENT')
} else if (type === 'AGENTFLOW') {
return dbResponse.filter((chatflow) => chatflow.type === 'AGENTFLOW')
} else if (type === 'ASSISTANT') {
return dbResponse.filter((chatflow) => chatflow.type === 'ASSISTANT')
} else if (type === 'CHATFLOW') {
@@ -336,7 +342,7 @@ const getSinglePublicChatbotConfig = async (chatflowId: string): Promise<any> =>
if (dbResponse.chatbotConfig || uploadsConfig) {
try {
const parsedConfig = dbResponse.chatbotConfig ? JSON.parse(dbResponse.chatbotConfig) : {}
return { ...parsedConfig, uploads: uploadsConfig, flowData: dbResponse.flowData }
} catch (e) {
throw new InternalFlowiseError(StatusCodes.INTERNAL_SERVER_ERROR, `Error parsing Chatbot Config for Chatflow ${chatflowId}`)
}

View File

@@ -0,0 +1,156 @@
import { StatusCodes } from 'http-status-codes'
import { InternalFlowiseError } from '../../errors/internalFlowiseError'
import { getErrorMessage } from '../../errors/utils'
import { getRunningExpressApp } from '../../utils/getRunningExpressApp'
import { Execution } from '../../database/entities/Execution'
import { ExecutionState, IAgentflowExecutedData } from '../../Interface'
import { In } from 'typeorm'
import { ChatMessage } from '../../database/entities/ChatMessage'
import { _removeCredentialId } from '../../utils/buildAgentflow'
interface ExecutionFilters {
id?: string
agentflowId?: string
sessionId?: string
state?: ExecutionState
startDate?: Date
endDate?: Date
page?: number
limit?: number
}
const getExecutionById = async (executionId: string): Promise<Execution | null> => {
try {
const appServer = getRunningExpressApp()
const executionRepository = appServer.AppDataSource.getRepository(Execution)
const res = await executionRepository.findOne({ where: { id: executionId } })
if (!res) {
throw new InternalFlowiseError(StatusCodes.NOT_FOUND, `Execution ${executionId} not found`)
}
return res
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
`Error: executionsService.getExecutionById - ${getErrorMessage(error)}`
)
}
}
const getPublicExecutionById = async (executionId: string): Promise<Execution | null> => {
try {
const appServer = getRunningExpressApp()
const executionRepository = appServer.AppDataSource.getRepository(Execution)
const res = await executionRepository.findOne({ where: { id: executionId, isPublic: true } })
if (!res) {
throw new InternalFlowiseError(StatusCodes.NOT_FOUND, `Execution ${executionId} not found`)
}
const executionData = typeof res?.executionData === 'string' ? JSON.parse(res?.executionData) : res?.executionData
const executionDataWithoutCredentialId = executionData.map((data: IAgentflowExecutedData) => _removeCredentialId(data))
const stringifiedExecutionData = JSON.stringify(executionDataWithoutCredentialId)
return { ...res, executionData: stringifiedExecutionData }
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
`Error: executionsService.getPublicExecutionById - ${getErrorMessage(error)}`
)
}
}
const getAllExecutions = async (filters: ExecutionFilters = {}): Promise<{ data: Execution[]; total: number }> => {
try {
const appServer = getRunningExpressApp()
const { id, agentflowId, sessionId, state, startDate, endDate, page = 1, limit = 10 } = filters
// Handle UUID fields properly using raw parameters to avoid type conversion issues
// This uses the query builder instead of direct objects for compatibility with UUID fields
const queryBuilder = appServer.AppDataSource.getRepository(Execution)
.createQueryBuilder('execution')
.leftJoinAndSelect('execution.agentflow', 'agentflow')
.orderBy('execution.createdDate', 'DESC')
.skip((page - 1) * limit)
.take(limit)
if (id) queryBuilder.andWhere('execution.id = :id', { id })
if (agentflowId) queryBuilder.andWhere('execution.agentflowId = :agentflowId', { agentflowId })
if (sessionId) queryBuilder.andWhere('execution.sessionId = :sessionId', { sessionId })
if (state) queryBuilder.andWhere('execution.state = :state', { state })
// Date range conditions
if (startDate && endDate) {
queryBuilder.andWhere('execution.createdDate BETWEEN :startDate AND :endDate', { startDate, endDate })
} else if (startDate) {
queryBuilder.andWhere('execution.createdDate >= :startDate', { startDate })
} else if (endDate) {
queryBuilder.andWhere('execution.createdDate <= :endDate', { endDate })
}
const [data, total] = await queryBuilder.getManyAndCount()
return { data, total }
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
`Error: executionsService.getAllExecutions - ${getErrorMessage(error)}`
)
}
}
const updateExecution = async (executionId: string, data: Partial<Execution>): Promise<Execution | null> => {
try {
const appServer = getRunningExpressApp()
const execution = await appServer.AppDataSource.getRepository(Execution).findOneBy({
id: executionId
})
if (!execution) {
throw new InternalFlowiseError(StatusCodes.NOT_FOUND, `Execution ${executionId} not found`)
}
const updateExecution = new Execution()
Object.assign(updateExecution, data)
await appServer.AppDataSource.getRepository(Execution).merge(execution, updateExecution)
const dbResponse = await appServer.AppDataSource.getRepository(Execution).save(execution)
return dbResponse
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
`Error: executionsService.updateExecution - ${getErrorMessage(error)}`
)
}
}
/**
* Delete multiple executions by their IDs
* @param executionIds Array of execution IDs to delete
* @returns Object with success status and count of deleted executions
*/
const deleteExecutions = async (executionIds: string[]): Promise<{ success: boolean; deletedCount: number }> => {
try {
const appServer = getRunningExpressApp()
const executionRepository = appServer.AppDataSource.getRepository(Execution)
// Delete executions where id is in the provided array
const result = await executionRepository.delete({
id: In(executionIds)
})
// Update chat message executionId column to NULL
await appServer.AppDataSource.getRepository(ChatMessage).update({ executionId: In(executionIds) }, { executionId: null as any })
return {
success: true,
deletedCount: result.affected || 0
}
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
`Error: executionsService.deleteExecutions - ${getErrorMessage(error)}`
)
}
}
export default {
getExecutionById,
getAllExecutions,
deleteExecutions,
getPublicExecutionById,
updateExecution
}
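The export-import service below calls getAllExecutions without filters; with filters, a caller would do something like:
const { data, total } = await executionsService.getAllExecutions({
    agentflowId: '7f0e8d9c-1111-4222-8333-444455557777', // illustrative uuid
    state: 'FINISHED',
    page: 2,
    limit: 10
})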

View File

@@ -8,6 +8,7 @@ import { ChatMessageFeedback } from '../../database/entities/ChatMessageFeedback
import { CustomTemplate } from '../../database/entities/CustomTemplate'
import { DocumentStore } from '../../database/entities/DocumentStore'
import { DocumentStoreFileChunk } from '../../database/entities/DocumentStoreFileChunk'
import { Execution } from '../../database/entities/Execution'
import { Tool } from '../../database/entities/Tool'
import { Variable } from '../../database/entities/Variable'
import { InternalFlowiseError } from '../../errors/internalFlowiseError'
@@ -17,12 +18,14 @@ import assistantService from '../assistants'
import chatMessagesService from '../chat-messages'
import chatflowService from '../chatflows'
import documenStoreService from '../documentstore'
import executionService from '../executions'
import marketplacesService from '../marketplaces'
import toolsService from '../tools'
import variableService from '../variables'
type ExportInput = {
agentflow: boolean
agentflowv2: boolean
assistantCustom: boolean
assistantOpenAI: boolean
assistantAzure: boolean
@@ -31,12 +34,14 @@ type ExportInput = {
chat_feedback: boolean
custom_template: boolean
document_store: boolean
execution: boolean
tool: boolean
variable: boolean
}
type ExportData = {
AgentFlow: ChatFlow[]
AgentFlowV2: ChatFlow[]
AssistantCustom: Assistant[]
AssistantFlow: ChatFlow[]
AssistantOpenAI: Assistant[]
@@ -47,6 +52,7 @@ type ExportData = {
CustomTemplate: CustomTemplate[]
DocumentStore: DocumentStore[]
DocumentStoreFileChunk: DocumentStoreFileChunk[]
Execution: Execution[]
Tool: Tool[]
Variable: Variable[]
}
@@ -55,6 +61,7 @@ const convertExportInput = (body: any): ExportInput => {
try {
if (!body || typeof body !== 'object') throw new Error('Invalid ExportInput object in request body')
if (body.agentflow && typeof body.agentflow !== 'boolean') throw new Error('Invalid agentflow property in ExportInput object')
if (body.agentflowv2 && typeof body.agentflowv2 !== 'boolean') throw new Error('Invalid agentflowv2 property in ExportInput object')
if (body.assistant && typeof body.assistant !== 'boolean') throw new Error('Invalid assistant property in ExportInput object')
if (body.chatflow && typeof body.chatflow !== 'boolean') throw new Error('Invalid chatflow property in ExportInput object')
if (body.chat_message && typeof body.chat_message !== 'boolean')
@@ -65,6 +72,7 @@ const convertExportInput = (body: any): ExportInput => {
throw new Error('Invalid custom_template property in ExportInput object')
if (body.document_store && typeof body.document_store !== 'boolean')
throw new Error('Invalid document_store property in ExportInput object')
if (body.execution && typeof body.execution !== 'boolean') throw new Error('Invalid execution property in ExportInput object')
if (body.tool && typeof body.tool !== 'boolean') throw new Error('Invalid tool property in ExportInput object')
if (body.variable && typeof body.variable !== 'boolean') throw new Error('Invalid variable property in ExportInput object')
return body as ExportInput
@@ -80,6 +88,7 @@ const FileDefaultName = 'ExportData.json'
const exportData = async (exportInput: ExportInput): Promise<{ FileDefaultName: string } & ExportData> => {
try {
let AgentFlow: ChatFlow[] = exportInput.agentflow === true ? await chatflowService.getAllChatflows('MULTIAGENT') : []
let AgentFlowV2: ChatFlow[] = exportInput.agentflowv2 === true ? await chatflowService.getAllChatflows('AGENTFLOW') : []
let AssistantCustom: Assistant[] = exportInput.assistantCustom === true ? await assistantService.getAllAssistants('CUSTOM') : []
let AssistantFlow: ChatFlow[] = exportInput.assistantCustom === true ? await chatflowService.getAllChatflows('ASSISTANT') : []
@@ -103,6 +112,9 @@ const exportData = async (exportInput: ExportInput): Promise<{ FileDefaultName:
let DocumentStoreFileChunk: DocumentStoreFileChunk[] =
exportInput.document_store === true ? await documenStoreService.getAllDocumentFileChunks() : []
const { data: totalExecutions } = exportInput.execution === true ? await executionService.getAllExecutions() : { data: [] }
let Execution: Execution[] = exportInput.execution === true ? totalExecutions : []
let Tool: Tool[] = exportInput.tool === true ? await toolsService.getAllTools() : []
let Variable: Variable[] = exportInput.variable === true ? await variableService.getAllVariables() : []
@@ -110,6 +122,7 @@ const exportData = async (exportInput: ExportInput): Promise<{ FileDefaultName:
return {
FileDefaultName,
AgentFlow,
AgentFlowV2,
AssistantCustom,
AssistantFlow,
AssistantOpenAI,
@@ -120,6 +133,7 @@ const exportData = async (exportInput: ExportInput): Promise<{ FileDefaultName:
CustomTemplate,
DocumentStore,
DocumentStoreFileChunk,
Execution,
Tool,
Variable
}
@@ -180,8 +194,9 @@ async function replaceDuplicateIdsForChatMessage(queryRunner: QueryRunner, origi
})
const originalDataChatflowIds = [
...originalData.AssistantFlow.map((assistantFlow) => assistantFlow.id),
...originalData.AgentFlow.map((agentFlow) => agentFlow.id),
...originalData.AgentFlowV2.map((agentFlowV2) => agentFlowV2.id),
...originalData.ChatFlow.map((chatFlow) => chatFlow.id)
]
chatmessageChatflowIds.forEach((item) => {
if (originalDataChatflowIds.includes(item.id)) {
@@ -224,6 +239,54 @@ async function replaceDuplicateIdsForChatMessage(queryRunner: QueryRunner, origi
}
}
async function replaceExecutionIdForChatMessage(queryRunner: QueryRunner, originalData: ExportData, chatMessages: ChatMessage[]) {
try {
// step 1 - get all execution ids from chatMessages
const chatMessageExecutionIds = chatMessages
.map((chatMessage) => {
return { id: chatMessage.executionId, qty: 0 }
})
.filter((item): item is { id: string; qty: number } => item.id !== undefined)
// step 2 - increase qty if execution id is in importData.Execution
const originalDataExecutionIds = originalData.Execution.map((execution) => execution.id)
chatMessageExecutionIds.forEach((item) => {
if (originalDataExecutionIds.includes(item.id)) {
item.qty += 1
}
})
// step 3 - increase qty if execution id is in database
const databaseExecutionIds = (
await queryRunner.manager.find(Execution, {
where: { id: In(chatMessageExecutionIds.map((chatMessageExecutionId) => chatMessageExecutionId.id)) }
})
).map((execution) => execution.id)
chatMessageExecutionIds.forEach((item) => {
if (databaseExecutionIds.includes(item.id)) {
item.qty += 1
}
})
// step 4 - if an executionId was found in neither place (qty still 0), remove it so the message is stored with a NULL execution reference
const missingExecutionIds = chatMessageExecutionIds.filter((item) => item.qty === 0).map((item) => item.id)
chatMessages.forEach((chatMessage) => {
if (chatMessage.executionId && missingExecutionIds.includes(chatMessage.executionId)) {
delete chatMessage.executionId
}
})
originalData.ChatMessage = chatMessages
return originalData
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
`Error: exportImportService.replaceExecutionIdForChatMessage - ${getErrorMessage(error)}`
)
}
}
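// A minimal usage sketch of the function above (hypothetical IDs, not part of
// this commit): a message whose executionId appears in neither the import
// payload nor the database loses the reference, while a matching one is kept.
// const data = { ...exportData, Execution: [{ id: 'exec-1' } as Execution] }
// const messages = [
//     { executionId: 'exec-1' } as ChatMessage, // qty becomes 1, id is kept
//     { executionId: 'exec-9' } as ChatMessage // qty stays 0, id is removed
// ]
// await replaceExecutionIdForChatMessage(queryRunner, data, messages)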
async function replaceDuplicateIdsForChatMessageFeedback(
queryRunner: QueryRunner,
originalData: ExportData,
@@ -235,8 +298,9 @@ async function replaceDuplicateIdsForChatMessageFeedback(
})
const originalDataChatflowIds = [
...originalData.AssistantFlow.map((assistantFlow) => assistantFlow.id),
...originalData.AgentFlow.map((agentFlow) => agentFlow.id),
...originalData.AgentFlowV2.map((agentFlowV2) => agentFlowV2.id),
...originalData.ChatFlow.map((chatFlow) => chatFlow.id)
]
feedbackChatflowIds.forEach((item) => {
if (originalDataChatflowIds.includes(item.id)) {
@@ -412,6 +476,27 @@ async function replaceDuplicateIdsForVariable(queryRunner: QueryRunner, original
}
}
async function replaceDuplicateIdsForExecution(queryRunner: QueryRunner, originalData: ExportData, executions: Execution[]) {
try {
const ids = executions.map((execution) => execution.id)
const records = await queryRunner.manager.find(Execution, {
where: { id: In(ids) }
})
if (records.length === 0) return originalData
for (let record of records) {
const oldId = record.id
const newId = uuidv4()
originalData = JSON.parse(JSON.stringify(originalData).replaceAll(oldId, newId))
}
return originalData
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
`Error: exportImportService.replaceDuplicateIdsForExecution - ${getErrorMessage(error)}`
)
}
}
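// Note on the pattern above: the JSON.stringify/replaceAll round trip rewrites
// every occurrence of a colliding execution id across the whole export payload,
// so references such as chatMessage.executionId stay aligned with the freshly
// generated uuid instead of pointing at the old, conflicting record.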
function reduceSpaceForChatflowFlowData(chatflows: ChatFlow[]) {
return chatflows.map((chatflow) => {
return { ...chatflow, flowData: JSON.stringify(JSON.parse(chatflow.flowData)) }
@@ -429,6 +514,10 @@ const importData = async (importData: ExportData) => {
importData.AgentFlow = reduceSpaceForChatflowFlowData(importData.AgentFlow)
importData = await replaceDuplicateIdsForChatFlow(queryRunner, importData, importData.AgentFlow)
}
if (importData.AgentFlowV2.length > 0) {
importData.AgentFlowV2 = reduceSpaceForChatflowFlowData(importData.AgentFlowV2)
importData = await replaceDuplicateIdsForChatFlow(queryRunner, importData, importData.AgentFlowV2)
}
if (importData.AssistantCustom.length > 0)
importData = await replaceDuplicateIdsForAssistant(queryRunner, importData, importData.AssistantCustom)
if (importData.AssistantFlow.length > 0) {
@@ -443,8 +532,10 @@ const importData = async (importData: ExportData) => {
importData.ChatFlow = reduceSpaceForChatflowFlowData(importData.ChatFlow)
importData = await replaceDuplicateIdsForChatFlow(queryRunner, importData, importData.ChatFlow)
}
if (importData.ChatMessage.length > 0) {
importData = await replaceDuplicateIdsForChatMessage(queryRunner, importData, importData.ChatMessage)
importData = await replaceExecutionIdForChatMessage(queryRunner, importData, importData.ChatMessage)
}
if (importData.ChatMessageFeedback.length > 0)
importData = await replaceDuplicateIdsForChatMessageFeedback(queryRunner, importData, importData.ChatMessageFeedback)
if (importData.CustomTemplate.length > 0)
@@ -454,12 +545,15 @@ const importData = async (importData: ExportData) => {
if (importData.DocumentStoreFileChunk.length > 0)
importData = await replaceDuplicateIdsForDocumentStoreFileChunk(queryRunner, importData, importData.DocumentStoreFileChunk)
if (importData.Tool.length > 0) importData = await replaceDuplicateIdsForTool(queryRunner, importData, importData.Tool)
if (importData.Execution.length > 0)
importData = await replaceDuplicateIdsForExecution(queryRunner, importData, importData.Execution)
if (importData.Variable.length > 0)
importData = await replaceDuplicateIdsForVariable(queryRunner, importData, importData.Variable)
await queryRunner.startTransaction()
if (importData.AgentFlow.length > 0) await queryRunner.manager.save(ChatFlow, importData.AgentFlow)
if (importData.AgentFlowV2.length > 0) await queryRunner.manager.save(ChatFlow, importData.AgentFlowV2)
if (importData.AssistantFlow.length > 0) await queryRunner.manager.save(ChatFlow, importData.AssistantFlow)
if (importData.AssistantCustom.length > 0) await queryRunner.manager.save(Assistant, importData.AssistantCustom)
if (importData.AssistantOpenAI.length > 0) await queryRunner.manager.save(Assistant, importData.AssistantOpenAI)
@@ -473,6 +567,7 @@ const importData = async (importData: ExportData) => {
if (importData.DocumentStoreFileChunk.length > 0)
await queryRunner.manager.save(DocumentStoreFileChunk, importData.DocumentStoreFileChunk)
if (importData.Tool.length > 0) await queryRunner.manager.save(Tool, importData.Tool)
if (importData.Execution.length > 0) await queryRunner.manager.save(Execution, importData.Execution)
if (importData.Variable.length > 0) await queryRunner.manager.save(Variable, importData.Variable)
await queryRunner.commitTransaction()
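// All id replacement passes above run before the transaction opens; the saves
// are then committed as a single unit, so a failed import leaves the database
// untouched (assuming the surrounding error handling rolls the transaction
// back, as the rest of this service does).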

@@ -7,6 +7,7 @@ import { IReactFlowEdge, IReactFlowNode } from '../../Interface'
import { getRunningExpressApp } from '../../utils/getRunningExpressApp'
import { DeleteResult } from 'typeorm'
import { CustomTemplate } from '../../database/entities/CustomTemplate'
import { v4 as uuidv4 } from 'uuid'
import chatflowsService from '../chatflows'
@@ -29,13 +30,13 @@ const getAllTemplates = async () => {
let marketplaceDir = path.join(__dirname, '..', '..', '..', 'marketplaces', 'chatflows')
let jsonsInDir = fs.readdirSync(marketplaceDir).filter((file) => path.extname(file) === '.json')
let templates: any[] = []
jsonsInDir.forEach((file) => {
const filePath = path.join(__dirname, '..', '..', '..', 'marketplaces', 'chatflows', file)
const fileData = fs.readFileSync(filePath)
const fileDataObj = JSON.parse(fileData.toString()) as ITemplate
const template = {
id: uuidv4(),
templateName: file.split('.json')[0],
flowData: fileData.toString(),
badge: fileDataObj?.badge,
@@ -50,13 +51,13 @@ const getAllTemplates = async () => {
marketplaceDir = path.join(__dirname, '..', '..', '..', 'marketplaces', 'tools')
jsonsInDir = fs.readdirSync(marketplaceDir).filter((file) => path.extname(file) === '.json')
jsonsInDir.forEach((file) => {
const filePath = path.join(__dirname, '..', '..', '..', 'marketplaces', 'tools', file)
const fileData = fs.readFileSync(filePath)
const fileDataObj = JSON.parse(fileData.toString())
const template = {
...fileDataObj,
id: uuidv4(),
type: 'Tool',
framework: fileDataObj?.framework,
badge: fileDataObj?.badge,
@@ -69,12 +70,12 @@ const getAllTemplates = async () => {
marketplaceDir = path.join(__dirname, '..', '..', '..', 'marketplaces', 'agentflows')
jsonsInDir = fs.readdirSync(marketplaceDir).filter((file) => path.extname(file) === '.json')
jsonsInDir.forEach((file) => {
const filePath = path.join(__dirname, '..', '..', '..', 'marketplaces', 'agentflows', file)
const fileData = fs.readFileSync(filePath)
const fileDataObj = JSON.parse(fileData.toString())
const template = {
id: uuidv4(),
templateName: file.split('.json')[0],
flowData: fileData.toString(),
badge: fileDataObj?.badge,
@@ -86,6 +87,26 @@ const getAllTemplates = async () => {
}
templates.push(template)
})
marketplaceDir = path.join(__dirname, '..', '..', '..', 'marketplaces', 'agentflowsv2')
jsonsInDir = fs.readdirSync(marketplaceDir).filter((file) => path.extname(file) === '.json')
jsonsInDir.forEach((file) => {
const filePath = path.join(__dirname, '..', '..', '..', 'marketplaces', 'agentflowsv2', file)
const fileData = fs.readFileSync(filePath)
const fileDataObj = JSON.parse(fileData.toString())
const template = {
id: uuidv4(),
templateName: file.split('.json')[0],
flowData: fileData.toString(),
badge: fileDataObj?.badge,
framework: fileDataObj?.framework,
usecases: fileDataObj?.usecases,
categories: getCategories(fileDataObj),
type: 'AgentflowV2',
description: fileDataObj?.description || ''
}
templates.push(template)
})
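// The agentflowsv2 block mirrors the chatflows/tools/agentflows loops above. A
// hypothetical helper (a sketch, not part of this commit) could fold the four
// near-identical passes into one:
// const loadTemplates = (sub: string, type: string) => {
//     const dir = path.join(__dirname, '..', '..', '..', 'marketplaces', sub)
//     for (const file of fs.readdirSync(dir).filter((f) => path.extname(f) === '.json')) {
//         const fileData = fs.readFileSync(path.join(dir, file))
//         const fileDataObj = JSON.parse(fileData.toString())
//         templates.push({
//             id: uuidv4(),
//             templateName: file.split('.json')[0],
//             flowData: fileData.toString(),
//             badge: fileDataObj?.badge,
//             framework: fileDataObj?.framework,
//             usecases: fileDataObj?.usecases,
//             categories: getCategories(fileDataObj),
//             type,
//             description: fileDataObj?.description || ''
//         })
//     }
// }
// loadTemplates('agentflowsv2', 'AgentflowV2')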
const sortedTemplates = templates.sort((a, b) => a.templateName.localeCompare(b.templateName))
const FlowiseDocsQnAIndex = sortedTemplates.findIndex((tmp) => tmp.templateName === 'Flowise Docs QnA')
if (FlowiseDocsQnAIndex > 0) {
@@ -200,6 +221,9 @@ const _generateExportFlowData = (flowData: any) => {
version: node.data.version,
name: node.data.name,
type: node.data.type,
color: node.data.color,
hideOutput: node.data.hideOutput,
hideInput: node.data.hideInput,
baseClasses: node.data.baseClasses,
tags: node.data.tags,
category: node.data.category,

@@ -97,7 +97,10 @@ const getSingleNodeAsyncOptions = async (nodeName: string, requestBody: any): Pr
const dbResponse: INodeOptionsValue[] = await nodeInstance.loadMethods![methodName]!.call(nodeInstance, nodeData, {
appDataSource: appServer.AppDataSource,
databaseEntities: databaseEntities,
componentNodes: appServer.nodesPool.componentNodes,
previousNodes: requestBody.previousNodes,
currentNode: requestBody.currentNode
})
return dbResponse
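// The extra context passed to loadMethods above (componentNodes plus the
// previousNodes/currentNode taken from the request body) lets a node's async
// options resolve against the surrounding flow, for example offering upstream
// node outputs in a dropdown, rather than against the database alone; the
// actual consumers live in the component implementations, outside this diff.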

@@ -24,7 +24,7 @@ const getAssistantVectorStore = async (credentialId: string, vectorStoreId: stri
}
const openai = new OpenAI({ apiKey: openAIApiKey })
const dbResponse = await openai.vectorStores.retrieve(vectorStoreId)
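// Here and throughout the vector store calls below, openai.beta.vectorStores.*
// becomes openai.vectorStores.*: this PR bumps the openai SDK, whose newer
// releases promote the Vector Stores API out of the beta namespace with the
// method signatures otherwise unchanged.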
return dbResponse
} catch (error) {
throw new InternalFlowiseError(
@@ -51,7 +51,7 @@ const listAssistantVectorStore = async (credentialId: string) => {
}
const openai = new OpenAI({ apiKey: openAIApiKey })
const dbResponse = await openai.vectorStores.list()
return dbResponse.data
} catch (error) {
throw new InternalFlowiseError(
@@ -61,7 +61,7 @@ const listAssistantVectorStore = async (credentialId: string) => {
}
}
const createAssistantVectorStore = async (credentialId: string, obj: OpenAI.VectorStores.VectorStoreCreateParams) => {
try {
const appServer = getRunningExpressApp()
const credential = await appServer.AppDataSource.getRepository(Credential).findOneBy({
@@ -78,7 +78,7 @@ const createAssistantVectorStore = async (credentialId: string, obj: OpenAI.Beta
}
const openai = new OpenAI({ apiKey: openAIApiKey })
const dbResponse = await openai.vectorStores.create(obj)
return dbResponse
} catch (error) {
throw new InternalFlowiseError(
@@ -91,7 +91,7 @@ const createAssistantVectorStore = async (credentialId: string, obj: OpenAI.Beta
const updateAssistantVectorStore = async (
credentialId: string,
vectorStoreId: string,
obj: OpenAI.VectorStores.VectorStoreUpdateParams
) => {
try {
const appServer = getRunningExpressApp()
@@ -109,8 +109,8 @@ const updateAssistantVectorStore = async (
}
const openai = new OpenAI({ apiKey: openAIApiKey })
const dbResponse = await openai.vectorStores.update(vectorStoreId, obj)
const vectorStoreFiles = await openai.vectorStores.files.list(vectorStoreId)
if (vectorStoreFiles.data?.length) {
const files = []
for (const file of vectorStoreFiles.data) {
@@ -145,7 +145,7 @@ const deleteAssistantVectorStore = async (credentialId: string, vectorStoreId: s
}
const openai = new OpenAI({ apiKey: openAIApiKey })
const dbResponse = await openai.vectorStores.del(vectorStoreId)
return dbResponse
} catch (error) {
throw new InternalFlowiseError(
@@ -190,7 +190,7 @@ const uploadFilesToAssistantVectorStore = async (
const file_ids = [...uploadedFiles.map((file) => file.id)]
const res = await openai.vectorStores.fileBatches.createAndPoll(vectorStoreId, {
file_ids
})
if (res.status === 'completed' && res.file_counts.completed === uploadedFiles.length) return uploadedFiles
@@ -232,7 +232,7 @@ const deleteFilesFromAssistantVectorStore = async (credentialId: string, vectorS
const deletedFileIds = []
let count = 0
for (const file of file_ids) {
const res = await openai.vectorStores.files.del(vectorStoreId, file)
if (res.deleted) {
deletedFileIds.push(file)
count += 1

@@ -68,10 +68,10 @@ const getSingleOpenaiAssistant = async (credentialId: string, assistantId: strin
if (dbResponse.tool_resources?.file_search?.vector_store_ids?.length) {
// Since there can only be 1 vector store per assistant
const vectorStoreId = dbResponse.tool_resources.file_search.vector_store_ids[0]
const vectorStoreFiles = await openai.vectorStores.files.list(vectorStoreId)
const fileIds = vectorStoreFiles.data?.map((file) => file.id) ?? []
;(dbResponse.tool_resources.file_search as any).files = [...existingFiles.filter((file) => fileIds.includes(file.id))]
;(dbResponse.tool_resources.file_search as any).vector_store_object = await openai.vectorStores.retrieve(vectorStoreId)
}
return dbResponse
} catch (error) {

@@ -0,0 +1,326 @@
import { StatusCodes } from 'http-status-codes'
import { InternalFlowiseError } from '../../errors/internalFlowiseError'
import { getErrorMessage } from '../../errors/utils'
import { getRunningExpressApp } from '../../utils/getRunningExpressApp'
import { ChatFlow } from '../../database/entities/ChatFlow'
import { INodeParams } from 'flowise-components'
import { IReactFlowEdge, IReactFlowNode } from '../../Interface'
interface IValidationResult {
id: string
label: string
name: string
issues: string[]
}
const checkFlowValidation = async (flowId: string): Promise<IValidationResult[]> => {
try {
const appServer = getRunningExpressApp()
const componentNodes = appServer.nodesPool.componentNodes
const flow = await appServer.AppDataSource.getRepository(ChatFlow).findOne({
where: {
id: flowId
}
})
if (!flow) {
throw new InternalFlowiseError(StatusCodes.NOT_FOUND, `Error: validationService.checkFlowValidation - flow not found!`)
}
const flowData = JSON.parse(flow.flowData)
const nodes = flowData.nodes
const edges = flowData.edges
// Store validation results
const validationResults = []
// Create a map of connected nodes
const connectedNodes = new Set<string>()
edges.forEach((edge: IReactFlowEdge) => {
connectedNodes.add(edge.source)
connectedNodes.add(edge.target)
})
// Validate each node
for (const node of nodes) {
if (node.data.name === 'stickyNoteAgentflow') continue
const nodeIssues = []
// Check if node is connected
if (!connectedNodes.has(node.id)) {
nodeIssues.push('This node is not connected to anything')
}
// Validate input parameters
if (node.data && node.data.inputParams && node.data.inputs) {
for (const param of node.data.inputParams) {
// Skip validation if the parameter has show condition that doesn't match
if (param.show) {
let shouldShow = true
for (const [key, value] of Object.entries(param.show)) {
if (node.data.inputs[key] !== value) {
shouldShow = false
break
}
}
if (!shouldShow) continue
}
// Skip validation if the parameter has hide condition that matches
if (param.hide) {
let shouldHide = true
for (const [key, value] of Object.entries(param.hide)) {
if (node.data.inputs[key] !== value) {
shouldHide = false
break
}
}
if (shouldHide) continue
}
// Check if required parameter has a value
if (!param.optional) {
const inputValue = node.data.inputs[param.name]
if (inputValue === undefined || inputValue === null || inputValue === '') {
nodeIssues.push(`${param.label} is required`)
}
}
// Check array type parameters (even if the array itself is optional)
if (param.type === 'array' && Array.isArray(node.data.inputs[param.name])) {
const inputValue = node.data.inputs[param.name]
// Only validate non-empty arrays (if array is required but empty, it's caught above)
if (inputValue.length > 0) {
// Check each item in the array
inputValue.forEach((item: Record<string, any>, index: number) => {
if (param.array) {
param.array.forEach((arrayParam: INodeParams) => {
// Evaluate if this parameter should be shown based on current values
// First check show conditions
let shouldValidate = true
if (arrayParam.show) {
// Default to not showing unless conditions match
shouldValidate = false
// Each key in show is a condition that must be satisfied
for (const [conditionKey, expectedValue] of Object.entries(arrayParam.show)) {
const isIndexCondition = conditionKey.includes('$index')
let actualValue
if (isIndexCondition) {
// Replace $index with actual index and evaluate
const normalizedKey = conditionKey.replace(/conditions\[\$index\]\.(\w+)/, '$1')
actualValue = item[normalizedKey]
} else {
// Direct property in the current item
actualValue = item[conditionKey]
}
// Check if condition is satisfied
let conditionMet = false
if (Array.isArray(expectedValue)) {
conditionMet = expectedValue.includes(actualValue)
} else {
conditionMet = actualValue === expectedValue
}
if (conditionMet) {
shouldValidate = true
break // One matching condition is enough
}
}
}
// Then check hide conditions (they override show conditions)
if (shouldValidate && arrayParam.hide) {
for (const [conditionKey, expectedValue] of Object.entries(arrayParam.hide)) {
const isIndexCondition = conditionKey.includes('$index')
let actualValue
if (isIndexCondition) {
// Replace $index with actual index and evaluate
const normalizedKey = conditionKey.replace(/conditions\[\$index\]\.(\w+)/, '$1')
actualValue = item[normalizedKey]
} else {
// Direct property in the current item
actualValue = item[conditionKey]
}
// Check if hide condition is met
let shouldHide = false
if (Array.isArray(expectedValue)) {
shouldHide = expectedValue.includes(actualValue)
} else {
shouldHide = actualValue === expectedValue
}
if (shouldHide) {
shouldValidate = false
break // One matching hide condition is enough to hide
}
}
}
// Only validate if field should be shown
if (shouldValidate) {
// Check if value is required and missing
if (
(arrayParam.optional === undefined || !arrayParam.optional) &&
(item[arrayParam.name] === undefined ||
item[arrayParam.name] === null ||
item[arrayParam.name] === '' ||
item[arrayParam.name] === '<p></p>')
) {
nodeIssues.push(`${param.label} item #${index + 1}: ${arrayParam.label} is required`)
}
}
})
}
})
}
}
// Check for credential requirements
if (param.name === 'credential' && !param.optional) {
const credentialValue = node.data.inputs[param.name]
if (!credentialValue) {
nodeIssues.push(`Credential is required`)
}
}
// Check for nested config parameters
const configKey = `${param.name}Config`
if (node.data.inputs[configKey] && node.data.inputs[param.name]) {
const componentName = node.data.inputs[param.name]
const configValue = node.data.inputs[configKey]
// Check if the component exists in the componentNodes pool
if (componentNodes[componentName] && componentNodes[componentName].inputs) {
const componentInputParams = componentNodes[componentName].inputs
// Validate each required input parameter in the component
for (const componentParam of componentInputParams) {
// Skip validation if the parameter has show condition that doesn't match
if (componentParam.show) {
let shouldShow = true
for (const [key, value] of Object.entries(componentParam.show)) {
if (configValue[key] !== value) {
shouldShow = false
break
}
}
if (!shouldShow) continue
}
// Skip validation if the parameter has hide condition that matches
if (componentParam.hide) {
let shouldHide = true
for (const [key, value] of Object.entries(componentParam.hide)) {
if (configValue[key] !== value) {
shouldHide = false
break
}
}
if (shouldHide) continue
}
if (!componentParam.optional) {
const nestedValue = configValue[componentParam.name]
if (nestedValue === undefined || nestedValue === null || nestedValue === '') {
nodeIssues.push(`${param.label} configuration: ${componentParam.label} is required`)
}
}
}
// Check for credential requirement in the component
if (componentNodes[componentName].credential && !componentNodes[componentName].credential.optional) {
if (!configValue.FLOWISE_CREDENTIAL_ID && !configValue.credential) {
nodeIssues.push(`${param.label} requires a credential`)
}
}
}
}
}
}
// Add node to validation results if it has issues
if (nodeIssues.length > 0) {
validationResults.push({
id: node.id,
label: node.data.label,
name: node.data.name,
issues: nodeIssues
})
}
}
// Check for hanging edges
for (const edge of edges) {
const sourceExists = nodes.some((node: IReactFlowNode) => node.id === edge.source)
const targetExists = nodes.some((node: IReactFlowNode) => node.id === edge.target)
if (!sourceExists || !targetExists) {
// Find the existing node that is connected to this hanging edge
if (!sourceExists && targetExists) {
// Target exists but source doesn't - add issue to target node
const targetNode = nodes.find((node: IReactFlowNode) => node.id === edge.target)
const targetNodeResult = validationResults.find((result) => result.id === edge.target)
if (targetNodeResult) {
// Add to existing validation result
targetNodeResult.issues.push(`Connected to non-existent source node ${edge.source}`)
} else {
// Create new validation result for this node
validationResults.push({
id: targetNode.id,
label: targetNode.data.label,
name: targetNode.data.name,
issues: [`Connected to non-existent source node ${edge.source}`]
})
}
} else if (sourceExists && !targetExists) {
// Source exists but target doesn't - add issue to source node
const sourceNode = nodes.find((node: IReactFlowNode) => node.id === edge.source)
const sourceNodeResult = validationResults.find((result) => result.id === edge.source)
if (sourceNodeResult) {
// Add to existing validation result
sourceNodeResult.issues.push(`Connected to non-existent target node ${edge.target}`)
} else {
// Create new validation result for this node
validationResults.push({
id: sourceNode.id,
label: sourceNode.data.label,
name: sourceNode.data.name,
issues: [`Connected to non-existent target node ${edge.target}`]
})
}
} else {
// Both source and target don't exist - create a generic edge issue
validationResults.push({
id: edge.id,
label: `Edge ${edge.id}`,
name: 'edge',
issues: ['Disconnected edge - both source and target nodes do not exist']
})
}
}
}
return validationResults
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
`Error: validationService.checkFlowValidation - ${getErrorMessage(error)}`
)
}
}
export default {
checkFlowValidation
}
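// Example result shape (hypothetical flow, for illustration only): a required
// input left empty on a disconnected node would surface as
// [
//     {
//         id: 'llmAgentflow_0',
//         label: 'LLM',
//         name: 'llmAgentflow',
//         issues: ['This node is not connected to anything', 'Model is required']
//     }
// ]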

@@ -99,6 +99,16 @@ export class SSEStreamer implements IServerSideEventStreamer {
client.response.write('message:\ndata:' + JSON.stringify(clientResponse) + '\n\n')
}
}
streamCalledToolsEvent(chatId: string, data: any): void {
const client = this.clients[chatId]
if (client) {
const clientResponse = {
event: 'calledTools',
data: data
}
client.response.write('message:\ndata:' + JSON.stringify(clientResponse) + '\n\n')
}
}
streamFileAnnotationsEvent(chatId: string, data: any): void {
const client = this.clients[chatId]
if (client) {
@@ -139,6 +149,36 @@ export class SSEStreamer implements IServerSideEventStreamer {
client.response.write('message:\ndata:' + JSON.stringify(clientResponse) + '\n\n')
}
}
streamAgentFlowEvent(chatId: string, data: any): void {
const client = this.clients[chatId]
if (client) {
const clientResponse = {
event: 'agentFlowEvent',
data: data
}
client.response.write('message:\ndata:' + JSON.stringify(clientResponse) + '\n\n')
}
}
streamAgentFlowExecutedDataEvent(chatId: string, data: any): void {
const client = this.clients[chatId]
if (client) {
const clientResponse = {
event: 'agentFlowExecutedData',
data: data
}
client.response.write('message:\ndata:' + JSON.stringify(clientResponse) + '\n\n')
}
}
streamNextAgentFlowEvent(chatId: string, data: any): void {
const client = this.clients[chatId]
if (client) {
const clientResponse = {
event: 'nextAgentFlow',
data: data
}
client.response.write('message:\ndata:' + JSON.stringify(clientResponse) + '\n\n')
}
}
streamActionEvent(chatId: string, data: any): void {
const client = this.clients[chatId]
if (client) {
@@ -206,4 +246,15 @@ export class SSEStreamer implements IServerSideEventStreamer {
this.streamCustomEvent(chatId, 'metadata', metadataJson)
}
}
streamUsageMetadataEvent(chatId: string, data: any): void {
const client = this.clients[chatId]
if (client) {
const clientResponse = {
event: 'usageMetadata',
data: data
}
client.response.write('message:\ndata:' + JSON.stringify(clientResponse) + '\n\n')
}
}
}
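A minimal client-side sketch of consuming the new events (illustrative only: the handler names are hypothetical, and the wiring assumes the 'message:\ndata:<json>' framing written above, whose unknown 'message:' field a spec-compliant EventSource ignores):

const highlightUpcomingNode = (d: any) => console.log('next node', d)
const renderExecutionTrace = (d: any) => console.log('executed data', d)
const setFlowStatus = (d: any) => console.log('flow status', d)
const showTokenUsage = (d: any) => console.log('usage', d)

const es = new EventSource('/api/v1/prediction/stream') // hypothetical endpoint
es.onmessage = (msg) => {
    const { event, data } = JSON.parse(msg.data)
    if (event === 'nextAgentFlow') highlightUpcomingNode(data)
    if (event === 'agentFlowExecutedData') renderExecutionTrace(data)
    if (event === 'agentFlowEvent') setFlowStatus(data)
    if (event === 'usageMetadata') showTokenUsage(data)
}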

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff