diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml
new file mode 100644
index 000000000..fa9d527b8
--- /dev/null
+++ b/.github/FUNDING.yml
@@ -0,0 +1,13 @@
+# These are supported funding model platforms
+
+github: [FlowiseAI] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
+patreon: # Replace with a single Patreon username
+open_collective: # Replace with a single Open Collective username
+ko_fi: # Replace with a single Ko-fi username
+tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
+community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
+liberapay: # Replace with a single Liberapay username
+issuehunt: # Replace with a single IssueHunt username
+otechie: # Replace with a single Otechie username
+lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
+custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
index b73185075..b8e2e8a5a 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.md
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -23,9 +23,14 @@ A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
+**Flow**
+If applicable, attach the exported flow (JSON) to help replicate the problem.
+
**Setup**
-- OS: [e.g. iOS, Windows, Linux]
+- Installation [e.g. docker, `npx flowise start`, `yarn start`]
+- Flowise Version [e.g. 1.2.11]
+- OS: [e.g. macOS, Windows, Linux]
- Browser [e.g. chrome, safari]
**Additional context**
diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml
index a89846a62..759f195f7 100644
--- a/.github/workflows/main.yml
+++ b/.github/workflows/main.yml
@@ -3,7 +3,7 @@ name: Node CI
on:
push:
branches:
- - master
+ - main
pull_request:
branches:
@@ -19,7 +19,8 @@ jobs:
platform: [ubuntu-latest]
node-version: [18.15.0]
runs-on: ${{ matrix.platform }}
-
+ env:
+ PUPPETEER_SKIP_DOWNLOAD: true
steps:
- uses: actions/checkout@v3
- name: Use Node.js ${{ matrix.node-version }}
diff --git a/.github/workflows/test_docker_build.yml b/.github/workflows/test_docker_build.yml
new file mode 100644
index 000000000..a27cf22dd
--- /dev/null
+++ b/.github/workflows/test_docker_build.yml
@@ -0,0 +1,20 @@
+name: Test Docker Build
+
+on:
+ push:
+ branches:
+ - main
+
+ pull_request:
+ branches:
+ - '*'
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
+ env:
+ PUPPETEER_SKIP_DOWNLOAD: true
+ steps:
+ - uses: actions/checkout@v3
+
+ - run: docker build --no-cache -t flowise .
diff --git a/.gitignore b/.gitignore
index 9f5ef2e56..533f68a52 100644
--- a/.gitignore
+++ b/.gitignore
@@ -8,6 +8,7 @@
**/yarn.lock
## logs
+**/logs
**/*.log
## build
@@ -42,4 +43,4 @@
**/uploads
## compressed
-**/*.tgz
\ No newline at end of file
+**/*.tgz
diff --git a/CODE_OF_CONDUCT-ZH.md b/CODE_OF_CONDUCT-ZH.md
new file mode 100644
index 000000000..be6332ddb
--- /dev/null
+++ b/CODE_OF_CONDUCT-ZH.md
@@ -0,0 +1,49 @@
+
+
+# 贡献者公约行为准则
+
+[English](./CODE_OF_CONDUCT.md) | 中文
+
+## 我们的承诺
+
+为了促进一个开放和友好的环境,我们作为贡献者和维护者承诺,使参与我们的项目和社区的体验对每个人来说都是无骚扰的,无论年龄、体型、残疾、种族、性别认同和表达、经验水平、国籍、个人形象、种族、宗教或性取向如何。
+
+## 我们的标准
+
+有助于创建积极环境的行为示例包括:
+
+- 使用友好和包容性的语言
+- 尊重不同的观点和经验
+- 优雅地接受建设性的批评
+- 关注社区最有利的事情
+- 向其他社区成员表达同理心
+
+参与者不可接受的行为示例包括:
+
+- 使用性暗示的语言或图像,以及不受欢迎的性关注或挑逗
+- 恶意挑衅、侮辱/贬低性评论以及人身或政治攻击
+- 公开或私下骚扰
+- 未经明确许可发布他人的私人信息,如实际或电子地址
+- 在专业环境中可能被合理认为不适当的其他行为
+
+## 我们的责任
+
+项目维护者有责任明确可接受行为的标准,并预期对任何不可接受行为的情况采取适当和公正的纠正措施。
+
+项目维护者有权和责任删除、编辑或拒绝不符合本行为准则的评论、提交、代码、维基编辑、问题和其他贡献,或者临时或永久禁止任何贡献者,如果他们认为其行为不适当、威胁、冒犯或有害。
+
+## 适用范围
+
+本行为准则适用于项目空间和公共空间,当个人代表项目或其社区时。代表项目或社区的示例包括使用官方项目电子邮件地址、通过官方社交媒体账号发布或在线或离线活动中担任指定代表。项目的代表可以由项目维护者进一步定义和澄清。
+
+## 执行
+
+可以通过联系项目团队 hello@flowiseai.com 来报告滥用、骚扰或其他不可接受的行为。所有投诉将经过审核和调查,并将得出视情况认为必要和适当的回应。项目团队有义务对事件举报人保持机密。具体执行政策的更多细节可能会单独发布。
+
+如果项目维护者没有真诚地遵守或执行行为准则,可能会面临由项目领导层其他成员决定的临时或永久的后果。
+
+## 归属
+
+该行为准则的内容来自于[贡献者公约](http://contributor-covenant.org/)1.4 版,可在[http://contributor-covenant.org/version/1/4](http://contributor-covenant.org/version/1/4)上获取。
+
+[主页]: http://contributor-covenant.org
diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
index 7865b84e0..da7a51c66 100644
--- a/CODE_OF_CONDUCT.md
+++ b/CODE_OF_CONDUCT.md
@@ -1,5 +1,7 @@
# Contributor Covenant Code of Conduct
+English | [中文](./CODE_OF_CONDUCT-ZH.md)
+
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
diff --git a/CONTRIBUTING-ZH.md b/CONTRIBUTING-ZH.md
new file mode 100644
index 000000000..bec081f4d
--- /dev/null
+++ b/CONTRIBUTING-ZH.md
@@ -0,0 +1,155 @@
+
+
+# 贡献给 Flowise
+
+[English](./CONTRIBUTING.md) | 中文
+
+我们欢迎任何形式的贡献。
+
+## ⭐ 点赞
+
+点赞并分享[Github 仓库](https://github.com/FlowiseAI/Flowise)。
+
+## 🙋 问题和回答
+
+在[问题和回答](https://github.com/FlowiseAI/Flowise/discussions/categories/q-a)部分搜索任何问题,如果找不到,可以毫不犹豫地创建一个。这可能会帮助到其他有类似问题的人。
+
+## 🙌 分享 Chatflow
+
+是的!分享你如何使用 Flowise 是一种贡献方式。将你的 Chatflow 导出为 JSON,附上截图并在[展示和分享](https://github.com/FlowiseAI/Flowise/discussions/categories/show-and-tell)部分分享。
+
+## 💡 想法
+
+欢迎各种想法,如新功能、应用集成和区块链网络。在[想法](https://github.com/FlowiseAI/Flowise/discussions/categories/ideas)部分提交。
+
+## 🐞 报告错误
+
+发现问题了吗?[报告它](https://github.com/FlowiseAI/Flowise/issues/new/choose)。
+
+## 👨💻 贡献代码
+
+不确定要贡献什么?一些想法:
+
+- 从 Langchain 创建新组件
+- 更新现有组件,如扩展功能、修复错误
+- 添加新的 Chatflow 想法
+
+### 开发人员
+
+Flowise 在一个单体仓库中有 3 个不同的模块。
+
+- `server`:用于提供 API 逻辑的 Node 后端
+- `ui`:React 前端
+- `components`:Langchain 组件
+
+#### 先决条件
+
+- 安装 [Yarn v1](https://classic.yarnpkg.com/en/docs/install)
+ ```bash
+ npm i -g yarn
+ ```
+
+#### 逐步指南
+
+1. Fork 官方的[Flowise Github 仓库](https://github.com/FlowiseAI/Flowise)。
+
+2. 克隆你 fork 的存储库。
+
+3. 创建一个新的分支,参考[指南](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-and-deleting-branches-within-your-repository)。命名约定:
+
+ - 对于功能分支:`feature/<你的新功能>`
+ - 对于 bug 修复分支:`bugfix/<你的新bug修复>`。
+
+4. 切换到新创建的分支。
+
+5. 进入存储库文件夹
+
+ ```bash
+ cd Flowise
+ ```
+
+6. 安装所有模块的依赖项:
+
+ ```bash
+ yarn install
+ ```
+
+7. 构建所有代码:
+
+ ```bash
+ yarn build
+ ```
+
+8. 在[http://localhost:3000](http://localhost:3000)上启动应用程序
+
+ ```bash
+ yarn start
+ ```
+
+9. 开发时:
+
+ - 在`packages/ui`中创建`.env`文件并指定`PORT`(参考`.env.example`)
+ - 在`packages/server`中创建`.env`文件并指定`PORT`(参考`.env.example`)
+ - 运行
+
+ ```bash
+ yarn dev
+ ```
+
+ 对`packages/ui`或`packages/server`进行的任何更改都将反映在[http://localhost:8080](http://localhost:8080)上
+
+ 对于`packages/components`中进行的更改,再次运行`yarn build`以应用更改。
+
+10. 做完所有的更改后,运行以下命令来确保在生产环境中一切正常:
+
+ ```bash
+ yarn build
+ ```
+
+ 和
+
+ ```bash
+ yarn start
+ ```
+
+11. 提交代码,并从 fork 的分支向 [Flowise 主分支](https://github.com/FlowiseAI/Flowise/tree/master) 发起 Pull Request。
+
+## 🌱 环境变量
+
+Flowise 支持不同的环境变量来配置您的实例。您可以在 `packages/server` 文件夹中的 `.env` 文件中指定以下变量。阅读[更多信息](https://docs.flowiseai.com/environment-variables)
+
+| 变量名 | 描述 | 类型 | 默认值 |
+| -------------------------- | ------------------------------------------------------ | ----------------------------------------------- | ----------------------------------- |
+| PORT | Flowise 运行的 HTTP 端口 | 数字 | 3000 |
+| FLOWISE_USERNAME | 登录用户名 | 字符串 | |
+| FLOWISE_PASSWORD | 登录密码 | 字符串 | |
+| DEBUG | 打印组件的日志 | 布尔值 | |
+| LOG_PATH | 存储日志文件的位置 | 字符串 | `your-path/Flowise/logs` |
+| LOG_LEVEL | 日志的不同级别 | 枚举字符串: `error`, `info`, `verbose`, `debug` | `info` |
+| APIKEY_PATH | 存储 API 密钥的位置 | 字符串 | `your-path/Flowise/packages/server` |
+| TOOL_FUNCTION_BUILTIN_DEP | 用于工具函数的 NodeJS 内置模块 | 字符串 | |
+| TOOL_FUNCTION_EXTERNAL_DEP | 用于工具函数的外部模块 | 字符串 | |
+| OVERRIDE_DATABASE | 是否使用默认值覆盖当前数据库 | 枚举字符串: `true`, `false` | `true` |
+| DATABASE_TYPE | 存储 flowise 数据的数据库类型 | 枚举字符串: `sqlite`, `mysql`, `postgres` | `sqlite` |
+| DATABASE_PATH | 数据库保存的位置(当 DATABASE_TYPE 是 sqlite 时) | 字符串 | `your-home-dir/.flowise` |
+| DATABASE_HOST | 主机 URL 或 IP 地址(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
+| DATABASE_PORT | 数据库端口(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
+| DATABASE_USERNAME | 数据库用户名(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
+| DATABASE_PASSWORD | 数据库密码(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
+| DATABASE_NAME | 数据库名称(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
+
+您也可以在使用 `npx` 时指定环境变量。例如:
+
+```
+npx flowise start --PORT=3000 --DEBUG=true
+```
+
+## 📖 贡献文档
+
+[Flowise 文档](https://github.com/FlowiseAI/FlowiseDocs)
+
+## 🏷️ Pull Request 流程
+
+当您打开一个 Pull Request 时,FlowiseAI 团队的成员将自动收到通知/指派。您也可以在 [Discord](https://discord.gg/jbaHfsRVBW) 上联系我们。
+
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index a09051f32..90ba5498d 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -2,6 +2,8 @@
# Contributing to Flowise
+English | [中文](./CONTRIBUTING-ZH.md)
+
We appreciate any form of contributions.
## ⭐ Star
@@ -42,7 +44,7 @@ Flowise has 3 different modules in a single mono repository.
#### Prerequisite
-- Install Yarn
+- Install [Yarn v1](https://classic.yarnpkg.com/en/docs/install)
```bash
npm i -g yarn
```
@@ -84,7 +86,11 @@ Flowise has 3 different modules in a single mono repository.
yarn start
```
-9. For development, run
+9. For development:
+
+ - Create `.env` file and specify the `PORT` (refer to `.env.example`) in `packages/ui`
+ - Create `.env` file and specify the `PORT` (refer to `.env.example`) in `packages/server`
+ - Run
```bash
yarn dev
@@ -110,9 +116,41 @@ Flowise has 3 different modules in a single mono repository.
11. Commit code and submit Pull Request from forked branch pointing to [Flowise master](https://github.com/FlowiseAI/Flowise/tree/master).
+## 🌱 Env Variables
+
+Flowise supports different environment variables to configure your instance. You can specify the following variables in the `.env` file inside the `packages/server` folder. Read [more](https://docs.flowiseai.com/environment-variables)
+
+| Variable | Description | Type | Default |
+| -------------------------- | ---------------------------------------------------------------------------- | ------------------------------------------------ | ----------------------------------- |
+| PORT | The HTTP port Flowise runs on | Number | 3000 |
+| FLOWISE_USERNAME | Username to login | String | |
+| FLOWISE_PASSWORD | Password to login | String | |
+| DEBUG | Print logs from components | Boolean | |
+| LOG_PATH | Location where log files are stored | String | `your-path/Flowise/logs` |
+| LOG_LEVEL | Different levels of logs | Enum String: `error`, `info`, `verbose`, `debug` | `info` |
+| APIKEY_PATH | Location where api keys are saved | String | `your-path/Flowise/packages/server` |
+| TOOL_FUNCTION_BUILTIN_DEP | NodeJS built-in modules to be used for Tool Function | String | |
+| TOOL_FUNCTION_EXTERNAL_DEP | External modules to be used for Tool Function | String | |
+| OVERRIDE_DATABASE | Override current database with default | Enum String: `true`, `false` | `true` |
+| DATABASE_TYPE | Type of database to store the flowise data | Enum String: `sqlite`, `mysql`, `postgres` | `sqlite` |
+| DATABASE_PATH | Location where database is saved (When DATABASE_TYPE is sqlite) | String | `your-home-dir/.flowise` |
+| DATABASE_HOST | Host URL or IP address (When DATABASE_TYPE is not sqlite) | String | |
+| DATABASE_PORT | Database port (When DATABASE_TYPE is not sqlite) | String | |
+| DATABASE_USER | Database username (When DATABASE_TYPE is not sqlite) | String | |
+| DATABASE_PASSWORD | Database password (When DATABASE_TYPE is not sqlite) | String | |
+| DATABASE_NAME | Database name (When DATABASE_TYPE is not sqlite) | String | |
+| PASSPHRASE | Passphrase used to create encryption key | String | `MYPASSPHRASE` |
+| SECRETKEY_PATH | Location where encryption key (used to encrypt/decrypt credentials) is saved | String | `your-path/Flowise/packages/server` |
+
+You can also specify the env variables when using `npx`. For example:
+
+```
+npx flowise start --PORT=3000 --DEBUG=true
+```
+
## 📖 Contribute to Docs
-In-Progress
+[Flowise Docs](https://github.com/FlowiseAI/FlowiseDocs)
## 🏷️ Pull Request process
diff --git a/Dockerfile b/Dockerfile
index 315b4739f..e485cd3ef 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -1,14 +1,24 @@
# Build local monorepo image
# docker build --no-cache -t flowise .
+
# Run image
# docker run -d -p 3000:3000 flowise
+
FROM node:18-alpine
+RUN apk add --update libc6-compat python3 make g++
+# needed for pdfjs-dist
+RUN apk add --no-cache build-base cairo-dev pango-dev
+
+# Install Chromium
+RUN apk add --no-cache chromium
+
+ENV PUPPETEER_SKIP_DOWNLOAD=true
+ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
WORKDIR /usr/src/packages
# Copy root package.json and lockfile
-COPY package.json ./
-COPY yarn.lock ./
+COPY package.json yarn.loc[k] ./
# Copy components package.json
COPY packages/components/package.json ./packages/components/package.json
diff --git a/README-ZH.md b/README-ZH.md
new file mode 100644
index 000000000..e0eb9de26
--- /dev/null
+++ b/README-ZH.md
@@ -0,0 +1,188 @@
+
+
+
+
+# Flowise - 轻松构建 LLM 应用程序
+
+[](https://github.com/FlowiseAI/Flowise/releases)
+[](https://discord.gg/jbaHfsRVBW)
+[](https://twitter.com/FlowiseAI)
+[](https://star-history.com/#FlowiseAI/Flowise)
+[](https://github.com/FlowiseAI/Flowise/fork)
+
+[English](./README.md) | 中文
+
+
+拖放界面构建定制化的 LLM 流程
+
+
+
+## ⚡ 快速入门
+
+下载并安装 [NodeJS](https://nodejs.org/en/download) >= 18.15.0
+
+1. 安装 Flowise
+ ```bash
+ npm install -g flowise
+ ```
+2. 启动 Flowise
+
+ ```bash
+ npx flowise start
+ ```
+
+ 使用用户名和密码
+
+ ```bash
+ npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
+ ```
+
+3. 打开 [http://localhost:3000](http://localhost:3000)
+
+## 🐳 Docker
+
+### Docker Compose
+
+1. 进入项目根目录下的 `docker` 文件夹
+2. 创建 `.env` 文件并指定 `PORT`(参考 `.env.example`)
+3. 运行 `docker-compose up -d`
+4. 打开 [http://localhost:3000](http://localhost:3000)
+5. 可以通过 `docker-compose stop` 停止容器
+
+### Docker 镜像
+
+1. 本地构建镜像:
+ ```bash
+ docker build --no-cache -t flowise .
+ ```
+2. 运行镜像:
+
+ ```bash
+ docker run -d --name flowise -p 3000:3000 flowise
+ ```
+
+3. 停止镜像:
+ ```bash
+ docker stop flowise
+ ```
+
+## 👨💻 开发者
+
+Flowise 在一个单一的代码库中有 3 个不同的模块。
+
+- `server`:用于提供 API 逻辑的 Node 后端
+- `ui`:React 前端
+- `components`:Langchain 组件
+
+### 先决条件
+
+- 安装 [Yarn v1](https://classic.yarnpkg.com/en/docs/install)
+ ```bash
+ npm i -g yarn
+ ```
+
+### 设置
+
+1. 克隆仓库
+
+ ```bash
+ git clone https://github.com/FlowiseAI/Flowise.git
+ ```
+
+2. 进入仓库文件夹
+
+ ```bash
+ cd Flowise
+ ```
+
+3. 安装所有模块的依赖:
+
+ ```bash
+ yarn install
+ ```
+
+4. 构建所有代码:
+
+ ```bash
+ yarn build
+ ```
+
+5. 启动应用:
+
+ ```bash
+ yarn start
+ ```
+
+ 现在可以在 [http://localhost:3000](http://localhost:3000) 访问应用
+
+6. 用于开发构建:
+
+ - 在 `packages/ui` 中创建 `.env` 文件并指定 `PORT`(参考 `.env.example`)
+ - 在 `packages/server` 中创建 `.env` 文件并指定 `PORT`(参考 `.env.example`)
+ - 运行
+
+ ```bash
+ yarn dev
+ ```
+
+ 任何代码更改都会自动重新加载应用程序,访问 [http://localhost:8080](http://localhost:8080)
+
+## 🔒 认证
+
+要启用应用程序级身份验证,在 `packages/server` 的 `.env` 文件中添加 `FLOWISE_USERNAME` 和 `FLOWISE_PASSWORD`:
+
+```
+FLOWISE_USERNAME=user
+FLOWISE_PASSWORD=1234
+```
+
+## 🌱 环境变量
+
+Flowise 支持不同的环境变量来配置您的实例。您可以在 `packages/server` 文件夹中的 `.env` 文件中指定以下变量。了解更多信息,请阅读[文档](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)
+
+## 📖 文档
+
+[Flowise 文档](https://docs.flowiseai.com/)
+
+## 🌐 自托管
+
+### [Railway](https://docs.flowiseai.com/deployment/railway)
+
+[](https://railway.app/template/pn4G8S?referralCode=WVNPD9)
+
+### [Render](https://docs.flowiseai.com/deployment/render)
+
+[](https://docs.flowiseai.com/deployment/render)
+
+### [HuggingFace Spaces](https://docs.flowiseai.com/deployment/hugging-face)
+
+
+
+### [AWS](https://docs.flowiseai.com/deployment/aws)
+
+### [Azure](https://docs.flowiseai.com/deployment/azure)
+
+### [DigitalOcean](https://docs.flowiseai.com/deployment/digital-ocean)
+
+### [GCP](https://docs.flowiseai.com/deployment/gcp)
+
+## 💻 云托管
+
+即将推出
+
+## 🙋 支持
+
+在[讨论区](https://github.com/FlowiseAI/Flowise/discussions)中随时提问、提出问题和请求新功能
+
+## 🙌 贡献
+
+感谢这些了不起的贡献者
+
+
+
+
+
+参见[贡献指南](CONTRIBUTING.md)。如果您有任何疑问或问题,请在 [Discord](https://discord.gg/jbaHfsRVBW) 上与我们联系。
+
+## 📄 许可证
+
+此代码库中的源代码在[MIT 许可证](LICENSE.md)下提供。
diff --git a/README.md b/README.md
index 545b36ba8..b98a223a6 100644
--- a/README.md
+++ b/README.md
@@ -1,14 +1,25 @@
-# Flowise - LangchainJS UI
+
+# Flowise - Build LLM Apps Easily
+
+[](https://github.com/FlowiseAI/Flowise/releases)
+[](https://discord.gg/jbaHfsRVBW)
+[](https://twitter.com/FlowiseAI)
+[](https://star-history.com/#FlowiseAI/Flowise)
+[](https://github.com/FlowiseAI/Flowise/fork)
+
+English | [中文](./README-ZH.md)
+
+Drag & drop UI to build your customized LLM flow
-Drag & drop UI to build your customized LLM flow using [LangchainJS](https://github.com/hwchase17/langchainjs)
-
## ⚡Quick Start
+Download and Install [NodeJS](https://nodejs.org/en/download) >= 18.15.0
+
1. Install Flowise
```bash
npm install -g flowise
@@ -19,16 +30,41 @@ Drag & drop UI to build your customized LLM flow using [LangchainJS](https://git
npx flowise start
```
+ With username & password
+
+ ```bash
+ npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
+ ```
+
3. Open [http://localhost:3000](http://localhost:3000)
## 🐳 Docker
+### Docker Compose
+
1. Go to `docker` folder at the root of the project
-2. Create `.env` file and specify the `PORT` (refer to `.env.example`)
+2. Copy `.env.example` file, paste it into the same location, and rename to `.env`
3. `docker-compose up -d`
4. Open [http://localhost:3000](http://localhost:3000)
5. You can bring the containers down by `docker-compose stop`
+### Docker Image
+
+1. Build the image locally:
+ ```bash
+ docker build --no-cache -t flowise .
+ ```
+2. Run image:
+
+ ```bash
+ docker run -d --name flowise -p 3000:3000 flowise
+ ```
+
+3. Stop image:
+ ```bash
+ docker stop flowise
+ ```
+
## 👨💻 Developers
Flowise has 3 different modules in a single mono repository.
@@ -39,7 +75,7 @@ Flowise has 3 different modules in a single mono repository.
### Prerequisite
-- Install Yarn
+- Install [Yarn v1](https://classic.yarnpkg.com/en/docs/install)
```bash
npm i -g yarn
```
@@ -80,31 +116,57 @@ Flowise has 3 different modules in a single mono repository.
6. For development build:
- ```bash
- yarn dev
- ```
+ - Create `.env` file and specify the `PORT` (refer to `.env.example`) in `packages/ui`
+ - Create `.env` file and specify the `PORT` (refer to `.env.example`) in `packages/server`
+ - Run
+
+ ```bash
+ yarn dev
+ ```
Any code changes will reload the app automatically on [http://localhost:8080](http://localhost:8080)
## 🔒 Authentication
-To enable app level authentication, add `USERNAME` and `PASSWORD` to the `.env` file in `packages/server`:
+To enable app level authentication, add `FLOWISE_USERNAME` and `FLOWISE_PASSWORD` to the `.env` file in `packages/server`:
```
-USERNAME=user
-PASSWORD=1234
+FLOWISE_USERNAME=user
+FLOWISE_PASSWORD=1234
```
+## 🌱 Env Variables
+
+Flowise supports different environment variables to configure your instance. You can specify the following variables in the `.env` file inside the `packages/server` folder. Read [more](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)
+
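+A minimal `packages/server/.env` sketch (values are illustrative, not defaults):
+
+```
+PORT=3000
+FLOWISE_USERNAME=user
+FLOWISE_PASSWORD=1234
+```
+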
## 📖 Documentation
-Coming soon
-
-## 💻 Cloud Hosted
-
-Coming soon
+[Flowise Docs](https://docs.flowiseai.com/)
## 🌐 Self Host
+### [Railway](https://docs.flowiseai.com/deployment/railway)
+
+[](https://railway.app/template/pn4G8S?referralCode=WVNPD9)
+
+### [Render](https://docs.flowiseai.com/deployment/render)
+
+[](https://docs.flowiseai.com/deployment/render)
+
+### [HuggingFace Spaces](https://docs.flowiseai.com/deployment/hugging-face)
+
+
+
+### [AWS](https://docs.flowiseai.com/deployment/aws)
+
+### [Azure](https://docs.flowiseai.com/deployment/azure)
+
+### [DigitalOcean](https://docs.flowiseai.com/deployment/digital-ocean)
+
+### [GCP](https://docs.flowiseai.com/deployment/gcp)
+
+## 💻 Cloud Hosted
+
Coming soon
## 🙋 Support
@@ -113,7 +175,14 @@ Feel free to ask any questions, raise problems, and request new features in [dis
## 🙌 Contributing
+Thanks to these awesome contributors
+
+
+
+
+
See [contributing guide](CONTRIBUTING.md). Reach out to us at [Discord](https://discord.gg/jbaHfsRVBW) if you have any questions or issues.
+[](https://star-history.com/#FlowiseAI/Flowise&Date)
## 📄 License
diff --git a/artillery-load-test.yml b/artillery-load-test.yml
new file mode 100644
index 000000000..6b1c81401
--- /dev/null
+++ b/artillery-load-test.yml
@@ -0,0 +1,36 @@
+# npm install -g artillery@latest
+# artillery run artillery-load-test.yml
+# Refer https://www.artillery.io/docs
+
+config:
+ target: http://128.128.128.128:3000 # replace with your url
+ phases:
+ - duration: 1
+ arrivalRate: 1
+ rampTo: 2
+ name: Warm up phase
+ - duration: 1
+ arrivalRate: 2
+ rampTo: 3
+ name: Ramp up load
+ - duration: 1
+ arrivalRate: 3
+ name: Sustained peak load
+scenarios:
+ - flow:
+ - loop:
+ - post:
+ url: '/api/v1/prediction/chatflow-id' # replace with your chatflowid
+ json:
+ question: 'hello' # replace with your question
+              count: 1 # how many requests each user makes
+
+# User __
+# 3 /
+# 2 /
+# 1 _/
+# 1 2 3
+# Seconds
+# Total Users = 2 + 3 + 3 = 8
+# Each making 1 HTTP call
+# Over a duration of 3 seconds
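+
+# A single equivalent request can be sent with curl (host and chatflow-id are placeholders):
+# curl -X POST http://128.128.128.128:3000/api/v1/prediction/chatflow-id \
+#   -H 'Content-Type: application/json' \
+#   -d '{"question": "hello"}'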
diff --git a/docker/.env.example b/docker/.env.example
index c0c68b1ca..16b19cdcf 100644
--- a/docker/.env.example
+++ b/docker/.env.example
@@ -1 +1,26 @@
-PORT=3000
\ No newline at end of file
+PORT=3000
+PASSPHRASE=MYPASSPHRASE # Passphrase used to create encryption key
+DATABASE_PATH=/root/.flowise
+APIKEY_PATH=/root/.flowise
+SECRETKEY_PATH=/root/.flowise
+LOG_PATH=/root/.flowise/logs
+
+# DATABASE_TYPE=postgres
+# DATABASE_PORT=""
+# DATABASE_HOST=""
+# DATABASE_NAME="flowise"
+# DATABASE_USER=""
+# DATABASE_PASSWORD=""
+# OVERRIDE_DATABASE=true
+
+# FLOWISE_USERNAME=user
+# FLOWISE_PASSWORD=1234
+# DEBUG=true
+# LOG_LEVEL=debug (error | warn | info | verbose | debug)
+# TOOL_FUNCTION_BUILTIN_DEP=crypto,fs
+# TOOL_FUNCTION_EXTERNAL_DEP=moment,lodash
+
+# LANGCHAIN_TRACING_V2=true
+# LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
+# LANGCHAIN_API_KEY=your_api_key
+# LANGCHAIN_PROJECT=your_project
\ No newline at end of file
diff --git a/docker/Dockerfile b/docker/Dockerfile
index e4bf704a0..1ad1bf5ee 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -4,6 +4,14 @@ USER root
RUN apk add --no-cache git
RUN apk add --no-cache python3 py3-pip make g++
+# needed for pdfjs-dist
+RUN apk add --no-cache build-base cairo-dev pango-dev
+
+# Install Chromium
+RUN apk add --no-cache chromium
+
+ENV PUPPETEER_SKIP_DOWNLOAD=true
+ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
# You can install a specific version like: flowise@1.0.0
RUN npm install -g flowise
diff --git a/docker/README.md b/docker/README.md
new file mode 100644
index 000000000..d3ad1c197
--- /dev/null
+++ b/docker/README.md
@@ -0,0 +1,35 @@
+# Flowise Docker Hub Image
+
+Starts Flowise from [DockerHub Image](https://hub.docker.com/repository/docker/flowiseai/flowise/general)
+
+## Usage
+
+1. Create `.env` file and specify the `PORT` (refer to `.env.example`)
+2. `docker-compose up -d`
+3. Open [http://localhost:3000](http://localhost:3000)
+4. You can bring the containers down by `docker-compose stop`
+
+## 🔒 Authentication
+
+1. Create `.env` file and specify the `PORT`, `FLOWISE_USERNAME`, and `FLOWISE_PASSWORD` (refer to `.env.example`)
+2. Pass `FLOWISE_USERNAME` and `FLOWISE_PASSWORD` to the `docker-compose.yml` file:
+ ```
+ environment:
+ - PORT=${PORT}
+ - FLOWISE_USERNAME=${FLOWISE_USERNAME}
+ - FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
+ ```
+3. `docker-compose up -d`
+4. Open [http://localhost:3000](http://localhost:3000)
+5. You can bring the containers down by `docker-compose stop`
+
+## 🌱 Env Variables
+
+If you would like to persist your data (flows, logs, API keys, credentials), set these variables in the `.env` file inside the `docker` folder:
+
+- DATABASE_PATH=/root/.flowise
+- APIKEY_PATH=/root/.flowise
+- LOG_PATH=/root/.flowise/logs
+- SECRETKEY_PATH=/root/.flowise
+
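+These paths persist only when they are also mounted from the host. A sketch of the matching `docker-compose.yml` entry (the host path `~/.flowise` is an example, not a requirement):
+
+```yaml
+    volumes:
+        - ~/.flowise:/root/.flowise
+```
+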
+Flowise also supports different environment variables to configure your instance. Read [more](https://docs.flowiseai.com/environment-variables)
diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
index 7d142cb8a..4a03bcf33 100644
--- a/docker/docker-compose.yml
+++ b/docker/docker-compose.yml
@@ -6,6 +6,15 @@ services:
restart: always
environment:
- PORT=${PORT}
+ - PASSPHRASE=${PASSPHRASE}
+ - FLOWISE_USERNAME=${FLOWISE_USERNAME}
+ - FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
+ - DEBUG=${DEBUG}
+ - DATABASE_PATH=${DATABASE_PATH}
+ - APIKEY_PATH=${APIKEY_PATH}
+ - SECRETKEY_PATH=${SECRETKEY_PATH}
+ - LOG_LEVEL=${LOG_LEVEL}
+ - LOG_PATH=${LOG_PATH}
ports:
- '${PORT}:${PORT}'
volumes:
diff --git a/images/flowise.png b/images/flowise.png
new file mode 100644
index 000000000..09c71fde2
Binary files /dev/null and b/images/flowise.png differ
diff --git a/package.json b/package.json
index 50c59a308..d5af440d2 100644
--- a/package.json
+++ b/package.json
@@ -1,6 +1,6 @@
{
"name": "flowise",
- "version": "1.2.6",
+ "version": "1.3.3",
"private": true,
"homepage": "https://flowiseai.com",
"workspaces": [
@@ -28,7 +28,6 @@
"*.{js,jsx,ts,tsx,json,md}": "eslint --fix"
},
"devDependencies": {
- "turbo": "1.7.4",
"@babel/preset-env": "^7.19.4",
"@babel/preset-typescript": "7.18.6",
"@types/express": "^4.17.13",
@@ -48,6 +47,7 @@
"pretty-quick": "^3.1.3",
"rimraf": "^3.0.2",
"run-script-os": "^1.1.6",
+ "turbo": "1.7.4",
"typescript": "^4.8.4"
},
"engines": {
diff --git a/packages/components/.env.example b/packages/components/.env.example
deleted file mode 100644
index 352bc6cb0..000000000
--- a/packages/components/.env.example
+++ /dev/null
@@ -1 +0,0 @@
-DEBUG=true
\ No newline at end of file
diff --git a/packages/components/README-ZH.md b/packages/components/README-ZH.md
new file mode 100644
index 000000000..2a8ba4ac5
--- /dev/null
+++ b/packages/components/README-ZH.md
@@ -0,0 +1,19 @@
+
+
+# Flowise 组件
+
+[English](./README.md) | 中文
+
+Flowise 的应用集成。包含节点和凭据。
+
+
+
+安装:
+
+```bash
+npm i flowise-components
+```
+
+## 许可证
+
+此存储库中的源代码在[MIT 许可证](https://github.com/FlowiseAI/Flowise/blob/master/LICENSE.md)下提供。
diff --git a/packages/components/README.md b/packages/components/README.md
index 8014661e1..848071882 100644
--- a/packages/components/README.md
+++ b/packages/components/README.md
@@ -2,6 +2,8 @@
# Flowise Components
+English | [中文](./README-ZH.md)
+
Apps integration for Flowise. Contain Nodes and Credentials.

@@ -12,14 +14,6 @@ Install:
npm i flowise-components
```
-## Debug
-
-To view all the logs, create an `.env` file and add:
-
-```
-DEBUG=true
-```
-
## License
Source code in this repository is made available under the [MIT License](https://github.com/FlowiseAI/Flowise/blob/master/LICENSE.md).
diff --git a/packages/components/credentials/AirtableApi.credential.ts b/packages/components/credentials/AirtableApi.credential.ts
new file mode 100644
index 000000000..323b308f3
--- /dev/null
+++ b/packages/components/credentials/AirtableApi.credential.ts
@@ -0,0 +1,27 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class AirtableApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Airtable API'
+ this.name = 'airtableApi'
+ this.version = 1.0
+ this.description =
+ 'Refer to official guide on how to get accessToken on Airtable'
+ this.inputs = [
+ {
+ label: 'Access Token',
+ name: 'accessToken',
+ type: 'password',
+ placeholder: ''
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: AirtableApi }
diff --git a/packages/components/credentials/AnthropicApi.credential.ts b/packages/components/credentials/AnthropicApi.credential.ts
new file mode 100644
index 000000000..955196c9b
--- /dev/null
+++ b/packages/components/credentials/AnthropicApi.credential.ts
@@ -0,0 +1,23 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class AnthropicApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Anthropic API'
+ this.name = 'anthropicApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Anthropic Api Key',
+ name: 'anthropicApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: AnthropicApi }
diff --git a/packages/components/credentials/ApifyApi.credential.ts b/packages/components/credentials/ApifyApi.credential.ts
new file mode 100644
index 000000000..c961fd385
--- /dev/null
+++ b/packages/components/credentials/ApifyApi.credential.ts
@@ -0,0 +1,26 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class ApifyApiCredential implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Apify API'
+ this.name = 'apifyApi'
+ this.version = 1.0
+ this.description =
+ 'You can find the Apify API token on your Apify account page.'
+ this.inputs = [
+ {
+ label: 'Apify API',
+ name: 'apifyApiToken',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: ApifyApiCredential }
diff --git a/packages/components/credentials/AzureOpenAIApi.credential.ts b/packages/components/credentials/AzureOpenAIApi.credential.ts
new file mode 100644
index 000000000..65f63f379
--- /dev/null
+++ b/packages/components/credentials/AzureOpenAIApi.credential.ts
@@ -0,0 +1,47 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class AzureOpenAIApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Azure OpenAI API'
+ this.name = 'azureOpenAIApi'
+ this.version = 1.0
+ this.description =
+            'Refer to the official guide on how to use the Azure OpenAI service'
+ this.inputs = [
+ {
+ label: 'Azure OpenAI Api Key',
+ name: 'azureOpenAIApiKey',
+ type: 'password',
+ description: `Refer to official guide on how to create API key on Azure OpenAI`
+ },
+ {
+ label: 'Azure OpenAI Api Instance Name',
+ name: 'azureOpenAIApiInstanceName',
+ type: 'string',
+ placeholder: 'YOUR-INSTANCE-NAME'
+ },
+ {
+ label: 'Azure OpenAI Api Deployment Name',
+ name: 'azureOpenAIApiDeploymentName',
+ type: 'string',
+ placeholder: 'YOUR-DEPLOYMENT-NAME'
+ },
+ {
+ label: 'Azure OpenAI Api Version',
+ name: 'azureOpenAIApiVersion',
+ type: 'string',
+ placeholder: '2023-06-01-preview',
+ description:
+                'Description of supported API versions. Please refer to the examples'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: AzureOpenAIApi }
diff --git a/packages/components/credentials/BraveSearchApi.credential.ts b/packages/components/credentials/BraveSearchApi.credential.ts
new file mode 100644
index 000000000..fdacf82c1
--- /dev/null
+++ b/packages/components/credentials/BraveSearchApi.credential.ts
@@ -0,0 +1,24 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class BraveSearchApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Brave Search API'
+ this.name = 'braveSearchApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'BraveSearch Api Key',
+ name: 'braveApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: BraveSearchApi }
diff --git a/packages/components/credentials/ChromaApi.credential.ts b/packages/components/credentials/ChromaApi.credential.ts
new file mode 100644
index 000000000..759c113cd
--- /dev/null
+++ b/packages/components/credentials/ChromaApi.credential.ts
@@ -0,0 +1,24 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class ChromaApi implements INodeCredential {
+ label: string
+ name: string
+ description: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Chroma API'
+ this.name = 'chromaApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Chroma Api Key',
+ name: 'chromaApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: ChromaApi }
diff --git a/packages/components/credentials/CohereApi.credential.ts b/packages/components/credentials/CohereApi.credential.ts
new file mode 100644
index 000000000..b171090e5
--- /dev/null
+++ b/packages/components/credentials/CohereApi.credential.ts
@@ -0,0 +1,23 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class CohereApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Cohere API'
+ this.name = 'cohereApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Cohere Api Key',
+ name: 'cohereApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: CohereApi }
diff --git a/packages/components/credentials/ConfluenceApi.credential.ts b/packages/components/credentials/ConfluenceApi.credential.ts
new file mode 100644
index 000000000..a1d32e9ca
--- /dev/null
+++ b/packages/components/credentials/ConfluenceApi.credential.ts
@@ -0,0 +1,33 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class ConfluenceApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Confluence API'
+ this.name = 'confluenceApi'
+ this.version = 1.0
+ this.description =
+            'Refer to the official guide on how to get an access token on Confluence'
+ this.inputs = [
+ {
+ label: 'Access Token',
+ name: 'accessToken',
+ type: 'password',
+ placeholder: ''
+ },
+ {
+ label: 'Username',
+ name: 'username',
+ type: 'string',
+ placeholder: ''
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: ConfluenceApi }
diff --git a/packages/components/credentials/DynamodbMemoryApi.credential.ts b/packages/components/credentials/DynamodbMemoryApi.credential.ts
new file mode 100644
index 000000000..2f5ffa64c
--- /dev/null
+++ b/packages/components/credentials/DynamodbMemoryApi.credential.ts
@@ -0,0 +1,29 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class DynamodbMemoryApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'DynamodbMemory API'
+ this.name = 'dynamodbMemoryApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Access Key',
+ name: 'accessKey',
+ type: 'password'
+ },
+ {
+ label: 'Secret Access Key',
+ name: 'secretAccessKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: DynamodbMemoryApi }
diff --git a/packages/components/credentials/FigmaApi.credential.ts b/packages/components/credentials/FigmaApi.credential.ts
new file mode 100644
index 000000000..aed49359e
--- /dev/null
+++ b/packages/components/credentials/FigmaApi.credential.ts
@@ -0,0 +1,27 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class FigmaApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Figma API'
+ this.name = 'figmaApi'
+ this.version = 1.0
+ this.description =
+            'Refer to the official guide on how to get an access token on Figma'
+ this.inputs = [
+ {
+ label: 'Access Token',
+ name: 'accessToken',
+ type: 'password',
+ placeholder: ''
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: FigmaApi }
diff --git a/packages/components/credentials/GithubApi.credential.ts b/packages/components/credentials/GithubApi.credential.ts
new file mode 100644
index 000000000..34c5074ec
--- /dev/null
+++ b/packages/components/credentials/GithubApi.credential.ts
@@ -0,0 +1,27 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class GithubApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Github API'
+ this.name = 'githubApi'
+ this.version = 1.0
+ this.description =
+            'Refer to the official guide on how to get an access token on Github'
+ this.inputs = [
+ {
+ label: 'Access Token',
+ name: 'accessToken',
+ type: 'password',
+ placeholder: ''
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: GithubApi }
diff --git a/packages/components/credentials/GoogleAuth.credential.ts b/packages/components/credentials/GoogleAuth.credential.ts
new file mode 100644
index 000000000..16b8e3b3f
--- /dev/null
+++ b/packages/components/credentials/GoogleAuth.credential.ts
@@ -0,0 +1,55 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class GoogleVertexAuth implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Google Vertex Auth'
+ this.name = 'googleVertexAuth'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Google Application Credential File Path',
+ name: 'googleApplicationCredentialFilePath',
+ description:
+ 'Path to your google application credential json file. You can also use the credential JSON object (either one)',
+ placeholder: 'your-path/application_default_credentials.json',
+ type: 'string',
+ optional: true
+ },
+ {
+ label: 'Google Credential JSON Object',
+ name: 'googleApplicationCredential',
+ description: 'JSON object of your google application credential. You can also use the file path (either one)',
+ placeholder: `{
+ "type": ...,
+ "project_id": ...,
+ "private_key_id": ...,
+ "private_key": ...,
+ "client_email": ...,
+ "client_id": ...,
+ "auth_uri": ...,
+ "token_uri": ...,
+ "auth_provider_x509_cert_url": ...,
+ "client_x509_cert_url": ...
+}`,
+ type: 'string',
+ rows: 4,
+ optional: true
+ },
+ {
+ label: 'Project ID',
+ name: 'projectID',
+ description: 'Project ID of GCP. If not provided, it will be read from the credential file',
+ type: 'string',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: GoogleVertexAuth }
diff --git a/packages/components/credentials/GoogleSearchApi.credential.ts b/packages/components/credentials/GoogleSearchApi.credential.ts
new file mode 100644
index 000000000..cb82b25ae
--- /dev/null
+++ b/packages/components/credentials/GoogleSearchApi.credential.ts
@@ -0,0 +1,31 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class GoogleSearchApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Google Custom Search API'
+ this.name = 'googleCustomSearchApi'
+ this.version = 1.0
+ this.description =
+ 'Please refer to the Google Cloud Console for instructions on how to create an API key, and visit the Search Engine Creation page to learn how to generate your Search Engine ID.'
+ this.inputs = [
+ {
+ label: 'Google Custom Search Api Key',
+ name: 'googleCustomSearchApiKey',
+ type: 'password'
+ },
+ {
+ label: 'Programmable Search Engine ID',
+ name: 'googleCustomSearchApiId',
+ type: 'string'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: GoogleSearchApi }
diff --git a/packages/components/credentials/HuggingFaceApi.credential.ts b/packages/components/credentials/HuggingFaceApi.credential.ts
new file mode 100644
index 000000000..1b9221941
--- /dev/null
+++ b/packages/components/credentials/HuggingFaceApi.credential.ts
@@ -0,0 +1,23 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class HuggingFaceApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'HuggingFace API'
+ this.name = 'huggingFaceApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'HuggingFace Api Key',
+ name: 'huggingFaceApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: HuggingFaceApi }
diff --git a/packages/components/credentials/MotorheadMemoryApi.credential.ts b/packages/components/credentials/MotorheadMemoryApi.credential.ts
new file mode 100644
index 000000000..68a18ec1c
--- /dev/null
+++ b/packages/components/credentials/MotorheadMemoryApi.credential.ts
@@ -0,0 +1,31 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class MotorheadMemoryApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Motorhead Memory API'
+ this.name = 'motorheadMemoryApi'
+ this.version = 1.0
+ this.description =
+            'Refer to the official guide on how to create an API key and Client ID on Motorhead Memory'
+ this.inputs = [
+ {
+ label: 'Client ID',
+ name: 'clientId',
+ type: 'string'
+ },
+ {
+ label: 'API Key',
+ name: 'apiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: MotorheadMemoryApi }
diff --git a/packages/components/credentials/NotionApi.credential.ts b/packages/components/credentials/NotionApi.credential.ts
new file mode 100644
index 000000000..ebe4bf99d
--- /dev/null
+++ b/packages/components/credentials/NotionApi.credential.ts
@@ -0,0 +1,26 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class NotionApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Notion API'
+ this.name = 'notionApi'
+ this.version = 1.0
+ this.description =
+            'You can find the integration token here'
+ this.inputs = [
+ {
+ label: 'Notion Integration Token',
+ name: 'notionIntegrationToken',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: NotionApi }
diff --git a/packages/components/credentials/OpenAIApi.credential.ts b/packages/components/credentials/OpenAIApi.credential.ts
new file mode 100644
index 000000000..836da7e9b
--- /dev/null
+++ b/packages/components/credentials/OpenAIApi.credential.ts
@@ -0,0 +1,23 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class OpenAIApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'OpenAI API'
+ this.name = 'openAIApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'OpenAI Api Key',
+ name: 'openAIApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: OpenAIApi }
diff --git a/packages/components/credentials/OpenAPIAuth.credential.ts b/packages/components/credentials/OpenAPIAuth.credential.ts
new file mode 100644
index 000000000..3f0ef9075
--- /dev/null
+++ b/packages/components/credentials/OpenAPIAuth.credential.ts
@@ -0,0 +1,25 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class OpenAPIAuth implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'OpenAPI Auth Token'
+ this.name = 'openAPIAuth'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'OpenAPI Token',
+ name: 'openAPIToken',
+ type: 'password',
+                description: 'Auth Token. For example: Bearer <token>'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: OpenAPIAuth }
diff --git a/packages/components/credentials/PineconeApi.credential.ts b/packages/components/credentials/PineconeApi.credential.ts
new file mode 100644
index 000000000..4c5f62fe8
--- /dev/null
+++ b/packages/components/credentials/PineconeApi.credential.ts
@@ -0,0 +1,29 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class PineconeApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Pinecone API'
+ this.name = 'pineconeApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Pinecone Api Key',
+ name: 'pineconeApiKey',
+ type: 'password'
+ },
+ {
+ label: 'Pinecone Environment',
+ name: 'pineconeEnv',
+ type: 'string'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: PineconeApi }
diff --git a/packages/components/credentials/QdrantApi.credential.ts b/packages/components/credentials/QdrantApi.credential.ts
new file mode 100644
index 000000000..fffebccc2
--- /dev/null
+++ b/packages/components/credentials/QdrantApi.credential.ts
@@ -0,0 +1,24 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class QdrantApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Qdrant API'
+ this.name = 'qdrantApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Qdrant API Key',
+ name: 'qdrantApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: QdrantApi }
diff --git a/packages/components/credentials/ReplicateApi.credential.ts b/packages/components/credentials/ReplicateApi.credential.ts
new file mode 100644
index 000000000..e638826be
--- /dev/null
+++ b/packages/components/credentials/ReplicateApi.credential.ts
@@ -0,0 +1,23 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class ReplicateApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Replicate API'
+ this.name = 'replicateApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Replicate Api Key',
+ name: 'replicateApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: ReplicateApi }
diff --git a/packages/components/credentials/SerpApi.credential.ts b/packages/components/credentials/SerpApi.credential.ts
new file mode 100644
index 000000000..20cf6ab5a
--- /dev/null
+++ b/packages/components/credentials/SerpApi.credential.ts
@@ -0,0 +1,24 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class SerpApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Serp API'
+ this.name = 'serpApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Serp Api Key',
+ name: 'serpApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: SerpApi }
diff --git a/packages/components/credentials/SerperApi.credential.ts b/packages/components/credentials/SerperApi.credential.ts
new file mode 100644
index 000000000..9a8fee1e3
--- /dev/null
+++ b/packages/components/credentials/SerperApi.credential.ts
@@ -0,0 +1,24 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class SerperApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Serper API'
+ this.name = 'serperApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Serper Api Key',
+ name: 'serperApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: SerperApi }
diff --git a/packages/components/credentials/SingleStoreApi.credential.ts b/packages/components/credentials/SingleStoreApi.credential.ts
new file mode 100644
index 000000000..fee9853bb
--- /dev/null
+++ b/packages/components/credentials/SingleStoreApi.credential.ts
@@ -0,0 +1,31 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class SingleStoreApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'SingleStore API'
+ this.name = 'singleStoreApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'User',
+ name: 'user',
+ type: 'string',
+ placeholder: ''
+ },
+ {
+ label: 'Password',
+ name: 'password',
+ type: 'password',
+ placeholder: ''
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: SingleStoreApi }
diff --git a/packages/components/credentials/SupabaseApi.credential.ts b/packages/components/credentials/SupabaseApi.credential.ts
new file mode 100644
index 000000000..beb2a4223
--- /dev/null
+++ b/packages/components/credentials/SupabaseApi.credential.ts
@@ -0,0 +1,24 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class SupabaseApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Supabase API'
+ this.name = 'supabaseApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Supabase API Key',
+ name: 'supabaseApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: SupabaseApi }
diff --git a/packages/components/credentials/VectaraApi.credential.ts b/packages/components/credentials/VectaraApi.credential.ts
new file mode 100644
index 000000000..96ad29a66
--- /dev/null
+++ b/packages/components/credentials/VectaraApi.credential.ts
@@ -0,0 +1,34 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class VectaraAPI implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Vectara API'
+ this.name = 'vectaraApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Vectara Customer ID',
+ name: 'customerID',
+ type: 'string'
+ },
+ {
+ label: 'Vectara Corpus ID',
+ name: 'corpusID',
+ type: 'string'
+ },
+ {
+ label: 'Vectara API Key',
+ name: 'apiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: VectaraAPI }
diff --git a/packages/components/credentials/WeaviateApi.credential.ts b/packages/components/credentials/WeaviateApi.credential.ts
new file mode 100644
index 000000000..041b41eac
--- /dev/null
+++ b/packages/components/credentials/WeaviateApi.credential.ts
@@ -0,0 +1,24 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class WeaviateApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Weaviate API'
+ this.name = 'weaviateApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Weaviate API Key',
+ name: 'weaviateApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: WeaviateApi }
diff --git a/packages/components/credentials/ZapierNLAApi.credential.ts b/packages/components/credentials/ZapierNLAApi.credential.ts
new file mode 100644
index 000000000..72035660e
--- /dev/null
+++ b/packages/components/credentials/ZapierNLAApi.credential.ts
@@ -0,0 +1,24 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class ZapierNLAApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Zapier NLA API'
+ this.name = 'zapierNLAApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Zapier NLA Api Key',
+ name: 'zapierNLAApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: ZapierNLAApi }
diff --git a/packages/components/credentials/ZepMemoryApi.credential.ts b/packages/components/credentials/ZepMemoryApi.credential.ts
new file mode 100644
index 000000000..a78ad6d60
--- /dev/null
+++ b/packages/components/credentials/ZepMemoryApi.credential.ts
@@ -0,0 +1,26 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class ZepMemoryApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Zep Memory API'
+ this.name = 'zepMemoryApi'
+ this.version = 1.0
+ this.description =
+            'Refer to the official guide on how to create an API key on Zep'
+ this.inputs = [
+ {
+ label: 'API Key',
+ name: 'apiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: ZepMemoryApi }
diff --git a/packages/components/nodes/agents/AirtableAgent/AirtableAgent.ts b/packages/components/nodes/agents/AirtableAgent/AirtableAgent.ts
new file mode 100644
index 000000000..074f39c1b
--- /dev/null
+++ b/packages/components/nodes/agents/AirtableAgent/AirtableAgent.ts
@@ -0,0 +1,232 @@
+import { ICommonObject, INode, INodeData, INodeParams, PromptTemplate } from '../../../src/Interface'
+import { AgentExecutor } from 'langchain/agents'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { LoadPyodide, finalSystemPrompt, systemPrompt } from './core'
+import { LLMChain } from 'langchain/chains'
+import { BaseLanguageModel } from 'langchain/base_language'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
+import axios from 'axios'
+
+class Airtable_Agents implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Airtable Agent'
+ this.name = 'airtableAgent'
+ this.version = 1.0
+ this.type = 'AgentExecutor'
+ this.category = 'Agents'
+ this.icon = 'airtable.svg'
+        this.description = 'Agent used to answer queries on an Airtable table'
+ this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['airtableApi']
+ }
+ this.inputs = [
+ {
+ label: 'Language Model',
+ name: 'model',
+ type: 'BaseLanguageModel'
+ },
+ {
+ label: 'Base Id',
+ name: 'baseId',
+ type: 'string',
+ placeholder: 'app11RobdGoX0YNsC',
+ description:
+                    'If your table URL looks like: https://airtable.com/app11RobdGoX0YNsC/tblJdmvbrgizbYICO/viw9UrP77Id0CE4ee, app11RobdGoX0YNsC is the base id'
+ },
+ {
+ label: 'Table Id',
+ name: 'tableId',
+ type: 'string',
+ placeholder: 'tblJdmvbrgizbYICO',
+ description:
+ 'If your table URL looks like: https://airtable.com/app11RobdGoX0YNsC/tblJdmvbrgizbYICO/viw9UrP77Id0CE4ee, tblJdmvbrgizbYICO is the table id'
+ },
+ {
+ label: 'Return All',
+ name: 'returnAll',
+ type: 'boolean',
+ default: true,
+ additionalParams: true,
+ description: 'If all results should be returned or only up to a given limit'
+ },
+ {
+ label: 'Limit',
+ name: 'limit',
+ type: 'number',
+ default: 100,
+ additionalParams: true,
+ description: 'Number of results to return'
+ }
+ ]
+ }
+
+    async init(): Promise<any> {
+ // Not used
+ return undefined
+ }
+
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
+ const model = nodeData.inputs?.model as BaseLanguageModel
+ const baseId = nodeData.inputs?.baseId as string
+ const tableId = nodeData.inputs?.tableId as string
+ const returnAll = nodeData.inputs?.returnAll as boolean
+ const limit = nodeData.inputs?.limit as string
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const accessToken = getCredentialParam('accessToken', credentialData, nodeData)
+
+ let airtableData: ICommonObject[] = []
+
+ if (returnAll) {
+ airtableData = await loadAll(baseId, tableId, accessToken)
+ } else {
+ airtableData = await loadLimit(limit ? parseInt(limit, 10) : 100, baseId, tableId, accessToken)
+ }
+
+ let base64String = Buffer.from(JSON.stringify(airtableData)).toString('base64')
+
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
+
+ const pyodide = await LoadPyodide()
+
+ // First load the csv file and get the dataframe dictionary of column types
+ // For example using titanic.csv: {'PassengerId': 'int64', 'Survived': 'int64', 'Pclass': 'int64', 'Name': 'object', 'Sex': 'object', 'Age': 'float64', 'SibSp': 'int64', 'Parch': 'int64', 'Ticket': 'object', 'Fare': 'float64', 'Cabin': 'object', 'Embarked': 'object'}
+ let dataframeColDict = ''
+ try {
+ const code = `import pandas as pd
+import base64
+import json
+
+base64_string = "${base64String}"
+
+decoded_data = base64.b64decode(base64_string)
+
+json_data = json.loads(decoded_data)
+
+df = pd.DataFrame(json_data)
+my_dict = df.dtypes.astype(str).to_dict()
+print(my_dict)
+json.dumps(my_dict)`
+ dataframeColDict = await pyodide.runPythonAsync(code)
+ } catch (error) {
+ throw new Error(error)
+ }
+
+ // Then tell GPT to come out with ONLY python code
+ // For example: len(df), df[df['SibSp'] > 3]['PassengerId'].count()
+ let pythonCode = ''
+ if (dataframeColDict) {
+ const chain = new LLMChain({
+ llm: model,
+ prompt: PromptTemplate.fromTemplate(systemPrompt),
+ verbose: process.env.DEBUG === 'true' ? true : false
+ })
+ const inputs = {
+ dict: dataframeColDict,
+ question: input
+ }
+ const res = await chain.call(inputs, [loggerHandler])
+ pythonCode = res?.text
+ }
+
+ // Then run the code using Pyodide
+ let finalResult = ''
+ if (pythonCode) {
+ try {
+ const code = `import pandas as pd\n${pythonCode}`
+ finalResult = await pyodide.runPythonAsync(code)
+ } catch (error) {
+                throw new Error(`Sorry, I'm unable to find an answer for the question: "${input}" using the following code: "${pythonCode}"`)
+ }
+ }
+
+ // Finally, return a complete answer
+ if (finalResult) {
+ const chain = new LLMChain({
+ llm: model,
+ prompt: PromptTemplate.fromTemplate(finalSystemPrompt),
+ verbose: process.env.DEBUG === 'true' ? true : false
+ })
+ const inputs = {
+ question: input,
+ answer: finalResult
+ }
+
+ if (options.socketIO && options.socketIOClientId) {
+ const result = await chain.call(inputs, [loggerHandler, handler])
+ return result?.text
+ } else {
+ const result = await chain.call(inputs, [loggerHandler])
+ return result?.text
+ }
+ }
+
+ return pythonCode
+ }
+}
+
+interface AirtableLoaderResponse {
+ records: AirtableLoaderPage[]
+ offset?: string
+}
+
+interface AirtableLoaderPage {
+ id: string
+ createdTime: string
+ fields: ICommonObject
+}
+
+const fetchAirtableData = async (url: string, params: ICommonObject, accessToken: string): Promise<AirtableLoaderResponse> => {
+ try {
+ const headers = {
+ Authorization: `Bearer ${accessToken}`,
+ 'Content-Type': 'application/json',
+ Accept: 'application/json'
+ }
+ const response = await axios.get(url, { params, headers })
+ return response.data
+ } catch (error) {
+ throw new Error(`Failed to fetch ${url} from Airtable: ${error}`)
+ }
+}
+
+const loadAll = async (baseId: string, tableId: string, accessToken: string): Promise<ICommonObject[]> => {
+ const params: ICommonObject = { pageSize: 100 }
+ let data: AirtableLoaderResponse
+ let returnPages: AirtableLoaderPage[] = []
+
+ do {
+ data = await fetchAirtableData(`https://api.airtable.com/v0/${baseId}/${tableId}`, params, accessToken)
+ returnPages.push.apply(returnPages, data.records)
+ params.offset = data.offset
+ } while (data.offset !== undefined)
+
+    return returnPages.map((page) => page.fields)
+}
+
+const loadLimit = async (limit: number, baseId: string, tableId: string, accessToken: string): Promise<ICommonObject[]> => {
+ const params = { maxRecords: limit }
+ const data = await fetchAirtableData(`https://api.airtable.com/v0/${baseId}/${tableId}`, params, accessToken)
+ if (data.records.length === 0) {
+ return []
+ }
+ return data.records.map((page) => page.fields)
+}
+
+module.exports = { nodeClass: Airtable_Agents }
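The `loadAll` helper above follows Airtable's offset-based pagination: each response may carry an `offset` token that must be echoed back on the next request, and the loop ends when no token is returned. A minimal sketch of that loop, with a stubbed fetch standing in for the real authenticated Airtable API call:

```python
# Stubbed pages keyed by offset token; None represents the first request (no offset sent).
pages = {
    None: {"records": [{"fields": {"Name": "Ada"}}, {"fields": {"Name": "Grace"}}], "offset": "page2"},
    "page2": {"records": [{"fields": {"Name": "Alan"}}], "offset": None},
}

def fetch(offset):
    """Stand-in for the authenticated GET to api.airtable.com/v0/{baseId}/{tableId}."""
    return pages[offset]

records, offset = [], None
while True:
    data = fetch(offset)
    records.extend(data["records"])  # accumulate every page, not just the last one
    offset = data.get("offset")
    if offset is None:               # no offset token means this was the final page
        break

fields = [r["fields"] for r in records]
```

The key point the loop illustrates is that records must be accumulated across all pages before mapping each record to its `fields`.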
diff --git a/packages/components/nodes/agents/AirtableAgent/airtable.svg b/packages/components/nodes/agents/AirtableAgent/airtable.svg
new file mode 100644
index 000000000..867c3b5ae
--- /dev/null
+++ b/packages/components/nodes/agents/AirtableAgent/airtable.svg
@@ -0,0 +1,9 @@
+
+
diff --git a/packages/components/nodes/agents/AirtableAgent/core.ts b/packages/components/nodes/agents/AirtableAgent/core.ts
new file mode 100644
index 000000000..450bf5ea6
--- /dev/null
+++ b/packages/components/nodes/agents/AirtableAgent/core.ts
@@ -0,0 +1,29 @@
+import type { PyodideInterface } from 'pyodide'
+import * as path from 'path'
+import { getUserHome } from '../../../src/utils'
+
+let pyodideInstance: PyodideInterface | undefined
+
+export async function LoadPyodide(): Promise<PyodideInterface> {
+ if (pyodideInstance === undefined) {
+ const { loadPyodide } = await import('pyodide')
+ const obj: any = { packageCacheDir: path.join(getUserHome(), '.flowise', 'pyodideCacheDir') }
+ pyodideInstance = await loadPyodide(obj)
+ await pyodideInstance.loadPackage(['pandas', 'numpy'])
+ }
+
+ return pyodideInstance
+}
+
+export const systemPrompt = `You are working with a pandas dataframe in Python. The name of the dataframe is df.
+
+The columns and data types of a dataframe are given below as a Python dictionary with keys showing column names and values showing the data types.
+{dict}
+
+I will ask a question, and you will output the Python code using the pandas dataframe to answer my question. Do not provide any explanations. Do not respond with anything except the output of the code.
+
+Question: {question}
+Output Code:`
+
+export const finalSystemPrompt = `You are given the question: {question}. You have an answer to the question: {answer}. Rephrase the answer into a standalone answer.
+Standalone Answer:`
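The `{dict}` slot in `systemPrompt` is filled by the first Pyodide step in `AirtableAgent.ts`, which base64-decodes the Airtable rows, builds a DataFrame, and serializes its column-to-dtype mapping. The same transformation can be run in plain Python (illustrative data, not real Airtable rows):

```python
import base64
import json

import pandas as pd

# Rows shaped like what the agent receives from Airtable (hypothetical sample).
records = [{"Name": "Ada", "Age": 36}, {"Name": "Grace", "Age": 45}]
base64_string = base64.b64encode(json.dumps(records).encode()).decode()

# Mirror of the snippet the node executes inside Pyodide.
decoded_data = base64.b64decode(base64_string)
json_data = json.loads(decoded_data)
df = pd.DataFrame(json_data)
col_dict = df.dtypes.astype(str).to_dict()
# col_dict is {'Name': 'object', 'Age': 'int64'} -- the {dict} value fed to the LLM
```

Passing only the dtype dictionary, rather than the data itself, keeps the prompt small while still giving the model enough schema to write valid pandas code.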
diff --git a/packages/components/nodes/agents/AutoGPT/AutoGPT.ts b/packages/components/nodes/agents/AutoGPT/AutoGPT.ts
index 4775507b2..69e9b9ed5 100644
--- a/packages/components/nodes/agents/AutoGPT/AutoGPT.ts
+++ b/packages/components/nodes/agents/AutoGPT/AutoGPT.ts
@@ -3,10 +3,12 @@ import { BaseChatModel } from 'langchain/chat_models/base'
import { AutoGPT } from 'langchain/experimental/autogpt'
import { Tool } from 'langchain/tools'
import { VectorStoreRetriever } from 'langchain/vectorstores/base'
+import { flatten } from 'lodash'
class AutoGPT_Agents implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -17,6 +19,7 @@ class AutoGPT_Agents implements INode {
constructor() {
this.label = 'AutoGPT'
this.name = 'autoGPT'
+ this.version = 1.0
this.type = 'AutoGPT'
this.category = 'Agents'
this.icon = 'autogpt.png'
@@ -67,7 +70,7 @@ class AutoGPT_Agents implements INode {
const model = nodeData.inputs?.model as BaseChatModel
const vectorStoreRetriever = nodeData.inputs?.vectorStoreRetriever as VectorStoreRetriever
let tools = nodeData.inputs?.tools as Tool[]
- tools = tools.flat()
+ tools = flatten(tools)
const aiName = (nodeData.inputs?.aiName as string) || 'AutoGPT'
const aiRole = (nodeData.inputs?.aiRole as string) || 'Assistant'
const maxLoop = nodeData.inputs?.maxLoop as string
@@ -89,7 +92,6 @@ class AutoGPT_Agents implements INode {
const res = await executor.run([input])
return res || 'I have completed all my tasks.'
} catch (e) {
- console.error(e)
throw new Error(e)
}
}
diff --git a/packages/components/nodes/agents/BabyAGI/BabyAGI.ts b/packages/components/nodes/agents/BabyAGI/BabyAGI.ts
index 5112be0ea..303c231ec 100644
--- a/packages/components/nodes/agents/BabyAGI/BabyAGI.ts
+++ b/packages/components/nodes/agents/BabyAGI/BabyAGI.ts
@@ -6,6 +6,7 @@ import { VectorStore } from 'langchain/vectorstores'
class BabyAGI_Agents implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -16,6 +17,7 @@ class BabyAGI_Agents implements INode {
constructor() {
this.label = 'BabyAGI'
this.name = 'babyAGI'
+ this.version = 1.0
this.type = 'BabyAGI'
this.category = 'Agents'
this.icon = 'babyagi.jpg'
@@ -45,8 +47,9 @@ class BabyAGI_Agents implements INode {
const model = nodeData.inputs?.model as BaseChatModel
const vectorStore = nodeData.inputs?.vectorStore as VectorStore
const taskLoop = nodeData.inputs?.taskLoop as string
+ const k = (vectorStore as any)?.k ?? 4
- const babyAgi = BabyAGI.fromLLM(model, vectorStore, parseInt(taskLoop, 10))
+ const babyAgi = BabyAGI.fromLLM(model, vectorStore, parseInt(taskLoop, 10), k)
return babyAgi
}
diff --git a/packages/components/nodes/agents/BabyAGI/core.ts b/packages/components/nodes/agents/BabyAGI/core.ts
index 76889b527..444aa3eb5 100644
--- a/packages/components/nodes/agents/BabyAGI/core.ts
+++ b/packages/components/nodes/agents/BabyAGI/core.ts
@@ -154,18 +154,22 @@ export class BabyAGI {
maxIterations = 3
+ topK = 4
+
constructor(
taskCreationChain: TaskCreationChain,
taskPrioritizationChain: TaskPrioritizationChain,
executionChain: ExecutionChain,
vectorStore: VectorStore,
- maxIterations: number
+ maxIterations: number,
+ topK: number
) {
this.taskCreationChain = taskCreationChain
this.taskPrioritizationChain = taskPrioritizationChain
this.executionChain = executionChain
this.vectorStore = vectorStore
this.maxIterations = maxIterations
+ this.topK = topK
}
addTask(task: Task) {
@@ -219,7 +223,7 @@ export class BabyAGI {
this.printNextTask(task)
// Step 2: Execute the task
- const result = await executeTask(this.vectorStore, this.executionChain, objective, task.task_name)
+ const result = await executeTask(this.vectorStore, this.executionChain, objective, task.task_name, this.topK)
const thisTaskId = task.task_id
finalResult = result
this.printTaskResult(result)
@@ -257,10 +261,10 @@ export class BabyAGI {
return finalResult
}
- static fromLLM(llm: BaseChatModel, vectorstore: VectorStore, maxIterations = 3): BabyAGI {
+ static fromLLM(llm: BaseChatModel, vectorstore: VectorStore, maxIterations = 3, topK = 4): BabyAGI {
const taskCreationChain = TaskCreationChain.from_llm(llm)
const taskPrioritizationChain = TaskPrioritizationChain.from_llm(llm)
const executionChain = ExecutionChain.from_llm(llm)
- return new BabyAGI(taskCreationChain, taskPrioritizationChain, executionChain, vectorstore, maxIterations)
+ return new BabyAGI(taskCreationChain, taskPrioritizationChain, executionChain, vectorstore, maxIterations, topK)
}
}
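The hunk above threads a new `topK` parameter from the node, through `fromLLM`, into `executeTask`, while keeping 4 as the default so existing callers are unaffected. A minimal sketch of that default-parameter plumbing, using a simplified stand-in class (not the real Flowise `BabyAGI`):

```typescript
// Simplified stand-in for the BabyAGI topK plumbing added in the diff above.
type ExecuteFn = (topK: number) => string

class MiniBabyAGI {
    maxIterations = 3
    topK = 4

    constructor(maxIterations: number, topK: number) {
        this.maxIterations = maxIterations
        this.topK = topK
    }

    // Mirrors BabyAGI.fromLLM: callers that omit topK still get the old default of 4
    static fromLLM(maxIterations = 3, topK = 4): MiniBabyAGI {
        return new MiniBabyAGI(maxIterations, topK)
    }

    run(execute: ExecuteFn): string {
        // Step 2 of the task loop passes this.topK down to the executor
        return execute(this.topK)
    }
}
```

On the node side, `(vectorStore as any)?.k ?? 4` means a vector store that does not expose a `k` field still falls back to the same default of 4.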
diff --git a/packages/components/nodes/agents/CSVAgent/CSVAgent.ts b/packages/components/nodes/agents/CSVAgent/CSVAgent.ts
new file mode 100644
index 000000000..4a42592ff
--- /dev/null
+++ b/packages/components/nodes/agents/CSVAgent/CSVAgent.ts
@@ -0,0 +1,164 @@
+import { ICommonObject, INode, INodeData, INodeParams, PromptTemplate } from '../../../src/Interface'
+import { AgentExecutor } from 'langchain/agents'
+import { getBaseClasses } from '../../../src/utils'
+import { LoadPyodide, finalSystemPrompt, systemPrompt } from './core'
+import { LLMChain } from 'langchain/chains'
+import { BaseLanguageModel } from 'langchain/base_language'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
+
+class CSV_Agents implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'CSV Agent'
+ this.name = 'csvAgent'
+ this.version = 1.0
+ this.type = 'AgentExecutor'
+ this.category = 'Agents'
+ this.icon = 'csvagent.png'
+        this.description = 'Agent used to answer queries on CSV data'
+ this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
+ this.inputs = [
+ {
+ label: 'Csv File',
+ name: 'csvFile',
+ type: 'file',
+ fileType: '.csv'
+ },
+ {
+ label: 'Language Model',
+ name: 'model',
+ type: 'BaseLanguageModel'
+ },
+ {
+ label: 'System Message',
+ name: 'systemMessagePrompt',
+ type: 'string',
+ rows: 4,
+ additionalParams: true,
+ optional: true,
+ placeholder:
+ 'I want you to act as a document that I am having a conversation with. Your name is "AI Assistant". You will provide me with answers from the given info. If the answer is not included, say exactly "Hmm, I am not sure." and stop after that. Refuse to answer any question not about the info. Never break character.'
+ }
+ ]
+ }
+
+    async init(): Promise<any> {
+ // Not used
+ return undefined
+ }
+
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
+ const csvFileBase64 = nodeData.inputs?.csvFile as string
+ const model = nodeData.inputs?.model as BaseLanguageModel
+ const systemMessagePrompt = nodeData.inputs?.systemMessagePrompt as string
+
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
+
+ let files: string[] = []
+
+ if (csvFileBase64.startsWith('[') && csvFileBase64.endsWith(']')) {
+ files = JSON.parse(csvFileBase64)
+ } else {
+ files = [csvFileBase64]
+ }
+
+ let base64String = ''
+
+ for (const file of files) {
+ const splitDataURI = file.split(',')
+ splitDataURI.pop()
+ base64String = splitDataURI.pop() ?? ''
+ }
+
+ const pyodide = await LoadPyodide()
+
+ // First load the csv file and get the dataframe dictionary of column types
+ // For example using titanic.csv: {'PassengerId': 'int64', 'Survived': 'int64', 'Pclass': 'int64', 'Name': 'object', 'Sex': 'object', 'Age': 'float64', 'SibSp': 'int64', 'Parch': 'int64', 'Ticket': 'object', 'Fare': 'float64', 'Cabin': 'object', 'Embarked': 'object'}
+ let dataframeColDict = ''
+ try {
+ const code = `import pandas as pd
+import base64
+from io import StringIO
+import json
+
+base64_string = "${base64String}"
+
+decoded_data = base64.b64decode(base64_string)
+
+csv_data = StringIO(decoded_data.decode('utf-8'))
+
+df = pd.read_csv(csv_data)
+my_dict = df.dtypes.astype(str).to_dict()
+print(my_dict)
+json.dumps(my_dict)`
+ dataframeColDict = await pyodide.runPythonAsync(code)
+ } catch (error) {
+ throw new Error(error)
+ }
+
+ // Then tell GPT to come out with ONLY python code
+ // For example: len(df), df[df['SibSp'] > 3]['PassengerId'].count()
+ let pythonCode = ''
+ if (dataframeColDict) {
+ const chain = new LLMChain({
+ llm: model,
+ prompt: PromptTemplate.fromTemplate(systemPrompt),
+ verbose: process.env.DEBUG === 'true' ? true : false
+ })
+ const inputs = {
+ dict: dataframeColDict,
+ question: input
+ }
+ const res = await chain.call(inputs, [loggerHandler])
+ pythonCode = res?.text
+ }
+
+ // Then run the code using Pyodide
+ let finalResult = ''
+ if (pythonCode) {
+ try {
+ const code = `import pandas as pd\n${pythonCode}`
+ finalResult = await pyodide.runPythonAsync(code)
+ } catch (error) {
+                throw new Error(`Sorry, I'm unable to find an answer for the question: "${input}" using the following code: "${pythonCode}"`)
+ }
+ }
+
+ // Finally, return a complete answer
+ if (finalResult) {
+ const chain = new LLMChain({
+ llm: model,
+ prompt: PromptTemplate.fromTemplate(
+ systemMessagePrompt ? `${systemMessagePrompt}\n${finalSystemPrompt}` : finalSystemPrompt
+ ),
+ verbose: process.env.DEBUG === 'true' ? true : false
+ })
+ const inputs = {
+ question: input,
+ answer: finalResult
+ }
+
+ if (options.socketIO && options.socketIOClientId) {
+ const result = await chain.call(inputs, [loggerHandler, handler])
+ return result?.text
+ } else {
+ const result = await chain.call(inputs, [loggerHandler])
+ return result?.text
+ }
+ }
+
+ return pythonCode
+ }
+}
+
+module.exports = { nodeClass: CSV_Agents }
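The file-input handling above splits each upload string on commas and pops twice before decoding. That only yields the payload if Flowise stores uploads as `data:<mime>;base64,<payload>,filename:<name>` — a format inferred here from the two `pop()` calls, not stated in the diff. A sketch of the same logic:

```typescript
// Sketch of the data-URI handling in CSV_Agents.run. The assumed stored format is
// "data:<mime>;base64,<payload>,filename:<name>": the first pop() discards the
// trailing filename segment, the second yields the base64 payload.
function extractBase64(file: string): string {
    const splitDataURI = file.split(',')
    splitDataURI.pop() // drop the trailing "filename:..." segment
    return splitDataURI.pop() ?? ''
}

// The input may also be a JSON array of such strings when multiple files are uploaded
function toFileList(csvFileBase64: string): string[] {
    if (csvFileBase64.startsWith('[') && csvFileBase64.endsWith(']')) {
        return JSON.parse(csvFileBase64)
    }
    return [csvFileBase64]
}
```

Note that the loop in `run` overwrites `base64String` on each iteration, so only the last uploaded file is actually passed to pandas.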
diff --git a/packages/components/nodes/agents/CSVAgent/core.ts b/packages/components/nodes/agents/CSVAgent/core.ts
new file mode 100644
index 000000000..450bf5ea6
--- /dev/null
+++ b/packages/components/nodes/agents/CSVAgent/core.ts
@@ -0,0 +1,29 @@
+import type { PyodideInterface } from 'pyodide'
+import * as path from 'path'
+import { getUserHome } from '../../../src/utils'
+
+let pyodideInstance: PyodideInterface | undefined
+
+export async function LoadPyodide(): Promise<PyodideInterface> {
+ if (pyodideInstance === undefined) {
+ const { loadPyodide } = await import('pyodide')
+ const obj: any = { packageCacheDir: path.join(getUserHome(), '.flowise', 'pyodideCacheDir') }
+ pyodideInstance = await loadPyodide(obj)
+ await pyodideInstance.loadPackage(['pandas', 'numpy'])
+ }
+
+ return pyodideInstance
+}
+
+export const systemPrompt = `You are working with a pandas dataframe in Python. The name of the dataframe is df.
+
+The columns and data types of a dataframe are given below as a Python dictionary with keys showing column names and values showing the data types.
+{dict}
+
+I will ask a question, and you will output the Python code using the pandas dataframe to answer my question. Do not provide any explanations. Do not respond with anything except the output of the code.
+
+Question: {question}
+Output Code:`
+
+export const finalSystemPrompt = `You are given the question: {question}. You have an answer to the question: {answer}. Rephrase the answer into a standalone answer.
+Standalone Answer:`
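`LoadPyodide` above memoizes a single Pyodide instance per process, so the expensive runtime plus the pandas/numpy wheels load only once. A generic sketch of the same lazy-singleton pattern (names here are illustrative, not from the Flowise codebase):

```typescript
// Generic lazy async singleton, mirroring the memoization used by LoadPyodide:
// the factory runs at most once per completed call chain, then the cached
// instance is returned to every later caller.
function lazySingleton<T>(create: () => Promise<T>): () => Promise<T> {
    let instance: T | undefined
    return async () => {
        if (instance === undefined) {
            instance = await create()
        }
        return instance
    }
}
```

One caveat this sketch shares with the original: two concurrent first calls can both see `undefined` and run the factory twice; caching the pending promise instead of the resolved value would close that race.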
diff --git a/packages/components/nodes/agents/CSVAgent/csvagent.png b/packages/components/nodes/agents/CSVAgent/csvagent.png
new file mode 100644
index 000000000..3ed16bb2c
Binary files /dev/null and b/packages/components/nodes/agents/CSVAgent/csvagent.png differ
diff --git a/packages/components/nodes/agents/ConversationalAgent/ConversationalAgent.ts b/packages/components/nodes/agents/ConversationalAgent/ConversationalAgent.ts
index d2106e185..d8d8506c2 100644
--- a/packages/components/nodes/agents/ConversationalAgent/ConversationalAgent.ts
+++ b/packages/components/nodes/agents/ConversationalAgent/ConversationalAgent.ts
@@ -1,14 +1,23 @@
-import { ICommonObject, IMessage, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { initializeAgentExecutorWithOptions, AgentExecutor, InitializeAgentExecutorOptions } from 'langchain/agents'
import { Tool } from 'langchain/tools'
-import { BaseChatMemory, ChatMessageHistory } from 'langchain/memory'
-import { getBaseClasses } from '../../../src/utils'
-import { AIChatMessage, HumanChatMessage } from 'langchain/schema'
+import { BaseChatMemory } from 'langchain/memory'
+import { getBaseClasses, mapChatHistory } from '../../../src/utils'
import { BaseLanguageModel } from 'langchain/base_language'
+import { flatten } from 'lodash'
+
+const DEFAULT_PREFIX = `Assistant is a large language model trained by OpenAI.
+
+Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
+
+Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
+
+Overall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.`
class ConversationalAgent_Agents implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -19,6 +28,7 @@ class ConversationalAgent_Agents implements INode {
constructor() {
this.label = 'Conversational Agent'
this.name = 'conversationalAgent'
+ this.version = 1.0
this.type = 'AgentExecutor'
this.category = 'Agents'
this.icon = 'agent.svg'
@@ -46,14 +56,7 @@ class ConversationalAgent_Agents implements INode {
name: 'systemMessage',
type: 'string',
rows: 4,
- optional: true,
- additionalParams: true
- },
- {
- label: 'Human Message',
- name: 'humanMessage',
- type: 'string',
- rows: 4,
+ default: DEFAULT_PREFIX,
optional: true,
additionalParams: true
}
@@ -63,9 +66,8 @@ class ConversationalAgent_Agents implements INode {
    async init(nodeData: INodeData): Promise<any> {
const model = nodeData.inputs?.model as BaseLanguageModel
let tools = nodeData.inputs?.tools as Tool[]
- tools = tools.flat()
+ tools = flatten(tools)
const memory = nodeData.inputs?.memory as BaseChatMemory
- const humanMessage = nodeData.inputs?.humanMessage as string
const systemMessage = nodeData.inputs?.systemMessage as string
const obj: InitializeAgentExecutorOptions = {
@@ -74,9 +76,6 @@ class ConversationalAgent_Agents implements INode {
}
const agentArgs: any = {}
- if (humanMessage) {
- agentArgs.humanMessage = humanMessage
- }
if (systemMessage) {
agentArgs.systemMessage = systemMessage
}
@@ -93,19 +92,10 @@ class ConversationalAgent_Agents implements INode {
const memory = nodeData.inputs?.memory as BaseChatMemory
if (options && options.chatHistory) {
- const chatHistory = []
- const histories: IMessage[] = options.chatHistory
-
- for (const message of histories) {
- if (message.type === 'apiMessage') {
- chatHistory.push(new AIChatMessage(message.message))
- } else if (message.type === 'userMessage') {
- chatHistory.push(new HumanChatMessage(message.message))
- }
- }
- memory.chatHistory = new ChatMessageHistory(chatHistory)
+ memory.chatHistory = mapChatHistory(options)
executor.memory = memory
}
+
const result = await executor.call({ input })
return result?.output
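The refactor above replaces the inline history-mapping loop with a shared `mapChatHistory` helper in `src/utils`. Its implementation is not shown in this diff, but the removed lines imply roughly the following behavior (sketched with plain objects instead of the real `ChatMessageHistory` / `AIChatMessage` / `HumanChatMessage` classes):

```typescript
// Simplified sketch of what mapChatHistory is implied to do, based on the
// removed inline loop: apiMessage entries become AI turns, userMessage
// entries become human turns, anything else is skipped.
interface IMessage {
    type: 'apiMessage' | 'userMessage'
    message: string
}

function mapChatHistorySketch(options: { chatHistory?: IMessage[] }) {
    const mapped: { role: 'ai' | 'human'; text: string }[] = []
    for (const m of options.chatHistory ?? []) {
        if (m.type === 'apiMessage') mapped.push({ role: 'ai', text: m.message })
        else if (m.type === 'userMessage') mapped.push({ role: 'human', text: m.message })
    }
    return mapped
}
```

Centralizing this mapping also lets the new ConversationalRetrievalAgent and OpenAIFunctionAgent nodes reuse it instead of repeating the loop.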
diff --git a/packages/components/nodes/agents/ConversationalRetrievalAgent/ConversationalRetrievalAgent.ts b/packages/components/nodes/agents/ConversationalRetrievalAgent/ConversationalRetrievalAgent.ts
new file mode 100644
index 000000000..c0cef0526
--- /dev/null
+++ b/packages/components/nodes/agents/ConversationalRetrievalAgent/ConversationalRetrievalAgent.ts
@@ -0,0 +1,101 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { initializeAgentExecutorWithOptions, AgentExecutor } from 'langchain/agents'
+import { getBaseClasses, mapChatHistory } from '../../../src/utils'
+import { flatten } from 'lodash'
+import { BaseChatMemory } from 'langchain/memory'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
+
+const defaultMessage = `Do your best to answer the questions. Feel free to use any tools available to look up relevant information, only if necessary.`
+
+class ConversationalRetrievalAgent_Agents implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Conversational Retrieval Agent'
+ this.name = 'conversationalRetrievalAgent'
+ this.version = 1.0
+ this.type = 'AgentExecutor'
+ this.category = 'Agents'
+ this.icon = 'agent.svg'
+ this.description = `An agent optimized for retrieval during conversation, answering questions based on past dialogue, all using OpenAI's Function Calling`
+ this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
+ this.inputs = [
+ {
+ label: 'Allowed Tools',
+ name: 'tools',
+ type: 'Tool',
+ list: true
+ },
+ {
+ label: 'Memory',
+ name: 'memory',
+ type: 'BaseChatMemory'
+ },
+ {
+ label: 'OpenAI Chat Model',
+ name: 'model',
+ type: 'ChatOpenAI'
+ },
+ {
+ label: 'System Message',
+ name: 'systemMessage',
+ type: 'string',
+ default: defaultMessage,
+ rows: 4,
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData): Promise<any> {
+ const model = nodeData.inputs?.model
+ const memory = nodeData.inputs?.memory as BaseChatMemory
+ const systemMessage = nodeData.inputs?.systemMessage as string
+
+ let tools = nodeData.inputs?.tools
+ tools = flatten(tools)
+
+ const executor = await initializeAgentExecutorWithOptions(tools, model, {
+ agentType: 'openai-functions',
+ verbose: process.env.DEBUG === 'true' ? true : false,
+ agentArgs: {
+ prefix: systemMessage ?? defaultMessage
+ },
+ returnIntermediateSteps: true
+ })
+ executor.memory = memory
+ return executor
+ }
+
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
+ const executor = nodeData.instance as AgentExecutor
+
+ if (executor.memory) {
+ ;(executor.memory as any).memoryKey = 'chat_history'
+ ;(executor.memory as any).outputKey = 'output'
+ ;(executor.memory as any).chatHistory = mapChatHistory(options)
+ }
+
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+
+ if (options.socketIO && options.socketIOClientId) {
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
+ const result = await executor.call({ input }, [loggerHandler, handler])
+ return result?.output
+ } else {
+ const result = await executor.call({ input }, [loggerHandler])
+ return result?.output
+ }
+ }
+}
+
+module.exports = { nodeClass: ConversationalRetrievalAgent_Agents }
diff --git a/packages/components/nodes/agents/ConversationalRetrievalAgent/agent.svg b/packages/components/nodes/agents/ConversationalRetrievalAgent/agent.svg
new file mode 100644
index 000000000..c87861e5c
--- /dev/null
+++ b/packages/components/nodes/agents/ConversationalRetrievalAgent/agent.svg
@@ -0,0 +1,9 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/agents/MRKLAgentChat/MRKLAgentChat.ts b/packages/components/nodes/agents/MRKLAgentChat/MRKLAgentChat.ts
index 34b36fc1e..0a9e744c9 100644
--- a/packages/components/nodes/agents/MRKLAgentChat/MRKLAgentChat.ts
+++ b/packages/components/nodes/agents/MRKLAgentChat/MRKLAgentChat.ts
@@ -3,10 +3,12 @@ import { initializeAgentExecutorWithOptions, AgentExecutor } from 'langchain/age
import { getBaseClasses } from '../../../src/utils'
import { Tool } from 'langchain/tools'
import { BaseLanguageModel } from 'langchain/base_language'
+import { flatten } from 'lodash'
class MRKLAgentChat_Agents implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -17,6 +19,7 @@ class MRKLAgentChat_Agents implements INode {
constructor() {
this.label = 'MRKL Agent for Chat Models'
this.name = 'mrklAgentChat'
+ this.version = 1.0
this.type = 'AgentExecutor'
this.category = 'Agents'
this.icon = 'agent.svg'
@@ -40,7 +43,7 @@ class MRKLAgentChat_Agents implements INode {
    async init(nodeData: INodeData): Promise<any> {
const model = nodeData.inputs?.model as BaseLanguageModel
let tools = nodeData.inputs?.tools as Tool[]
- tools = tools.flat()
+ tools = flatten(tools)
const executor = await initializeAgentExecutorWithOptions(tools, model, {
agentType: 'chat-zero-shot-react-description',
verbose: process.env.DEBUG === 'true' ? true : false
diff --git a/packages/components/nodes/agents/MRKLAgentLLM/MRKLAgentLLM.ts b/packages/components/nodes/agents/MRKLAgentLLM/MRKLAgentLLM.ts
index 20246ffa1..d7af586b4 100644
--- a/packages/components/nodes/agents/MRKLAgentLLM/MRKLAgentLLM.ts
+++ b/packages/components/nodes/agents/MRKLAgentLLM/MRKLAgentLLM.ts
@@ -3,10 +3,12 @@ import { initializeAgentExecutorWithOptions, AgentExecutor } from 'langchain/age
import { Tool } from 'langchain/tools'
import { getBaseClasses } from '../../../src/utils'
import { BaseLanguageModel } from 'langchain/base_language'
+import { flatten } from 'lodash'
class MRKLAgentLLM_Agents implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -17,6 +19,7 @@ class MRKLAgentLLM_Agents implements INode {
constructor() {
this.label = 'MRKL Agent for LLMs'
this.name = 'mrklAgentLLM'
+ this.version = 1.0
this.type = 'AgentExecutor'
this.category = 'Agents'
this.icon = 'agent.svg'
@@ -40,7 +43,7 @@ class MRKLAgentLLM_Agents implements INode {
    async init(nodeData: INodeData): Promise<any> {
const model = nodeData.inputs?.model as BaseLanguageModel
let tools = nodeData.inputs?.tools as Tool[]
- tools = tools.flat()
+ tools = flatten(tools)
const executor = await initializeAgentExecutorWithOptions(tools, model, {
agentType: 'zero-shot-react-description',
diff --git a/packages/components/nodes/agents/OpenAIFunctionAgent/OpenAIFunctionAgent.ts b/packages/components/nodes/agents/OpenAIFunctionAgent/OpenAIFunctionAgent.ts
new file mode 100644
index 000000000..8c182d1ac
--- /dev/null
+++ b/packages/components/nodes/agents/OpenAIFunctionAgent/OpenAIFunctionAgent.ts
@@ -0,0 +1,101 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { initializeAgentExecutorWithOptions, AgentExecutor } from 'langchain/agents'
+import { getBaseClasses, mapChatHistory } from '../../../src/utils'
+import { BaseLanguageModel } from 'langchain/base_language'
+import { flatten } from 'lodash'
+import { BaseChatMemory } from 'langchain/memory'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
+
+class OpenAIFunctionAgent_Agents implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'OpenAI Function Agent'
+ this.name = 'openAIFunctionAgent'
+ this.version = 1.0
+ this.type = 'AgentExecutor'
+ this.category = 'Agents'
+ this.icon = 'openai.png'
+ this.description = `An agent that uses OpenAI's Function Calling functionality to pick the tool and args to call`
+ this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
+ this.inputs = [
+ {
+ label: 'Allowed Tools',
+ name: 'tools',
+ type: 'Tool',
+ list: true
+ },
+ {
+ label: 'Memory',
+ name: 'memory',
+ type: 'BaseChatMemory'
+ },
+ {
+ label: 'OpenAI Chat Model',
+ name: 'model',
+ description:
+                    'Only works with gpt-3.5-turbo-0613 and gpt-4-0613. Refer to the docs for more info',
+ type: 'BaseChatModel'
+ },
+ {
+ label: 'System Message',
+ name: 'systemMessage',
+ type: 'string',
+ rows: 4,
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData): Promise<any> {
+ const model = nodeData.inputs?.model as BaseLanguageModel
+ const memory = nodeData.inputs?.memory as BaseChatMemory
+ const systemMessage = nodeData.inputs?.systemMessage as string
+
+ let tools = nodeData.inputs?.tools
+ tools = flatten(tools)
+
+ const executor = await initializeAgentExecutorWithOptions(tools, model, {
+ agentType: 'openai-functions',
+ verbose: process.env.DEBUG === 'true' ? true : false,
+ agentArgs: {
+ prefix: systemMessage ?? `You are a helpful AI assistant.`
+ }
+ })
+ if (memory) executor.memory = memory
+
+ return executor
+ }
+
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
+ const executor = nodeData.instance as AgentExecutor
+ const memory = nodeData.inputs?.memory as BaseChatMemory
+
+ if (options && options.chatHistory) {
+ memory.chatHistory = mapChatHistory(options)
+ executor.memory = memory
+ }
+
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+
+ if (options.socketIO && options.socketIOClientId) {
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
+ const result = await executor.run(input, [loggerHandler, handler])
+ return result
+ } else {
+ const result = await executor.run(input, [loggerHandler])
+ return result
+ }
+ }
+}
+
+module.exports = { nodeClass: OpenAIFunctionAgent_Agents }
diff --git a/packages/components/nodes/agents/OpenAIFunctionAgent/openai.png b/packages/components/nodes/agents/OpenAIFunctionAgent/openai.png
new file mode 100644
index 000000000..de08a05b2
Binary files /dev/null and b/packages/components/nodes/agents/OpenAIFunctionAgent/openai.png differ
diff --git a/packages/components/nodes/chains/ApiChain/GETApiChain.ts b/packages/components/nodes/chains/ApiChain/GETApiChain.ts
new file mode 100644
index 000000000..bd4f3bc0d
--- /dev/null
+++ b/packages/components/nodes/chains/ApiChain/GETApiChain.ts
@@ -0,0 +1,134 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { APIChain } from 'langchain/chains'
+import { getBaseClasses } from '../../../src/utils'
+import { BaseLanguageModel } from 'langchain/base_language'
+import { PromptTemplate } from 'langchain/prompts'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
+
+export const API_URL_RAW_PROMPT_TEMPLATE = `You are given the below API Documentation:
+{api_docs}
+Using this documentation, generate the full API url to call for answering the user question.
+You should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.
+
+Question:{question}
+API url:`
+
+export const API_RESPONSE_RAW_PROMPT_TEMPLATE =
+    'Given this {api_response} response for {api_url}, use the given response to answer this {question}'
+
+class GETApiChain_Chains implements INode {
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'GET API Chain'
+ this.name = 'getApiChain'
+ this.version = 1.0
+ this.type = 'GETApiChain'
+ this.icon = 'apichain.svg'
+ this.category = 'Chains'
+ this.description = 'Chain to run queries against GET API'
+ this.baseClasses = [this.type, ...getBaseClasses(APIChain)]
+ this.inputs = [
+ {
+ label: 'Language Model',
+ name: 'model',
+ type: 'BaseLanguageModel'
+ },
+ {
+ label: 'API Documentation',
+ name: 'apiDocs',
+ type: 'string',
+ description:
+                    'Description of how the API works. Please refer to more examples',
+ rows: 4
+ },
+ {
+ label: 'Headers',
+ name: 'headers',
+ type: 'json',
+ additionalParams: true,
+ optional: true
+ },
+ {
+ label: 'URL Prompt',
+ name: 'urlPrompt',
+ type: 'string',
+                description: 'Prompt used to tell LLMs how to construct the URL. Must contain {api_docs} and {question}',
+ default: API_URL_RAW_PROMPT_TEMPLATE,
+ rows: 4,
+ additionalParams: true
+ },
+ {
+ label: 'Answer Prompt',
+ name: 'ansPrompt',
+ type: 'string',
+ description:
+                    'Prompt used to tell LLMs how to return the API response. Must contain {api_response}, {api_url}, and {question}',
+ default: API_RESPONSE_RAW_PROMPT_TEMPLATE,
+ rows: 4,
+ additionalParams: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData): Promise<any> {
+ const model = nodeData.inputs?.model as BaseLanguageModel
+ const apiDocs = nodeData.inputs?.apiDocs as string
+ const headers = nodeData.inputs?.headers as string
+ const urlPrompt = nodeData.inputs?.urlPrompt as string
+ const ansPrompt = nodeData.inputs?.ansPrompt as string
+
+ const chain = await getAPIChain(apiDocs, model, headers, urlPrompt, ansPrompt)
+ return chain
+ }
+
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
+ const model = nodeData.inputs?.model as BaseLanguageModel
+ const apiDocs = nodeData.inputs?.apiDocs as string
+ const headers = nodeData.inputs?.headers as string
+ const urlPrompt = nodeData.inputs?.urlPrompt as string
+ const ansPrompt = nodeData.inputs?.ansPrompt as string
+
+ const chain = await getAPIChain(apiDocs, model, headers, urlPrompt, ansPrompt)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+
+ if (options.socketIO && options.socketIOClientId) {
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId, 2)
+ const res = await chain.run(input, [loggerHandler, handler])
+ return res
+ } else {
+ const res = await chain.run(input, [loggerHandler])
+ return res
+ }
+ }
+}
+
+const getAPIChain = async (documents: string, llm: BaseLanguageModel, headers: string, urlPrompt: string, ansPrompt: string) => {
+ const apiUrlPrompt = new PromptTemplate({
+ inputVariables: ['api_docs', 'question'],
+ template: urlPrompt ? urlPrompt : API_URL_RAW_PROMPT_TEMPLATE
+ })
+
+ const apiResponsePrompt = new PromptTemplate({
+ inputVariables: ['api_docs', 'question', 'api_url', 'api_response'],
+ template: ansPrompt ? ansPrompt : API_RESPONSE_RAW_PROMPT_TEMPLATE
+ })
+
+ const chain = APIChain.fromLLMAndAPIDocs(llm, documents, {
+ apiUrlPrompt,
+ apiResponsePrompt,
+ verbose: process.env.DEBUG === 'true' ? true : false,
+ headers: typeof headers === 'object' ? headers : headers ? JSON.parse(headers) : {}
+ })
+ return chain
+}
+
+module.exports = { nodeClass: GETApiChain_Chains }
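`getAPIChain` above accepts `headers` either as an already-parsed object or as a JSON string (the node input is typed `json` but may arrive serialized), falling back to `{}` when absent. A sketch of that nested ternary as a standalone helper, with an added null guard (`typeof null === 'object'`, which the original ternary would pass through):

```typescript
// Mirrors the headers normalization in getAPIChain: accept a parsed object,
// a JSON string, or nothing, and always return a plain headers object.
function normalizeHeaders(headers: unknown): Record<string, string> {
    return typeof headers === 'object' && headers !== null
        ? (headers as Record<string, string>)
        : typeof headers === 'string' && headers
        ? JSON.parse(headers)
        : {}
}
```

The same expression appears again in `OpenAPIChain.ts` below; a shared helper like this would keep the two in sync.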
diff --git a/packages/components/nodes/chains/ApiChain/OpenAPIChain.ts b/packages/components/nodes/chains/ApiChain/OpenAPIChain.ts
new file mode 100644
index 000000000..9f6c79e4e
--- /dev/null
+++ b/packages/components/nodes/chains/ApiChain/OpenAPIChain.ts
@@ -0,0 +1,100 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { APIChain, createOpenAPIChain } from 'langchain/chains'
+import { getBaseClasses } from '../../../src/utils'
+import { ChatOpenAI } from 'langchain/chat_models/openai'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
+
+class OpenApiChain_Chains implements INode {
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'OpenAPI Chain'
+ this.name = 'openApiChain'
+ this.version = 1.0
+ this.type = 'OpenAPIChain'
+ this.icon = 'openapi.png'
+ this.category = 'Chains'
+        this.description = 'Chain that automatically selects and calls APIs based only on an OpenAPI spec'
+ this.baseClasses = [this.type, ...getBaseClasses(APIChain)]
+ this.inputs = [
+ {
+ label: 'ChatOpenAI Model',
+ name: 'model',
+ type: 'ChatOpenAI'
+ },
+ {
+ label: 'YAML Link',
+ name: 'yamlLink',
+ type: 'string',
+ placeholder: 'https://api.speak.com/openapi.yaml',
+ description: 'If YAML link is provided, uploaded YAML File will be ignored and YAML link will be used instead'
+ },
+ {
+ label: 'YAML File',
+ name: 'yamlFile',
+ type: 'file',
+ fileType: '.yaml',
+ description: 'If YAML link is provided, uploaded YAML File will be ignored and YAML link will be used instead'
+ },
+ {
+ label: 'Headers',
+ name: 'headers',
+ type: 'json',
+ additionalParams: true,
+ optional: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData): Promise<any> {
+ return await initChain(nodeData)
+ }
+
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
+ const chain = await initChain(nodeData)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+
+ if (options.socketIO && options.socketIOClientId) {
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
+ const res = await chain.run(input, [loggerHandler, handler])
+ return res
+ } else {
+ const res = await chain.run(input, [loggerHandler])
+ return res
+ }
+ }
+}
+
+const initChain = async (nodeData: INodeData) => {
+ const model = nodeData.inputs?.model as ChatOpenAI
+ const headers = nodeData.inputs?.headers as string
+ const yamlLink = nodeData.inputs?.yamlLink as string
+ const yamlFileBase64 = nodeData.inputs?.yamlFile as string
+
+ let yamlString = ''
+
+ if (yamlLink) {
+ yamlString = yamlLink
+ } else {
+ const splitDataURI = yamlFileBase64.split(',')
+ splitDataURI.pop()
+ const bf = Buffer.from(splitDataURI.pop() || '', 'base64')
+ yamlString = bf.toString('utf-8')
+ }
+
+ return await createOpenAPIChain(yamlString, {
+ llm: model,
+ headers: typeof headers === 'object' ? headers : headers ? JSON.parse(headers) : {},
+ verbose: process.env.DEBUG === 'true' ? true : false
+ })
+}
+
+module.exports = { nodeClass: OpenApiChain_Chains }
diff --git a/packages/components/nodes/chains/ApiChain/POSTApiChain.ts b/packages/components/nodes/chains/ApiChain/POSTApiChain.ts
new file mode 100644
index 000000000..cba4a2970
--- /dev/null
+++ b/packages/components/nodes/chains/ApiChain/POSTApiChain.ts
@@ -0,0 +1,123 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses } from '../../../src/utils'
+import { BaseLanguageModel } from 'langchain/base_language'
+import { PromptTemplate } from 'langchain/prompts'
+import { API_RESPONSE_RAW_PROMPT_TEMPLATE, API_URL_RAW_PROMPT_TEMPLATE, APIChain } from './postCore'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
+
+class POSTApiChain_Chains implements INode {
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'POST API Chain'
+ this.name = 'postApiChain'
+ this.version = 1.0
+ this.type = 'POSTApiChain'
+ this.icon = 'apichain.svg'
+ this.category = 'Chains'
+ this.description = 'Chain to run queries against POST API'
+ this.baseClasses = [this.type, ...getBaseClasses(APIChain)]
+ this.inputs = [
+ {
+ label: 'Language Model',
+ name: 'model',
+ type: 'BaseLanguageModel'
+ },
+ {
+ label: 'API Documentation',
+ name: 'apiDocs',
+ type: 'string',
+ description:
+                'Description of how the API works. Please refer to the examples for more details.',
+ rows: 4
+ },
+ {
+ label: 'Headers',
+ name: 'headers',
+ type: 'json',
+ additionalParams: true,
+ optional: true
+ },
+ {
+ label: 'URL Prompt',
+ name: 'urlPrompt',
+ type: 'string',
+            description: 'Prompt used to tell LLMs how to construct the URL. Must contain {api_docs} and {question}',
+ default: API_URL_RAW_PROMPT_TEMPLATE,
+ rows: 4,
+ additionalParams: true
+ },
+ {
+ label: 'Answer Prompt',
+ name: 'ansPrompt',
+ type: 'string',
+ description:
+                'Prompt used to tell LLMs how to return the API response. Must contain {api_response}, {api_url}, and {question}',
+ default: API_RESPONSE_RAW_PROMPT_TEMPLATE,
+ rows: 4,
+ additionalParams: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData): Promise<any> {
+ const model = nodeData.inputs?.model as BaseLanguageModel
+ const apiDocs = nodeData.inputs?.apiDocs as string
+ const headers = nodeData.inputs?.headers as string
+ const urlPrompt = nodeData.inputs?.urlPrompt as string
+ const ansPrompt = nodeData.inputs?.ansPrompt as string
+
+ const chain = await getAPIChain(apiDocs, model, headers, urlPrompt, ansPrompt)
+ return chain
+ }
+
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
+ const model = nodeData.inputs?.model as BaseLanguageModel
+ const apiDocs = nodeData.inputs?.apiDocs as string
+ const headers = nodeData.inputs?.headers as string
+ const urlPrompt = nodeData.inputs?.urlPrompt as string
+ const ansPrompt = nodeData.inputs?.ansPrompt as string
+
+ const chain = await getAPIChain(apiDocs, model, headers, urlPrompt, ansPrompt)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+
+ if (options.socketIO && options.socketIOClientId) {
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId, 2)
+ const res = await chain.run(input, [loggerHandler, handler])
+ return res
+ } else {
+ const res = await chain.run(input, [loggerHandler])
+ return res
+ }
+ }
+}
+
+const getAPIChain = async (documents: string, llm: BaseLanguageModel, headers: string, urlPrompt: string, ansPrompt: string) => {
+ const apiUrlPrompt = new PromptTemplate({
+ inputVariables: ['api_docs', 'question'],
+ template: urlPrompt ? urlPrompt : API_URL_RAW_PROMPT_TEMPLATE
+ })
+
+ const apiResponsePrompt = new PromptTemplate({
+ inputVariables: ['api_docs', 'question', 'api_url_body', 'api_response'],
+ template: ansPrompt ? ansPrompt : API_RESPONSE_RAW_PROMPT_TEMPLATE
+ })
+
+ const chain = APIChain.fromLLMAndAPIDocs(llm, documents, {
+ apiUrlPrompt,
+ apiResponsePrompt,
+ verbose: process.env.DEBUG === 'true' ? true : false,
+ headers: typeof headers === 'object' ? headers : headers ? JSON.parse(headers) : {}
+ })
+ return chain
+}
+
+module.exports = { nodeClass: POSTApiChain_Chains }
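Both `getAPIChain` here and the OpenAPI chain above normalize the Headers input with the same ternary. A small sketch of that normalization, extracted into a hypothetical helper: headers arrive either as an already-parsed object (from the `json` input type) or as a raw JSON string, and anything unset becomes an empty object.

```typescript
// Normalize a headers input that may be an object, a JSON string, or unset.
function normalizeHeaders(headers: unknown): Record<string, string> {
    if (typeof headers === 'object' && headers !== null) return headers as Record<string, string>
    if (typeof headers === 'string' && headers) return JSON.parse(headers)
    return {}
}
```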
diff --git a/packages/components/nodes/chains/ApiChain/apichain.svg b/packages/components/nodes/chains/ApiChain/apichain.svg
new file mode 100644
index 000000000..3b86b9051
--- /dev/null
+++ b/packages/components/nodes/chains/ApiChain/apichain.svg
@@ -0,0 +1,3 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/chains/ApiChain/openapi.png b/packages/components/nodes/chains/ApiChain/openapi.png
new file mode 100644
index 000000000..457c2e405
Binary files /dev/null and b/packages/components/nodes/chains/ApiChain/openapi.png differ
diff --git a/packages/components/nodes/chains/ApiChain/postCore.ts b/packages/components/nodes/chains/ApiChain/postCore.ts
new file mode 100644
index 000000000..de7215d92
--- /dev/null
+++ b/packages/components/nodes/chains/ApiChain/postCore.ts
@@ -0,0 +1,162 @@
+import { BaseLanguageModel } from 'langchain/base_language'
+import { CallbackManagerForChainRun } from 'langchain/callbacks'
+import { BaseChain, ChainInputs, LLMChain, SerializedAPIChain } from 'langchain/chains'
+import { BasePromptTemplate, PromptTemplate } from 'langchain/prompts'
+import { ChainValues } from 'langchain/schema'
+import fetch from 'node-fetch'
+
+export const API_URL_RAW_PROMPT_TEMPLATE = `You are given the below API Documentation:
+{api_docs}
+Using this documentation, generate a json string with two keys: "url" and "data".
+The value of "url" should be a string, which is the API url to call for answering the user question.
+The value of "data" should be a dictionary of key-value pairs you want to POST to the url as a JSON body.
+Be careful to always use double quotes for strings in the json string.
+You should build the json string in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.
+
+Question:{question}
+json string:`
+
+export const API_RESPONSE_RAW_PROMPT_TEMPLATE = `${API_URL_RAW_PROMPT_TEMPLATE} {api_url_body}
+
+Here is the response from the API:
+
+{api_response}
+
+Summarize this response to answer the original question.
+
+Summary:`
+
+const defaultApiUrlPrompt = new PromptTemplate({
+ inputVariables: ['api_docs', 'question'],
+ template: API_URL_RAW_PROMPT_TEMPLATE
+})
+
+const defaultApiResponsePrompt = new PromptTemplate({
+ inputVariables: ['api_docs', 'question', 'api_url_body', 'api_response'],
+ template: API_RESPONSE_RAW_PROMPT_TEMPLATE
+})
+
+export interface APIChainInput extends Omit<ChainInputs, 'memory'> {
+ apiAnswerChain: LLMChain
+ apiRequestChain: LLMChain
+ apiDocs: string
+ inputKey?: string
+    headers?: Record<string, string>
+ /** Key to use for output, defaults to `output` */
+ outputKey?: string
+}
+
+export type APIChainOptions = {
+    headers?: Record<string, string>
+ apiUrlPrompt?: BasePromptTemplate
+ apiResponsePrompt?: BasePromptTemplate
+}
+
+export class APIChain extends BaseChain implements APIChainInput {
+ apiAnswerChain: LLMChain
+
+ apiRequestChain: LLMChain
+
+ apiDocs: string
+
+ headers = {}
+
+ inputKey = 'question'
+
+ outputKey = 'output'
+
+ get inputKeys() {
+ return [this.inputKey]
+ }
+
+ get outputKeys() {
+ return [this.outputKey]
+ }
+
+ constructor(fields: APIChainInput) {
+ super(fields)
+ this.apiRequestChain = fields.apiRequestChain
+ this.apiAnswerChain = fields.apiAnswerChain
+ this.apiDocs = fields.apiDocs
+ this.inputKey = fields.inputKey ?? this.inputKey
+ this.outputKey = fields.outputKey ?? this.outputKey
+ this.headers = fields.headers ?? this.headers
+ }
+
+ /** @ignore */
+    async _call(values: ChainValues, runManager?: CallbackManagerForChainRun): Promise<ChainValues> {
+ try {
+ const question: string = values[this.inputKey]
+
+ const api_url_body = await this.apiRequestChain.predict({ question, api_docs: this.apiDocs }, runManager?.getChild())
+
+ const { url, data } = JSON.parse(api_url_body)
+
+ const res = await fetch(url, {
+ method: 'POST',
+ headers: this.headers,
+ body: JSON.stringify(data)
+ })
+
+ const api_response = await res.text()
+
+ const answer = await this.apiAnswerChain.predict(
+ { question, api_docs: this.apiDocs, api_url_body, api_response },
+ runManager?.getChild()
+ )
+
+ return { [this.outputKey]: answer }
+ } catch (error) {
+ return { [this.outputKey]: error }
+ }
+ }
+
+ _chainType() {
+ return 'api_chain' as const
+ }
+
+ static async deserialize(data: SerializedAPIChain) {
+ const { api_request_chain, api_answer_chain, api_docs } = data
+
+ if (!api_request_chain) {
+ throw new Error('LLMChain must have api_request_chain')
+ }
+ if (!api_answer_chain) {
+ throw new Error('LLMChain must have api_answer_chain')
+ }
+ if (!api_docs) {
+ throw new Error('LLMChain must have api_docs')
+ }
+
+ return new APIChain({
+ apiAnswerChain: await LLMChain.deserialize(api_answer_chain),
+ apiRequestChain: await LLMChain.deserialize(api_request_chain),
+ apiDocs: api_docs
+ })
+ }
+
+ serialize(): SerializedAPIChain {
+ return {
+ _type: this._chainType(),
+ api_answer_chain: this.apiAnswerChain.serialize(),
+ api_request_chain: this.apiRequestChain.serialize(),
+ api_docs: this.apiDocs
+ }
+ }
+
+ static fromLLMAndAPIDocs(
+ llm: BaseLanguageModel,
+ apiDocs: string,
+        options: APIChainOptions & Omit<APIChainInput, 'apiAnswerChain' | 'apiRequestChain' | 'apiDocs'> = {}
+ ): APIChain {
+ const { apiUrlPrompt = defaultApiUrlPrompt, apiResponsePrompt = defaultApiResponsePrompt } = options
+ const apiRequestChain = new LLMChain({ prompt: apiUrlPrompt, llm })
+ const apiAnswerChain = new LLMChain({ prompt: apiResponsePrompt, llm })
+ return new this({
+ apiAnswerChain,
+ apiRequestChain,
+ apiDocs,
+ ...options
+ })
+ }
+}
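The URL prompt instructs the model to emit a JSON string with `url` and `data` keys, which `_call` then parses before issuing the POST. A sketch of that parsing step (the example URL is illustrative, not from the source), showing why the prompt insists on double quotes:

```typescript
interface ApiCallSpec {
    url: string
    data: Record<string, unknown>
}

// JSON.parse accepts only double-quoted strings, hence the prompt's
// "always use double quotes" instruction to the model.
function parseApiCallSpec(llmOutput: string): ApiCallSpec {
    return JSON.parse(llmOutput) as ApiCallSpec
}

const spec = parseApiCallSpec('{"url": "https://api.example.com/search", "data": {"q": "cats"}}')
// spec.url → 'https://api.example.com/search'; spec.data.q → 'cats'
```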
diff --git a/packages/components/nodes/chains/ConversationChain/ConversationChain.ts b/packages/components/nodes/chains/ConversationChain/ConversationChain.ts
index 19e28752e..08663395e 100644
--- a/packages/components/nodes/chains/ConversationChain/ConversationChain.ts
+++ b/packages/components/nodes/chains/ConversationChain/ConversationChain.ts
@@ -1,16 +1,19 @@
-import { ICommonObject, IMessage, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { ConversationChain } from 'langchain/chains'
-import { getBaseClasses } from '../../../src/utils'
+import { getBaseClasses, mapChatHistory } from '../../../src/utils'
import { ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate } from 'langchain/prompts'
-import { BufferMemory, ChatMessageHistory } from 'langchain/memory'
+import { BufferMemory } from 'langchain/memory'
import { BaseChatModel } from 'langchain/chat_models/base'
-import { AIChatMessage, HumanChatMessage } from 'langchain/schema'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
+import { flatten } from 'lodash'
+import { Document } from 'langchain/document'
-const systemMessage = `The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.`
+let systemMessage = `The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.`
class ConversationChain_Chains implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
@@ -21,6 +24,7 @@ class ConversationChain_Chains implements INode {
constructor() {
this.label = 'Conversation Chain'
this.name = 'conversationChain'
+ this.version = 1.0
this.type = 'ConversationChain'
this.icon = 'chain.svg'
this.category = 'Chains'
@@ -37,6 +41,15 @@ class ConversationChain_Chains implements INode {
name: 'memory',
type: 'BaseMemory'
},
+ {
+ label: 'Document',
+ name: 'document',
+ type: 'Document',
+ description:
+                'Include the whole document in the context window. If you get a maximum context length error, use a model with a larger context window, such as Claude 100k or GPT-4 32k',
+ optional: true,
+ list: true
+ },
{
label: 'System Message',
name: 'systemMessagePrompt',
@@ -53,10 +66,28 @@ class ConversationChain_Chains implements INode {
const model = nodeData.inputs?.model as BaseChatModel
const memory = nodeData.inputs?.memory as BufferMemory
const prompt = nodeData.inputs?.systemMessagePrompt as string
+ const docs = nodeData.inputs?.document as Document[]
+
+ const flattenDocs = docs && docs.length ? flatten(docs) : []
+ const finalDocs = []
+ for (let i = 0; i < flattenDocs.length; i += 1) {
+ finalDocs.push(new Document(flattenDocs[i]))
+ }
+
+ let finalText = ''
+ for (let i = 0; i < finalDocs.length; i += 1) {
+ finalText += finalDocs[i].pageContent
+ }
+
+ const replaceChar: string[] = ['{', '}']
+ for (const char of replaceChar) finalText = finalText.replaceAll(char, '')
+
+ if (finalText) systemMessage = `${systemMessage}\nThe AI has the following context:\n${finalText}`
const obj: any = {
llm: model,
- memory
+ memory,
+ verbose: process.env.DEBUG === 'true' ? true : false
}
const chatPrompt = ChatPromptTemplate.fromPromptMessages([
@@ -75,22 +106,20 @@ class ConversationChain_Chains implements INode {
const memory = nodeData.inputs?.memory as BufferMemory
if (options && options.chatHistory) {
- const chatHistory = []
- const histories: IMessage[] = options.chatHistory
-
- for (const message of histories) {
- if (message.type === 'apiMessage') {
- chatHistory.push(new AIChatMessage(message.message))
- } else if (message.type === 'userMessage') {
- chatHistory.push(new HumanChatMessage(message.message))
- }
- }
- memory.chatHistory = new ChatMessageHistory(chatHistory)
+ memory.chatHistory = mapChatHistory(options)
chain.memory = memory
}
- const res = await chain.call({ input })
- return res?.response
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+
+ if (options.socketIO && options.socketIOClientId) {
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
+ const res = await chain.call({ input }, [loggerHandler, handler])
+ return res?.response
+ } else {
+ const res = await chain.call({ input }, [loggerHandler])
+ return res?.response
+ }
}
}
diff --git a/packages/components/nodes/chains/ConversationalRetrievalQAChain/ConversationalRetrievalQAChain.ts b/packages/components/nodes/chains/ConversationalRetrievalQAChain/ConversationalRetrievalQAChain.ts
index 7e37f9131..c14b292d6 100644
--- a/packages/components/nodes/chains/ConversationalRetrievalQAChain/ConversationalRetrievalQAChain.ts
+++ b/packages/components/nodes/chains/ConversationalRetrievalQAChain/ConversationalRetrievalQAChain.ts
@@ -1,26 +1,25 @@
import { BaseLanguageModel } from 'langchain/base_language'
-import { ICommonObject, IMessage, INode, INodeData, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
-import { ConversationalRetrievalQAChain } from 'langchain/chains'
-import { BaseRetriever } from 'langchain/schema'
-
-const default_qa_template = `Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
-
-{context}
-
-Question: {question}
-Helpful Answer:`
-
-const qa_template = `Use the following pieces of context to answer the question at the end.
-
-{context}
-
-Question: {question}
-Helpful Answer:`
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, mapChatHistory } from '../../../src/utils'
+import { ConversationalRetrievalQAChain, QAChainParams } from 'langchain/chains'
+import { BaseRetriever } from 'langchain/schema/retriever'
+import { BufferMemory, BufferMemoryInput } from 'langchain/memory'
+import { PromptTemplate } from 'langchain/prompts'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
+import {
+ default_map_reduce_template,
+ default_qa_template,
+ qa_template,
+ map_reduce_template,
+ CUSTOM_QUESTION_GENERATOR_CHAIN_PROMPT,
+ refine_question_template,
+ refine_template
+} from './prompts'
class ConversationalRetrievalQAChain_Chains implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
@@ -31,6 +30,7 @@ class ConversationalRetrievalQAChain_Chains implements INode {
constructor() {
this.label = 'Conversational Retrieval QA Chain'
this.name = 'conversationalRetrievalQAChain'
+ this.version = 1.0
this.type = 'ConversationalRetrievalQAChain'
this.icon = 'chain.svg'
this.category = 'Chains'
@@ -47,6 +47,19 @@ class ConversationalRetrievalQAChain_Chains implements INode {
name: 'vectorStoreRetriever',
type: 'BaseRetriever'
},
+ {
+ label: 'Memory',
+ name: 'memory',
+ type: 'BaseMemory',
+ optional: true,
+ description: 'If left empty, a default BufferMemory will be used'
+ },
+ {
+ label: 'Return Source Documents',
+ name: 'returnSourceDocuments',
+ type: 'boolean',
+ optional: true
+ },
{
label: 'System Message',
name: 'systemMessagePrompt',
@@ -56,6 +69,31 @@ class ConversationalRetrievalQAChain_Chains implements INode {
optional: true,
placeholder:
'I want you to act as a document that I am having a conversation with. Your name is "AI Assistant". You will provide me with answers from the given info. If the answer is not included, say exactly "Hmm, I am not sure." and stop after that. Refuse to answer any question not about the info. Never break character.'
+ },
+ {
+ label: 'Chain Option',
+ name: 'chainOption',
+ type: 'options',
+ options: [
+ {
+ label: 'MapReduceDocumentsChain',
+ name: 'map_reduce',
+ description:
+ 'Suitable for QA tasks over larger documents and can run the preprocessing step in parallel, reducing the running time'
+ },
+ {
+ label: 'RefineDocumentsChain',
+ name: 'refine',
+ description: 'Suitable for QA tasks over a large number of documents.'
+ },
+ {
+ label: 'StuffDocumentsChain',
+ name: 'stuff',
+ description: 'Suitable for QA tasks over a small number of documents.'
+ }
+ ],
+ additionalParams: true,
+ optional: true
}
]
}
@@ -64,35 +102,112 @@ class ConversationalRetrievalQAChain_Chains implements INode {
const model = nodeData.inputs?.model as BaseLanguageModel
const vectorStoreRetriever = nodeData.inputs?.vectorStoreRetriever as BaseRetriever
const systemMessagePrompt = nodeData.inputs?.systemMessagePrompt as string
+ const returnSourceDocuments = nodeData.inputs?.returnSourceDocuments as boolean
+ const chainOption = nodeData.inputs?.chainOption as string
+ const externalMemory = nodeData.inputs?.memory
- const chain = ConversationalRetrievalQAChain.fromLLM(model, vectorStoreRetriever, {
+ const obj: any = {
verbose: process.env.DEBUG === 'true' ? true : false,
- qaTemplate: systemMessagePrompt ? `${systemMessagePrompt}\n${qa_template}` : default_qa_template
- })
+ questionGeneratorChainOptions: {
+ template: CUSTOM_QUESTION_GENERATOR_CHAIN_PROMPT
+ }
+ }
+
+ if (returnSourceDocuments) obj.returnSourceDocuments = returnSourceDocuments
+
+ if (chainOption === 'map_reduce') {
+ obj.qaChainOptions = {
+ type: 'map_reduce',
+ combinePrompt: PromptTemplate.fromTemplate(
+ systemMessagePrompt ? `${systemMessagePrompt}\n${map_reduce_template}` : default_map_reduce_template
+ )
+ } as QAChainParams
+ } else if (chainOption === 'refine') {
+ const qprompt = new PromptTemplate({
+ inputVariables: ['context', 'question'],
+ template: refine_question_template(systemMessagePrompt)
+ })
+ const rprompt = new PromptTemplate({
+ inputVariables: ['context', 'question', 'existing_answer'],
+ template: refine_template
+ })
+ obj.qaChainOptions = {
+ type: 'refine',
+ questionPrompt: qprompt,
+ refinePrompt: rprompt
+ } as QAChainParams
+ } else {
+ obj.qaChainOptions = {
+ type: 'stuff',
+ prompt: PromptTemplate.fromTemplate(systemMessagePrompt ? `${systemMessagePrompt}\n${qa_template}` : default_qa_template)
+ } as QAChainParams
+ }
+
+ if (externalMemory) {
+ externalMemory.memoryKey = 'chat_history'
+ externalMemory.inputKey = 'question'
+ externalMemory.outputKey = 'text'
+ externalMemory.returnMessages = true
+ if (chainOption === 'refine') externalMemory.outputKey = 'output_text'
+ obj.memory = externalMemory
+ } else {
+ const fields: BufferMemoryInput = {
+ memoryKey: 'chat_history',
+ inputKey: 'question',
+ outputKey: 'text',
+ returnMessages: true
+ }
+ if (chainOption === 'refine') fields.outputKey = 'output_text'
+ obj.memory = new BufferMemory(fields)
+ }
+
+ const chain = ConversationalRetrievalQAChain.fromLLM(model, vectorStoreRetriever, obj)
return chain
}
-    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | ICommonObject> {
const chain = nodeData.instance as ConversationalRetrievalQAChain
- let chatHistory = ''
+ const returnSourceDocuments = nodeData.inputs?.returnSourceDocuments as boolean
+ const chainOption = nodeData.inputs?.chainOption as string
- if (options && options.chatHistory) {
- const histories: IMessage[] = options.chatHistory
- chatHistory = histories
- .map((item) => {
- return item.message
- })
- .join('')
+ let model = nodeData.inputs?.model
+
+ // Temporary fix: https://github.com/hwchase17/langchainjs/issues/754
+ model.streaming = false
+ chain.questionGeneratorChain.llm = model
+
+ const obj = { question: input }
+
+ if (options && options.chatHistory && chain.memory) {
+ ;(chain.memory as any).chatHistory = mapChatHistory(options)
}
- const obj = {
- question: input,
- chat_history: chatHistory ? chatHistory : []
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+
+ if (options.socketIO && options.socketIOClientId) {
+ const handler = new CustomChainHandler(
+ options.socketIO,
+ options.socketIOClientId,
+ chainOption === 'refine' ? 4 : undefined,
+ returnSourceDocuments
+ )
+ const res = await chain.call(obj, [loggerHandler, handler])
+ if (chainOption === 'refine') {
+ if (res.output_text && res.sourceDocuments) {
+ return {
+ text: res.output_text,
+ sourceDocuments: res.sourceDocuments
+ }
+ }
+ return res?.output_text
+ }
+ if (res.text && res.sourceDocuments) return res
+ return res?.text
+ } else {
+ const res = await chain.call(obj, [loggerHandler])
+ if (res.text && res.sourceDocuments) return res
+ return res?.text
}
-
- const res = await chain.call(obj)
-
- return res?.text
}
}
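A reduced sketch (hypothetical helper names; the prompt wiring from init() is omitted) of how the Chain Option input selects the LangChain QA chain type, and of the output-key difference that run() branches on for refine:

```typescript
type ChainOption = 'map_reduce' | 'refine' | 'stuff' | undefined

// init() falls through to 'stuff' when no option is chosen.
function qaChainType(chainOption: ChainOption): 'map_reduce' | 'refine' | 'stuff' {
    if (chainOption === 'map_reduce') return 'map_reduce'
    if (chainOption === 'refine') return 'refine'
    return 'stuff'
}

// RefineDocumentsChain writes its answer to `output_text`; the other chain
// types use `text`, which is why both the memory setup and run() special-case
// chainOption === 'refine'.
function answerKeyFor(chainOption: ChainOption): 'output_text' | 'text' {
    return chainOption === 'refine' ? 'output_text' : 'text'
}
```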
diff --git a/packages/components/nodes/chains/ConversationalRetrievalQAChain/prompts.ts b/packages/components/nodes/chains/ConversationalRetrievalQAChain/prompts.ts
new file mode 100644
index 000000000..132e3a97e
--- /dev/null
+++ b/packages/components/nodes/chains/ConversationalRetrievalQAChain/prompts.ts
@@ -0,0 +1,64 @@
+export const default_qa_template = `Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
+
+{context}
+
+Question: {question}
+Helpful Answer:`
+
+export const qa_template = `Use the following pieces of context to answer the question at the end.
+
+{context}
+
+Question: {question}
+Helpful Answer:`
+
+export const default_map_reduce_template = `Given the following extracted parts of a long document and a question, create a final answer.
+If you don't know the answer, just say that you don't know. Don't try to make up an answer.
+
+{summaries}
+
+Question: {question}
+Helpful Answer:`
+
+export const map_reduce_template = `Given the following extracted parts of a long document and a question, create a final answer.
+
+{summaries}
+
+Question: {question}
+Helpful Answer:`
+
+export const refine_question_template = (sysPrompt?: string) => {
+ let returnPrompt = ''
+ if (sysPrompt)
+ returnPrompt = `Context information is below.
+---------------------
+{context}
+---------------------
+Given the context information and not prior knowledge, ${sysPrompt}
+Answer the question: {question}.
+Answer:`
+ if (!sysPrompt)
+ returnPrompt = `Context information is below.
+---------------------
+{context}
+---------------------
+Given the context information and not prior knowledge, answer the question: {question}.
+Answer:`
+ return returnPrompt
+}
+
+export const refine_template = `The original question is as follows: {question}
+We have provided an existing answer: {existing_answer}
+We have the opportunity to refine the existing answer (only if needed) with some more context below.
+------------
+{context}
+------------
+Given the new context, refine the original answer to better answer the question.
+If you can't find answer from the context, return the original answer.`
+
+export const CUSTOM_QUESTION_GENERATOR_CHAIN_PROMPT = `Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in the same language as the follow up question.
+
+Chat History:
+{chat_history}
+Follow Up Input: {question}
+Standalone question:`
diff --git a/packages/components/nodes/chains/LLMChain/LLMChain.ts b/packages/components/nodes/chains/LLMChain/LLMChain.ts
index b178e28df..5088b34d4 100644
--- a/packages/components/nodes/chains/LLMChain/LLMChain.ts
+++ b/packages/components/nodes/chains/LLMChain/LLMChain.ts
@@ -1,11 +1,13 @@
import { ICommonObject, INode, INodeData, INodeOutputsValue, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
+import { getBaseClasses, handleEscapeCharacters } from '../../../src/utils'
import { LLMChain } from 'langchain/chains'
import { BaseLanguageModel } from 'langchain/base_language'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
class LLMChain_Chains implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
@@ -17,6 +19,7 @@ class LLMChain_Chains implements INode {
constructor() {
this.label = 'LLM Chain'
this.name = 'llmChain'
+ this.version = 1.0
this.type = 'LLMChain'
this.icon = 'chain.svg'
this.category = 'Chains'
@@ -50,12 +53,12 @@ class LLMChain_Chains implements INode {
{
label: 'Output Prediction',
name: 'outputPrediction',
- baseClasses: ['string']
+ baseClasses: ['string', 'json']
}
]
}
-    async init(nodeData: INodeData, input: string): Promise<any> {
+    async init(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
const model = nodeData.inputs?.model as BaseLanguageModel
const prompt = nodeData.inputs?.prompt
const output = nodeData.outputs?.output as string
@@ -67,21 +70,25 @@ class LLMChain_Chains implements INode {
} else if (output === 'outputPrediction') {
const chain = new LLMChain({ llm: model, prompt, verbose: process.env.DEBUG === 'true' ? true : false })
const inputVariables = chain.prompt.inputVariables as string[] // ["product"]
- const res = await runPrediction(inputVariables, chain, input, promptValues)
+ const res = await runPrediction(inputVariables, chain, input, promptValues, options)
// eslint-disable-next-line no-console
console.log('\x1b[92m\x1b[1m\n*****OUTPUT PREDICTION*****\n\x1b[0m\x1b[0m')
// eslint-disable-next-line no-console
console.log(res)
- return res
+ /**
+ * Apply string transformation to convert special chars:
+ * FROM: hello i am ben\n\n\thow are you?
+ * TO: hello i am benFLOWISE_NEWLINEFLOWISE_NEWLINEFLOWISE_TABhow are you?
+ */
+ return handleEscapeCharacters(res, false)
}
}
-    async run(nodeData: INodeData, input: string): Promise<string> {
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
const inputVariables = nodeData.instance.prompt.inputVariables as string[] // ["product"]
const chain = nodeData.instance as LLMChain
const promptValues = nodeData.inputs?.prompt.promptValues as ICommonObject
-
- const res = await runPrediction(inputVariables, chain, input, promptValues)
+ const res = await runPrediction(inputVariables, chain, input, promptValues, options)
// eslint-disable-next-line no-console
console.log('\x1b[93m\x1b[1m\n*****FINAL RESULT*****\n\x1b[0m\x1b[0m')
// eslint-disable-next-line no-console
@@ -90,11 +97,26 @@ class LLMChain_Chains implements INode {
}
}
-const runPrediction = async (inputVariables: string[], chain: LLMChain, input: string, promptValues: ICommonObject) => {
- if (inputVariables.length === 1) {
- const res = await chain.run(input)
- return res
- } else if (inputVariables.length > 1) {
+const runPrediction = async (
+ inputVariables: string[],
+ chain: LLMChain,
+ input: string,
+ promptValuesRaw: ICommonObject,
+ options: ICommonObject
+) => {
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const isStreaming = options.socketIO && options.socketIOClientId
+ const socketIO = isStreaming ? options.socketIO : undefined
+ const socketIOClientId = isStreaming ? options.socketIOClientId : ''
+
+ /**
+ * Apply string transformation to reverse converted special chars:
+ * FROM: { "value": "hello i am benFLOWISE_NEWLINEFLOWISE_NEWLINEFLOWISE_TABhow are you?" }
+ * TO: { "value": "hello i am ben\n\n\thow are you?" }
+ */
+ const promptValues = handleEscapeCharacters(promptValuesRaw, true)
+
+ if (promptValues && inputVariables.length > 0) {
let seen: string[] = []
for (const variable of inputVariables) {
@@ -106,11 +128,15 @@ const runPrediction = async (inputVariables: string[], chain: LLMChain, input: s
if (seen.length === 0) {
// All inputVariables have fixed values specified
- const options = {
- ...promptValues
+ const options = { ...promptValues }
+ if (isStreaming) {
+ const handler = new CustomChainHandler(socketIO, socketIOClientId)
+ const res = await chain.call(options, [loggerHandler, handler])
+ return res?.text
+ } else {
+ const res = await chain.call(options, [loggerHandler])
+ return res?.text
}
- const res = await chain.call(options)
- return res?.text
} else if (seen.length === 1) {
// If one inputVariable is not specify, use input (user's question) as value
const lastValue = seen.pop()
@@ -119,14 +145,26 @@ const runPrediction = async (inputVariables: string[], chain: LLMChain, input: s
...promptValues,
[lastValue]: input
}
- const res = await chain.call(options)
- return res?.text
+ if (isStreaming) {
+ const handler = new CustomChainHandler(socketIO, socketIOClientId)
+ const res = await chain.call(options, [loggerHandler, handler])
+ return res?.text
+ } else {
+ const res = await chain.call(options, [loggerHandler])
+ return res?.text
+ }
} else {
throw new Error(`Please provide Prompt Values for: ${seen.join(', ')}`)
}
} else {
- const res = await chain.run(input)
- return res
+ if (isStreaming) {
+ const handler = new CustomChainHandler(socketIO, socketIOClientId)
+ const res = await chain.run(input, [loggerHandler, handler])
+ return res
+ } else {
+ const res = await chain.run(input, [loggerHandler])
+ return res
+ }
}
}
diff --git a/packages/components/nodes/chains/MultiPromptChain/MultiPromptChain.ts b/packages/components/nodes/chains/MultiPromptChain/MultiPromptChain.ts
new file mode 100644
index 000000000..0d1377143
--- /dev/null
+++ b/packages/components/nodes/chains/MultiPromptChain/MultiPromptChain.ts
@@ -0,0 +1,82 @@
+import { BaseLanguageModel } from 'langchain/base_language'
+import { ICommonObject, INode, INodeData, INodeParams, PromptRetriever } from '../../../src/Interface'
+import { getBaseClasses } from '../../../src/utils'
+import { MultiPromptChain } from 'langchain/chains'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
+
+class MultiPromptChain_Chains implements INode {
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Multi Prompt Chain'
+ this.name = 'multiPromptChain'
+ this.version = 1.0
+ this.type = 'MultiPromptChain'
+ this.icon = 'chain.svg'
+ this.category = 'Chains'
+ this.description = 'Chain automatically picks an appropriate prompt from multiple prompt templates'
+ this.baseClasses = [this.type, ...getBaseClasses(MultiPromptChain)]
+ this.inputs = [
+ {
+ label: 'Language Model',
+ name: 'model',
+ type: 'BaseLanguageModel'
+ },
+ {
+ label: 'Prompt Retriever',
+ name: 'promptRetriever',
+ type: 'PromptRetriever',
+ list: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData): Promise<any> {
+ const model = nodeData.inputs?.model as BaseLanguageModel
+ const promptRetriever = nodeData.inputs?.promptRetriever as PromptRetriever[]
+ const promptNames = []
+ const promptDescriptions = []
+ const promptTemplates = []
+
+ for (const prompt of promptRetriever) {
+ promptNames.push(prompt.name)
+ promptDescriptions.push(prompt.description)
+ promptTemplates.push(prompt.systemMessage)
+ }
+
+ const chain = MultiPromptChain.fromLLMAndPrompts(model, {
+ promptNames,
+ promptDescriptions,
+ promptTemplates,
+ llmChainOpts: { verbose: process.env.DEBUG === 'true' ? true : false }
+ })
+
+ return chain
+ }
+
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
+ const chain = nodeData.instance as MultiPromptChain
+ const obj = { input }
+
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+
+ if (options.socketIO && options.socketIOClientId) {
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId, 2)
+ const res = await chain.call(obj, [loggerHandler, handler])
+ return res?.text
+ } else {
+ const res = await chain.call(obj, [loggerHandler])
+ return res?.text
+ }
+ }
+}
+
+module.exports = { nodeClass: MultiPromptChain_Chains }
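The node's `init()` flattens the list of connected prompt retrievers into the three parallel arrays that `MultiPromptChain.fromLLMAndPrompts` expects. A minimal self-contained sketch of that aggregation, using a hypothetical `PromptRetrieverLike` shape that mirrors the fields read above (`name`, `description`, `systemMessage`):

```typescript
// Hypothetical shape mirroring the PromptRetriever fields used by the node
interface PromptRetrieverLike {
    name: string
    description: string
    systemMessage: string
}

// Flatten connected retrievers into the parallel arrays
// MultiPromptChain.fromLLMAndPrompts consumes
function toPromptArrays(retrievers: PromptRetrieverLike[]) {
    const promptNames: string[] = []
    const promptDescriptions: string[] = []
    const promptTemplates: string[] = []
    for (const p of retrievers) {
        promptNames.push(p.name)
        promptDescriptions.push(p.description)
        promptTemplates.push(p.systemMessage)
    }
    return { promptNames, promptDescriptions, promptTemplates }
}

const { promptNames } = toPromptArrays([
    { name: 'physics', description: 'Good for physics questions', systemMessage: 'You are a physicist.' },
    { name: 'history', description: 'Good for history questions', systemMessage: 'You are a historian.' }
])
// promptNames is ['physics', 'history']
```

The router then picks among `promptNames` at runtime based on `promptDescriptions`, so the three arrays must stay index-aligned.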
diff --git a/packages/components/nodes/chains/MultiPromptChain/chain.svg b/packages/components/nodes/chains/MultiPromptChain/chain.svg
new file mode 100644
index 000000000..a5b32f90a
--- /dev/null
+++ b/packages/components/nodes/chains/MultiPromptChain/chain.svg
@@ -0,0 +1,6 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/chains/MultiRetrievalQAChain/MultiRetrievalQAChain.ts b/packages/components/nodes/chains/MultiRetrievalQAChain/MultiRetrievalQAChain.ts
new file mode 100644
index 000000000..6d1506475
--- /dev/null
+++ b/packages/components/nodes/chains/MultiRetrievalQAChain/MultiRetrievalQAChain.ts
@@ -0,0 +1,92 @@
+import { BaseLanguageModel } from 'langchain/base_language'
+import { ICommonObject, INode, INodeData, INodeParams, VectorStoreRetriever } from '../../../src/Interface'
+import { getBaseClasses } from '../../../src/utils'
+import { MultiRetrievalQAChain } from 'langchain/chains'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
+
+class MultiRetrievalQAChain_Chains implements INode {
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Multi Retrieval QA Chain'
+ this.name = 'multiRetrievalQAChain'
+ this.version = 1.0
+ this.type = 'MultiRetrievalQAChain'
+ this.icon = 'chain.svg'
+ this.category = 'Chains'
+ this.description = 'QA Chain that automatically picks an appropriate vector store from multiple retrievers'
+ this.baseClasses = [this.type, ...getBaseClasses(MultiRetrievalQAChain)]
+ this.inputs = [
+ {
+ label: 'Language Model',
+ name: 'model',
+ type: 'BaseLanguageModel'
+ },
+ {
+ label: 'Vector Store Retriever',
+ name: 'vectorStoreRetriever',
+ type: 'VectorStoreRetriever',
+ list: true
+ },
+ {
+ label: 'Return Source Documents',
+ name: 'returnSourceDocuments',
+ type: 'boolean',
+ optional: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData): Promise<any> {
+ const model = nodeData.inputs?.model as BaseLanguageModel
+ const vectorStoreRetriever = nodeData.inputs?.vectorStoreRetriever as VectorStoreRetriever[]
+ const returnSourceDocuments = nodeData.inputs?.returnSourceDocuments as boolean
+
+ const retrieverNames = []
+ const retrieverDescriptions = []
+ const retrievers = []
+
+ for (const vs of vectorStoreRetriever) {
+ retrieverNames.push(vs.name)
+ retrieverDescriptions.push(vs.description)
+ retrievers.push(vs.vectorStore.asRetriever((vs.vectorStore as any).k ?? 4))
+ }
+
+ const chain = MultiRetrievalQAChain.fromLLMAndRetrievers(model, {
+ retrieverNames,
+ retrieverDescriptions,
+ retrievers,
+ retrievalQAChainOpts: { verbose: process.env.DEBUG === 'true' ? true : false, returnSourceDocuments }
+ })
+ return chain
+ }
+
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | ICommonObject> {
+ const chain = nodeData.instance as MultiRetrievalQAChain
+ const returnSourceDocuments = nodeData.inputs?.returnSourceDocuments as boolean
+
+ const obj = { input }
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+
+ if (options.socketIO && options.socketIOClientId) {
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId, 2, returnSourceDocuments)
+ const res = await chain.call(obj, [loggerHandler, handler])
+ if (res.text && res.sourceDocuments) return res
+ return res?.text
+ } else {
+ const res = await chain.call(obj, [loggerHandler])
+ if (res.text && res.sourceDocuments) return res
+ return res?.text
+ }
+ }
+}
+
+module.exports = { nodeClass: MultiRetrievalQAChain_Chains }
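The `run()` method above returns two different shapes: the full result object when source documents were requested and present, otherwise just the answer text. A small sketch of that branching, with a hypothetical `ChainResult` shape standing in for the object `chain.call()` resolves to:

```typescript
// Hypothetical result shape standing in for what chain.call() resolves to
interface ChainResult {
    text?: string
    sourceDocuments?: unknown[]
}

// Mirrors the return branching in run(): pass the whole object through when
// source documents are attached, otherwise return only the answer text
function shapeResponse(res: ChainResult): ChainResult | string | undefined {
    if (res.text && res.sourceDocuments) return res
    return res?.text
}
```

Callers therefore have to check whether they received a string or an object before reading `sourceDocuments`.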
diff --git a/packages/components/nodes/chains/MultiRetrievalQAChain/chain.svg b/packages/components/nodes/chains/MultiRetrievalQAChain/chain.svg
new file mode 100644
index 000000000..a5b32f90a
--- /dev/null
+++ b/packages/components/nodes/chains/MultiRetrievalQAChain/chain.svg
@@ -0,0 +1,6 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/chains/RetrievalQAChain/RetrievalQAChain.ts b/packages/components/nodes/chains/RetrievalQAChain/RetrievalQAChain.ts
index c002b6848..935866ca2 100644
--- a/packages/components/nodes/chains/RetrievalQAChain/RetrievalQAChain.ts
+++ b/packages/components/nodes/chains/RetrievalQAChain/RetrievalQAChain.ts
@@ -1,12 +1,14 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { RetrievalQAChain } from 'langchain/chains'
-import { BaseRetriever } from 'langchain/schema'
+import { BaseRetriever } from 'langchain/schema/retriever'
import { getBaseClasses } from '../../../src/utils'
import { BaseLanguageModel } from 'langchain/base_language'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
class RetrievalQAChain_Chains implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
@@ -17,6 +19,7 @@ class RetrievalQAChain_Chains implements INode {
constructor() {
this.label = 'Retrieval QA Chain'
this.name = 'retrievalQAChain'
+ this.version = 1.0
this.type = 'RetrievalQAChain'
this.icon = 'chain.svg'
this.category = 'Chains'
@@ -44,13 +47,21 @@ class RetrievalQAChain_Chains implements INode {
return chain
}
-    async run(nodeData: INodeData, input: string): Promise<string> {
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
const chain = nodeData.instance as RetrievalQAChain
const obj = {
query: input
}
- const res = await chain.call(obj)
- return res?.text
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+
+ if (options.socketIO && options.socketIOClientId) {
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
+ const res = await chain.call(obj, [loggerHandler, handler])
+ return res?.text
+ } else {
+ const res = await chain.call(obj, [loggerHandler])
+ return res?.text
+ }
}
}
diff --git a/packages/components/nodes/chains/SqlDatabaseChain/SqlDatabaseChain.ts b/packages/components/nodes/chains/SqlDatabaseChain/SqlDatabaseChain.ts
index 7ea10d941..2a0c71cf2 100644
--- a/packages/components/nodes/chains/SqlDatabaseChain/SqlDatabaseChain.ts
+++ b/packages/components/nodes/chains/SqlDatabaseChain/SqlDatabaseChain.ts
@@ -1,13 +1,18 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
-import { SqlDatabaseChain, SqlDatabaseChainInput } from 'langchain/chains'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { SqlDatabaseChain, SqlDatabaseChainInput } from 'langchain/chains/sql_db'
import { getBaseClasses } from '../../../src/utils'
import { DataSource } from 'typeorm'
import { SqlDatabase } from 'langchain/sql_db'
import { BaseLanguageModel } from 'langchain/base_language'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
+import { DataSourceOptions } from 'typeorm/data-source'
+
+type DatabaseType = 'sqlite' | 'postgres' | 'mssql' | 'mysql'
class SqlDatabaseChain_Chains implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
@@ -18,6 +23,7 @@ class SqlDatabaseChain_Chains implements INode {
constructor() {
this.label = 'Sql Database Chain'
this.name = 'sqlDatabaseChain'
+ this.version = 1.0
this.type = 'SqlDatabaseChain'
this.icon = 'sqlchain.svg'
this.category = 'Chains'
@@ -35,46 +41,73 @@ class SqlDatabaseChain_Chains implements INode {
type: 'options',
options: [
{
- label: 'SQlite',
+ label: 'SQLite',
name: 'sqlite'
+ },
+ {
+ label: 'PostgreSQL',
+ name: 'postgres'
+ },
+ {
+ label: 'MSSQL',
+ name: 'mssql'
+ },
+ {
+ label: 'MySQL',
+ name: 'mysql'
}
],
default: 'sqlite'
},
{
- label: 'Database File Path',
- name: 'dbFilePath',
+ label: 'Connection string or file path (sqlite only)',
+ name: 'url',
type: 'string',
- placeholder: 'C:/Users/chinook.db'
+            placeholder: '127.0.0.1:5432/chinook'
}
]
}
    async init(nodeData: INodeData): Promise<any> {
- const databaseType = nodeData.inputs?.database as 'sqlite'
+ const databaseType = nodeData.inputs?.database as DatabaseType
const model = nodeData.inputs?.model as BaseLanguageModel
- const dbFilePath = nodeData.inputs?.dbFilePath
+ const url = nodeData.inputs?.url
- const chain = await getSQLDBChain(databaseType, dbFilePath, model)
+ const chain = await getSQLDBChain(databaseType, url, model)
return chain
}
-    async run(nodeData: INodeData, input: string): Promise<string> {
-        const databaseType = nodeData.inputs?.database as 'sqlite'
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
+ const databaseType = nodeData.inputs?.database as DatabaseType
const model = nodeData.inputs?.model as BaseLanguageModel
- const dbFilePath = nodeData.inputs?.dbFilePath
+ const url = nodeData.inputs?.url
- const chain = await getSQLDBChain(databaseType, dbFilePath, model)
- const res = await chain.run(input)
- return res
+ const chain = await getSQLDBChain(databaseType, url, model)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+
+ if (options.socketIO && options.socketIOClientId) {
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId, 2)
+ const res = await chain.run(input, [loggerHandler, handler])
+ return res
+ } else {
+ const res = await chain.run(input, [loggerHandler])
+ return res
+ }
}
}
-const getSQLDBChain = async (databaseType: 'sqlite', dbFilePath: string, llm: BaseLanguageModel) => {
- const datasource = new DataSource({
- type: databaseType,
- database: dbFilePath
- })
+const getSQLDBChain = async (databaseType: DatabaseType, url: string, llm: BaseLanguageModel) => {
+ const datasource = new DataSource(
+ databaseType === 'sqlite'
+ ? {
+ type: databaseType,
+ database: url
+ }
+ : ({
+ type: databaseType,
+ url: url
+ } as DataSourceOptions)
+ )
const db = await SqlDatabase.fromDataSourceParams({
appDataSource: datasource
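The `getSQLDBChain` change above hinges on one detail of TypeORM's options: SQLite takes a file path in `database`, while the server-based drivers (postgres, mssql, mysql) take a connection string in `url`. A self-contained sketch of that branching, with a hypothetical `buildDataSourceOptions` helper (plain objects here, not the real `DataSourceOptions` union):

```typescript
// Same union as the node defines above
type DatabaseType = 'sqlite' | 'postgres' | 'mssql' | 'mysql'

// Hypothetical helper mirroring the ternary passed to new DataSource():
// sqlite gets a file path in `database`, everything else a connection `url`
function buildDataSourceOptions(databaseType: DatabaseType, url: string): Record<string, string> {
    return databaseType === 'sqlite'
        ? { type: databaseType, database: url }
        : { type: databaseType, url }
}
```

This is why the single input field is labeled "Connection string or file path (sqlite only)": the same string is routed to a different option key depending on the selected driver.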
diff --git a/packages/components/nodes/chains/VectorDBQAChain/VectorDBQAChain.ts b/packages/components/nodes/chains/VectorDBQAChain/VectorDBQAChain.ts
index 37d388b4a..038116827 100644
--- a/packages/components/nodes/chains/VectorDBQAChain/VectorDBQAChain.ts
+++ b/packages/components/nodes/chains/VectorDBQAChain/VectorDBQAChain.ts
@@ -1,12 +1,14 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import { VectorDBQAChain } from 'langchain/chains'
import { BaseLanguageModel } from 'langchain/base_language'
import { VectorStore } from 'langchain/vectorstores'
+import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
class VectorDBQAChain_Chains implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
@@ -17,6 +19,7 @@ class VectorDBQAChain_Chains implements INode {
constructor() {
this.label = 'VectorDB QA Chain'
this.name = 'vectorDBQAChain'
+ this.version = 1.0
this.type = 'VectorDBQAChain'
this.icon = 'chain.svg'
this.category = 'Chains'
@@ -40,17 +43,29 @@ class VectorDBQAChain_Chains implements INode {
const model = nodeData.inputs?.model as BaseLanguageModel
const vectorStore = nodeData.inputs?.vectorStore as VectorStore
- const chain = VectorDBQAChain.fromLLM(model, vectorStore, { verbose: process.env.DEBUG === 'true' ? true : false })
+ const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
+ k: (vectorStore as any)?.k ?? 4,
+ verbose: process.env.DEBUG === 'true' ? true : false
+ })
return chain
}
-    async run(nodeData: INodeData, input: string): Promise<string> {
+    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
const chain = nodeData.instance as VectorDBQAChain
const obj = {
query: input
}
- const res = await chain.call(obj)
- return res?.text
+
+ const loggerHandler = new ConsoleCallbackHandler(options.logger)
+
+ if (options.socketIO && options.socketIOClientId) {
+ const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
+ const res = await chain.call(obj, [loggerHandler, handler])
+ return res?.text
+ } else {
+ const res = await chain.call(obj, [loggerHandler])
+ return res?.text
+ }
}
}
diff --git a/packages/components/nodes/chatmodels/AzureChatOpenAI/Azure.svg b/packages/components/nodes/chatmodels/AzureChatOpenAI/Azure.svg
index 51eb62535..47ad8c440 100644
--- a/packages/components/nodes/chatmodels/AzureChatOpenAI/Azure.svg
+++ b/packages/components/nodes/chatmodels/AzureChatOpenAI/Azure.svg
@@ -1,5 +1 @@
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/packages/components/nodes/chatmodels/AzureChatOpenAI/AzureChatOpenAI.ts b/packages/components/nodes/chatmodels/AzureChatOpenAI/AzureChatOpenAI.ts
index 1d2fabc76..90f430f04 100644
--- a/packages/components/nodes/chatmodels/AzureChatOpenAI/AzureChatOpenAI.ts
+++ b/packages/components/nodes/chatmodels/AzureChatOpenAI/AzureChatOpenAI.ts
@@ -1,32 +1,36 @@
import { OpenAIBaseInput } from 'langchain/dist/types/openai-types'
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { AzureOpenAIInput, ChatOpenAI } from 'langchain/chat_models/openai'
class AzureChatOpenAI_ChatModels implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
+ credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Azure ChatOpenAI'
this.name = 'azureChatOpenAI'
+ this.version = 1.0
this.type = 'AzureChatOpenAI'
this.icon = 'Azure.svg'
this.category = 'Chat Models'
this.description = 'Wrapper around Azure OpenAI large language models that use the Chat endpoint'
this.baseClasses = [this.type, ...getBaseClasses(ChatOpenAI)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['azureOpenAIApi']
+ }
this.inputs = [
- {
- label: 'Azure OpenAI Api Key',
- name: 'azureOpenAIApiKey',
- type: 'password'
- },
{
label: 'Model Name',
name: 'modelName',
@@ -43,6 +47,10 @@ class AzureChatOpenAI_ChatModels implements INode {
{
label: 'gpt-35-turbo',
name: 'gpt-35-turbo'
+ },
+ {
+ label: 'gpt-35-turbo-16k',
+ name: 'gpt-35-turbo-16k'
}
],
default: 'gpt-35-turbo',
@@ -52,37 +60,15 @@ class AzureChatOpenAI_ChatModels implements INode {
label: 'Temperature',
name: 'temperature',
type: 'number',
+ step: 0.1,
default: 0.9,
optional: true
},
- {
- label: 'Azure OpenAI Api Instance Name',
- name: 'azureOpenAIApiInstanceName',
- type: 'string',
- placeholder: 'YOUR-INSTANCE-NAME'
- },
- {
- label: 'Azure OpenAI Api Deployment Name',
- name: 'azureOpenAIApiDeploymentName',
- type: 'string',
- placeholder: 'YOUR-DEPLOYMENT-NAME'
- },
- {
- label: 'Azure OpenAI Api Version',
- name: 'azureOpenAIApiVersion',
- type: 'options',
- options: [
- {
- label: '2023-03-15-preview',
- name: '2023-03-15-preview'
- }
- ],
- default: '2023-03-15-preview'
- },
{
label: 'Max Tokens',
name: 'maxTokens',
type: 'number',
+ step: 1,
optional: true,
additionalParams: true
},
@@ -90,6 +76,7 @@ class AzureChatOpenAI_ChatModels implements INode {
label: 'Frequency Penalty',
name: 'frequencyPenalty',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -97,6 +84,7 @@ class AzureChatOpenAI_ChatModels implements INode {
label: 'Presence Penalty',
name: 'presencePenalty',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -104,36 +92,41 @@ class AzureChatOpenAI_ChatModels implements INode {
label: 'Timeout',
name: 'timeout',
type: 'number',
+ step: 1,
optional: true,
additionalParams: true
}
]
}
-    async init(nodeData: INodeData): Promise<any> {
-        const azureOpenAIApiKey = nodeData.inputs?.azureOpenAIApiKey as string
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const modelName = nodeData.inputs?.modelName as string
const temperature = nodeData.inputs?.temperature as string
- const azureOpenAIApiInstanceName = nodeData.inputs?.azureOpenAIApiInstanceName as string
- const azureOpenAIApiDeploymentName = nodeData.inputs?.azureOpenAIApiDeploymentName as string
- const azureOpenAIApiVersion = nodeData.inputs?.azureOpenAIApiVersion as string
const maxTokens = nodeData.inputs?.maxTokens as string
const frequencyPenalty = nodeData.inputs?.frequencyPenalty as string
const presencePenalty = nodeData.inputs?.presencePenalty as string
const timeout = nodeData.inputs?.timeout as string
+ const streaming = nodeData.inputs?.streaming as boolean
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const azureOpenAIApiKey = getCredentialParam('azureOpenAIApiKey', credentialData, nodeData)
+ const azureOpenAIApiInstanceName = getCredentialParam('azureOpenAIApiInstanceName', credentialData, nodeData)
+ const azureOpenAIApiDeploymentName = getCredentialParam('azureOpenAIApiDeploymentName', credentialData, nodeData)
+ const azureOpenAIApiVersion = getCredentialParam('azureOpenAIApiVersion', credentialData, nodeData)
        const obj: Partial<AzureOpenAIInput> & Partial<OpenAIBaseInput> = {
- temperature: parseInt(temperature, 10),
+ temperature: parseFloat(temperature),
modelName,
azureOpenAIApiKey,
azureOpenAIApiInstanceName,
azureOpenAIApiDeploymentName,
- azureOpenAIApiVersion
+ azureOpenAIApiVersion,
+ streaming: streaming ?? true
}
if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
- if (frequencyPenalty) obj.frequencyPenalty = parseInt(frequencyPenalty, 10)
- if (presencePenalty) obj.presencePenalty = parseInt(presencePenalty, 10)
+ if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
+ if (presencePenalty) obj.presencePenalty = parseFloat(presencePenalty)
if (timeout) obj.timeout = parseInt(timeout, 10)
const model = new ChatOpenAI(obj)
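The `parseInt` → `parseFloat` switch above is a real bug fix, not a style change: `parseInt` truncates at the decimal point, so any fractional temperature or penalty silently collapsed to 0. A quick demonstration (variable names here are illustrative):

```typescript
// parseInt stops parsing at the decimal point, so fractional
// inputs collapse to 0 — the bug the diff fixes
const asInt = parseInt('0.9', 10)   // 0
const asFloat = parseFloat('0.9')   // 0.9

// Genuinely integer-valued parameters such as maxTokens and timeout
// correctly stay on parseInt
const maxTokens = parseInt('256', 10)   // 256
```

The same reasoning explains why `maxTokens` and `timeout` keep `parseInt` while `temperature`, `frequencyPenalty`, and `presencePenalty` move to `parseFloat`.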
diff --git a/packages/components/nodes/chatmodels/ChatAnthropic/ChatAnthropic.ts b/packages/components/nodes/chatmodels/ChatAnthropic/ChatAnthropic.ts
index b13339ad4..12a33d994 100644
--- a/packages/components/nodes/chatmodels/ChatAnthropic/ChatAnthropic.ts
+++ b/packages/components/nodes/chatmodels/ChatAnthropic/ChatAnthropic.ts
@@ -1,36 +1,50 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { AnthropicInput, ChatAnthropic } from 'langchain/chat_models/anthropic'
class ChatAnthropic_ChatModels implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
+ credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'ChatAnthropic'
this.name = 'chatAnthropic'
+ this.version = 1.0
this.type = 'ChatAnthropic'
this.icon = 'chatAnthropic.png'
this.category = 'Chat Models'
this.description = 'Wrapper around ChatAnthropic large language models that use the Chat endpoint'
this.baseClasses = [this.type, ...getBaseClasses(ChatAnthropic)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['anthropicApi']
+ }
this.inputs = [
- {
- label: 'ChatAnthropic Api Key',
- name: 'anthropicApiKey',
- type: 'password'
- },
{
label: 'Model Name',
name: 'modelName',
type: 'options',
options: [
+ {
+ label: 'claude-2',
+ name: 'claude-2',
+                    description: 'Latest major version of Claude 2; automatically receives model updates as they are released'
+ },
+ {
+ label: 'claude-instant-1',
+ name: 'claude-instant-1',
+                    description: 'Latest major version of Claude Instant; automatically receives model updates as they are released'
+ },
{
label: 'claude-v1',
name: 'claude-v1'
@@ -76,13 +90,14 @@ class ChatAnthropic_ChatModels implements INode {
name: 'claude-instant-v1.1-100k'
}
],
- default: 'claude-v1',
+ default: 'claude-2',
optional: true
},
{
label: 'Temperature',
name: 'temperature',
type: 'number',
+ step: 0.1,
default: 0.9,
optional: true
},
@@ -90,6 +105,7 @@ class ChatAnthropic_ChatModels implements INode {
label: 'Max Tokens',
name: 'maxTokensToSample',
type: 'number',
+ step: 1,
optional: true,
additionalParams: true
},
@@ -97,6 +113,7 @@ class ChatAnthropic_ChatModels implements INode {
label: 'Top P',
name: 'topP',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -104,29 +121,34 @@ class ChatAnthropic_ChatModels implements INode {
label: 'Top K',
name: 'topK',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
}
]
}
-    async init(nodeData: INodeData): Promise<any> {
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const temperature = nodeData.inputs?.temperature as string
const modelName = nodeData.inputs?.modelName as string
- const anthropicApiKey = nodeData.inputs?.anthropicApiKey as string
const maxTokensToSample = nodeData.inputs?.maxTokensToSample as string
const topP = nodeData.inputs?.topP as string
const topK = nodeData.inputs?.topK as string
+ const streaming = nodeData.inputs?.streaming as boolean
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const anthropicApiKey = getCredentialParam('anthropicApiKey', credentialData, nodeData)
        const obj: Partial<AnthropicInput> & { anthropicApiKey?: string } = {
- temperature: parseInt(temperature, 10),
+ temperature: parseFloat(temperature),
modelName,
- anthropicApiKey
+ anthropicApiKey,
+ streaming: streaming ?? true
}
if (maxTokensToSample) obj.maxTokensToSample = parseInt(maxTokensToSample, 10)
- if (topP) obj.topP = parseInt(topP, 10)
- if (topK) obj.topK = parseInt(topK, 10)
+ if (topP) obj.topP = parseFloat(topP)
+ if (topK) obj.topK = parseFloat(topK)
const model = new ChatAnthropic(obj)
return model
diff --git a/packages/components/nodes/chatmodels/ChatHuggingFace/ChatHuggingFace.ts b/packages/components/nodes/chatmodels/ChatHuggingFace/ChatHuggingFace.ts
new file mode 100644
index 000000000..ee55c7bb9
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatHuggingFace/ChatHuggingFace.ts
@@ -0,0 +1,126 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { HFInput, HuggingFaceInference } from './core'
+
+class ChatHuggingFace_ChatModels implements INode {
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ description: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'ChatHuggingFace'
+ this.name = 'chatHuggingFace'
+ this.version = 1.0
+ this.type = 'ChatHuggingFace'
+ this.icon = 'huggingface.png'
+ this.category = 'Chat Models'
+ this.description = 'Wrapper around HuggingFace large language models'
+ this.baseClasses = [this.type, 'BaseChatModel', ...getBaseClasses(HuggingFaceInference)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['huggingFaceApi']
+ }
+ this.inputs = [
+ {
+ label: 'Model',
+ name: 'model',
+ type: 'string',
+                description: 'If using your own inference endpoint, leave this blank',
+ placeholder: 'gpt2',
+ optional: true
+ },
+ {
+ label: 'Endpoint',
+ name: 'endpoint',
+ type: 'string',
+ placeholder: 'https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2',
+ description: 'Using your own inference endpoint',
+ optional: true
+ },
+ {
+ label: 'Temperature',
+ name: 'temperature',
+ type: 'number',
+ step: 0.1,
+                description: 'Temperature parameter may not apply to certain models. Please check available model parameters',
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Max Tokens',
+ name: 'maxTokens',
+ type: 'number',
+ step: 1,
+                description: 'Max Tokens parameter may not apply to certain models. Please check available model parameters',
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Top Probability',
+ name: 'topP',
+ type: 'number',
+ step: 0.1,
+                description: 'Top Probability parameter may not apply to certain models. Please check available model parameters',
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Top K',
+ name: 'hfTopK',
+ type: 'number',
+ step: 0.1,
+                description: 'Top K parameter may not apply to certain models. Please check available model parameters',
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Frequency Penalty',
+ name: 'frequencyPenalty',
+ type: 'number',
+ step: 0.1,
+                description: 'Frequency Penalty parameter may not apply to certain models. Please check available model parameters',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const model = nodeData.inputs?.model as string
+ const temperature = nodeData.inputs?.temperature as string
+ const maxTokens = nodeData.inputs?.maxTokens as string
+ const topP = nodeData.inputs?.topP as string
+ const hfTopK = nodeData.inputs?.hfTopK as string
+ const frequencyPenalty = nodeData.inputs?.frequencyPenalty as string
+ const endpoint = nodeData.inputs?.endpoint as string
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const huggingFaceApiKey = getCredentialParam('huggingFaceApiKey', credentialData, nodeData)
+
+        const obj: Partial<HFInput> = {
+ model,
+ apiKey: huggingFaceApiKey
+ }
+
+ if (temperature) obj.temperature = parseFloat(temperature)
+ if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
+ if (topP) obj.topP = parseFloat(topP)
+ if (hfTopK) obj.topK = parseFloat(hfTopK)
+ if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
+ if (endpoint) obj.endpoint = endpoint
+
+ const huggingFace = new HuggingFaceInference(obj)
+ return huggingFace
+ }
+}
+
+module.exports = { nodeClass: ChatHuggingFace_ChatModels }
diff --git a/packages/components/nodes/chatmodels/ChatHuggingFace/core.ts b/packages/components/nodes/chatmodels/ChatHuggingFace/core.ts
new file mode 100644
index 000000000..416567f0d
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatHuggingFace/core.ts
@@ -0,0 +1,113 @@
+import { getEnvironmentVariable } from '../../../src/utils'
+import { LLM, BaseLLMParams } from 'langchain/llms/base'
+
+export interface HFInput {
+ /** Model to use */
+ model: string
+
+ /** Sampling temperature to use */
+ temperature?: number
+
+ /**
+ * Maximum number of tokens to generate in the completion.
+ */
+ maxTokens?: number
+
+ /** Total probability mass of tokens to consider at each step */
+ topP?: number
+
+ /** Integer to define the top tokens considered within the sample operation to create new text. */
+ topK?: number
+
+ /** Penalizes repeated tokens according to frequency */
+ frequencyPenalty?: number
+
+ /** API key to use. */
+ apiKey?: string
+
+ /** Private endpoint to use. */
+ endpoint?: string
+}
+
+export class HuggingFaceInference extends LLM implements HFInput {
+ get lc_secrets(): { [key: string]: string } | undefined {
+ return {
+ apiKey: 'HUGGINGFACEHUB_API_KEY'
+ }
+ }
+
+ model = 'gpt2'
+
+ temperature: number | undefined = undefined
+
+ maxTokens: number | undefined = undefined
+
+ topP: number | undefined = undefined
+
+ topK: number | undefined = undefined
+
+ frequencyPenalty: number | undefined = undefined
+
+ apiKey: string | undefined = undefined
+
+ endpoint: string | undefined = undefined
+
+    constructor(fields?: Partial<HFInput> & BaseLLMParams) {
+ super(fields ?? {})
+
+ this.model = fields?.model ?? this.model
+ this.temperature = fields?.temperature ?? this.temperature
+ this.maxTokens = fields?.maxTokens ?? this.maxTokens
+ this.topP = fields?.topP ?? this.topP
+ this.topK = fields?.topK ?? this.topK
+ this.frequencyPenalty = fields?.frequencyPenalty ?? this.frequencyPenalty
+ this.endpoint = fields?.endpoint ?? ''
+ this.apiKey = fields?.apiKey ?? getEnvironmentVariable('HUGGINGFACEHUB_API_KEY')
+ if (!this.apiKey) {
+ throw new Error(
+ 'Please set an API key for HuggingFace Hub in the environment variable HUGGINGFACEHUB_API_KEY or in the apiKey field of the HuggingFaceInference constructor.'
+ )
+ }
+ }
+
+ _llmType() {
+ return 'hf'
+ }
+
+ /** @ignore */
+    async _call(prompt: string, options: this['ParsedCallOptions']): Promise<string> {
+ const { HfInference } = await HuggingFaceInference.imports()
+ const hf = new HfInference(this.apiKey)
+ const obj: any = {
+ parameters: {
+ // make it behave similar to openai, returning only the generated text
+ return_full_text: false,
+ temperature: this.temperature,
+ max_new_tokens: this.maxTokens,
+ top_p: this.topP,
+ top_k: this.topK,
+ repetition_penalty: this.frequencyPenalty
+ },
+ inputs: prompt
+ }
+ if (this.endpoint) {
+ hf.endpoint(this.endpoint)
+ } else {
+ obj.model = this.model
+ }
+ const res = await this.caller.callWithOptions({ signal: options.signal }, hf.textGeneration.bind(hf), obj)
+ return res.generated_text
+ }
+
+ /** @ignore */
+ static async imports(): Promise<{
+ HfInference: typeof import('@huggingface/inference').HfInference
+ }> {
+ try {
+ const { HfInference } = await import('@huggingface/inference')
+ return { HfInference }
+ } catch (e) {
+            throw new Error('Please install @huggingface/inference as a dependency, e.g. `yarn add @huggingface/inference`')
+ }
+ }
+}
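The static `imports()` method above is an optional-dependency pattern: `@huggingface/inference` is loaded lazily via dynamic `import()` so the node file can be registered even when the package is not installed, and a missing package fails with an actionable install hint instead of a module-resolution error. A generic sketch of the pattern, with a hypothetical `loadOptionalDep` helper (not a Flowise API):

```typescript
// Hypothetical helper sketching the pattern behind HuggingFaceInference.imports():
// resolve a package lazily and turn a resolution failure into an install hint.
async function loadOptionalDep<T>(pkg: string, loader: () => Promise<T>, installHint: string): Promise<T> {
    try {
        return await loader()
    } catch (e) {
        throw new Error(`Please install ${pkg} as a dependency, e.g. \`${installHint}\``)
    }
}
```

In the real class the loader is `() => import('@huggingface/inference')`, called only when `_call()` first needs the client.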
diff --git a/packages/components/nodes/chatmodels/ChatHuggingFace/huggingface.png b/packages/components/nodes/chatmodels/ChatHuggingFace/huggingface.png
new file mode 100644
index 000000000..f8f202a46
Binary files /dev/null and b/packages/components/nodes/chatmodels/ChatHuggingFace/huggingface.png differ
diff --git a/packages/components/nodes/chatmodels/ChatLocalAI/ChatLocalAI.ts b/packages/components/nodes/chatmodels/ChatLocalAI/ChatLocalAI.ts
index bd25a9fa6..a6ddfae42 100644
--- a/packages/components/nodes/chatmodels/ChatLocalAI/ChatLocalAI.ts
+++ b/packages/components/nodes/chatmodels/ChatLocalAI/ChatLocalAI.ts
@@ -6,6 +6,7 @@ import { OpenAIChatInput } from 'langchain/chat_models/openai'
class ChatLocalAI_ChatModels implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
@@ -16,6 +17,7 @@ class ChatLocalAI_ChatModels implements INode {
constructor() {
this.label = 'ChatLocalAI'
this.name = 'chatLocalAI'
+ this.version = 1.0
this.type = 'ChatLocalAI'
this.icon = 'localai.png'
this.category = 'Chat Models'
@@ -38,6 +40,7 @@ class ChatLocalAI_ChatModels implements INode {
label: 'Temperature',
name: 'temperature',
type: 'number',
+ step: 0.1,
default: 0.9,
optional: true
},
@@ -45,6 +48,7 @@ class ChatLocalAI_ChatModels implements INode {
label: 'Max Tokens',
name: 'maxTokens',
type: 'number',
+ step: 1,
optional: true,
additionalParams: true
},
@@ -52,6 +56,7 @@ class ChatLocalAI_ChatModels implements INode {
label: 'Top Probability',
name: 'topP',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -59,6 +64,7 @@ class ChatLocalAI_ChatModels implements INode {
label: 'Timeout',
name: 'timeout',
type: 'number',
+ step: 1,
optional: true,
additionalParams: true
}
@@ -74,13 +80,13 @@ class ChatLocalAI_ChatModels implements INode {
const basePath = nodeData.inputs?.basePath as string
        const obj: Partial<OpenAIChatInput> & { openAIApiKey?: string } = {
- temperature: parseInt(temperature, 10),
+ temperature: parseFloat(temperature),
modelName,
openAIApiKey: 'sk-'
}
if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
- if (topP) obj.topP = parseInt(topP, 10)
+ if (topP) obj.topP = parseFloat(topP)
if (timeout) obj.timeout = parseInt(timeout, 10)
const model = new OpenAIChat(obj, { basePath })
diff --git a/packages/components/nodes/chatmodels/ChatOpenAI/ChatOpenAI.ts b/packages/components/nodes/chatmodels/ChatOpenAI/ChatOpenAI.ts
index 5d608c5e2..ca081ff43 100644
--- a/packages/components/nodes/chatmodels/ChatOpenAI/ChatOpenAI.ts
+++ b/packages/components/nodes/chatmodels/ChatOpenAI/ChatOpenAI.ts
@@ -1,31 +1,35 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { ChatOpenAI, OpenAIChatInput } from 'langchain/chat_models/openai'
class ChatOpenAI_ChatModels implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
+ credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'ChatOpenAI'
this.name = 'chatOpenAI'
+ this.version = 1.0
this.type = 'ChatOpenAI'
this.icon = 'openai.png'
this.category = 'Chat Models'
this.description = 'Wrapper around OpenAI large language models that use the Chat endpoint'
this.baseClasses = [this.type, ...getBaseClasses(ChatOpenAI)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['openAIApi']
+ }
this.inputs = [
- {
- label: 'OpenAI Api Key',
- name: 'openAIApiKey',
- type: 'password'
- },
{
label: 'Model Name',
name: 'modelName',
@@ -36,20 +40,32 @@ class ChatOpenAI_ChatModels implements INode {
name: 'gpt-4'
},
{
- label: 'gpt-4-0314',
- name: 'gpt-4-0314'
+ label: 'gpt-4-0613',
+ name: 'gpt-4-0613'
},
{
- label: 'gpt-4-32k-0314',
- name: 'gpt-4-32k-0314'
+ label: 'gpt-4-32k',
+ name: 'gpt-4-32k'
+ },
+ {
+ label: 'gpt-4-32k-0613',
+ name: 'gpt-4-32k-0613'
},
{
label: 'gpt-3.5-turbo',
name: 'gpt-3.5-turbo'
},
{
- label: 'gpt-3.5-turbo-0301',
- name: 'gpt-3.5-turbo-0301'
+ label: 'gpt-3.5-turbo-0613',
+ name: 'gpt-3.5-turbo-0613'
+ },
+ {
+ label: 'gpt-3.5-turbo-16k',
+ name: 'gpt-3.5-turbo-16k'
+ },
+ {
+ label: 'gpt-3.5-turbo-16k-0613',
+ name: 'gpt-3.5-turbo-16k-0613'
}
],
default: 'gpt-3.5-turbo',
@@ -59,6 +75,7 @@ class ChatOpenAI_ChatModels implements INode {
label: 'Temperature',
name: 'temperature',
type: 'number',
+ step: 0.1,
default: 0.9,
optional: true
},
@@ -66,6 +83,7 @@ class ChatOpenAI_ChatModels implements INode {
label: 'Max Tokens',
name: 'maxTokens',
type: 'number',
+ step: 1,
optional: true,
additionalParams: true
},
@@ -73,6 +91,7 @@ class ChatOpenAI_ChatModels implements INode {
label: 'Top Probability',
name: 'topP',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -80,6 +99,7 @@ class ChatOpenAI_ChatModels implements INode {
label: 'Frequency Penalty',
name: 'frequencyPenalty',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -87,6 +107,7 @@ class ChatOpenAI_ChatModels implements INode {
label: 'Presence Penalty',
name: 'presencePenalty',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -94,35 +115,68 @@ class ChatOpenAI_ChatModels implements INode {
label: 'Timeout',
name: 'timeout',
type: 'number',
+ step: 1,
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'BasePath',
+ name: 'basepath',
+ type: 'string',
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'BaseOptions',
+ name: 'baseOptions',
+ type: 'json',
optional: true,
additionalParams: true
}
]
}
-    async init(nodeData: INodeData): Promise<any> {
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const temperature = nodeData.inputs?.temperature as string
const modelName = nodeData.inputs?.modelName as string
- const openAIApiKey = nodeData.inputs?.openAIApiKey as string
const maxTokens = nodeData.inputs?.maxTokens as string
const topP = nodeData.inputs?.topP as string
const frequencyPenalty = nodeData.inputs?.frequencyPenalty as string
const presencePenalty = nodeData.inputs?.presencePenalty as string
const timeout = nodeData.inputs?.timeout as string
+ const streaming = nodeData.inputs?.streaming as boolean
+ const basePath = nodeData.inputs?.basepath as string
+ const baseOptions = nodeData.inputs?.baseOptions
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, nodeData)
    const obj: Partial<OpenAIChatInput> & { openAIApiKey?: string } = {
- temperature: parseInt(temperature, 10),
+ temperature: parseFloat(temperature),
modelName,
- openAIApiKey
+ openAIApiKey,
+ streaming: streaming ?? true
}
if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
- if (topP) obj.topP = parseInt(topP, 10)
- if (frequencyPenalty) obj.frequencyPenalty = parseInt(frequencyPenalty, 10)
- if (presencePenalty) obj.presencePenalty = parseInt(presencePenalty, 10)
+ if (topP) obj.topP = parseFloat(topP)
+ if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
+ if (presencePenalty) obj.presencePenalty = parseFloat(presencePenalty)
if (timeout) obj.timeout = parseInt(timeout, 10)
- const model = new ChatOpenAI(obj)
+ let parsedBaseOptions: any | undefined = undefined
+
+ if (baseOptions) {
+ try {
+ parsedBaseOptions = typeof baseOptions === 'object' ? baseOptions : JSON.parse(baseOptions)
+ } catch (exception) {
+ throw new Error("Invalid JSON in the ChatOpenAI's BaseOptions: " + exception)
+ }
+ }
+ const model = new ChatOpenAI(obj, {
+ basePath,
+ baseOptions: parsedBaseOptions
+ })
return model
}
}
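The `parseInt` → `parseFloat` change above is the substantive bug fix in this hunk: `parseInt` truncates fractional model parameters, so a temperature of `0.9` was silently sent as `0`. A minimal sketch of the distinction (helper names are illustrative, not Flowise APIs):

```typescript
// Fractional parameters (temperature, topP, penalties) must use parseFloat:
// parseInt('0.9', 10) truncates to 0, zeroing the sampling temperature.
function parseFractionalParam(raw: string): number {
    return parseFloat(raw)
}

// Integer parameters (maxTokens, timeout) stay on parseInt with an explicit radix.
function parseIntegerParam(raw: string): number {
    return parseInt(raw, 10)
}
```

This is also why the new `step: 0.1` / `step: 1` UI hints are paired with the corresponding parser: the step size and the parse function should agree on whether the field is fractional.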
diff --git a/packages/components/nodes/chatmodels/GoogleVertexAI/ChatGoogleVertexAI.ts b/packages/components/nodes/chatmodels/GoogleVertexAI/ChatGoogleVertexAI.ts
new file mode 100644
index 000000000..a06ce0c95
--- /dev/null
+++ b/packages/components/nodes/chatmodels/GoogleVertexAI/ChatGoogleVertexAI.ts
@@ -0,0 +1,115 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { ChatGoogleVertexAI, GoogleVertexAIChatInput } from 'langchain/chat_models/googlevertexai'
+import { GoogleAuthOptions } from 'google-auth-library'
+
+class GoogleVertexAI_ChatModels implements INode {
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ description: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'ChatGoogleVertexAI'
+ this.name = 'chatGoogleVertexAI'
+ this.version = 1.0
+ this.type = 'ChatGoogleVertexAI'
+ this.icon = 'vertexai.svg'
+ this.category = 'Chat Models'
+ this.description = 'Wrapper around VertexAI large language models that use the Chat endpoint'
+ this.baseClasses = [this.type, ...getBaseClasses(ChatGoogleVertexAI)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['googleVertexAuth']
+ }
+ this.inputs = [
+ {
+ label: 'Model Name',
+ name: 'modelName',
+ type: 'options',
+ options: [
+ {
+ label: 'chat-bison',
+ name: 'chat-bison'
+ },
+ {
+ label: 'codechat-bison',
+ name: 'codechat-bison'
+ }
+ ],
+ default: 'chat-bison',
+ optional: true
+ },
+ {
+ label: 'Temperature',
+ name: 'temperature',
+ type: 'number',
+ step: 0.1,
+ default: 0.9,
+ optional: true
+ },
+ {
+ label: 'Max Output Tokens',
+ name: 'maxOutputTokens',
+ type: 'number',
+ step: 1,
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Top Probability',
+ name: 'topP',
+ type: 'number',
+ step: 0.1,
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const googleApplicationCredentialFilePath = getCredentialParam('googleApplicationCredentialFilePath', credentialData, nodeData)
+ const googleApplicationCredential = getCredentialParam('googleApplicationCredential', credentialData, nodeData)
+ const projectID = getCredentialParam('projectID', credentialData, nodeData)
+
+ if (!googleApplicationCredentialFilePath && !googleApplicationCredential)
+ throw new Error('Please specify your Google Application Credential')
+ if (googleApplicationCredentialFilePath && googleApplicationCredential)
+ throw new Error('Please use either Google Application Credential File Path or Google Credential JSON Object')
+
+ const authOptions: GoogleAuthOptions = {}
+ if (googleApplicationCredentialFilePath && !googleApplicationCredential) authOptions.keyFile = googleApplicationCredentialFilePath
+ else if (!googleApplicationCredentialFilePath && googleApplicationCredential)
+ authOptions.credentials = JSON.parse(googleApplicationCredential)
+
+ if (projectID) authOptions.projectId = projectID
+
+ const temperature = nodeData.inputs?.temperature as string
+ const modelName = nodeData.inputs?.modelName as string
+ const maxOutputTokens = nodeData.inputs?.maxOutputTokens as string
+ const topP = nodeData.inputs?.topP as string
+
+        const obj: Partial<GoogleVertexAIChatInput> = {
+ temperature: parseFloat(temperature),
+ model: modelName,
+ authOptions
+ }
+
+ if (maxOutputTokens) obj.maxOutputTokens = parseInt(maxOutputTokens, 10)
+ if (topP) obj.topP = parseFloat(topP)
+
+ const model = new ChatGoogleVertexAI(obj)
+ return model
+ }
+}
+
+module.exports = { nodeClass: GoogleVertexAI_ChatModels }
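The credential handling in `init` enforces exactly one of two auth sources: a key file path or an inline credential JSON object. A standalone sketch of that exactly-one-of check, with illustrative names (not the Flowise credential API):

```typescript
// Hypothetical input shape mirroring the two Google auth options above.
interface VertexAuthInput {
    keyFilePath?: string
    credentialJson?: string
}

// Returns google-auth-library-style options: a key file reference OR
// parsed inline credentials, rejecting none-given and both-given.
function buildAuthOptions({ keyFilePath, credentialJson }: VertexAuthInput): { keyFile?: string; credentials?: object } {
    if (!keyFilePath && !credentialJson) throw new Error('Please specify your Google Application Credential')
    if (keyFilePath && credentialJson)
        throw new Error('Please use either Google Application Credential File Path or Google Credential JSON Object')
    return keyFilePath ? { keyFile: keyFilePath } : { credentials: JSON.parse(credentialJson as string) }
}
```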
diff --git a/packages/components/nodes/chatmodels/GoogleVertexAI/vertexai.svg b/packages/components/nodes/chatmodels/GoogleVertexAI/vertexai.svg
new file mode 100644
index 000000000..31244412a
--- /dev/null
+++ b/packages/components/nodes/chatmodels/GoogleVertexAI/vertexai.svg
@@ -0,0 +1,2 @@
+
+
\ No newline at end of file
diff --git a/packages/components/nodes/documentloaders/API/APILoader.ts b/packages/components/nodes/documentloaders/API/APILoader.ts
new file mode 100644
index 000000000..3de6d6366
--- /dev/null
+++ b/packages/components/nodes/documentloaders/API/APILoader.ts
@@ -0,0 +1,200 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { TextSplitter } from 'langchain/text_splitter'
+import { BaseDocumentLoader } from 'langchain/document_loaders/base'
+import { Document } from 'langchain/document'
+import axios, { AxiosRequestConfig } from 'axios'
+
+class API_DocumentLoaders implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs?: INodeParams[]
+
+ constructor() {
+ this.label = 'API Loader'
+ this.name = 'apiLoader'
+ this.version = 1.0
+ this.type = 'Document'
+ this.icon = 'api-loader.png'
+ this.category = 'Document Loaders'
+ this.description = `Load data from an API`
+ this.baseClasses = [this.type]
+ this.inputs = [
+ {
+ label: 'Text Splitter',
+ name: 'textSplitter',
+ type: 'TextSplitter',
+ optional: true
+ },
+ {
+ label: 'Method',
+ name: 'method',
+ type: 'options',
+ options: [
+ {
+ label: 'GET',
+ name: 'GET'
+ },
+ {
+ label: 'POST',
+ name: 'POST'
+ }
+ ]
+ },
+ {
+ label: 'URL',
+ name: 'url',
+ type: 'string'
+ },
+ {
+ label: 'Headers',
+ name: 'headers',
+ type: 'json',
+ additionalParams: true,
+ optional: true
+ },
+ {
+ label: 'Body',
+ name: 'body',
+ type: 'json',
+ description:
+                'JSON body for the POST request. If not specified, the agent will try to figure it out from the AIPlugin if provided',
+ additionalParams: true,
+ optional: true
+ }
+ ]
+ }
+    async init(nodeData: INodeData): Promise<any> {
+ const headers = nodeData.inputs?.headers as string
+ const url = nodeData.inputs?.url as string
+ const body = nodeData.inputs?.body as string
+ const method = nodeData.inputs?.method as string
+ const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
+ const metadata = nodeData.inputs?.metadata
+
+ const options: ApiLoaderParams = {
+ url,
+ method
+ }
+
+ if (headers) {
+ const parsedHeaders = typeof headers === 'object' ? headers : JSON.parse(headers)
+ options.headers = parsedHeaders
+ }
+
+ if (body) {
+ const parsedBody = typeof body === 'object' ? body : JSON.parse(body)
+ options.body = parsedBody
+ }
+
+ const loader = new ApiLoader(options)
+
+ let docs = []
+
+ if (textSplitter) {
+ docs = await loader.loadAndSplit(textSplitter)
+ } else {
+ docs = await loader.load()
+ }
+
+ if (metadata) {
+ const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
+ let finaldocs = []
+ for (const doc of docs) {
+ const newdoc = {
+ ...doc,
+ metadata: {
+ ...doc.metadata,
+ ...parsedMetadata
+ }
+ }
+ finaldocs.push(newdoc)
+ }
+ return finaldocs
+ }
+
+ return docs
+ }
+}
+
+interface ApiLoaderParams {
+ url: string
+ method: string
+ headers?: ICommonObject
+ body?: ICommonObject
+}
+
+class ApiLoader extends BaseDocumentLoader {
+ public readonly url: string
+
+ public readonly headers?: ICommonObject
+
+ public readonly body?: ICommonObject
+
+ public readonly method: string
+
+ constructor({ url, headers, body, method }: ApiLoaderParams) {
+ super()
+ this.url = url
+ this.headers = headers
+ this.body = body
+ this.method = method
+ }
+
+    public async load(): Promise<Document[]> {
+ if (this.method === 'POST') {
+ return this.executePostRequest(this.url, this.headers, this.body)
+ } else {
+ return this.executeGetRequest(this.url, this.headers)
+ }
+ }
+
+    protected async executeGetRequest(url: string, headers?: ICommonObject): Promise<Document[]> {
+ try {
+ const config: AxiosRequestConfig = {}
+ if (headers) {
+ config.headers = headers
+ }
+ const response = await axios.get(url, config)
+ const responseJsonString = JSON.stringify(response.data, null, 2)
+ const doc = new Document({
+ pageContent: responseJsonString,
+ metadata: {
+ url
+ }
+ })
+ return [doc]
+ } catch (error) {
+ throw new Error(`Failed to fetch ${url}: ${error}`)
+ }
+ }
+
+    protected async executePostRequest(url: string, headers?: ICommonObject, body?: ICommonObject): Promise<Document[]> {
+ try {
+ const config: AxiosRequestConfig = {}
+ if (headers) {
+ config.headers = headers
+ }
+ const response = await axios.post(url, body ?? {}, config)
+ const responseJsonString = JSON.stringify(response.data, null, 2)
+ const doc = new Document({
+ pageContent: responseJsonString,
+ metadata: {
+ url
+ }
+ })
+ return [doc]
+ } catch (error) {
+ throw new Error(`Failed to post ${url}: ${error}`)
+ }
+ }
+}
+
+module.exports = {
+ nodeClass: API_DocumentLoaders
+}
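Both request paths in `ApiLoader` reduce to the same pattern: pretty-print the whole JSON response into `pageContent` and keep the request URL as metadata. A self-contained sketch of that conversion (`SimpleDocument` stands in for langchain's `Document`):

```typescript
// Stand-in for langchain's Document shape, enough to show the mapping.
interface SimpleDocument {
    pageContent: string
    metadata: { url: string }
}

// Serialize an arbitrary API payload into a single document, as the
// loader above does after axios.get / axios.post resolves.
function responseToDocument(url: string, data: unknown): SimpleDocument {
    return {
        pageContent: JSON.stringify(data, null, 2), // pretty-printed for readable chunks
        metadata: { url }
    }
}
```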
diff --git a/packages/components/nodes/documentloaders/API/api-loader.png b/packages/components/nodes/documentloaders/API/api-loader.png
new file mode 100644
index 000000000..93668c4cf
Binary files /dev/null and b/packages/components/nodes/documentloaders/API/api-loader.png differ
diff --git a/packages/components/nodes/documentloaders/Airtable/Airtable.ts b/packages/components/nodes/documentloaders/Airtable/Airtable.ts
new file mode 100644
index 000000000..70d0c674a
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Airtable/Airtable.ts
@@ -0,0 +1,230 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { TextSplitter } from 'langchain/text_splitter'
+import { BaseDocumentLoader } from 'langchain/document_loaders/base'
+import { Document } from 'langchain/document'
+import axios from 'axios'
+import { getCredentialData, getCredentialParam } from '../../../src/utils'
+
+class Airtable_DocumentLoaders implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs?: INodeParams[]
+
+ constructor() {
+ this.label = 'Airtable'
+ this.name = 'airtable'
+ this.version = 1.0
+ this.type = 'Document'
+ this.icon = 'airtable.svg'
+ this.category = 'Document Loaders'
+ this.description = `Load data from Airtable table`
+ this.baseClasses = [this.type]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['airtableApi']
+ }
+ this.inputs = [
+ {
+ label: 'Text Splitter',
+ name: 'textSplitter',
+ type: 'TextSplitter',
+ optional: true
+ },
+ {
+ label: 'Base Id',
+ name: 'baseId',
+ type: 'string',
+ placeholder: 'app11RobdGoX0YNsC',
+ description:
+                'If your table URL looks like: https://airtable.com/app11RobdGoX0YNsC/tblJdmvbrgizbYICO/viw9UrP77Id0CE4ee, app11RobdGoX0YNsC is the base id'
+ },
+ {
+ label: 'Table Id',
+ name: 'tableId',
+ type: 'string',
+ placeholder: 'tblJdmvbrgizbYICO',
+ description:
+ 'If your table URL looks like: https://airtable.com/app11RobdGoX0YNsC/tblJdmvbrgizbYICO/viw9UrP77Id0CE4ee, tblJdmvbrgizbYICO is the table id'
+ },
+ {
+ label: 'Return All',
+ name: 'returnAll',
+ type: 'boolean',
+ default: true,
+ additionalParams: true,
+                description: 'Whether to return all results or only up to the given limit'
+ },
+ {
+ label: 'Limit',
+ name: 'limit',
+ type: 'number',
+ default: 100,
+ additionalParams: true,
+ description: 'Number of results to return'
+ },
+ {
+ label: 'Metadata',
+ name: 'metadata',
+ type: 'json',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const baseId = nodeData.inputs?.baseId as string
+ const tableId = nodeData.inputs?.tableId as string
+ const returnAll = nodeData.inputs?.returnAll as boolean
+ const limit = nodeData.inputs?.limit as string
+ const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
+ const metadata = nodeData.inputs?.metadata
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const accessToken = getCredentialParam('accessToken', credentialData, nodeData)
+
+ const airtableOptions: AirtableLoaderParams = {
+ baseId,
+ tableId,
+ returnAll,
+ accessToken,
+ limit: limit ? parseInt(limit, 10) : 100
+ }
+
+ const loader = new AirtableLoader(airtableOptions)
+
+ let docs = []
+
+ if (textSplitter) {
+ docs = await loader.loadAndSplit(textSplitter)
+ } else {
+ docs = await loader.load()
+ }
+
+ if (metadata) {
+ const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
+ let finaldocs = []
+ for (const doc of docs) {
+ const newdoc = {
+ ...doc,
+ metadata: {
+ ...doc.metadata,
+ ...parsedMetadata
+ }
+ }
+ finaldocs.push(newdoc)
+ }
+ return finaldocs
+ }
+
+ return docs
+ }
+}
+
+interface AirtableLoaderParams {
+ baseId: string
+ tableId: string
+ accessToken: string
+ limit?: number
+ returnAll?: boolean
+}
+
+interface AirtableLoaderResponse {
+ records: AirtableLoaderPage[]
+ offset?: string
+}
+
+interface AirtableLoaderPage {
+ id: string
+ createdTime: string
+ fields: ICommonObject
+}
+
+class AirtableLoader extends BaseDocumentLoader {
+ public readonly baseId: string
+
+ public readonly tableId: string
+
+ public readonly accessToken: string
+
+ public readonly limit: number
+
+ public readonly returnAll: boolean
+
+ constructor({ baseId, tableId, accessToken, limit = 100, returnAll = false }: AirtableLoaderParams) {
+ super()
+ this.baseId = baseId
+ this.tableId = tableId
+ this.accessToken = accessToken
+ this.limit = limit
+ this.returnAll = returnAll
+ }
+
+    public async load(): Promise<Document[]> {
+ if (this.returnAll) {
+ return this.loadAll()
+ }
+ return this.loadLimit()
+ }
+
+    protected async fetchAirtableData(url: string, params: ICommonObject): Promise<AirtableLoaderResponse> {
+ try {
+ const headers = {
+ Authorization: `Bearer ${this.accessToken}`,
+ 'Content-Type': 'application/json',
+ Accept: 'application/json'
+ }
+ const response = await axios.get(url, { params, headers })
+ return response.data
+ } catch (error) {
+ throw new Error(`Failed to fetch ${url} from Airtable: ${error}`)
+ }
+ }
+
+ private createDocumentFromPage(page: AirtableLoaderPage): Document {
+ // Generate the URL
+ const pageUrl = `https://api.airtable.com/v0/${this.baseId}/${this.tableId}/${page.id}`
+
+ // Return a langchain document
+ return new Document({
+ pageContent: JSON.stringify(page.fields, null, 2),
+ metadata: {
+ url: pageUrl
+ }
+ })
+ }
+
+    private async loadLimit(): Promise<Document[]> {
+ const params = { maxRecords: this.limit }
+ const data = await this.fetchAirtableData(`https://api.airtable.com/v0/${this.baseId}/${this.tableId}`, params)
+ if (data.records.length === 0) {
+ return []
+ }
+ return data.records.map((page) => this.createDocumentFromPage(page))
+ }
+
+    private async loadAll(): Promise<Document[]> {
+ const params: ICommonObject = { pageSize: 100 }
+ let data: AirtableLoaderResponse
+ let returnPages: AirtableLoaderPage[] = []
+
+ do {
+ data = await this.fetchAirtableData(`https://api.airtable.com/v0/${this.baseId}/${this.tableId}`, params)
+ returnPages.push.apply(returnPages, data.records)
+ params.offset = data.offset
+ } while (data.offset !== undefined)
+ return returnPages.map((page) => this.createDocumentFromPage(page))
+ }
+}
+
+module.exports = {
+ nodeClass: Airtable_DocumentLoaders
+}
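The `loadAll` do/while loop above is standard Airtable cursor pagination: each response may carry an `offset` token, and the client re-requests with that token until it is absent. The shape of that loop, with the HTTP call stubbed out and made synchronous for brevity:

```typescript
// One page of results plus the optional continuation token, mirroring
// Airtable's list-records response.
interface Page {
    records: string[]
    offset?: string
}

// Collect every record by following the offset token until the API
// stops returning one, as loadAll() does above.
function loadAllRecords(fetchPage: (offset?: string) => Page): string[] {
    const all: string[] = []
    let offset: string | undefined = undefined
    let page: Page
    do {
        page = fetchPage(offset)
        all.push(...page.records)
        offset = page.offset
    } while (page.offset !== undefined)
    return all
}
```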
diff --git a/packages/components/nodes/documentloaders/Airtable/airtable.svg b/packages/components/nodes/documentloaders/Airtable/airtable.svg
new file mode 100644
index 000000000..867c3b5ae
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Airtable/airtable.svg
@@ -0,0 +1,9 @@
+
+
diff --git a/packages/components/nodes/documentloaders/ApifyWebsiteContentCrawler/ApifyWebsiteContentCrawler.ts b/packages/components/nodes/documentloaders/ApifyWebsiteContentCrawler/ApifyWebsiteContentCrawler.ts
new file mode 100644
index 000000000..a5e6a6e03
--- /dev/null
+++ b/packages/components/nodes/documentloaders/ApifyWebsiteContentCrawler/ApifyWebsiteContentCrawler.ts
@@ -0,0 +1,139 @@
+import { INode, INodeData, INodeParams, ICommonObject } from '../../../src/Interface'
+import { getCredentialData, getCredentialParam } from '../../../src/utils'
+import { TextSplitter } from 'langchain/text_splitter'
+import { ApifyDatasetLoader } from 'langchain/document_loaders/web/apify_dataset'
+import { Document } from 'langchain/document'
+
+class ApifyWebsiteContentCrawler_DocumentLoaders implements INode {
+ label: string
+ name: string
+ description: string
+ type: string
+ icon: string
+ version: number
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+ credential: INodeParams
+
+ constructor() {
+ this.label = 'Apify Website Content Crawler'
+ this.name = 'apifyWebsiteContentCrawler'
+ this.type = 'Document'
+ this.icon = 'apify-symbol-transparent.svg'
+ this.version = 1.0
+ this.category = 'Document Loaders'
+ this.description = 'Load data from Apify Website Content Crawler'
+ this.baseClasses = [this.type]
+ this.inputs = [
+ {
+ label: 'Start URLs',
+ name: 'urls',
+ type: 'string',
+ description: 'One or more URLs of pages where the crawler will start, separated by commas.',
+ placeholder: 'https://js.langchain.com/docs/'
+ },
+ {
+ label: 'Crawler type',
+ type: 'options',
+ name: 'crawlerType',
+ options: [
+ {
+ label: 'Headless web browser (Chrome+Playwright)',
+ name: 'playwright:chrome'
+ },
+ {
+ label: 'Stealthy web browser (Firefox+Playwright)',
+ name: 'playwright:firefox'
+ },
+ {
+ label: 'Raw HTTP client (Cheerio)',
+ name: 'cheerio'
+ },
+ {
+ label: 'Raw HTTP client with JavaScript execution (JSDOM) [experimental]',
+ name: 'jsdom'
+ }
+ ],
+ description:
+ 'Select the crawling engine, see documentation for additional information.',
+ default: 'playwright:firefox'
+ },
+ {
+ label: 'Max crawling depth',
+ name: 'maxCrawlDepth',
+ type: 'number',
+ optional: true,
+ default: 1
+ },
+ {
+ label: 'Max crawl pages',
+ name: 'maxCrawlPages',
+ type: 'number',
+ optional: true,
+ default: 3
+ },
+ {
+ label: 'Additional input',
+ name: 'additionalInput',
+ type: 'json',
+ default: JSON.stringify({}),
+ description:
+ 'For additional input options for the crawler see documentation.',
+ optional: true
+ },
+ {
+ label: 'Text Splitter',
+ name: 'textSplitter',
+ type: 'TextSplitter',
+ optional: true
+ }
+ ]
+ this.credential = {
+ label: 'Connect Apify API',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['apifyApi']
+ }
+ }
+
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
+
+ // Get input options and merge with additional input
+ const urls = nodeData.inputs?.urls as string
+ const crawlerType = nodeData.inputs?.crawlerType as string
+ const maxCrawlDepth = nodeData.inputs?.maxCrawlDepth as string
+ const maxCrawlPages = nodeData.inputs?.maxCrawlPages as string
+ const additionalInput =
+ typeof nodeData.inputs?.additionalInput === 'object'
+ ? nodeData.inputs?.additionalInput
+ : JSON.parse(nodeData.inputs?.additionalInput as string)
+ const input = {
+ startUrls: urls.split(',').map((url) => ({ url: url.trim() })),
+ crawlerType,
+ maxCrawlDepth: parseInt(maxCrawlDepth, 10),
+ maxCrawlPages: parseInt(maxCrawlPages, 10),
+ ...additionalInput
+ }
+
+ // Get Apify API token from credential data
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const apifyApiToken = getCredentialParam('apifyApiToken', credentialData, nodeData)
+
+ const loader = await ApifyDatasetLoader.fromActorCall('apify/website-content-crawler', input, {
+ datasetMappingFunction: (item) =>
+ new Document({
+ pageContent: (item.text || '') as string,
+ metadata: { source: item.url }
+ }),
+ clientOptions: {
+ token: apifyApiToken
+ }
+ })
+
+ return textSplitter ? loader.loadAndSplit(textSplitter) : loader.load()
+ }
+}
+
+module.exports = { nodeClass: ApifyWebsiteContentCrawler_DocumentLoaders }
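The `startUrls` construction above turns the comma-separated "Start URLs" field into the array-of-objects shape the Apify actor input expects. Isolated as a small helper (the name is illustrative):

```typescript
// Split the comma-separated Start URLs input and trim whitespace,
// producing the { url } objects used in the actor input above.
function toStartUrls(urls: string): { url: string }[] {
    return urls.split(',').map((url) => ({ url: url.trim() }))
}
```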
diff --git a/packages/components/nodes/documentloaders/ApifyWebsiteContentCrawler/apify-symbol-transparent.svg b/packages/components/nodes/documentloaders/ApifyWebsiteContentCrawler/apify-symbol-transparent.svg
new file mode 100644
index 000000000..423a3328d
--- /dev/null
+++ b/packages/components/nodes/documentloaders/ApifyWebsiteContentCrawler/apify-symbol-transparent.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/documentloaders/Cheerio/Cheerio.ts b/packages/components/nodes/documentloaders/Cheerio/Cheerio.ts
index 9e1135059..1c21c1ea8 100644
--- a/packages/components/nodes/documentloaders/Cheerio/Cheerio.ts
+++ b/packages/components/nodes/documentloaders/Cheerio/Cheerio.ts
@@ -2,11 +2,12 @@ import { INode, INodeData, INodeParams } from '../../../src/Interface'
import { TextSplitter } from 'langchain/text_splitter'
import { CheerioWebBaseLoader } from 'langchain/document_loaders/web/cheerio'
import { test } from 'linkifyjs'
-import { getAvailableURLs } from '../../../src'
+import { webCrawl, xmlScrape } from '../../../src'
class Cheerio_DocumentLoaders implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -17,6 +18,7 @@ class Cheerio_DocumentLoaders implements INode {
constructor() {
this.label = 'Cheerio Web Scraper'
this.name = 'cheerioWebScraper'
+ this.version = 1.0
this.type = 'Document'
this.icon = 'cheerio.svg'
this.category = 'Document Loaders'
@@ -35,19 +37,34 @@ class Cheerio_DocumentLoaders implements INode {
optional: true
},
{
- label: 'Web Scrap for Relative Links',
- name: 'webScrap',
- type: 'boolean',
+ label: 'Get Relative Links Method',
+ name: 'relativeLinksMethod',
+ type: 'options',
+ description: 'Select a method to retrieve relative links',
+ options: [
+ {
+ label: 'Web Crawl',
+ name: 'webCrawl',
+ description: 'Crawl relative links from HTML URL'
+ },
+ {
+ label: 'Scrape XML Sitemap',
+ name: 'scrapeXMLSitemap',
+ description: 'Scrape relative links from XML sitemap URL'
+ }
+ ],
optional: true,
additionalParams: true
},
{
- label: 'Web Scrap Links Limit',
+ label: 'Get Relative Links Limit',
name: 'limit',
type: 'number',
- default: 10,
optional: true,
- additionalParams: true
+ additionalParams: true,
+ description:
+ 'Only used when "Get Relative Links Method" is selected. Set 0 to retrieve all relative links, default limit is 10.',
+            warning: `Retrieving all links might take a long time, and all links will be upserted again if the flow's state changes (e.g. different URL, chunk size, etc.)`
},
{
label: 'Metadata',
@@ -62,7 +79,7 @@ class Cheerio_DocumentLoaders implements INode {
    async init(nodeData: INodeData): Promise<any> {
const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
const metadata = nodeData.inputs?.metadata
- const webScrap = nodeData.inputs?.webScrap as boolean
+ const relativeLinksMethod = nodeData.inputs?.relativeLinksMethod as string
let limit = nodeData.inputs?.limit as string
let url = nodeData.inputs?.url as string
@@ -71,25 +88,34 @@ class Cheerio_DocumentLoaders implements INode {
throw new Error('Invalid URL')
}
-        const cheerioLoader = async (url: string): Promise<any> => {
- let docs = []
- const loader = new CheerioWebBaseLoader(url)
- if (textSplitter) {
- docs = await loader.loadAndSplit(textSplitter)
- } else {
- docs = await loader.load()
+        async function cheerioLoader(url: string): Promise<any> {
+ try {
+ let docs = []
+ const loader = new CheerioWebBaseLoader(url)
+ if (textSplitter) {
+ docs = await loader.loadAndSplit(textSplitter)
+ } else {
+ docs = await loader.load()
+ }
+ return docs
+ } catch (err) {
+ if (process.env.DEBUG === 'true') console.error(`error in CheerioWebBaseLoader: ${err.message}, on page: ${url}`)
}
- return docs
}
- let availableUrls: string[]
let docs = []
- if (webScrap) {
+ if (relativeLinksMethod) {
+ if (process.env.DEBUG === 'true') console.info(`Start ${relativeLinksMethod}`)
if (!limit) limit = '10'
- availableUrls = await getAvailableURLs(url, parseInt(limit))
- for (let i = 0; i < availableUrls.length; i++) {
- docs.push(...(await cheerioLoader(availableUrls[i])))
+ else if (parseInt(limit) < 0) throw new Error('Limit cannot be less than 0')
+ const pages: string[] =
+ relativeLinksMethod === 'webCrawl' ? await webCrawl(url, parseInt(limit)) : await xmlScrape(url, parseInt(limit))
+ if (process.env.DEBUG === 'true') console.info(`pages: ${JSON.stringify(pages)}, length: ${pages.length}`)
+ if (!pages || pages.length === 0) throw new Error('No relative links found')
+ for (const page of pages) {
+ docs.push(...(await cheerioLoader(page)))
}
+ if (process.env.DEBUG === 'true') console.info(`Finish ${relativeLinksMethod}`)
} else {
docs = await cheerioLoader(url)
}
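The limit handling introduced above has three cases: an empty field falls back to 10, a negative value is rejected, and 0 means "retrieve all relative links". That logic as a standalone helper (name is illustrative):

```typescript
// Resolve the "Get Relative Links Limit" field: default 10 when unset,
// reject negatives, and let 0 through to mean "no limit".
function resolveLimit(limit?: string): number {
    if (!limit) limit = '10'
    const parsed = parseInt(limit, 10)
    if (parsed < 0) throw new Error('Limit cannot be less than 0')
    return parsed
}
```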
diff --git a/packages/components/nodes/documentloaders/Confluence/Confluence.ts b/packages/components/nodes/documentloaders/Confluence/Confluence.ts
new file mode 100644
index 000000000..a17c41b9e
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Confluence/Confluence.ts
@@ -0,0 +1,120 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { TextSplitter } from 'langchain/text_splitter'
+import { ConfluencePagesLoader, ConfluencePagesLoaderParams } from 'langchain/document_loaders/web/confluence'
+import { getCredentialData, getCredentialParam } from '../../../src'
+
+class Confluence_DocumentLoaders implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Confluence'
+ this.name = 'confluence'
+ this.version = 1.0
+ this.type = 'Document'
+ this.icon = 'confluence.png'
+ this.category = 'Document Loaders'
+ this.description = `Load data from a Confluence Document`
+ this.baseClasses = [this.type]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['confluenceApi']
+ }
+ this.inputs = [
+ {
+ label: 'Text Splitter',
+ name: 'textSplitter',
+ type: 'TextSplitter',
+ optional: true
+ },
+ {
+ label: 'Base URL',
+ name: 'baseUrl',
+ type: 'string',
+ placeholder: 'https://example.atlassian.net/wiki'
+ },
+ {
+ label: 'Space Key',
+ name: 'spaceKey',
+ type: 'string',
+ placeholder: '~EXAMPLE362906de5d343d49dcdbae5dEXAMPLE',
+ description:
+ 'Refer to the official guide on how to get a Confluence Space Key'
+ },
+ {
+ label: 'Limit',
+ name: 'limit',
+ type: 'number',
+ default: 0,
+ optional: true
+ },
+ {
+ label: 'Metadata',
+ name: 'metadata',
+ type: 'json',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const spaceKey = nodeData.inputs?.spaceKey as string
+ const baseUrl = nodeData.inputs?.baseUrl as string
+ const limit = nodeData.inputs?.limit as number
+ const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
+ const metadata = nodeData.inputs?.metadata
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const accessToken = getCredentialParam('accessToken', credentialData, nodeData)
+ const username = getCredentialParam('username', credentialData, nodeData)
+
+ const confluenceOptions: ConfluencePagesLoaderParams = {
+ username,
+ accessToken,
+ baseUrl,
+ spaceKey,
+ limit
+ }
+
+ const loader = new ConfluencePagesLoader(confluenceOptions)
+
+ let docs = []
+
+ if (textSplitter) {
+ docs = await loader.loadAndSplit(textSplitter)
+ } else {
+ docs = await loader.load()
+ }
+
+ if (metadata) {
+ const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
+ let finaldocs = []
+ for (const doc of docs) {
+ const newdoc = {
+ ...doc,
+ metadata: {
+ ...doc.metadata,
+ ...parsedMetadata
+ }
+ }
+ finaldocs.push(newdoc)
+ }
+ return finaldocs
+ }
+
+ return docs
+ }
+}
+
+module.exports = { nodeClass: Confluence_DocumentLoaders }
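The metadata branch above (repeated in most loaders this PR touches) merges a user-supplied JSON object into every loaded document's metadata. A minimal standalone sketch of that step, with the `Document` shape reduced to the two fields the loaders actually use (the `Doc` interface and `mergeMetadata` name are illustrative, not part of the PR):

```typescript
// Simplified document shape: the loaders only touch pageContent and metadata.
interface Doc {
    pageContent: string
    metadata: Record<string, any>
}

// Mirrors the loaders' `typeof metadata === 'object' ? metadata : JSON.parse(metadata)`
// check: accepts either an already-parsed object or a JSON string, then spreads
// the custom fields over each document's existing metadata.
function mergeMetadata(docs: Doc[], metadata: string | Record<string, any>): Doc[] {
    const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
    return docs.map((doc) => ({
        ...doc,
        metadata: {
            ...doc.metadata,
            ...parsedMetadata // custom keys win on collision, as in the PR
        }
    }))
}
```

Note the spread order: a custom key overwrites a loader-produced key of the same name, which is what the per-loader code in this PR also does.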
diff --git a/packages/components/nodes/documentloaders/Confluence/confluence.png b/packages/components/nodes/documentloaders/Confluence/confluence.png
new file mode 100644
index 000000000..3cbb7b3dc
Binary files /dev/null and b/packages/components/nodes/documentloaders/Confluence/confluence.png differ
diff --git a/packages/components/nodes/documentloaders/Csv/Csv.ts b/packages/components/nodes/documentloaders/Csv/Csv.ts
index f4b36ad03..750490b79 100644
--- a/packages/components/nodes/documentloaders/Csv/Csv.ts
+++ b/packages/components/nodes/documentloaders/Csv/Csv.ts
@@ -5,6 +5,7 @@ import { CSVLoader } from 'langchain/document_loaders/fs/csv'
class Csv_DocumentLoaders implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,6 +16,7 @@ class Csv_DocumentLoaders implements INode {
constructor() {
this.label = 'Csv File'
this.name = 'csvFile'
+ this.version = 1.0
this.type = 'Document'
this.icon = 'Csv.png'
this.category = 'Document Loaders'
diff --git a/packages/components/nodes/documentloaders/Docx/Docx.ts b/packages/components/nodes/documentloaders/Docx/Docx.ts
index e27991a51..419227755 100644
--- a/packages/components/nodes/documentloaders/Docx/Docx.ts
+++ b/packages/components/nodes/documentloaders/Docx/Docx.ts
@@ -5,6 +5,7 @@ import { DocxLoader } from 'langchain/document_loaders/fs/docx'
class Docx_DocumentLoaders implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,6 +16,7 @@ class Docx_DocumentLoaders implements INode {
constructor() {
this.label = 'Docx File'
this.name = 'docxFile'
+ this.version = 1.0
this.type = 'Document'
this.icon = 'Docx.png'
this.category = 'Document Loaders'
diff --git a/packages/components/nodes/documentloaders/Figma/Figma.ts b/packages/components/nodes/documentloaders/Figma/Figma.ts
new file mode 100644
index 000000000..3d3130445
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Figma/Figma.ts
@@ -0,0 +1,91 @@
+import { getCredentialData, getCredentialParam } from '../../../src'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { FigmaFileLoader, FigmaLoaderParams } from 'langchain/document_loaders/web/figma'
+
+class Figma_DocumentLoaders implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Figma'
+ this.name = 'figma'
+ this.version = 1.0
+ this.type = 'Document'
+ this.icon = 'figma.svg'
+ this.category = 'Document Loaders'
+ this.description = 'Load data from a Figma file'
+ this.baseClasses = [this.type]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['figmaApi']
+ }
+ this.inputs = [
+ {
+ label: 'File Key',
+ name: 'fileKey',
+ type: 'string',
+ placeholder: 'key',
+ description:
+ 'The file key can be read from any Figma file URL: https://www.figma.com/file/:key/:title. For example, in https://www.figma.com/file/12345/Website, the file key is 12345'
+ },
+ {
+ label: 'Node IDs',
+ name: 'nodeIds',
+ type: 'string',
+ placeholder: '0, 1, 2',
+ description:
+ 'A list of Node IDs, separated by commas. Refer to the official guide on how to get Node IDs'
+ },
+ {
+ label: 'Recursive',
+ name: 'recursive',
+ type: 'boolean',
+ optional: true
+ },
+ {
+ label: 'Text Splitter',
+ name: 'textSplitter',
+ type: 'TextSplitter',
+ optional: true
+ },
+ {
+ label: 'Metadata',
+ name: 'metadata',
+ type: 'json',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const nodeIds = (nodeData.inputs?.nodeIds as string)?.trim().split(',') || []
+ const fileKey = nodeData.inputs?.fileKey as string
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const accessToken = getCredentialParam('accessToken', credentialData, nodeData)
+
+ const figmaOptions: FigmaLoaderParams = {
+ accessToken,
+ nodeIds,
+ fileKey
+ }
+
+ const loader = new FigmaFileLoader(figmaOptions)
+ const docs = await loader.load()
+
+ return docs
+ }
+}
+
+module.exports = { nodeClass: Figma_DocumentLoaders }
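The File Key description above says the key is the path segment after `/file/` in a Figma URL (`https://www.figma.com/file/:key/:title`). A small illustrative helper (hypothetical, not part of the PR) that pulls it out of a pasted URL:

```typescript
// Hypothetical helper: extract the file key from a Figma URL of the form
// https://www.figma.com/file/:key/:title, per the input description above.
// Returns null when the URL is not a Figma file URL.
function figmaFileKey(url: string): string | null {
    const match = url.match(/figma\.com\/file\/([^\/?#]+)/)
    return match ? match[1] : null
}
```
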
diff --git a/packages/components/nodes/documentloaders/Figma/figma.svg b/packages/components/nodes/documentloaders/Figma/figma.svg
new file mode 100644
index 000000000..c4f85674f
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Figma/figma.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/documentloaders/Folder/Folder.ts b/packages/components/nodes/documentloaders/Folder/Folder.ts
index 2290133e4..f5d0c6402 100644
--- a/packages/components/nodes/documentloaders/Folder/Folder.ts
+++ b/packages/components/nodes/documentloaders/Folder/Folder.ts
@@ -10,6 +10,7 @@ import { DocxLoader } from 'langchain/document_loaders/fs/docx'
class Folder_DocumentLoaders implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -20,6 +21,7 @@ class Folder_DocumentLoaders implements INode {
constructor() {
this.label = 'Folder with Files'
this.name = 'folderFiles'
+ this.version = 1.0
this.type = 'Document'
this.icon = 'folder.svg'
this.category = 'Document Loaders'
@@ -59,7 +61,40 @@ class Folder_DocumentLoaders implements INode {
'.csv': (path) => new CSVLoader(path),
'.docx': (path) => new DocxLoader(path),
// @ts-ignore
- '.pdf': (path) => new PDFLoader(path, { pdfjs: () => import('pdf-parse/lib/pdf.js/v1.10.100/build/pdf.js') })
+ '.pdf': (path) => new PDFLoader(path, { pdfjs: () => import('pdf-parse/lib/pdf.js/v1.10.100/build/pdf.js') }),
+ '.aspx': (path) => new TextLoader(path),
+ '.asp': (path) => new TextLoader(path),
+ '.cpp': (path) => new TextLoader(path), // C++
+ '.c': (path) => new TextLoader(path),
+ '.cs': (path) => new TextLoader(path),
+ '.css': (path) => new TextLoader(path),
+ '.go': (path) => new TextLoader(path), // Go
+ '.h': (path) => new TextLoader(path), // C++ Header files
+ '.java': (path) => new TextLoader(path), // Java
+ '.js': (path) => new TextLoader(path), // JavaScript
+ '.less': (path) => new TextLoader(path), // Less files
+ '.ts': (path) => new TextLoader(path), // TypeScript
+ '.php': (path) => new TextLoader(path), // PHP
+ '.proto': (path) => new TextLoader(path), // Protocol Buffers
+ '.python': (path) => new TextLoader(path), // Python
+ '.py': (path) => new TextLoader(path), // Python
+ '.rst': (path) => new TextLoader(path), // reStructuredText
+ '.ruby': (path) => new TextLoader(path), // Ruby
+ '.rb': (path) => new TextLoader(path), // Ruby
+ '.rs': (path) => new TextLoader(path), // Rust
+ '.scala': (path) => new TextLoader(path), // Scala
+ '.sc': (path) => new TextLoader(path), // Scala
+ '.scss': (path) => new TextLoader(path), // Sass
+ '.sol': (path) => new TextLoader(path), // Solidity
+ '.sql': (path) => new TextLoader(path), // SQL
+ '.swift': (path) => new TextLoader(path), // Swift
+ '.markdown': (path) => new TextLoader(path), // Markdown
+ '.md': (path) => new TextLoader(path), // Markdown
+ '.tex': (path) => new TextLoader(path), // LaTeX
+ '.ltx': (path) => new TextLoader(path), // LaTeX
+ '.html': (path) => new TextLoader(path), // HTML
+ '.vb': (path) => new TextLoader(path), // Visual Basic
+ '.xml': (path) => new TextLoader(path) // XML
})
let docs = []
diff --git a/packages/components/nodes/documentloaders/Gitbook/Gitbook.ts b/packages/components/nodes/documentloaders/Gitbook/Gitbook.ts
new file mode 100644
index 000000000..181fa48d4
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Gitbook/Gitbook.ts
@@ -0,0 +1,84 @@
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { TextSplitter } from 'langchain/text_splitter'
+import { GitbookLoader } from 'langchain/document_loaders/web/gitbook'
+
+class Gitbook_DocumentLoaders implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs?: INodeParams[]
+
+ constructor() {
+ this.label = 'GitBook'
+ this.name = 'gitbook'
+ this.version = 1.0
+ this.type = 'Document'
+ this.icon = 'gitbook.svg'
+ this.category = 'Document Loaders'
+ this.description = `Load data from GitBook`
+ this.baseClasses = [this.type]
+ this.inputs = [
+ {
+ label: 'Web Path',
+ name: 'webPath',
+ type: 'string',
+ placeholder: 'https://docs.gitbook.com/product-tour/navigation',
+ description: 'If you want to load all paths from the GitBook, provide only the root path, e.g. https://docs.gitbook.com/'
+ },
+ {
+ label: 'Should Load All Paths',
+ name: 'shouldLoadAllPaths',
+ type: 'boolean',
+ description: 'Load from all paths in a given GitBook',
+ optional: true
+ },
+ {
+ label: 'Text Splitter',
+ name: 'textSplitter',
+ type: 'TextSplitter',
+ optional: true
+ },
+ {
+ label: 'Metadata',
+ name: 'metadata',
+ type: 'json',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+ async init(nodeData: INodeData): Promise<any> {
+ const webPath = nodeData.inputs?.webPath as string
+ const shouldLoadAllPaths = nodeData.inputs?.shouldLoadAllPaths as boolean
+ const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
+ const metadata = nodeData.inputs?.metadata
+
+ const loader = shouldLoadAllPaths ? new GitbookLoader(webPath, { shouldLoadAllPaths }) : new GitbookLoader(webPath)
+
+ const docs = textSplitter ? await loader.loadAndSplit(textSplitter) : await loader.load()
+
+ if (metadata) {
+ const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
+ return docs.map((doc) => {
+ return {
+ ...doc,
+ metadata: {
+ ...doc.metadata,
+ ...parsedMetadata
+ }
+ }
+ })
+ }
+
+ return docs
+ }
+}
+
+module.exports = {
+ nodeClass: Gitbook_DocumentLoaders
+}
diff --git a/packages/components/nodes/documentloaders/Gitbook/gitbook.svg b/packages/components/nodes/documentloaders/Gitbook/gitbook.svg
new file mode 100644
index 000000000..df16237a5
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Gitbook/gitbook.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/documentloaders/Github/Github.ts b/packages/components/nodes/documentloaders/Github/Github.ts
index 552790abf..079bffb07 100644
--- a/packages/components/nodes/documentloaders/Github/Github.ts
+++ b/packages/components/nodes/documentloaders/Github/Github.ts
@@ -1,25 +1,37 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { TextSplitter } from 'langchain/text_splitter'
import { GithubRepoLoader, GithubRepoLoaderParams } from 'langchain/document_loaders/web/github'
+import { getCredentialData, getCredentialParam } from '../../../src'
class Github_DocumentLoaders implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
category: string
baseClasses: string[]
+ credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Github'
this.name = 'github'
+ this.version = 1.0
this.type = 'Document'
this.icon = 'github.png'
this.category = 'Document Loaders'
this.description = `Load data from a GitHub repository`
this.baseClasses = [this.type]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ description: 'Only needed when accessing private repo',
+ optional: true,
+ credentialNames: ['githubApi']
+ }
this.inputs = [
{
label: 'Repo Link',
@@ -34,10 +46,9 @@ class Github_DocumentLoaders implements INode {
default: 'main'
},
{
- label: 'Access Token',
- name: 'accessToken',
- type: 'password',
- placeholder: '',
+ label: 'Recursive',
+ name: 'recursive',
+ type: 'boolean',
optional: true
},
{
@@ -56,44 +67,38 @@ class Github_DocumentLoaders implements INode {
]
}
- async init(nodeData: INodeData): Promise<any> {
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const repoLink = nodeData.inputs?.repoLink as string
const branch = nodeData.inputs?.branch as string
- const accessToken = nodeData.inputs?.accessToken as string
+ const recursive = nodeData.inputs?.recursive as boolean
const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
const metadata = nodeData.inputs?.metadata
- const options: GithubRepoLoaderParams = {
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const accessToken = getCredentialParam('accessToken', credentialData, nodeData)
+
+ const githubOptions: GithubRepoLoaderParams = {
branch,
- recursive: false,
+ recursive,
unknown: 'warn'
}
- if (accessToken) options.accessToken = accessToken
+ if (accessToken) githubOptions.accessToken = accessToken
- const loader = new GithubRepoLoader(repoLink, options)
- let docs = []
-
- if (textSplitter) {
- docs = await loader.loadAndSplit(textSplitter)
- } else {
- docs = await loader.load()
- }
+ const loader = new GithubRepoLoader(repoLink, githubOptions)
+ const docs = textSplitter ? await loader.loadAndSplit(textSplitter) : await loader.load()
if (metadata) {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
- let finaldocs = []
- for (const doc of docs) {
- const newdoc = {
+ return docs.map((doc) => {
+ return {
...doc,
metadata: {
...doc.metadata,
...parsedMetadata
}
}
- finaldocs.push(newdoc)
- }
- return finaldocs
+ })
}
return docs
diff --git a/packages/components/nodes/documentloaders/Json/Json.ts b/packages/components/nodes/documentloaders/Json/Json.ts
index 9177df5cb..43051251b 100644
--- a/packages/components/nodes/documentloaders/Json/Json.ts
+++ b/packages/components/nodes/documentloaders/Json/Json.ts
@@ -5,6 +5,7 @@ import { JSONLoader } from 'langchain/document_loaders/fs/json'
class Json_DocumentLoaders implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,6 +16,7 @@ class Json_DocumentLoaders implements INode {
constructor() {
this.label = 'Json File'
this.name = 'jsonFile'
+ this.version = 1.0
this.type = 'Document'
this.icon = 'json.svg'
this.category = 'Document Loaders'
diff --git a/packages/components/nodes/documentloaders/Jsonlines/Jsonlines.ts b/packages/components/nodes/documentloaders/Jsonlines/Jsonlines.ts
new file mode 100644
index 000000000..fcc2fae99
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Jsonlines/Jsonlines.ts
@@ -0,0 +1,108 @@
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { TextSplitter } from 'langchain/text_splitter'
+import { JSONLinesLoader } from 'langchain/document_loaders/fs/json'
+
+class Jsonlines_DocumentLoaders implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Json Lines File'
+ this.name = 'jsonlinesFile'
+ this.version = 1.0
+ this.type = 'Document'
+ this.icon = 'jsonlines.svg'
+ this.category = 'Document Loaders'
+ this.description = `Load data from JSON Lines files`
+ this.baseClasses = [this.type]
+ this.inputs = [
+ {
+ label: 'Jsonlines File',
+ name: 'jsonlinesFile',
+ type: 'file',
+ fileType: '.jsonl'
+ },
+ {
+ label: 'Text Splitter',
+ name: 'textSplitter',
+ type: 'TextSplitter',
+ optional: true
+ },
+ {
+ label: 'Pointer Extraction',
+ name: 'pointerName',
+ type: 'string',
+ placeholder: 'Enter pointer name',
+ optional: false
+ },
+ {
+ label: 'Metadata',
+ name: 'metadata',
+ type: 'json',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData): Promise<any> {
+ const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
+ const jsonLinesFileBase64 = nodeData.inputs?.jsonlinesFile as string
+ const pointerName = nodeData.inputs?.pointerName as string
+ const metadata = nodeData.inputs?.metadata
+
+ let alldocs = []
+ let files: string[] = []
+
+ let pointer = '/' + pointerName.trim()
+
+ if (jsonLinesFileBase64.startsWith('[') && jsonLinesFileBase64.endsWith(']')) {
+ files = JSON.parse(jsonLinesFileBase64)
+ } else {
+ files = [jsonLinesFileBase64]
+ }
+
+ for (const file of files) {
+ const splitDataURI = file.split(',')
+ splitDataURI.pop()
+ const bf = Buffer.from(splitDataURI.pop() || '', 'base64')
+ const blob = new Blob([bf])
+ const loader = new JSONLinesLoader(blob, pointer)
+
+ if (textSplitter) {
+ const docs = await loader.loadAndSplit(textSplitter)
+ alldocs.push(...docs)
+ } else {
+ const docs = await loader.load()
+ alldocs.push(...docs)
+ }
+ }
+
+ if (metadata) {
+ const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
+ let finaldocs = []
+ for (const doc of alldocs) {
+ const newdoc = {
+ ...doc,
+ metadata: {
+ ...doc.metadata,
+ ...parsedMetadata
+ }
+ }
+ finaldocs.push(newdoc)
+ }
+ return finaldocs
+ }
+
+ return alldocs
+ }
+}
+
+module.exports = { nodeClass: Jsonlines_DocumentLoaders }
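The `split(',')` / double-`pop()` sequence in the loop above unwraps the base64 data URI that Flowise stores for an uploaded file. A hedged standalone sketch of that step, assuming (as the two pops imply) that the stored string carries exactly one trailing segment after the base64 payload, e.g. a `,filename:...` marker:

```typescript
// Hedged sketch of the data-URI unwrapping in the file loaders. Assumed input
// shape: "data:<mime>;base64,<payload>,<trailing segment>" — the first pop()
// discards the trailing segment, the second pop() yields the base64 payload.
function dataUriToBuffer(file: string): Buffer {
    const splitDataURI = file.split(',')
    splitDataURI.pop() // drop the trailing segment (e.g. the filename marker)
    return Buffer.from(splitDataURI.pop() || '', 'base64') // decode the payload
}
```

This only works because the base64 alphabet never contains a comma, so `split(',')` cannot cut the payload itself.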
diff --git a/packages/components/nodes/documentloaders/Jsonlines/jsonlines.svg b/packages/components/nodes/documentloaders/Jsonlines/jsonlines.svg
new file mode 100644
index 000000000..f3686f0c9
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Jsonlines/jsonlines.svg
@@ -0,0 +1,16 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/documentloaders/Notion/NotionDB.ts b/packages/components/nodes/documentloaders/Notion/NotionDB.ts
new file mode 100644
index 000000000..74879dd2f
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Notion/NotionDB.ts
@@ -0,0 +1,100 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { TextSplitter } from 'langchain/text_splitter'
+import { NotionAPILoader, NotionAPILoaderOptions } from 'langchain/document_loaders/web/notionapi'
+import { getCredentialData, getCredentialParam } from '../../../src'
+
+class NotionDB_DocumentLoaders implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Notion Database'
+ this.name = 'notionDB'
+ this.version = 1.0
+ this.type = 'Document'
+ this.icon = 'notion.png'
+ this.category = 'Document Loaders'
+ this.description = 'Load data from Notion Database (each row is a separate document with all properties as metadata)'
+ this.baseClasses = [this.type]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['notionApi']
+ }
+ this.inputs = [
+ {
+ label: 'Text Splitter',
+ name: 'textSplitter',
+ type: 'TextSplitter',
+ optional: true
+ },
+ {
+ label: 'Notion Database Id',
+ name: 'databaseId',
+ type: 'string',
+ description: 'If your URL looks like https://www.notion.so/abcdefh?v=long_hash_2, then abcdefh is the database ID'
+ },
+ {
+ label: 'Metadata',
+ name: 'metadata',
+ type: 'json',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
+ const databaseId = nodeData.inputs?.databaseId as string
+ const metadata = nodeData.inputs?.metadata
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const notionIntegrationToken = getCredentialParam('notionIntegrationToken', credentialData, nodeData)
+
+ const obj: NotionAPILoaderOptions = {
+ clientOptions: {
+ auth: notionIntegrationToken
+ },
+ id: databaseId,
+ type: 'database'
+ }
+ const loader = new NotionAPILoader(obj)
+
+ let docs = []
+ if (textSplitter) {
+ docs = await loader.loadAndSplit(textSplitter)
+ } else {
+ docs = await loader.load()
+ }
+
+ if (metadata) {
+ const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
+ let finaldocs = []
+ for (const doc of docs) {
+ const newdoc = {
+ ...doc,
+ metadata: {
+ ...doc.metadata,
+ ...parsedMetadata
+ }
+ }
+ finaldocs.push(newdoc)
+ }
+ return finaldocs
+ }
+
+ return docs
+ }
+}
+
+module.exports = { nodeClass: NotionDB_DocumentLoaders }
diff --git a/packages/components/nodes/documentloaders/Notion/Notion.ts b/packages/components/nodes/documentloaders/Notion/NotionFolder.ts
similarity index 90%
rename from packages/components/nodes/documentloaders/Notion/Notion.ts
rename to packages/components/nodes/documentloaders/Notion/NotionFolder.ts
index f5bfcb2ad..8b8254a4f 100644
--- a/packages/components/nodes/documentloaders/Notion/Notion.ts
+++ b/packages/components/nodes/documentloaders/Notion/NotionFolder.ts
@@ -2,9 +2,10 @@ import { INode, INodeData, INodeParams } from '../../../src/Interface'
import { TextSplitter } from 'langchain/text_splitter'
import { NotionLoader } from 'langchain/document_loaders/fs/notion'
-class Notion_DocumentLoaders implements INode {
+class NotionFolder_DocumentLoaders implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,10 +16,11 @@ class Notion_DocumentLoaders implements INode {
constructor() {
this.label = 'Notion Folder'
this.name = 'notionFolder'
+ this.version = 1.0
this.type = 'Document'
this.icon = 'notion.png'
this.category = 'Document Loaders'
- this.description = `Load data from Notion folder`
+ this.description = 'Load data from the exported and unzipped Notion folder'
this.baseClasses = [this.type]
this.inputs = [
{
@@ -78,4 +80,4 @@ class Notion_DocumentLoaders implements INode {
}
}
-module.exports = { nodeClass: Notion_DocumentLoaders }
+module.exports = { nodeClass: NotionFolder_DocumentLoaders }
diff --git a/packages/components/nodes/documentloaders/Notion/NotionPage.ts b/packages/components/nodes/documentloaders/Notion/NotionPage.ts
new file mode 100644
index 000000000..b45067ab1
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Notion/NotionPage.ts
@@ -0,0 +1,101 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { TextSplitter } from 'langchain/text_splitter'
+import { NotionAPILoader, NotionAPILoaderOptions } from 'langchain/document_loaders/web/notionapi'
+import { getCredentialData, getCredentialParam } from '../../../src'
+
+class NotionPage_DocumentLoaders implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Notion Page'
+ this.name = 'notionPage'
+ this.version = 1.0
+ this.type = 'Document'
+ this.icon = 'notion.png'
+ this.category = 'Document Loaders'
+ this.description = 'Load data from a Notion page (child pages included, each as a separate document)'
+ this.baseClasses = [this.type]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['notionApi']
+ }
+ this.inputs = [
+ {
+ label: 'Text Splitter',
+ name: 'textSplitter',
+ type: 'TextSplitter',
+ optional: true
+ },
+ {
+ label: 'Notion Page Id',
+ name: 'pageId',
+ type: 'string',
+ description:
+ 'The last 32-char hex string in the URL path. For example, in https://www.notion.so/skarard/LangChain-Notion-API-b34ca03f219c4420a6046fc4bdfdf7b4, b34ca03f219c4420a6046fc4bdfdf7b4 is the Page ID'
+ },
+ {
+ label: 'Metadata',
+ name: 'metadata',
+ type: 'json',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
+ const pageId = nodeData.inputs?.pageId as string
+ const metadata = nodeData.inputs?.metadata
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const notionIntegrationToken = getCredentialParam('notionIntegrationToken', credentialData, nodeData)
+
+ const obj: NotionAPILoaderOptions = {
+ clientOptions: {
+ auth: notionIntegrationToken
+ },
+ id: pageId,
+ type: 'page'
+ }
+ const loader = new NotionAPILoader(obj)
+
+ let docs = []
+ if (textSplitter) {
+ docs = await loader.loadAndSplit(textSplitter)
+ } else {
+ docs = await loader.load()
+ }
+
+ if (metadata) {
+ const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
+ let finaldocs = []
+ for (const doc of docs) {
+ const newdoc = {
+ ...doc,
+ metadata: {
+ ...doc.metadata,
+ ...parsedMetadata
+ }
+ }
+ finaldocs.push(newdoc)
+ }
+ return finaldocs
+ }
+
+ return docs
+ }
+}
+
+module.exports = { nodeClass: NotionPage_DocumentLoaders }
diff --git a/packages/components/nodes/documentloaders/Pdf/Pdf.ts b/packages/components/nodes/documentloaders/Pdf/Pdf.ts
index bc36f8cb5..a9f6ab23b 100644
--- a/packages/components/nodes/documentloaders/Pdf/Pdf.ts
+++ b/packages/components/nodes/documentloaders/Pdf/Pdf.ts
@@ -5,6 +5,7 @@ import { PDFLoader } from 'langchain/document_loaders/fs/pdf'
class Pdf_DocumentLoaders implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,6 +16,7 @@ class Pdf_DocumentLoaders implements INode {
constructor() {
this.label = 'Pdf File'
this.name = 'pdfFile'
+ this.version = 1.0
this.type = 'Document'
this.icon = 'pdf.svg'
this.category = 'Document Loaders'
@@ -49,6 +51,13 @@ class Pdf_DocumentLoaders implements INode {
],
default: 'perPage'
},
+ {
+ label: 'Use Legacy Build',
+ name: 'legacyBuild',
+ type: 'boolean',
+ optional: true,
+ additionalParams: true
+ },
{
label: 'Metadata',
name: 'metadata',
@@ -64,6 +73,7 @@ class Pdf_DocumentLoaders implements INode {
const pdfFileBase64 = nodeData.inputs?.pdfFile as string
const usage = nodeData.inputs?.usage as string
const metadata = nodeData.inputs?.metadata
+ const legacyBuild = nodeData.inputs?.legacyBuild as boolean
let alldocs = []
let files: string[] = []
@@ -81,8 +91,9 @@ class Pdf_DocumentLoaders implements INode {
if (usage === 'perFile') {
const loader = new PDFLoader(new Blob([bf]), {
splitPages: false,
- // @ts-ignore
- pdfjs: () => import('pdf-parse/lib/pdf.js/v1.10.100/build/pdf.js')
+ pdfjs: () =>
+ // @ts-ignore
+ legacyBuild ? import('pdfjs-dist/legacy/build/pdf.js') : import('pdf-parse/lib/pdf.js/v1.10.100/build/pdf.js')
})
if (textSplitter) {
const docs = await loader.loadAndSplit(textSplitter)
@@ -92,8 +103,11 @@ class Pdf_DocumentLoaders implements INode {
alldocs.push(...docs)
}
} else {
- // @ts-ignore
- const loader = new PDFLoader(new Blob([bf]), { pdfjs: () => import('pdf-parse/lib/pdf.js/v1.10.100/build/pdf.js') })
+ const loader = new PDFLoader(new Blob([bf]), {
+ pdfjs: () =>
+ // @ts-ignore
+ legacyBuild ? import('pdfjs-dist/legacy/build/pdf.js') : import('pdf-parse/lib/pdf.js/v1.10.100/build/pdf.js')
+ })
if (textSplitter) {
const docs = await loader.loadAndSplit(textSplitter)
alldocs.push(...docs)
diff --git a/packages/components/nodes/documentloaders/Playwright/Playwright.ts b/packages/components/nodes/documentloaders/Playwright/Playwright.ts
new file mode 100644
index 000000000..eb246045c
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Playwright/Playwright.ts
@@ -0,0 +1,202 @@
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { TextSplitter } from 'langchain/text_splitter'
+import { Browser, Page, PlaywrightWebBaseLoader, PlaywrightWebBaseLoaderOptions } from 'langchain/document_loaders/web/playwright'
+import { test } from 'linkifyjs'
+import { webCrawl, xmlScrape } from '../../../src'
+
+class Playwright_DocumentLoaders implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Playwright Web Scraper'
+ this.name = 'playwrightWebScraper'
+ this.version = 1.0
+ this.type = 'Document'
+ this.icon = 'playwright.svg'
+ this.category = 'Document Loaders'
+ this.description = `Load data from webpages`
+ this.baseClasses = [this.type]
+ this.inputs = [
+ {
+ label: 'URL',
+ name: 'url',
+ type: 'string'
+ },
+ {
+ label: 'Text Splitter',
+ name: 'textSplitter',
+ type: 'TextSplitter',
+ optional: true
+ },
+ {
+ label: 'Get Relative Links Method',
+ name: 'relativeLinksMethod',
+ type: 'options',
+ description: 'Select a method to retrieve relative links',
+ options: [
+ {
+ label: 'Web Crawl',
+ name: 'webCrawl',
+ description: 'Crawl relative links from HTML URL'
+ },
+ {
+ label: 'Scrape XML Sitemap',
+ name: 'scrapeXMLSitemap',
+ description: 'Scrape relative links from XML sitemap URL'
+ }
+ ],
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Get Relative Links Limit',
+ name: 'limit',
+ type: 'number',
+ optional: true,
+ additionalParams: true,
+ description:
+ 'Only used when "Get Relative Links Method" is selected. Set 0 to retrieve all relative links, default limit is 10.',
+ warning: `Retrieving all links might take a long time, and all links will be upserted again if the flow's state changes (e.g., different URL, chunk size, etc.)`
+ },
+ {
+ label: 'Wait Until',
+ name: 'waitUntilGoToOption',
+ type: 'options',
+ description: 'Select a go to wait until option',
+ options: [
+ {
+ label: 'Load',
+ name: 'load',
+ description: 'Consider operation to be finished when the load event is fired.'
+ },
+ {
+ label: 'DOM Content Loaded',
+ name: 'domcontentloaded',
+ description: 'Consider operation to be finished when the DOMContentLoaded event is fired.'
+ },
+ {
+ label: 'Network Idle',
+ name: 'networkidle',
+ description: 'Navigation is finished when there are no more connections for at least 500 ms.'
+ },
+ {
+ label: 'Commit',
+ name: 'commit',
+ description: 'Consider operation to be finished when network response is received and the document started loading.'
+ }
+ ],
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Wait for selector to load',
+ name: 'waitForSelector',
+ type: 'string',
+ optional: true,
+ additionalParams: true,
+ description: 'CSS selector to wait for, e.g. .my-class or #my-id'
+ },
+ {
+ label: 'Metadata',
+ name: 'metadata',
+ type: 'json',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData): Promise<any> {
+ const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
+ const metadata = nodeData.inputs?.metadata
+ const relativeLinksMethod = nodeData.inputs?.relativeLinksMethod as string
+ let limit = nodeData.inputs?.limit as string
+ let waitUntilGoToOption = nodeData.inputs?.waitUntilGoToOption as 'load' | 'domcontentloaded' | 'networkidle' | 'commit' | undefined
+ let waitForSelector = nodeData.inputs?.waitForSelector as string
+
+ let url = nodeData.inputs?.url as string
+ url = url.trim()
+ if (!test(url)) {
+ throw new Error('Invalid URL')
+ }
+
+        async function playwrightLoader(url: string): Promise<any> {
+ try {
+ let docs = []
+ const config: PlaywrightWebBaseLoaderOptions = {
+ launchOptions: {
+ args: ['--no-sandbox'],
+ headless: true
+ }
+ }
+ if (waitUntilGoToOption) {
+ config['gotoOptions'] = {
+ waitUntil: waitUntilGoToOption
+ }
+ }
+ if (waitForSelector) {
+                    config['evaluate'] = async (page: Page, _: Browser): Promise<string> => {
+ await page.waitForSelector(waitForSelector)
+
+ const result = await page.evaluate(() => document.body.innerHTML)
+ return result
+ }
+ }
+ const loader = new PlaywrightWebBaseLoader(url, config)
+ if (textSplitter) {
+ docs = await loader.loadAndSplit(textSplitter)
+ } else {
+ docs = await loader.load()
+ }
+ return docs
+ } catch (err) {
+ if (process.env.DEBUG === 'true') console.error(`error in PlaywrightWebBaseLoader: ${err.message}, on page: ${url}`)
+ }
+ }
+
+ let docs = []
+ if (relativeLinksMethod) {
+ if (process.env.DEBUG === 'true') console.info(`Start ${relativeLinksMethod}`)
+ if (!limit) limit = '10'
+ else if (parseInt(limit) < 0) throw new Error('Limit cannot be less than 0')
+ const pages: string[] =
+ relativeLinksMethod === 'webCrawl' ? await webCrawl(url, parseInt(limit)) : await xmlScrape(url, parseInt(limit))
+ if (process.env.DEBUG === 'true') console.info(`pages: ${JSON.stringify(pages)}, length: ${pages.length}`)
+ if (!pages || pages.length === 0) throw new Error('No relative links found')
+ for (const page of pages) {
+ docs.push(...(await playwrightLoader(page)))
+ }
+ if (process.env.DEBUG === 'true') console.info(`Finish ${relativeLinksMethod}`)
+ } else {
+ docs = await playwrightLoader(url)
+ }
+
+ if (metadata) {
+ const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
+ let finaldocs = []
+ for (const doc of docs) {
+ const newdoc = {
+ ...doc,
+ metadata: {
+ ...doc.metadata,
+ ...parsedMetadata
+ }
+ }
+ finaldocs.push(newdoc)
+ }
+ return finaldocs
+ }
+
+ return docs
+ }
+}
+
+module.exports = { nodeClass: Playwright_DocumentLoaders }
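The metadata-merging step at the end of `init()` is shared by the scraper nodes above; a self-contained sketch of that behavior (the `Doc` shape below is a simplified stand-in for LangChain's `Document` class, not the real one):

```typescript
// Simplified stand-in for LangChain's Document class (illustration only).
interface Doc {
    pageContent: string
    metadata: Record<string, any>
}

// Mirrors the merge in init(): the Metadata input may arrive as an object or a
// JSON string, and user-supplied keys override keys set by the loader.
function applyMetadata(docs: Doc[], metadata: string | object): Doc[] {
    const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
    return docs.map((doc) => ({
        ...doc,
        metadata: { ...doc.metadata, ...parsedMetadata }
    }))
}

const merged = applyMetadata([{ pageContent: 'hello', metadata: { source: 'https://example.com', lang: 'en' } }], '{"lang": "de"}')
console.log(merged[0].metadata) // the loader's `source` is kept, the user's `lang` wins
```

Because the user's keys are spread last, they deliberately win any conflict with loader-generated metadata.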
diff --git a/packages/components/nodes/documentloaders/Playwright/playwright.svg b/packages/components/nodes/documentloaders/Playwright/playwright.svg
new file mode 100644
index 000000000..0992832dc
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Playwright/playwright.svg
@@ -0,0 +1,9 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/documentloaders/Puppeteer/Puppeteer.ts b/packages/components/nodes/documentloaders/Puppeteer/Puppeteer.ts
new file mode 100644
index 000000000..4691eb948
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Puppeteer/Puppeteer.ts
@@ -0,0 +1,203 @@
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { TextSplitter } from 'langchain/text_splitter'
+import { Browser, Page, PuppeteerWebBaseLoader, PuppeteerWebBaseLoaderOptions } from 'langchain/document_loaders/web/puppeteer'
+import { test } from 'linkifyjs'
+import { webCrawl, xmlScrape } from '../../../src'
+import { PuppeteerLifeCycleEvent } from 'puppeteer'
+
+class Puppeteer_DocumentLoaders implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Puppeteer Web Scraper'
+ this.name = 'puppeteerWebScraper'
+ this.version = 1.0
+ this.type = 'Document'
+ this.icon = 'puppeteer.svg'
+ this.category = 'Document Loaders'
+ this.description = `Load data from webpages`
+ this.baseClasses = [this.type]
+ this.inputs = [
+ {
+ label: 'URL',
+ name: 'url',
+ type: 'string'
+ },
+ {
+ label: 'Text Splitter',
+ name: 'textSplitter',
+ type: 'TextSplitter',
+ optional: true
+ },
+ {
+ label: 'Get Relative Links Method',
+ name: 'relativeLinksMethod',
+ type: 'options',
+ description: 'Select a method to retrieve relative links',
+ options: [
+ {
+ label: 'Web Crawl',
+ name: 'webCrawl',
+ description: 'Crawl relative links from HTML URL'
+ },
+ {
+ label: 'Scrape XML Sitemap',
+ name: 'scrapeXMLSitemap',
+ description: 'Scrape relative links from XML sitemap URL'
+ }
+ ],
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Get Relative Links Limit',
+ name: 'limit',
+ type: 'number',
+ optional: true,
+ additionalParams: true,
+ description:
+ 'Only used when "Get Relative Links Method" is selected. Set 0 to retrieve all relative links, default limit is 10.',
+                warning: `Retrieving all links might take a long time, and all links will be upserted again if the flow's state changes (e.g. different URL, chunk size, etc.)`
+ },
+ {
+ label: 'Wait Until',
+ name: 'waitUntilGoToOption',
+ type: 'options',
+                description: 'Select a wait until option for page navigation',
+ options: [
+ {
+ label: 'Load',
+ name: 'load',
+                        description: `When the load event is fired (the page and its resources have finished loading)`
+ },
+ {
+ label: 'DOM Content Loaded',
+ name: 'domcontentloaded',
+                        description: `When the initial HTML document has been loaded and parsed, without waiting for stylesheets, images and subframes`
+ },
+ {
+ label: 'Network Idle 0',
+ name: 'networkidle0',
+ description: 'Navigation is finished when there are no more than 0 network connections for at least 500 ms'
+ },
+ {
+ label: 'Network Idle 2',
+ name: 'networkidle2',
+ description: 'Navigation is finished when there are no more than 2 network connections for at least 500 ms'
+ }
+ ],
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Wait for selector to load',
+ name: 'waitForSelector',
+ type: 'string',
+ optional: true,
+ additionalParams: true,
+ description: 'CSS selectors like .div or #div'
+ },
+ {
+ label: 'Metadata',
+ name: 'metadata',
+ type: 'json',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData): Promise<any> {
+ const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
+ const metadata = nodeData.inputs?.metadata
+ const relativeLinksMethod = nodeData.inputs?.relativeLinksMethod as string
+ let limit = nodeData.inputs?.limit as string
+ let waitUntilGoToOption = nodeData.inputs?.waitUntilGoToOption as PuppeteerLifeCycleEvent
+ let waitForSelector = nodeData.inputs?.waitForSelector as string
+
+ let url = nodeData.inputs?.url as string
+ url = url.trim()
+ if (!test(url)) {
+ throw new Error('Invalid URL')
+ }
+
+        async function puppeteerLoader(url: string): Promise<any> {
+ try {
+ let docs = []
+ const config: PuppeteerWebBaseLoaderOptions = {
+ launchOptions: {
+ args: ['--no-sandbox'],
+ headless: 'new'
+ }
+ }
+ if (waitUntilGoToOption) {
+ config['gotoOptions'] = {
+ waitUntil: waitUntilGoToOption
+ }
+ }
+ if (waitForSelector) {
+                    config['evaluate'] = async (page: Page, _: Browser): Promise<string> => {
+ await page.waitForSelector(waitForSelector)
+
+ const result = await page.evaluate(() => document.body.innerHTML)
+ return result
+ }
+ }
+ const loader = new PuppeteerWebBaseLoader(url, config)
+ if (textSplitter) {
+ docs = await loader.loadAndSplit(textSplitter)
+ } else {
+ docs = await loader.load()
+ }
+ return docs
+ } catch (err) {
+ if (process.env.DEBUG === 'true') console.error(`error in PuppeteerWebBaseLoader: ${err.message}, on page: ${url}`)
+ }
+ }
+
+ let docs = []
+ if (relativeLinksMethod) {
+ if (process.env.DEBUG === 'true') console.info(`Start ${relativeLinksMethod}`)
+ if (!limit) limit = '10'
+ else if (parseInt(limit) < 0) throw new Error('Limit cannot be less than 0')
+ const pages: string[] =
+ relativeLinksMethod === 'webCrawl' ? await webCrawl(url, parseInt(limit)) : await xmlScrape(url, parseInt(limit))
+ if (process.env.DEBUG === 'true') console.info(`pages: ${JSON.stringify(pages)}, length: ${pages.length}`)
+ if (!pages || pages.length === 0) throw new Error('No relative links found')
+ for (const page of pages) {
+ docs.push(...(await puppeteerLoader(page)))
+ }
+ if (process.env.DEBUG === 'true') console.info(`Finish ${relativeLinksMethod}`)
+ } else {
+ docs = await puppeteerLoader(url)
+ }
+
+ if (metadata) {
+ const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
+ let finaldocs = []
+ for (const doc of docs) {
+ const newdoc = {
+ ...doc,
+ metadata: {
+ ...doc.metadata,
+ ...parsedMetadata
+ }
+ }
+ finaldocs.push(newdoc)
+ }
+ return finaldocs
+ }
+
+ return docs
+ }
+}
+
+module.exports = { nodeClass: Puppeteer_DocumentLoaders }
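Both the Playwright and Puppeteer nodes normalize the "Get Relative Links Limit" input the same way before handing it to `webCrawl`/`xmlScrape`. A sketch of that normalization, assuming (per the field description, not verified against the crawl helpers) that a limit of 0 means "retrieve everything":

```typescript
// Mirrors the limit handling in init(): an empty input falls back to 10,
// negative values are rejected, and 0 is passed through unchanged (the field
// description says 0 retrieves all relative links).
function normalizeLimit(limit?: string): number {
    if (!limit) return 10
    const parsed = parseInt(limit)
    if (parsed < 0) throw new Error('Limit cannot be less than 0')
    return parsed
}
```

Note that the check works on the raw string: `'0'` is truthy, so it survives the default branch and reaches `parseInt`.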
diff --git a/packages/components/nodes/documentloaders/Puppeteer/puppeteer.svg b/packages/components/nodes/documentloaders/Puppeteer/puppeteer.svg
new file mode 100644
index 000000000..8477fc52d
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Puppeteer/puppeteer.svg
@@ -0,0 +1,14 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/documentloaders/Subtitles/Subtitles.ts b/packages/components/nodes/documentloaders/Subtitles/Subtitles.ts
new file mode 100644
index 000000000..f85898b3e
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Subtitles/Subtitles.ts
@@ -0,0 +1,97 @@
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { TextSplitter } from 'langchain/text_splitter'
+import { SRTLoader } from 'langchain/document_loaders/fs/srt'
+
+class Subtitles_DocumentLoaders implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Subtitles File'
+ this.name = 'subtitlesFile'
+ this.version = 1.0
+ this.type = 'Document'
+ this.icon = 'subtitlesFile.svg'
+ this.category = 'Document Loaders'
+ this.description = `Load data from subtitles files`
+ this.baseClasses = [this.type]
+ this.inputs = [
+ {
+ label: 'Subtitles File',
+ name: 'subtitlesFile',
+ type: 'file',
+ fileType: '.srt'
+ },
+ {
+ label: 'Text Splitter',
+ name: 'textSplitter',
+ type: 'TextSplitter',
+ optional: true
+ },
+ {
+ label: 'Metadata',
+ name: 'metadata',
+ type: 'json',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData): Promise<any> {
+ const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
+ const subtitlesFileBase64 = nodeData.inputs?.subtitlesFile as string
+ const metadata = nodeData.inputs?.metadata
+
+ let alldocs = []
+ let files: string[] = []
+
+ if (subtitlesFileBase64.startsWith('[') && subtitlesFileBase64.endsWith(']')) {
+ files = JSON.parse(subtitlesFileBase64)
+ } else {
+ files = [subtitlesFileBase64]
+ }
+
+ for (const file of files) {
+ const splitDataURI = file.split(',')
+ splitDataURI.pop()
+ const bf = Buffer.from(splitDataURI.pop() || '', 'base64')
+ const blob = new Blob([bf])
+ const loader = new SRTLoader(blob)
+
+ if (textSplitter) {
+ const docs = await loader.loadAndSplit(textSplitter)
+ alldocs.push(...docs)
+ } else {
+ const docs = await loader.load()
+ alldocs.push(...docs)
+ }
+ }
+
+ if (metadata) {
+ const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
+ let finaldocs = []
+ for (const doc of alldocs) {
+ const newdoc = {
+ ...doc,
+ metadata: {
+ ...doc.metadata,
+ ...parsedMetadata
+ }
+ }
+ finaldocs.push(newdoc)
+ }
+ return finaldocs
+ }
+ return alldocs
+ }
+}
+
+module.exports = { nodeClass: Subtitles_DocumentLoaders }
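The upload handling in `init()` above is two steps: unwrap a possible JSON array of files, then pull the base64 payload out of each data URI. A sketch of the per-file decode; note the node pops the *last* comma-separated segment before reading the payload, which implies uploads carry a trailing segment after the base64 data (the exact upload string format used below is an assumption, not taken from this diff):

```typescript
// Mirrors the decode in init(): drop the trailing segment, then base64-decode
// the segment before it (the payload of the data URI).
function decodeUpload(file: string): Buffer {
    const splitDataURI = file.split(',')
    splitDataURI.pop() // discard the trailing segment after the payload
    return Buffer.from(splitDataURI.pop() || '', 'base64')
}

// Hypothetical upload string: data-URI header, base64 payload, trailing segment.
// Commas inside the SRT text are safe: base64 output never contains a comma.
const payload = Buffer.from('1\n00:00:01,000 --> 00:00:02,000\nHello').toString('base64')
const srt = decodeUpload(`data:text/plain;base64,${payload},filename:example.srt`)
console.log(srt.toString('utf8').split('\n')[0])
```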
diff --git a/packages/components/nodes/documentloaders/Subtitles/subtitlesFile.svg b/packages/components/nodes/documentloaders/Subtitles/subtitlesFile.svg
new file mode 100644
index 000000000..a6ee925bc
--- /dev/null
+++ b/packages/components/nodes/documentloaders/Subtitles/subtitlesFile.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/documentloaders/Text/Text.ts b/packages/components/nodes/documentloaders/Text/Text.ts
index 63e7e0e26..dacf087c9 100644
--- a/packages/components/nodes/documentloaders/Text/Text.ts
+++ b/packages/components/nodes/documentloaders/Text/Text.ts
@@ -5,6 +5,7 @@ import { TextLoader } from 'langchain/document_loaders/fs/text'
class Text_DocumentLoaders implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,6 +16,7 @@ class Text_DocumentLoaders implements INode {
constructor() {
this.label = 'Text File'
this.name = 'textFile'
+ this.version = 1.0
this.type = 'Document'
this.icon = 'textFile.svg'
this.category = 'Document Loaders'
diff --git a/packages/components/nodes/documentloaders/VectorStoreToDocument/VectorStoreToDocument.ts b/packages/components/nodes/documentloaders/VectorStoreToDocument/VectorStoreToDocument.ts
new file mode 100644
index 000000000..b3f320ce4
--- /dev/null
+++ b/packages/components/nodes/documentloaders/VectorStoreToDocument/VectorStoreToDocument.ts
@@ -0,0 +1,87 @@
+import { VectorStore } from 'langchain/vectorstores/base'
+import { INode, INodeData, INodeOutputsValue, INodeParams } from '../../../src/Interface'
+import { handleEscapeCharacters } from '../../../src/utils'
+
+class VectorStoreToDocument_DocumentLoaders implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+ outputs: INodeOutputsValue[]
+
+ constructor() {
+ this.label = 'VectorStore To Document'
+ this.name = 'vectorStoreToDocument'
+ this.version = 1.0
+ this.type = 'Document'
+ this.icon = 'vectorretriever.svg'
+ this.category = 'Document Loaders'
+ this.description = 'Search documents with scores from vector store'
+ this.baseClasses = [this.type]
+ this.inputs = [
+ {
+ label: 'Vector Store',
+ name: 'vectorStore',
+ type: 'VectorStore'
+ },
+ {
+ label: 'Minimum Score (%)',
+ name: 'minScore',
+ type: 'number',
+ optional: true,
+ placeholder: '75',
+ step: 1,
+                description: 'Minimum score for embedded documents to be included'
+ }
+ ]
+ this.outputs = [
+ {
+ label: 'Document',
+ name: 'document',
+ baseClasses: this.baseClasses
+ },
+ {
+ label: 'Text',
+ name: 'text',
+ baseClasses: ['string', 'json']
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData, input: string): Promise<any> {
+ const vectorStore = nodeData.inputs?.vectorStore as VectorStore
+ const minScore = nodeData.inputs?.minScore as number
+ const output = nodeData.outputs?.output as string
+
+ const topK = (vectorStore as any)?.k ?? 4
+
+ const docs = await vectorStore.similaritySearchWithScore(input, topK)
+ // eslint-disable-next-line no-console
+ console.log('\x1b[94m\x1b[1m\n*****VectorStore Documents*****\n\x1b[0m\x1b[0m')
+ // eslint-disable-next-line no-console
+ console.log(docs)
+
+ if (output === 'document') {
+ let finaldocs = []
+ for (const doc of docs) {
+ if (minScore && doc[1] < minScore / 100) continue
+ finaldocs.push(doc[0])
+ }
+ return finaldocs
+ } else {
+ let finaltext = ''
+ for (const doc of docs) {
+ if (minScore && doc[1] < minScore / 100) continue
+ finaltext += `${doc[0].pageContent}\n`
+ }
+ return handleEscapeCharacters(finaltext, false)
+ }
+ }
+}
+
+module.exports = { nodeClass: VectorStoreToDocument_DocumentLoaders }
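The score filter in `init()` above is easy to sketch in isolation. `similaritySearchWithScore` returns `[document, score]` pairs with scores in `[0, 1]`, while the Minimum Score input is a percentage, so it is divided by 100 before comparing. Note that, as written, a minimum of 0 is falsy and disables filtering entirely (types simplified for illustration):

```typescript
type ScoredDoc = [{ pageContent: string }, number]

// Mirrors the filtering in init(): the percentage input is scaled to [0, 1]
// before comparing against each similarity score; docs below the threshold
// are skipped. A minScore of 0 (or undefined) keeps everything.
function filterByMinScore(docs: ScoredDoc[], minScore?: number): { pageContent: string }[] {
    const finaldocs: { pageContent: string }[] = []
    for (const doc of docs) {
        if (minScore && doc[1] < minScore / 100) continue
        finaldocs.push(doc[0])
    }
    return finaldocs
}
```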
diff --git a/packages/components/nodes/documentloaders/VectorStoreToDocument/vectorretriever.svg b/packages/components/nodes/documentloaders/VectorStoreToDocument/vectorretriever.svg
new file mode 100644
index 000000000..208a59f14
--- /dev/null
+++ b/packages/components/nodes/documentloaders/VectorStoreToDocument/vectorretriever.svg
@@ -0,0 +1,7 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/embeddings/AzureOpenAIEmbedding/Azure.svg b/packages/components/nodes/embeddings/AzureOpenAIEmbedding/Azure.svg
index 51eb62535..47ad8c440 100644
--- a/packages/components/nodes/embeddings/AzureOpenAIEmbedding/Azure.svg
+++ b/packages/components/nodes/embeddings/AzureOpenAIEmbedding/Azure.svg
@@ -1,5 +1 @@
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/packages/components/nodes/embeddings/AzureOpenAIEmbedding/AzureOpenAIEmbedding.ts b/packages/components/nodes/embeddings/AzureOpenAIEmbedding/AzureOpenAIEmbedding.ts
index 355877e55..b70caa4c2 100644
--- a/packages/components/nodes/embeddings/AzureOpenAIEmbedding/AzureOpenAIEmbedding.ts
+++ b/packages/components/nodes/embeddings/AzureOpenAIEmbedding/AzureOpenAIEmbedding.ts
@@ -1,59 +1,43 @@
import { AzureOpenAIInput } from 'langchain/chat_models/openai'
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { OpenAIEmbeddings, OpenAIEmbeddingsParams } from 'langchain/embeddings/openai'
class AzureOpenAIEmbedding_Embeddings implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
+ credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Azure OpenAI Embeddings'
this.name = 'azureOpenAIEmbeddings'
+ this.version = 1.0
this.type = 'AzureOpenAIEmbeddings'
this.icon = 'Azure.svg'
this.category = 'Embeddings'
this.description = 'Azure OpenAI API to generate embeddings for a given text'
this.baseClasses = [this.type, ...getBaseClasses(OpenAIEmbeddings)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['azureOpenAIApi']
+ }
this.inputs = [
{
- label: 'Azure OpenAI Api Key',
- name: 'azureOpenAIApiKey',
- type: 'password'
- },
- {
- label: 'Azure OpenAI Api Instance Name',
- name: 'azureOpenAIApiInstanceName',
- type: 'string',
- placeholder: 'YOUR-INSTANCE-NAME'
- },
- {
- label: 'Azure OpenAI Api Deployment Name',
- name: 'azureOpenAIApiDeploymentName',
- type: 'string',
- placeholder: 'YOUR-DEPLOYMENT-NAME'
- },
- {
- label: 'Azure OpenAI Api Version',
- name: 'azureOpenAIApiVersion',
- type: 'options',
- options: [
- {
- label: '2023-03-15-preview',
- name: '2023-03-15-preview'
- },
- {
- label: '2022-12-01',
- name: '2022-12-01'
- }
- ],
- default: '2023-03-15-preview'
+ label: 'Batch Size',
+ name: 'batchSize',
+ type: 'number',
+ default: '1',
+ optional: true,
+ additionalParams: true
},
{
label: 'Timeout',
@@ -65,13 +49,16 @@ class AzureOpenAIEmbedding_Embeddings implements INode {
]
}
-    async init(nodeData: INodeData): Promise<any> {
- const azureOpenAIApiKey = nodeData.inputs?.azureOpenAIApiKey as string
- const azureOpenAIApiInstanceName = nodeData.inputs?.azureOpenAIApiInstanceName as string
- const azureOpenAIApiDeploymentName = nodeData.inputs?.azureOpenAIApiDeploymentName as string
- const azureOpenAIApiVersion = nodeData.inputs?.azureOpenAIApiVersion as string
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const batchSize = nodeData.inputs?.batchSize as string
const timeout = nodeData.inputs?.timeout as string
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const azureOpenAIApiKey = getCredentialParam('azureOpenAIApiKey', credentialData, nodeData)
+ const azureOpenAIApiInstanceName = getCredentialParam('azureOpenAIApiInstanceName', credentialData, nodeData)
+ const azureOpenAIApiDeploymentName = getCredentialParam('azureOpenAIApiDeploymentName', credentialData, nodeData)
+ const azureOpenAIApiVersion = getCredentialParam('azureOpenAIApiVersion', credentialData, nodeData)
+
        const obj: Partial<OpenAIEmbeddingsParams> & Partial<AzureOpenAIInput> = {
azureOpenAIApiKey,
azureOpenAIApiInstanceName,
@@ -79,6 +66,7 @@ class AzureOpenAIEmbedding_Embeddings implements INode {
azureOpenAIApiVersion
}
+ if (batchSize) obj.batchSize = parseInt(batchSize, 10)
if (timeout) obj.timeout = parseInt(timeout, 10)
const model = new OpenAIEmbeddings(obj)
diff --git a/packages/components/nodes/embeddings/CohereEmbedding/CohereEmbedding.ts b/packages/components/nodes/embeddings/CohereEmbedding/CohereEmbedding.ts
index 344713a48..b42a0357e 100644
--- a/packages/components/nodes/embeddings/CohereEmbedding/CohereEmbedding.ts
+++ b/packages/components/nodes/embeddings/CohereEmbedding/CohereEmbedding.ts
@@ -1,31 +1,35 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { CohereEmbeddings, CohereEmbeddingsParams } from 'langchain/embeddings/cohere'
class CohereEmbedding_Embeddings implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
+ credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Cohere Embeddings'
this.name = 'cohereEmbeddings'
+ this.version = 1.0
this.type = 'CohereEmbeddings'
this.icon = 'cohere.png'
this.category = 'Embeddings'
this.description = 'Cohere API to generate embeddings for a given text'
this.baseClasses = [this.type, ...getBaseClasses(CohereEmbeddings)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['cohereApi']
+ }
this.inputs = [
- {
- label: 'Cohere API Key',
- name: 'cohereApiKey',
- type: 'password'
- },
{
label: 'Model Name',
name: 'modelName',
@@ -50,12 +54,14 @@ class CohereEmbedding_Embeddings implements INode {
]
}
-    async init(nodeData: INodeData): Promise<any> {
- const apiKey = nodeData.inputs?.cohereApiKey as string
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const modelName = nodeData.inputs?.modelName as string
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const cohereApiKey = getCredentialParam('cohereApiKey', credentialData, nodeData)
+
        const obj: Partial<CohereEmbeddingsParams> & { apiKey?: string } = {
- apiKey
+ apiKey: cohereApiKey
}
if (modelName) obj.modelName = modelName
diff --git a/packages/components/nodes/embeddings/GoogleVertexAIEmbedding/GoogleVertexAIEmbedding.ts b/packages/components/nodes/embeddings/GoogleVertexAIEmbedding/GoogleVertexAIEmbedding.ts
new file mode 100644
index 000000000..23bd3565e
--- /dev/null
+++ b/packages/components/nodes/embeddings/GoogleVertexAIEmbedding/GoogleVertexAIEmbedding.ts
@@ -0,0 +1,63 @@
+import { GoogleVertexAIEmbeddings, GoogleVertexAIEmbeddingsParams } from 'langchain/embeddings/googlevertexai'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { GoogleAuthOptions } from 'google-auth-library'
+
+class GoogleVertexAIEmbedding_Embeddings implements INode {
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ description: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'GoogleVertexAI Embeddings'
+ this.name = 'googlevertexaiEmbeddings'
+ this.version = 1.0
+ this.type = 'GoogleVertexAIEmbeddings'
+ this.icon = 'vertexai.svg'
+ this.category = 'Embeddings'
+ this.description = 'Google vertexAI API to generate embeddings for a given text'
+ this.baseClasses = [this.type, ...getBaseClasses(GoogleVertexAIEmbeddings)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['googleVertexAuth']
+ }
+ this.inputs = []
+ }
+
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const googleApplicationCredentialFilePath = getCredentialParam('googleApplicationCredentialFilePath', credentialData, nodeData)
+ const googleApplicationCredential = getCredentialParam('googleApplicationCredential', credentialData, nodeData)
+ const projectID = getCredentialParam('projectID', credentialData, nodeData)
+
+ if (!googleApplicationCredentialFilePath && !googleApplicationCredential)
+ throw new Error('Please specify your Google Application Credential')
+ if (googleApplicationCredentialFilePath && googleApplicationCredential)
+ throw new Error('Please use either Google Application Credential File Path or Google Credential JSON Object')
+
+ const authOptions: GoogleAuthOptions = {}
+ if (googleApplicationCredentialFilePath && !googleApplicationCredential) authOptions.keyFile = googleApplicationCredentialFilePath
+ else if (!googleApplicationCredentialFilePath && googleApplicationCredential)
+ authOptions.credentials = JSON.parse(googleApplicationCredential)
+
+ if (projectID) authOptions.projectId = projectID
+
+ const obj: GoogleVertexAIEmbeddingsParams = {
+ authOptions
+ }
+
+ const model = new GoogleVertexAIEmbeddings(obj)
+ return model
+ }
+}
+
+module.exports = { nodeClass: GoogleVertexAIEmbedding_Embeddings }
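The credential resolution in `init()` above accepts exactly one of a key-file path or an inline JSON credential, plus an optional project ID. It can be sketched without the Google client libraries (the `AuthOptions` shape here is a pared-down stand-in for `GoogleAuthOptions`):

```typescript
// Pared-down stand-in for GoogleAuthOptions (illustration only).
interface AuthOptions {
    keyFile?: string
    credentials?: object
    projectId?: string
}

// Mirrors the validation and branching in init(): reject zero or two
// credential sources, then populate whichever form was supplied.
function buildAuthOptions(filePath?: string, credentialJson?: string, projectID?: string): AuthOptions {
    if (!filePath && !credentialJson) throw new Error('Please specify your Google Application Credential')
    if (filePath && credentialJson)
        throw new Error('Please use either Google Application Credential File Path or Google Credential JSON Object')

    const authOptions: AuthOptions = {}
    if (filePath) authOptions.keyFile = filePath
    else if (credentialJson) authOptions.credentials = JSON.parse(credentialJson)
    if (projectID) authOptions.projectId = projectID
    return authOptions
}
```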
diff --git a/packages/components/nodes/embeddings/GoogleVertexAIEmbedding/vertexai.svg b/packages/components/nodes/embeddings/GoogleVertexAIEmbedding/vertexai.svg
new file mode 100644
index 000000000..31244412a
--- /dev/null
+++ b/packages/components/nodes/embeddings/GoogleVertexAIEmbedding/vertexai.svg
@@ -0,0 +1,2 @@
+
+
\ No newline at end of file
diff --git a/packages/components/nodes/embeddings/HuggingFaceInferenceEmbedding/HuggingFaceInferenceEmbedding.ts b/packages/components/nodes/embeddings/HuggingFaceInferenceEmbedding/HuggingFaceInferenceEmbedding.ts
index 6f14325a6..6d75b9559 100644
--- a/packages/components/nodes/embeddings/HuggingFaceInferenceEmbedding/HuggingFaceInferenceEmbedding.ts
+++ b/packages/components/nodes/embeddings/HuggingFaceInferenceEmbedding/HuggingFaceInferenceEmbedding.ts
@@ -1,49 +1,67 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
-import { HuggingFaceInferenceEmbeddings, HuggingFaceInferenceEmbeddingsParams } from 'langchain/embeddings/hf'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { HuggingFaceInferenceEmbeddings, HuggingFaceInferenceEmbeddingsParams } from './core'
class HuggingFaceInferenceEmbedding_Embeddings implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
+ credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'HuggingFace Inference Embeddings'
this.name = 'huggingFaceInferenceEmbeddings'
+ this.version = 1.0
this.type = 'HuggingFaceInferenceEmbeddings'
this.icon = 'huggingface.png'
this.category = 'Embeddings'
this.description = 'HuggingFace Inference API to generate embeddings for a given text'
this.baseClasses = [this.type, ...getBaseClasses(HuggingFaceInferenceEmbeddings)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['huggingFaceApi']
+ }
this.inputs = [
- {
- label: 'HuggingFace Api Key',
- name: 'apiKey',
- type: 'password'
- },
{
label: 'Model',
name: 'modelName',
type: 'string',
+ description: 'If using own inference endpoint, leave this blank',
+ placeholder: 'sentence-transformers/distilbert-base-nli-mean-tokens',
+ optional: true
+ },
+ {
+ label: 'Endpoint',
+ name: 'endpoint',
+ type: 'string',
+ placeholder: 'https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/sentence-transformers/all-MiniLM-L6-v2',
+ description: 'Using your own inference endpoint',
optional: true
}
]
}
-    async init(nodeData: INodeData): Promise<any> {
- const apiKey = nodeData.inputs?.apiKey as string
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const modelName = nodeData.inputs?.modelName as string
+ const endpoint = nodeData.inputs?.endpoint as string
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const huggingFaceApiKey = getCredentialParam('huggingFaceApiKey', credentialData, nodeData)
        const obj: Partial<HuggingFaceInferenceEmbeddingsParams> = {
- apiKey
+ apiKey: huggingFaceApiKey
}
if (modelName) obj.model = modelName
+ if (endpoint) obj.endpoint = endpoint
const model = new HuggingFaceInferenceEmbeddings(obj)
return model
diff --git a/packages/components/nodes/embeddings/HuggingFaceInferenceEmbedding/core.ts b/packages/components/nodes/embeddings/HuggingFaceInferenceEmbedding/core.ts
new file mode 100644
index 000000000..c75658d45
--- /dev/null
+++ b/packages/components/nodes/embeddings/HuggingFaceInferenceEmbedding/core.ts
@@ -0,0 +1,55 @@
+import { HfInference } from '@huggingface/inference'
+import { Embeddings, EmbeddingsParams } from 'langchain/embeddings/base'
+import { getEnvironmentVariable } from '../../../src/utils'
+
+export interface HuggingFaceInferenceEmbeddingsParams extends EmbeddingsParams {
+ apiKey?: string
+ model?: string
+ endpoint?: string
+}
+
+export class HuggingFaceInferenceEmbeddings extends Embeddings implements HuggingFaceInferenceEmbeddingsParams {
+ apiKey?: string
+
+ endpoint?: string
+
+ model: string
+
+ client: HfInference
+
+ constructor(fields?: HuggingFaceInferenceEmbeddingsParams) {
+ super(fields ?? {})
+
+ this.model = fields?.model ?? 'sentence-transformers/distilbert-base-nli-mean-tokens'
+ this.apiKey = fields?.apiKey ?? getEnvironmentVariable('HUGGINGFACEHUB_API_KEY')
+ this.endpoint = fields?.endpoint ?? ''
+ this.client = new HfInference(this.apiKey)
+ if (this.endpoint) this.client.endpoint(this.endpoint)
+ }
+
+    async _embed(texts: string[]): Promise<number[][]> {
+ // replace newlines, which can negatively affect performance.
+ const clean = texts.map((text) => text.replace(/\n/g, ' '))
+ const hf = new HfInference(this.apiKey)
+ const obj: any = {
+ inputs: clean
+ }
+ if (this.endpoint) {
+ hf.endpoint(this.endpoint)
+ } else {
+ obj.model = this.model
+ }
+
+ const res = await this.caller.callWithOptions({}, hf.featureExtraction.bind(hf), obj)
+ return res as number[][]
+ }
+
+    async embedQuery(document: string): Promise<number[]> {
+ const res = await this._embed([document])
+ return res[0]
+ }
+
+    async embedDocuments(documents: string[]): Promise<number[][]> {
+ return this._embed(documents)
+ }
+}
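The custom `core.ts` above differs from the stock LangChain class mainly in its endpoint support: when a dedicated inference endpoint is set, the request omits the `model` field, since the endpoint already pins one. A network-free sketch of that request shaping:

```typescript
// Shape of the argument passed to HfInference.featureExtraction (simplified).
interface FeatureExtractionRequest {
    inputs: string[]
    model?: string
}

// Mirrors _embed(): newlines are replaced with spaces (they can degrade
// embedding quality), and `model` is only sent when no endpoint is configured.
function buildFeatureExtractionRequest(texts: string[], model: string, endpoint?: string): FeatureExtractionRequest {
    const inputs = texts.map((text) => text.replace(/\n/g, ' '))
    const req: FeatureExtractionRequest = { inputs }
    if (!endpoint) req.model = model
    return req
}
```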
diff --git a/packages/components/nodes/embeddings/LocalAIEmbedding/LocalAIEmbedding.ts b/packages/components/nodes/embeddings/LocalAIEmbedding/LocalAIEmbedding.ts
new file mode 100644
index 000000000..557e35d68
--- /dev/null
+++ b/packages/components/nodes/embeddings/LocalAIEmbedding/LocalAIEmbedding.ts
@@ -0,0 +1,55 @@
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { OpenAIEmbeddings, OpenAIEmbeddingsParams } from 'langchain/embeddings/openai'
+
+class LocalAIEmbedding_Embeddings implements INode {
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ description: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'LocalAI Embeddings'
+ this.name = 'localAIEmbeddings'
+ this.version = 1.0
+ this.type = 'LocalAI Embeddings'
+ this.icon = 'localai.png'
+ this.category = 'Embeddings'
+ this.description = 'Use local embeddings models like llama.cpp'
+ this.baseClasses = [this.type, 'Embeddings']
+ this.inputs = [
+ {
+ label: 'Base Path',
+ name: 'basePath',
+ type: 'string',
+ placeholder: 'http://localhost:8080/v1'
+ },
+ {
+ label: 'Model Name',
+ name: 'modelName',
+ type: 'string',
+ placeholder: 'text-embedding-ada-002'
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData): Promise<any> {
+ const modelName = nodeData.inputs?.modelName as string
+ const basePath = nodeData.inputs?.basePath as string
+
+ const obj: Partial<OpenAIEmbeddingsParams> & { openAIApiKey?: string } = {
+ modelName,
+ openAIApiKey: 'sk-'
+ }
+
+ const model = new OpenAIEmbeddings(obj, { basePath })
+
+ return model
+ }
+}
+
+module.exports = { nodeClass: LocalAIEmbedding_Embeddings }
diff --git a/packages/components/nodes/embeddings/LocalAIEmbedding/localai.png b/packages/components/nodes/embeddings/LocalAIEmbedding/localai.png
new file mode 100644
index 000000000..321403973
Binary files /dev/null and b/packages/components/nodes/embeddings/LocalAIEmbedding/localai.png differ
diff --git a/packages/components/nodes/embeddings/OpenAIEmbedding/OpenAIEmbedding.ts b/packages/components/nodes/embeddings/OpenAIEmbedding/OpenAIEmbedding.ts
index 3ccfab820..d21b6dcaa 100644
--- a/packages/components/nodes/embeddings/OpenAIEmbedding/OpenAIEmbedding.ts
+++ b/packages/components/nodes/embeddings/OpenAIEmbedding/OpenAIEmbedding.ts
@@ -1,31 +1,35 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { OpenAIEmbeddings, OpenAIEmbeddingsParams } from 'langchain/embeddings/openai'
class OpenAIEmbedding_Embeddings implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
+ credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'OpenAI Embeddings'
this.name = 'openAIEmbeddings'
+ this.version = 1.0
this.type = 'OpenAIEmbeddings'
this.icon = 'openai.png'
this.category = 'Embeddings'
this.description = 'OpenAI API to generate embeddings for a given text'
this.baseClasses = [this.type, ...getBaseClasses(OpenAIEmbeddings)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['openAIApi']
+ }
this.inputs = [
- {
- label: 'OpenAI Api Key',
- name: 'openAIApiKey',
- type: 'password'
- },
{
label: 'Strip New Lines',
name: 'stripNewLines',
@@ -46,15 +50,25 @@ class OpenAIEmbedding_Embeddings implements INode {
type: 'number',
optional: true,
additionalParams: true
+ },
+ {
+ label: 'BasePath',
+ name: 'basepath',
+ type: 'string',
+ optional: true,
+ additionalParams: true
}
]
}
- async init(nodeData: INodeData): Promise<any> {
- const openAIApiKey = nodeData.inputs?.openAIApiKey as string
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const stripNewLines = nodeData.inputs?.stripNewLines as boolean
const batchSize = nodeData.inputs?.batchSize as string
const timeout = nodeData.inputs?.timeout as string
+ const basePath = nodeData.inputs?.basepath as string
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, nodeData)
const obj: Partial<OpenAIEmbeddingsParams> & { openAIApiKey?: string } = {
openAIApiKey
@@ -64,7 +78,7 @@ class OpenAIEmbedding_Embeddings implements INode {
if (batchSize) obj.batchSize = parseInt(batchSize, 10)
if (timeout) obj.timeout = parseInt(timeout, 10)
- const model = new OpenAIEmbeddings(obj)
+ const model = new OpenAIEmbeddings(obj, { basePath })
return model
}
}
diff --git a/packages/components/nodes/llms/Azure OpenAI/Azure.svg b/packages/components/nodes/llms/Azure OpenAI/Azure.svg
index 51eb62535..47ad8c440 100644
--- a/packages/components/nodes/llms/Azure OpenAI/Azure.svg
+++ b/packages/components/nodes/llms/Azure OpenAI/Azure.svg
@@ -1,5 +1 @@
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/packages/components/nodes/llms/Azure OpenAI/AzureOpenAI.ts b/packages/components/nodes/llms/Azure OpenAI/AzureOpenAI.ts
index b5d7d1e03..f48c4642b 100644
--- a/packages/components/nodes/llms/Azure OpenAI/AzureOpenAI.ts
+++ b/packages/components/nodes/llms/Azure OpenAI/AzureOpenAI.ts
@@ -1,31 +1,35 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { AzureOpenAIInput, OpenAI, OpenAIInput } from 'langchain/llms/openai'
class AzureOpenAI_LLMs implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
+ credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Azure OpenAI'
this.name = 'azureOpenAI'
+ this.version = 1.0
this.type = 'AzureOpenAI'
this.icon = 'Azure.svg'
this.category = 'LLMs'
this.description = 'Wrapper around Azure OpenAI large language models'
this.baseClasses = [this.type, ...getBaseClasses(OpenAI)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['azureOpenAIApi']
+ }
this.inputs = [
- {
- label: 'Azure OpenAI Api Key',
- name: 'azureOpenAIApiKey',
- type: 'password'
- },
{
label: 'Model Name',
name: 'modelName',
@@ -87,41 +91,15 @@ class AzureOpenAI_LLMs implements INode {
label: 'Temperature',
name: 'temperature',
type: 'number',
+ step: 0.1,
default: 0.9,
optional: true
},
- {
- label: 'Azure OpenAI Api Instance Name',
- name: 'azureOpenAIApiInstanceName',
- type: 'string',
- placeholder: 'YOUR-INSTANCE-NAME'
- },
- {
- label: 'Azure OpenAI Api Deployment Name',
- name: 'azureOpenAIApiDeploymentName',
- type: 'string',
- placeholder: 'YOUR-DEPLOYMENT-NAME'
- },
- {
- label: 'Azure OpenAI Api Version',
- name: 'azureOpenAIApiVersion',
- type: 'options',
- options: [
- {
- label: '2023-03-15-preview',
- name: '2023-03-15-preview'
- },
- {
- label: '2022-12-01',
- name: '2022-12-01'
- }
- ],
- default: '2023-03-15-preview'
- },
{
label: 'Max Tokens',
name: 'maxTokens',
type: 'number',
+ step: 1,
optional: true,
additionalParams: true
},
@@ -129,6 +107,7 @@ class AzureOpenAI_LLMs implements INode {
label: 'Top Probability',
name: 'topP',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -136,6 +115,7 @@ class AzureOpenAI_LLMs implements INode {
label: 'Best Of',
name: 'bestOf',
type: 'number',
+ step: 1,
optional: true,
additionalParams: true
},
@@ -143,6 +123,7 @@ class AzureOpenAI_LLMs implements INode {
label: 'Frequency Penalty',
name: 'frequencyPenalty',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -150,6 +131,7 @@ class AzureOpenAI_LLMs implements INode {
label: 'Presence Penalty',
name: 'presencePenalty',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -157,39 +139,44 @@ class AzureOpenAI_LLMs implements INode {
label: 'Timeout',
name: 'timeout',
type: 'number',
+ step: 1,
optional: true,
additionalParams: true
}
]
}
- async init(nodeData: INodeData): Promise<any> {
- const azureOpenAIApiKey = nodeData.inputs?.azureOpenAIApiKey as string
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const temperature = nodeData.inputs?.temperature as string
const modelName = nodeData.inputs?.modelName as string
- const azureOpenAIApiInstanceName = nodeData.inputs?.azureOpenAIApiInstanceName as string
- const azureOpenAIApiDeploymentName = nodeData.inputs?.azureOpenAIApiDeploymentName as string
- const azureOpenAIApiVersion = nodeData.inputs?.azureOpenAIApiVersion as string
const maxTokens = nodeData.inputs?.maxTokens as string
const topP = nodeData.inputs?.topP as string
const frequencyPenalty = nodeData.inputs?.frequencyPenalty as string
const presencePenalty = nodeData.inputs?.presencePenalty as string
const timeout = nodeData.inputs?.timeout as string
const bestOf = nodeData.inputs?.bestOf as string
+ const streaming = nodeData.inputs?.streaming as boolean
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const azureOpenAIApiKey = getCredentialParam('azureOpenAIApiKey', credentialData, nodeData)
+ const azureOpenAIApiInstanceName = getCredentialParam('azureOpenAIApiInstanceName', credentialData, nodeData)
+ const azureOpenAIApiDeploymentName = getCredentialParam('azureOpenAIApiDeploymentName', credentialData, nodeData)
+ const azureOpenAIApiVersion = getCredentialParam('azureOpenAIApiVersion', credentialData, nodeData)
const obj: Partial<AzureOpenAIInput> & Partial<OpenAIInput> = {
- temperature: parseInt(temperature, 10),
+ temperature: parseFloat(temperature),
modelName,
azureOpenAIApiKey,
azureOpenAIApiInstanceName,
azureOpenAIApiDeploymentName,
- azureOpenAIApiVersion
+ azureOpenAIApiVersion,
+ streaming: streaming ?? true
}
if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
- if (topP) obj.topP = parseInt(topP, 10)
- if (frequencyPenalty) obj.frequencyPenalty = parseInt(frequencyPenalty, 10)
- if (presencePenalty) obj.presencePenalty = parseInt(presencePenalty, 10)
+ if (topP) obj.topP = parseFloat(topP)
+ if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
+ if (presencePenalty) obj.presencePenalty = parseFloat(presencePenalty)
if (timeout) obj.timeout = parseInt(timeout, 10)
if (bestOf) obj.bestOf = parseInt(bestOf, 10)
diff --git a/packages/components/nodes/llms/Cohere/Cohere.ts b/packages/components/nodes/llms/Cohere/Cohere.ts
index dc632ec31..4a3a8a807 100644
--- a/packages/components/nodes/llms/Cohere/Cohere.ts
+++ b/packages/components/nodes/llms/Cohere/Cohere.ts
@@ -1,31 +1,35 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
-import { Cohere, CohereInput } from 'langchain/llms/cohere'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { Cohere, CohereInput } from './core'
class Cohere_LLMs implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
+ credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'Cohere'
this.name = 'cohere'
+ this.version = 1.0
this.type = 'Cohere'
this.icon = 'cohere.png'
this.category = 'LLMs'
this.description = 'Wrapper around Cohere large language models'
this.baseClasses = [this.type, ...getBaseClasses(Cohere)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['cohereApi']
+ }
this.inputs = [
- {
- label: 'Cohere Api Key',
- name: 'cohereApiKey',
- type: 'password'
- },
{
label: 'Model Name',
name: 'modelName',
@@ -63,6 +67,7 @@ class Cohere_LLMs implements INode {
label: 'Temperature',
name: 'temperature',
type: 'number',
+ step: 0.1,
default: 0.7,
optional: true
},
@@ -70,24 +75,27 @@ class Cohere_LLMs implements INode {
label: 'Max Tokens',
name: 'maxTokens',
type: 'number',
+ step: 1,
optional: true
}
]
}
- async init(nodeData: INodeData): Promise<any> {
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const temperature = nodeData.inputs?.temperature as string
const modelName = nodeData.inputs?.modelName as string
- const apiKey = nodeData.inputs?.cohereApiKey as string
const maxTokens = nodeData.inputs?.maxTokens as string
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const cohereApiKey = getCredentialParam('cohereApiKey', credentialData, nodeData)
+
const obj: CohereInput = {
- apiKey
+ apiKey: cohereApiKey
}
if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
if (modelName) obj.model = modelName
- if (temperature) obj.temperature = parseInt(temperature, 10)
+ if (temperature) obj.temperature = parseFloat(temperature)
const model = new Cohere(obj)
return model
diff --git a/packages/components/nodes/llms/Cohere/core.ts b/packages/components/nodes/llms/Cohere/core.ts
new file mode 100644
index 000000000..97c815710
--- /dev/null
+++ b/packages/components/nodes/llms/Cohere/core.ts
@@ -0,0 +1,78 @@
+import { LLM, BaseLLMParams } from 'langchain/llms/base'
+
+export interface CohereInput extends BaseLLMParams {
+ /** Sampling temperature to use */
+ temperature?: number
+
+ /**
+ * Maximum number of tokens to generate in the completion.
+ */
+ maxTokens?: number
+
+ /** Model to use */
+ model?: string
+
+ apiKey?: string
+}
+
+export class Cohere extends LLM implements CohereInput {
+ temperature = 0
+
+ maxTokens = 250
+
+ model: string
+
+ apiKey: string
+
+ constructor(fields?: CohereInput) {
+ super(fields ?? {})
+
+ const apiKey = fields?.apiKey ?? undefined
+
+ if (!apiKey) {
+ throw new Error('Please set the COHERE_API_KEY environment variable or pass it to the constructor as the apiKey field.')
+ }
+
+ this.apiKey = apiKey
+ this.maxTokens = fields?.maxTokens ?? this.maxTokens
+ this.temperature = fields?.temperature ?? this.temperature
+ this.model = fields?.model ?? this.model
+ }
+
+ _llmType() {
+ return 'cohere'
+ }
+
+ /** @ignore */
+ async _call(prompt: string, options: this['ParsedCallOptions']): Promise<string> {
+ const { cohere } = await Cohere.imports()
+
+ cohere.init(this.apiKey)
+
+ // Hit the `generate` endpoint with the configured model
+ const generateResponse = await this.caller.callWithOptions({ signal: options.signal }, cohere.generate.bind(cohere), {
+ prompt,
+ model: this.model,
+ max_tokens: this.maxTokens,
+ temperature: this.temperature,
+ end_sequences: options.stop
+ })
+ try {
+ return generateResponse.body.generations[0].text
+ } catch {
+ throw new Error('Could not parse response.')
+ }
+ }
+
+ /** @ignore */
+ static async imports(): Promise<{
+ cohere: typeof import('cohere-ai')
+ }> {
+ try {
+ const { default: cohere } = await import('cohere-ai')
+ return { cohere }
+ } catch (e) {
+ throw new Error('Please install cohere-ai as a dependency with, e.g. `yarn add cohere-ai`')
+ }
+ }
+}
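The custom `Cohere` wrapper above fails fast in its constructor when no API key is supplied, rather than erroring on the first call. A standalone sketch of that guard and default-merging pattern (class and interface names shortened for illustration):

```typescript
// Minimal reproduction of the constructor validation in core.ts:
// a missing apiKey throws immediately; other fields fall back to
// the class defaults via ?? when not provided.
interface CohereLikeInput {
    apiKey?: string
    maxTokens?: number
    temperature?: number
}

class CohereLike {
    apiKey: string
    maxTokens = 250
    temperature = 0

    constructor(fields?: CohereLikeInput) {
        const apiKey = fields?.apiKey
        if (!apiKey) {
            throw new Error('Please set the COHERE_API_KEY environment variable or pass apiKey.')
        }
        this.apiKey = apiKey
        this.maxTokens = fields?.maxTokens ?? this.maxTokens
        this.temperature = fields?.temperature ?? this.temperature
    }
}
```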
diff --git a/packages/components/nodes/llms/GoogleVertexAI/GoogleVertexAI.ts b/packages/components/nodes/llms/GoogleVertexAI/GoogleVertexAI.ts
new file mode 100644
index 000000000..4d9b3aeda
--- /dev/null
+++ b/packages/components/nodes/llms/GoogleVertexAI/GoogleVertexAI.ts
@@ -0,0 +1,117 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { GoogleVertexAI, GoogleVertexAITextInput } from 'langchain/llms/googlevertexai'
+import { GoogleAuthOptions } from 'google-auth-library'
+
+class GoogleVertexAI_LLMs implements INode {
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ description: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'GoogleVertexAI'
+ this.name = 'googlevertexai'
+ this.version = 1.0
+ this.type = 'GoogleVertexAI'
+ this.icon = 'vertexai.svg'
+ this.category = 'LLMs'
+ this.description = 'Wrapper around GoogleVertexAI large language models'
+ this.baseClasses = [this.type, ...getBaseClasses(GoogleVertexAI)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['googleVertexAuth']
+ }
+ this.inputs = [
+ {
+ label: 'Model Name',
+ name: 'modelName',
+ type: 'options',
+ options: [
+ {
+ label: 'text-bison',
+ name: 'text-bison'
+ },
+ {
+ label: 'code-bison',
+ name: 'code-bison'
+ },
+ {
+ label: 'code-gecko',
+ name: 'code-gecko'
+ }
+ ],
+ default: 'text-bison'
+ },
+ {
+ label: 'Temperature',
+ name: 'temperature',
+ type: 'number',
+ step: 0.1,
+ default: 0.7,
+ optional: true
+ },
+ {
+ label: 'Max Output Tokens',
+ name: 'maxOutputTokens',
+ type: 'number',
+ step: 1,
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Top Probability',
+ name: 'topP',
+ type: 'number',
+ step: 0.1,
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const googleApplicationCredentialFilePath = getCredentialParam('googleApplicationCredentialFilePath', credentialData, nodeData)
+ const googleApplicationCredential = getCredentialParam('googleApplicationCredential', credentialData, nodeData)
+ const projectID = getCredentialParam('projectID', credentialData, nodeData)
+
+ if (!googleApplicationCredentialFilePath && !googleApplicationCredential)
+ throw new Error('Please specify your Google Application Credential')
+ if (googleApplicationCredentialFilePath && googleApplicationCredential)
+ throw new Error('Please use either Google Application Credential File Path or Google Credential JSON Object')
+
+ const authOptions: GoogleAuthOptions = {}
+ if (googleApplicationCredentialFilePath && !googleApplicationCredential) authOptions.keyFile = googleApplicationCredentialFilePath
+ else if (!googleApplicationCredentialFilePath && googleApplicationCredential)
+ authOptions.credentials = JSON.parse(googleApplicationCredential)
+ if (projectID) authOptions.projectId = projectID
+
+ const temperature = nodeData.inputs?.temperature as string
+ const modelName = nodeData.inputs?.modelName as string
+ const maxOutputTokens = nodeData.inputs?.maxOutputTokens as string
+ const topP = nodeData.inputs?.topP as string
+
+ const obj: Partial<GoogleVertexAITextInput> = {
+ temperature: parseFloat(temperature),
+ model: modelName,
+ authOptions
+ }
+
+ if (maxOutputTokens) obj.maxOutputTokens = parseInt(maxOutputTokens, 10)
+ if (topP) obj.topP = parseFloat(topP)
+
+ const model = new GoogleVertexAI(obj)
+ return model
+ }
+}
+
+module.exports = { nodeClass: GoogleVertexAI_LLMs }
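The `init()` method above accepts either a key-file path or an inline JSON credential, but never both, before assembling `GoogleAuthOptions`. A hedged sketch of that resolution logic as a standalone function (names and the simplified options type are illustrative):

```typescript
// Mirrors the mutual-exclusion checks in GoogleVertexAI.ts: exactly one
// of filePath / credentialJson must be set; projectID is optional.
interface AuthOptionsLike {
    keyFile?: string
    credentials?: object
    projectId?: string
}

function buildAuthOptions(filePath?: string, credentialJson?: string, projectID?: string): AuthOptionsLike {
    if (!filePath && !credentialJson) throw new Error('Please specify your Google Application Credential')
    if (filePath && credentialJson) throw new Error('Please use either a credential file path or a credential JSON object')

    const authOptions: AuthOptionsLike = {}
    if (filePath) authOptions.keyFile = filePath
    else if (credentialJson) authOptions.credentials = JSON.parse(credentialJson)
    if (projectID) authOptions.projectId = projectID
    return authOptions
}
```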
diff --git a/packages/components/nodes/llms/GoogleVertexAI/vertexai.svg b/packages/components/nodes/llms/GoogleVertexAI/vertexai.svg
new file mode 100644
index 000000000..31244412a
--- /dev/null
+++ b/packages/components/nodes/llms/GoogleVertexAI/vertexai.svg
@@ -0,0 +1,2 @@
+
+
\ No newline at end of file
diff --git a/packages/components/nodes/llms/HuggingFaceInference/HuggingFaceInference.ts b/packages/components/nodes/llms/HuggingFaceInference/HuggingFaceInference.ts
index 6aa3f4f4f..c7f6a37e8 100644
--- a/packages/components/nodes/llms/HuggingFaceInference/HuggingFaceInference.ts
+++ b/packages/components/nodes/llms/HuggingFaceInference/HuggingFaceInference.ts
@@ -1,48 +1,124 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
-import { HuggingFaceInference } from 'langchain/llms/hf'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { HFInput, HuggingFaceInference } from './core'
class HuggingFaceInference_LLMs implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
+ credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'HuggingFace Inference'
this.name = 'huggingFaceInference_LLMs'
+ this.version = 1.0
this.type = 'HuggingFaceInference'
this.icon = 'huggingface.png'
this.category = 'LLMs'
this.description = 'Wrapper around HuggingFace large language models'
this.baseClasses = [this.type, ...getBaseClasses(HuggingFaceInference)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['huggingFaceApi']
+ }
this.inputs = [
{
label: 'Model',
name: 'model',
type: 'string',
- placeholder: 'gpt2'
+ description: 'If using your own inference endpoint, leave this blank',
+ placeholder: 'gpt2',
+ optional: true
},
{
- label: 'HuggingFace Api Key',
- name: 'apiKey',
- type: 'password'
+ label: 'Endpoint',
+ name: 'endpoint',
+ type: 'string',
+ placeholder: 'https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2',
+ description: 'Use your own inference endpoint',
+ optional: true
+ },
+ {
+ label: 'Temperature',
+ name: 'temperature',
+ type: 'number',
+ step: 0.1,
+ description: 'Temperature parameter may not apply to certain models. Please check available model parameters',
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Max Tokens',
+ name: 'maxTokens',
+ type: 'number',
+ step: 1,
+ description: 'Max Tokens parameter may not apply to certain models. Please check available model parameters',
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Top Probability',
+ name: 'topP',
+ type: 'number',
+ step: 0.1,
+ description: 'Top Probability parameter may not apply to certain models. Please check available model parameters',
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Top K',
+ name: 'hfTopK',
+ type: 'number',
+ step: 1,
+ description: 'Top K parameter may not apply to certain models. Please check available model parameters',
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Frequency Penalty',
+ name: 'frequencyPenalty',
+ type: 'number',
+ step: 0.1,
+ description: 'Frequency Penalty parameter may not apply to certain models. Please check available model parameters',
+ optional: true,
+ additionalParams: true
}
]
}
- async init(nodeData: INodeData): Promise<any> {
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const model = nodeData.inputs?.model as string
- const apiKey = nodeData.inputs?.apiKey as string
+ const temperature = nodeData.inputs?.temperature as string
+ const maxTokens = nodeData.inputs?.maxTokens as string
+ const topP = nodeData.inputs?.topP as string
+ const hfTopK = nodeData.inputs?.hfTopK as string
+ const frequencyPenalty = nodeData.inputs?.frequencyPenalty as string
+ const endpoint = nodeData.inputs?.endpoint as string
- const huggingFace = new HuggingFaceInference({
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const huggingFaceApiKey = getCredentialParam('huggingFaceApiKey', credentialData, nodeData)
+
+ const obj: Partial<HFInput> = {
model,
- apiKey
- })
+ apiKey: huggingFaceApiKey
+ }
+
+ if (temperature) obj.temperature = parseFloat(temperature)
+ if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
+ if (topP) obj.topP = parseFloat(topP)
+ if (hfTopK) obj.topK = parseInt(hfTopK, 10)
+ if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
+ if (endpoint) obj.endpoint = endpoint
+
+ const huggingFace = new HuggingFaceInference(obj)
return huggingFace
}
}
diff --git a/packages/components/nodes/llms/HuggingFaceInference/core.ts b/packages/components/nodes/llms/HuggingFaceInference/core.ts
new file mode 100644
index 000000000..416567f0d
--- /dev/null
+++ b/packages/components/nodes/llms/HuggingFaceInference/core.ts
@@ -0,0 +1,113 @@
+import { getEnvironmentVariable } from '../../../src/utils'
+import { LLM, BaseLLMParams } from 'langchain/llms/base'
+
+export interface HFInput {
+ /** Model to use */
+ model: string
+
+ /** Sampling temperature to use */
+ temperature?: number
+
+ /**
+ * Maximum number of tokens to generate in the completion.
+ */
+ maxTokens?: number
+
+ /** Total probability mass of tokens to consider at each step */
+ topP?: number
+
+ /** Integer to define the top tokens considered within the sample operation to create new text. */
+ topK?: number
+
+ /** Penalizes repeated tokens according to frequency */
+ frequencyPenalty?: number
+
+ /** API key to use. */
+ apiKey?: string
+
+ /** Private endpoint to use. */
+ endpoint?: string
+}
+
+export class HuggingFaceInference extends LLM implements HFInput {
+ get lc_secrets(): { [key: string]: string } | undefined {
+ return {
+ apiKey: 'HUGGINGFACEHUB_API_KEY'
+ }
+ }
+
+ model = 'gpt2'
+
+ temperature: number | undefined = undefined
+
+ maxTokens: number | undefined = undefined
+
+ topP: number | undefined = undefined
+
+ topK: number | undefined = undefined
+
+ frequencyPenalty: number | undefined = undefined
+
+ apiKey: string | undefined = undefined
+
+ endpoint: string | undefined = undefined
+
+ constructor(fields?: Partial<HFInput> & BaseLLMParams) {
+ super(fields ?? {})
+
+ this.model = fields?.model ?? this.model
+ this.temperature = fields?.temperature ?? this.temperature
+ this.maxTokens = fields?.maxTokens ?? this.maxTokens
+ this.topP = fields?.topP ?? this.topP
+ this.topK = fields?.topK ?? this.topK
+ this.frequencyPenalty = fields?.frequencyPenalty ?? this.frequencyPenalty
+ this.endpoint = fields?.endpoint ?? ''
+ this.apiKey = fields?.apiKey ?? getEnvironmentVariable('HUGGINGFACEHUB_API_KEY')
+ if (!this.apiKey) {
+ throw new Error(
+ 'Please set an API key for HuggingFace Hub in the environment variable HUGGINGFACEHUB_API_KEY or in the apiKey field of the HuggingFaceInference constructor.'
+ )
+ }
+ }
+
+ _llmType() {
+ return 'hf'
+ }
+
+ /** @ignore */
+ async _call(prompt: string, options: this['ParsedCallOptions']): Promise<string> {
+ const { HfInference } = await HuggingFaceInference.imports()
+ const hf = new HfInference(this.apiKey)
+ const obj: any = {
+ parameters: {
+ // make it behave similar to openai, returning only the generated text
+ return_full_text: false,
+ temperature: this.temperature,
+ max_new_tokens: this.maxTokens,
+ top_p: this.topP,
+ top_k: this.topK,
+ repetition_penalty: this.frequencyPenalty
+ },
+ inputs: prompt
+ }
+ if (this.endpoint) {
+ hf.endpoint(this.endpoint)
+ } else {
+ obj.model = this.model
+ }
+ const res = await this.caller.callWithOptions({ signal: options.signal }, hf.textGeneration.bind(hf), obj)
+ return res.generated_text
+ }
+
+ /** @ignore */
+ static async imports(): Promise<{
+ HfInference: typeof import('@huggingface/inference').HfInference
+ }> {
+ try {
+ const { HfInference } = await import('@huggingface/inference')
+ return { HfInference }
+ } catch (e) {
+ throw new Error('Please install huggingface as a dependency with, e.g. `yarn add @huggingface/inference`')
+ }
+ }
+}
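In `_call()` above, the request body always carries the generation parameters, but a `model` field is attached only when no private inference endpoint is configured (an endpoint already targets a fixed model). A small sketch of that branch (function name is illustrative; the parameter keys follow the HF text-generation payload shown in the diff):

```typescript
// Builds a text-generation payload the way core.ts does: generation
// parameters are always present, `model` only when there is no
// endpoint override.
function buildHfRequest(prompt: string, model: string, endpoint?: string, temperature?: number) {
    const obj: any = {
        parameters: {
            // behave like OpenAI: return only newly generated text
            return_full_text: false,
            temperature
        },
        inputs: prompt
    }
    if (!endpoint) obj.model = model // endpoint mode targets a fixed model
    return obj
}
```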
diff --git a/packages/components/nodes/llms/OpenAI/OpenAI.ts b/packages/components/nodes/llms/OpenAI/OpenAI.ts
index af44965e3..4e35d659f 100644
--- a/packages/components/nodes/llms/OpenAI/OpenAI.ts
+++ b/packages/components/nodes/llms/OpenAI/OpenAI.ts
@@ -1,31 +1,35 @@
-import { INode, INodeData, INodeParams } from '../../../src/Interface'
-import { getBaseClasses } from '../../../src/utils'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { OpenAI, OpenAIInput } from 'langchain/llms/openai'
class OpenAI_LLMs implements INode {
label: string
name: string
+ version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
+ credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'OpenAI'
this.name = 'openAI'
+ this.version = 1.0
this.type = 'OpenAI'
this.icon = 'openai.png'
this.category = 'LLMs'
this.description = 'Wrapper around OpenAI large language models'
this.baseClasses = [this.type, ...getBaseClasses(OpenAI)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['openAIApi']
+ }
this.inputs = [
- {
- label: 'OpenAI Api Key',
- name: 'openAIApiKey',
- type: 'password'
- },
{
label: 'Model Name',
name: 'modelName',
@@ -55,6 +59,7 @@ class OpenAI_LLMs implements INode {
label: 'Temperature',
name: 'temperature',
type: 'number',
+ step: 0.1,
default: 0.7,
optional: true
},
@@ -62,6 +67,7 @@ class OpenAI_LLMs implements INode {
label: 'Max Tokens',
name: 'maxTokens',
type: 'number',
+ step: 1,
optional: true,
additionalParams: true
},
@@ -69,6 +75,7 @@ class OpenAI_LLMs implements INode {
label: 'Top Probability',
name: 'topP',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -76,6 +83,7 @@ class OpenAI_LLMs implements INode {
label: 'Best Of',
name: 'bestOf',
type: 'number',
+ step: 1,
optional: true,
additionalParams: true
},
@@ -83,6 +91,7 @@ class OpenAI_LLMs implements INode {
label: 'Frequency Penalty',
name: 'frequencyPenalty',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -90,6 +99,7 @@ class OpenAI_LLMs implements INode {
label: 'Presence Penalty',
name: 'presencePenalty',
type: 'number',
+ step: 0.1,
optional: true,
additionalParams: true
},
@@ -97,6 +107,7 @@ class OpenAI_LLMs implements INode {
label: 'Batch Size',
name: 'batchSize',
type: 'number',
+ step: 1,
optional: true,
additionalParams: true
},
@@ -104,16 +115,23 @@ class OpenAI_LLMs implements INode {
label: 'Timeout',
name: 'timeout',
type: 'number',
+ step: 1,
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'BasePath',
+ name: 'basepath',
+ type: 'string',
optional: true,
additionalParams: true
}
]
}
- async init(nodeData: INodeData): Promise<any> {
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const temperature = nodeData.inputs?.temperature as string
const modelName = nodeData.inputs?.modelName as string
- const openAIApiKey = nodeData.inputs?.openAIApiKey as string
const maxTokens = nodeData.inputs?.maxTokens as string
const topP = nodeData.inputs?.topP as string
const frequencyPenalty = nodeData.inputs?.frequencyPenalty as string
@@ -121,22 +139,30 @@ class OpenAI_LLMs implements INode {
const timeout = nodeData.inputs?.timeout as string
const batchSize = nodeData.inputs?.batchSize as string
const bestOf = nodeData.inputs?.bestOf as string
+ const streaming = nodeData.inputs?.streaming as boolean
+ const basePath = nodeData.inputs?.basepath as string
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, nodeData)
const obj: Partial<OpenAIInput> & { openAIApiKey?: string } = {
- temperature: parseInt(temperature, 10),
+ temperature: parseFloat(temperature),
modelName,
- openAIApiKey
+ openAIApiKey,
+ streaming: streaming ?? true
}
if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
- if (topP) obj.topP = parseInt(topP, 10)
- if (frequencyPenalty) obj.frequencyPenalty = parseInt(frequencyPenalty, 10)
- if (presencePenalty) obj.presencePenalty = parseInt(presencePenalty, 10)
+ if (topP) obj.topP = parseFloat(topP)
+ if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
+ if (presencePenalty) obj.presencePenalty = parseFloat(presencePenalty)
if (timeout) obj.timeout = parseInt(timeout, 10)
if (batchSize) obj.batchSize = parseInt(batchSize, 10)
if (bestOf) obj.bestOf = parseInt(bestOf, 10)
- const model = new OpenAI(obj)
+ const model = new OpenAI(obj, {
+ basePath
+ })
return model
}
}
diff --git a/packages/components/nodes/llms/Replicate/Replicate.ts b/packages/components/nodes/llms/Replicate/Replicate.ts
new file mode 100644
index 000000000..22c6e93aa
--- /dev/null
+++ b/packages/components/nodes/llms/Replicate/Replicate.ts
@@ -0,0 +1,128 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { Replicate, ReplicateInput } from 'langchain/llms/replicate'
+
+class Replicate_LLMs implements INode {
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ description: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Replicate'
+ this.name = 'replicate'
+ this.version = 1.0
+ this.type = 'Replicate'
+ this.icon = 'replicate.svg'
+ this.category = 'LLMs'
+ this.description = 'Use Replicate to run open-source models in the cloud'
+ this.baseClasses = [this.type, 'BaseChatModel', ...getBaseClasses(Replicate)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['replicateApi']
+ }
+ this.inputs = [
+ {
+ label: 'Model',
+ name: 'model',
+ type: 'string',
+ placeholder: 'a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5',
+ optional: true
+ },
+ {
+ label: 'Temperature',
+ name: 'temperature',
+ type: 'number',
+ step: 0.1,
+ description:
+ 'Adjusts randomness of outputs; greater than 1 is more random, 0 is deterministic. 0.75 is a good starting value.',
+ default: 0.7,
+ optional: true
+ },
+ {
+ label: 'Max Tokens',
+ name: 'maxTokens',
+ type: 'number',
+ step: 1,
+ description: 'Maximum number of tokens to generate. A word is generally 2-3 tokens',
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Top Probability',
+ name: 'topP',
+ type: 'number',
+ step: 0.1,
+ description:
+ 'When decoding text, samples from the top p percentage of most likely tokens; lower to ignore less likely tokens',
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Repetition Penalty',
+ name: 'repetitionPenalty',
+ type: 'number',
+ step: 0.1,
+ description:
+ 'Penalty for repeated words in generated text; 1 is no penalty, values greater than 1 discourage repetition, less than 1 encourage it. (minimum: 0.01; maximum: 5)',
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Additional Inputs',
+ name: 'additionalInputs',
+ type: 'json',
+ description:
+ 'Each model has different parameters; refer to the inputs accepted by the specific model. For example: llama13b-v2',
+ additionalParams: true,
+ optional: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const modelName = nodeData.inputs?.model as string
+ const temperature = nodeData.inputs?.temperature as string
+ const maxTokens = nodeData.inputs?.maxTokens as string
+ const topP = nodeData.inputs?.topP as string
+ const repetitionPenalty = nodeData.inputs?.repetitionPenalty as string
+ const additionalInputs = nodeData.inputs?.additionalInputs as string
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const apiKey = getCredentialParam('replicateApiKey', credentialData, nodeData)
+
+ const version = modelName.split(':').pop()
+ const name = modelName.split(':')[0].split('/').pop()
+ const org = modelName.split(':')[0].split('/')[0]
+
+ const obj: ReplicateInput = {
+ model: `${org}/${name}:${version}`,
+ apiKey
+ }
+
+ let inputs: any = {}
+ if (maxTokens) inputs.max_length = parseInt(maxTokens, 10)
+ if (temperature) inputs.temperature = parseFloat(temperature)
+ if (topP) inputs.top_p = parseFloat(topP)
+ if (repetitionPenalty) inputs.repetition_penalty = parseFloat(repetitionPenalty)
+ if (additionalInputs) {
+ const parsedInputs =
+ typeof additionalInputs === 'object' ? additionalInputs : additionalInputs ? JSON.parse(additionalInputs) : {}
+ inputs = { ...inputs, ...parsedInputs }
+ }
+ if (Object.keys(inputs).length) obj.input = inputs
+
+ const model = new Replicate(obj)
+ return model
+ }
+}
+
+module.exports = { nodeClass: Replicate_LLMs }
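The `init` method above derives `org`, `name`, and `version` from the combined Replicate model identifier of the form `owner/name:version`. A standalone sketch of that parsing logic (the helper name `parseReplicateModel` is ours, not part of the diff):

```typescript
// Mirrors the split logic in init(): everything after the ':' is the
// version hash; the part before it is `owner/name`.
function parseReplicateModel(modelName: string): { org: string; name: string; version: string } {
    const version = modelName.split(':').pop() as string
    const name = modelName.split(':')[0].split('/').pop() as string
    const org = modelName.split(':')[0].split('/')[0]
    return { org, name, version }
}

const parsed = parseReplicateModel(
    'a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5'
)
console.log(`${parsed.org} / ${parsed.name}`) // a16z-infra / llama13b-v2-chat
```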
diff --git a/packages/components/nodes/llms/Replicate/replicate.svg b/packages/components/nodes/llms/Replicate/replicate.svg
new file mode 100644
index 000000000..2e46453f8
--- /dev/null
+++ b/packages/components/nodes/llms/Replicate/replicate.svg
@@ -0,0 +1,7 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/memory/BufferMemory/BufferMemory.ts b/packages/components/nodes/memory/BufferMemory/BufferMemory.ts
index fd635ff47..7793d96d4 100644
--- a/packages/components/nodes/memory/BufferMemory/BufferMemory.ts
+++ b/packages/components/nodes/memory/BufferMemory/BufferMemory.ts
@@ -5,6 +5,7 @@ import { BufferMemory } from 'langchain/memory'
class BufferMemory_Memory implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,6 +16,7 @@ class BufferMemory_Memory implements INode {
constructor() {
this.label = 'Buffer Memory'
this.name = 'bufferMemory'
+ this.version = 1.0
this.type = 'BufferMemory'
this.icon = 'memory.svg'
this.category = 'Memory'
diff --git a/packages/components/nodes/memory/BufferWindowMemory/BufferWindowMemory.ts b/packages/components/nodes/memory/BufferWindowMemory/BufferWindowMemory.ts
new file mode 100644
index 000000000..cf8e7f1dc
--- /dev/null
+++ b/packages/components/nodes/memory/BufferWindowMemory/BufferWindowMemory.ts
@@ -0,0 +1,64 @@
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses } from '../../../src/utils'
+import { BufferWindowMemory, BufferWindowMemoryInput } from 'langchain/memory'
+
+class BufferWindowMemory_Memory implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Buffer Window Memory'
+ this.name = 'bufferWindowMemory'
+ this.version = 1.0
+ this.type = 'BufferWindowMemory'
+ this.icon = 'memory.svg'
+ this.category = 'Memory'
+ this.description = 'Uses a window of size k to surface the last k back-and-forths to use as memory'
+ this.baseClasses = [this.type, ...getBaseClasses(BufferWindowMemory)]
+ this.inputs = [
+ {
+ label: 'Memory Key',
+ name: 'memoryKey',
+ type: 'string',
+ default: 'chat_history'
+ },
+ {
+ label: 'Input Key',
+ name: 'inputKey',
+ type: 'string',
+ default: 'input'
+ },
+ {
+ label: 'Size',
+ name: 'k',
+ type: 'number',
+ default: '4',
+ description: 'Window of size k to surface the last k back-and-forths to use as memory.'
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData): Promise<any> {
+ const memoryKey = nodeData.inputs?.memoryKey as string
+ const inputKey = nodeData.inputs?.inputKey as string
+ const k = nodeData.inputs?.k as string
+
+ const obj: Partial<BufferWindowMemoryInput> = {
+ returnMessages: true,
+ memoryKey: memoryKey,
+ inputKey: inputKey,
+ k: parseInt(k, 10)
+ }
+
+ return new BufferWindowMemory(obj)
+ }
+}
+
+module.exports = { nodeClass: BufferWindowMemory_Memory }
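The node wraps LangChain's `BufferWindowMemory`, which surfaces only the last `k` back-and-forths. A dependency-free sketch of the windowing idea (types and names here are illustrative, not LangChain's):

```typescript
// Minimal window-of-k memory: store every exchange, but only surface
// the most recent k when memory is loaded.
type Exchange = { human: string; ai: string }

class WindowMemory {
    private exchanges: Exchange[] = []
    constructor(private k: number) {}

    save(human: string, ai: string): void {
        this.exchanges.push({ human, ai })
    }

    // Mirrors BufferWindowMemory: only the last k exchanges count.
    load(): Exchange[] {
        return this.exchanges.slice(-this.k)
    }
}

const mem = new WindowMemory(2)
mem.save('hi', 'hello')
mem.save('how are you', 'fine')
mem.save('bye', 'goodbye')
console.log(mem.load().length) // 2 — the oldest exchange fell out of the window
```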
diff --git a/packages/components/nodes/memory/BufferWindowMemory/memory.svg b/packages/components/nodes/memory/BufferWindowMemory/memory.svg
new file mode 100644
index 000000000..ca8e17da1
--- /dev/null
+++ b/packages/components/nodes/memory/BufferWindowMemory/memory.svg
@@ -0,0 +1,8 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/memory/ConversationSummaryMemory/ConversationSummaryMemory.ts b/packages/components/nodes/memory/ConversationSummaryMemory/ConversationSummaryMemory.ts
new file mode 100644
index 000000000..332d73aa9
--- /dev/null
+++ b/packages/components/nodes/memory/ConversationSummaryMemory/ConversationSummaryMemory.ts
@@ -0,0 +1,63 @@
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses } from '../../../src/utils'
+import { ConversationSummaryMemory, ConversationSummaryMemoryInput } from 'langchain/memory'
+import { BaseLanguageModel } from 'langchain/base_language'
+
+class ConversationSummaryMemory_Memory implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Conversation Summary Memory'
+ this.name = 'conversationSummaryMemory'
+ this.version = 1.0
+ this.type = 'ConversationSummaryMemory'
+ this.icon = 'memory.svg'
+ this.category = 'Memory'
+ this.description = 'Summarizes the conversation and stores the current summary in memory'
+ this.baseClasses = [this.type, ...getBaseClasses(ConversationSummaryMemory)]
+ this.inputs = [
+ {
+ label: 'Chat Model',
+ name: 'model',
+ type: 'BaseChatModel'
+ },
+ {
+ label: 'Memory Key',
+ name: 'memoryKey',
+ type: 'string',
+ default: 'chat_history'
+ },
+ {
+ label: 'Input Key',
+ name: 'inputKey',
+ type: 'string',
+ default: 'input'
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData): Promise<any> {
+ const model = nodeData.inputs?.model as BaseLanguageModel
+ const memoryKey = nodeData.inputs?.memoryKey as string
+ const inputKey = nodeData.inputs?.inputKey as string
+
+ const obj: ConversationSummaryMemoryInput = {
+ llm: model,
+ returnMessages: true,
+ memoryKey,
+ inputKey
+ }
+
+ return new ConversationSummaryMemory(obj)
+ }
+}
+
+module.exports = { nodeClass: ConversationSummaryMemory_Memory }
diff --git a/packages/components/nodes/memory/ConversationSummaryMemory/memory.svg b/packages/components/nodes/memory/ConversationSummaryMemory/memory.svg
new file mode 100644
index 000000000..ca8e17da1
--- /dev/null
+++ b/packages/components/nodes/memory/ConversationSummaryMemory/memory.svg
@@ -0,0 +1,8 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/memory/DynamoDb/DynamoDb.ts b/packages/components/nodes/memory/DynamoDb/DynamoDb.ts
new file mode 100644
index 000000000..68b09b7b2
--- /dev/null
+++ b/packages/components/nodes/memory/DynamoDb/DynamoDb.ts
@@ -0,0 +1,133 @@
+import { ICommonObject, INode, INodeData, INodeParams, getBaseClasses, getCredentialData, getCredentialParam } from '../../../src'
+import { DynamoDBChatMessageHistory } from 'langchain/stores/message/dynamodb'
+import { BufferMemory, BufferMemoryInput } from 'langchain/memory'
+
+class DynamoDb_Memory implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'DynamoDB Chat Memory'
+ this.name = 'DynamoDBChatMemory'
+ this.version = 1.0
+ this.type = 'DynamoDBChatMemory'
+ this.icon = 'dynamodb.svg'
+ this.category = 'Memory'
+ this.description = 'Stores the conversation in a DynamoDB table'
+ this.baseClasses = [this.type, ...getBaseClasses(BufferMemory)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['dynamodbMemoryApi']
+ }
+ this.inputs = [
+ {
+ label: 'Table Name',
+ name: 'tableName',
+ type: 'string'
+ },
+ {
+ label: 'Partition Key',
+ name: 'partitionKey',
+ type: 'string'
+ },
+ {
+ label: 'Region',
+ name: 'region',
+ type: 'string',
+ description: 'The AWS region in which the table is located',
+ placeholder: 'us-east-1'
+ },
+ {
+ label: 'Session ID',
+ name: 'sessionId',
+ type: 'string',
+ description: 'If not specified, the first CHAT_MESSAGE_ID will be used as sessionId',
+ default: '',
+ additionalParams: true,
+ optional: true
+ },
+ {
+ label: 'Memory Key',
+ name: 'memoryKey',
+ type: 'string',
+ default: 'chat_history',
+ additionalParams: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ return initalizeDynamoDB(nodeData, options)
+ }
+
+ async clearSessionMemory(nodeData: INodeData, options: ICommonObject): Promise<void> {
+ const dynamodbMemory = await initalizeDynamoDB(nodeData, options)
+ const sessionId = nodeData.inputs?.sessionId as string
+ const chatId = options?.chatId as string
+ options.logger.info(`Clearing DynamoDb memory session ${sessionId ? sessionId : chatId}`)
+ await dynamodbMemory.clear()
+ options.logger.info(`Successfully cleared DynamoDb memory session ${sessionId ? sessionId : chatId}`)
+ }
+}
+
+const initalizeDynamoDB = async (nodeData: INodeData, options: ICommonObject): Promise<BufferMemory> => {
+ const tableName = nodeData.inputs?.tableName as string
+ const partitionKey = nodeData.inputs?.partitionKey as string
+ const sessionId = nodeData.inputs?.sessionId as string
+ const region = nodeData.inputs?.region as string
+ const memoryKey = nodeData.inputs?.memoryKey as string
+ const chatId = options.chatId
+
+ let isSessionIdUsingChatMessageId = false
+ if (!sessionId && chatId) isSessionIdUsingChatMessageId = true
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const accessKeyId = getCredentialParam('accessKey', credentialData, nodeData)
+ const secretAccessKey = getCredentialParam('secretAccessKey', credentialData, nodeData)
+
+ const dynamoDb = new DynamoDBChatMessageHistory({
+ tableName,
+ partitionKey,
+ sessionId: sessionId ? sessionId : chatId,
+ config: {
+ region,
+ credentials: {
+ accessKeyId,
+ secretAccessKey
+ }
+ }
+ })
+
+ const memory = new BufferMemoryExtended({
+ memoryKey,
+ chatHistory: dynamoDb,
+ returnMessages: true,
+ isSessionIdUsingChatMessageId
+ })
+ return memory
+}
+
+interface BufferMemoryExtendedInput {
+ isSessionIdUsingChatMessageId: boolean
+}
+
+class BufferMemoryExtended extends BufferMemory {
+ isSessionIdUsingChatMessageId? = false
+
+ constructor(fields: BufferMemoryInput & Partial<BufferMemoryExtendedInput>) {
+ super(fields)
+ this.isSessionIdUsingChatMessageId = fields.isSessionIdUsingChatMessageId
+ }
+}
+
+module.exports = { nodeClass: DynamoDb_Memory }
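All of the backed-memory nodes in this diff share the same session-id convention: an explicit `sessionId` input wins, otherwise the chat's `chatId` is used, and a flag records that fallback. A standalone sketch of the pattern (the helper name `resolveSessionId` is ours):

```typescript
// Mirrors the session-id resolution in initalizeDynamoDB (and the
// Redis/Motorhead/Zep initializers): prefer the user-supplied sessionId,
// fall back to the chat's id, and remember which one was used.
function resolveSessionId(
    sessionId: string | undefined,
    chatId: string | undefined
): { sessionId: string | undefined; isSessionIdUsingChatMessageId: boolean } {
    const isSessionIdUsingChatMessageId = !sessionId && !!chatId
    return { sessionId: sessionId ? sessionId : chatId, isSessionIdUsingChatMessageId }
}

console.log(resolveSessionId('', 'chat-123')) // falls back to chatId
console.log(resolveSessionId('user-session', 'chat-123')) // explicit id wins
```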
diff --git a/packages/components/nodes/memory/DynamoDb/dynamodb.svg b/packages/components/nodes/memory/DynamoDb/dynamodb.svg
new file mode 100644
index 000000000..f2798350a
--- /dev/null
+++ b/packages/components/nodes/memory/DynamoDb/dynamodb.svg
@@ -0,0 +1,18 @@
+
+
\ No newline at end of file
diff --git a/packages/components/nodes/memory/MotorheadMemory/MotorheadMemory.ts b/packages/components/nodes/memory/MotorheadMemory/MotorheadMemory.ts
new file mode 100644
index 000000000..0ec2f42ad
--- /dev/null
+++ b/packages/components/nodes/memory/MotorheadMemory/MotorheadMemory.ts
@@ -0,0 +1,149 @@
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { ICommonObject } from '../../../src'
+import { MotorheadMemory, MotorheadMemoryInput } from 'langchain/memory'
+import fetch from 'node-fetch'
+
+class MotorMemory_Memory implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Motorhead Memory'
+ this.name = 'motorheadMemory'
+ this.version = 1.0
+ this.type = 'MotorheadMemory'
+ this.icon = 'motorhead.png'
+ this.category = 'Memory'
+ this.description = 'Use Motorhead Memory to store chat conversations'
+ this.baseClasses = [this.type, ...getBaseClasses(MotorheadMemory)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ optional: true,
+ description: 'Only needed when using hosted solution - https://getmetal.io',
+ credentialNames: ['motorheadMemoryApi']
+ }
+ this.inputs = [
+ {
+ label: 'Base URL',
+ name: 'baseURL',
+ type: 'string',
+ optional: true,
+ description: 'To use the online version, leave the URL blank. More details at https://getmetal.io.'
+ },
+ {
+ label: 'Session Id',
+ name: 'sessionId',
+ type: 'string',
+ description: 'If not specified, the first CHAT_MESSAGE_ID will be used as sessionId',
+ default: '',
+ additionalParams: true,
+ optional: true
+ },
+ {
+ label: 'Memory Key',
+ name: 'memoryKey',
+ type: 'string',
+ default: 'chat_history',
+ additionalParams: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ return initalizeMotorhead(nodeData, options)
+ }
+
+ async clearSessionMemory(nodeData: INodeData, options: ICommonObject): Promise<void> {
+ const motorhead = await initalizeMotorhead(nodeData, options)
+ const sessionId = nodeData.inputs?.sessionId as string
+ const chatId = options?.chatId as string
+ options.logger.info(`Clearing Motorhead memory session ${sessionId ? sessionId : chatId}`)
+ await motorhead.clear()
+ options.logger.info(`Successfully cleared Motorhead memory session ${sessionId ? sessionId : chatId}`)
+ }
+}
+
+const initalizeMotorhead = async (nodeData: INodeData, options: ICommonObject): Promise<MotorheadMemory> => {
+ const memoryKey = nodeData.inputs?.memoryKey as string
+ const baseURL = nodeData.inputs?.baseURL as string
+ const sessionId = nodeData.inputs?.sessionId as string
+ const chatId = options?.chatId as string
+
+ let isSessionIdUsingChatMessageId = false
+ if (!sessionId && chatId) isSessionIdUsingChatMessageId = true
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const apiKey = getCredentialParam('apiKey', credentialData, nodeData)
+ const clientId = getCredentialParam('clientId', credentialData, nodeData)
+
+ let obj: MotorheadMemoryInput & Partial<MotorheadMemoryExtendedInput> = {
+ returnMessages: true,
+ sessionId: sessionId ? sessionId : chatId,
+ memoryKey
+ }
+
+ if (baseURL) {
+ obj = {
+ ...obj,
+ url: baseURL
+ }
+ } else {
+ obj = {
+ ...obj,
+ apiKey,
+ clientId
+ }
+ }
+
+ if (isSessionIdUsingChatMessageId) obj.isSessionIdUsingChatMessageId = true
+
+ const motorheadMemory = new MotorheadMemoryExtended(obj)
+
+ // Get messages from sessionId
+ await motorheadMemory.init()
+
+ return motorheadMemory
+}
+
+interface MotorheadMemoryExtendedInput {
+ isSessionIdUsingChatMessageId: boolean
+}
+
+class MotorheadMemoryExtended extends MotorheadMemory {
+ isSessionIdUsingChatMessageId? = false
+
+ constructor(fields: MotorheadMemoryInput & Partial<MotorheadMemoryExtendedInput>) {
+ super(fields)
+ this.isSessionIdUsingChatMessageId = fields.isSessionIdUsingChatMessageId
+ }
+
+ async clear(): Promise<void> {
+ try {
+ await this.caller.call(fetch, `${this.url}/sessions/${this.sessionId}/memory`, {
+ //@ts-ignore
+ signal: this.timeout ? AbortSignal.timeout(this.timeout) : undefined,
+ headers: this._getHeaders() as ICommonObject,
+ method: 'DELETE'
+ })
+ } catch (error) {
+ console.error('Error deleting session: ', error)
+ }
+
+ // Clear the superclass's chat history
+ await this.chatHistory.clear()
+ await super.clear()
+ }
+}
+
+module.exports = { nodeClass: MotorMemory_Memory }
diff --git a/packages/components/nodes/memory/MotorheadMemory/motorhead.png b/packages/components/nodes/memory/MotorheadMemory/motorhead.png
new file mode 100644
index 000000000..e1dfbde08
Binary files /dev/null and b/packages/components/nodes/memory/MotorheadMemory/motorhead.png differ
diff --git a/packages/components/nodes/memory/RedisBackedChatMemory/RedisBackedChatMemory.ts b/packages/components/nodes/memory/RedisBackedChatMemory/RedisBackedChatMemory.ts
new file mode 100644
index 000000000..f10f25ce8
--- /dev/null
+++ b/packages/components/nodes/memory/RedisBackedChatMemory/RedisBackedChatMemory.ts
@@ -0,0 +1,123 @@
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses } from '../../../src/utils'
+import { ICommonObject } from '../../../src'
+import { BufferMemory, BufferMemoryInput } from 'langchain/memory'
+import { RedisChatMessageHistory, RedisChatMessageHistoryInput } from 'langchain/stores/message/redis'
+import { createClient } from 'redis'
+
+class RedisBackedChatMemory_Memory implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Redis-Backed Chat Memory'
+ this.name = 'RedisBackedChatMemory'
+ this.version = 1.0
+ this.type = 'RedisBackedChatMemory'
+ this.icon = 'redis.svg'
+ this.category = 'Memory'
+ this.description = 'Stores the conversation in a Redis server'
+ this.baseClasses = [this.type, ...getBaseClasses(BufferMemory)]
+ this.inputs = [
+ {
+ label: 'Base URL',
+ name: 'baseURL',
+ type: 'string',
+ default: 'redis://localhost:6379'
+ },
+ {
+ label: 'Session Id',
+ name: 'sessionId',
+ type: 'string',
+ description: 'If not specified, the first CHAT_MESSAGE_ID will be used as sessionId',
+ default: '',
+ additionalParams: true,
+ optional: true
+ },
+ {
+ label: 'Session Timeout',
+ name: 'sessionTTL',
+ type: 'number',
+ description: 'Omit this parameter to make sessions never expire',
+ additionalParams: true,
+ optional: true
+ },
+ {
+ label: 'Memory Key',
+ name: 'memoryKey',
+ type: 'string',
+ default: 'chat_history',
+ additionalParams: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ return initalizeRedis(nodeData, options)
+ }
+
+ async clearSessionMemory(nodeData: INodeData, options: ICommonObject): Promise<void> {
+ const redis = initalizeRedis(nodeData, options)
+ const sessionId = nodeData.inputs?.sessionId as string
+ const chatId = options?.chatId as string
+ options.logger.info(`Clearing Redis memory session ${sessionId ? sessionId : chatId}`)
+ await redis.clear()
+ options.logger.info(`Successfully cleared Redis memory session ${sessionId ? sessionId : chatId}`)
+ }
+}
+
+const initalizeRedis = (nodeData: INodeData, options: ICommonObject): BufferMemory => {
+ const baseURL = nodeData.inputs?.baseURL as string
+ const sessionId = nodeData.inputs?.sessionId as string
+ const sessionTTL = nodeData.inputs?.sessionTTL as number
+ const memoryKey = nodeData.inputs?.memoryKey as string
+ const chatId = options?.chatId as string
+
+ let isSessionIdUsingChatMessageId = false
+ if (!sessionId && chatId) isSessionIdUsingChatMessageId = true
+
+ const redisClient = createClient({ url: baseURL })
+ let obj: RedisChatMessageHistoryInput = {
+ sessionId: sessionId ? sessionId : chatId,
+ client: redisClient
+ }
+
+ if (sessionTTL) {
+ obj = {
+ ...obj,
+ sessionTTL
+ }
+ }
+
+ const redisChatMessageHistory = new RedisChatMessageHistory(obj)
+
+ const memory = new BufferMemoryExtended({
+ memoryKey,
+ chatHistory: redisChatMessageHistory,
+ returnMessages: true,
+ isSessionIdUsingChatMessageId
+ })
+ return memory
+}
+
+interface BufferMemoryExtendedInput {
+ isSessionIdUsingChatMessageId: boolean
+}
+
+class BufferMemoryExtended extends BufferMemory {
+ isSessionIdUsingChatMessageId? = false
+
+ constructor(fields: BufferMemoryInput & Partial<BufferMemoryExtendedInput>) {
+ super(fields)
+ this.isSessionIdUsingChatMessageId = fields.isSessionIdUsingChatMessageId
+ }
+}
+
+module.exports = { nodeClass: RedisBackedChatMemory_Memory }
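The Redis initializer above attaches `sessionTTL` to the history config only when the user provided one, so an omitted TTL means the session never expires. The conditional-spread pattern in isolation (the interface and helper names here are illustrative):

```typescript
// Only attach sessionTTL when it was provided; leaving the key absent
// (rather than set to undefined) signals "never expire".
interface HistoryConfig {
    sessionId: string
    sessionTTL?: number
}

function buildConfig(sessionId: string, sessionTTL?: number): HistoryConfig {
    let obj: HistoryConfig = { sessionId }
    if (sessionTTL) {
        obj = { ...obj, sessionTTL }
    }
    return obj
}

console.log('sessionTTL' in buildConfig('s1')) // false — key absent, not undefined
console.log(buildConfig('s1', 300).sessionTTL) // 300
```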
diff --git a/packages/components/nodes/memory/RedisBackedChatMemory/redis.svg b/packages/components/nodes/memory/RedisBackedChatMemory/redis.svg
new file mode 100644
index 000000000..903590697
--- /dev/null
+++ b/packages/components/nodes/memory/RedisBackedChatMemory/redis.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/memory/ZepMemory/ZepMemory.ts b/packages/components/nodes/memory/ZepMemory/ZepMemory.ts
new file mode 100644
index 000000000..0c05563a3
--- /dev/null
+++ b/packages/components/nodes/memory/ZepMemory/ZepMemory.ts
@@ -0,0 +1,196 @@
+import { SystemMessage } from 'langchain/schema'
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { ZepMemory, ZepMemoryInput } from 'langchain/memory/zep'
+import { ICommonObject } from '../../../src'
+
+class ZepMemory_Memory implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Zep Memory'
+ this.name = 'ZepMemory'
+ this.version = 1.0
+ this.type = 'ZepMemory'
+ this.icon = 'zep.png'
+ this.category = 'Memory'
+ this.description = 'Summarizes the conversation and stores the memory in a Zep server'
+ this.baseClasses = [this.type, ...getBaseClasses(ZepMemory)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ optional: true,
+ description: 'Configure JWT authentication on your Zep instance (Optional)',
+ credentialNames: ['zepMemoryApi']
+ }
+ this.inputs = [
+ {
+ label: 'Base URL',
+ name: 'baseURL',
+ type: 'string',
+ default: 'http://127.0.0.1:8000'
+ },
+ {
+ label: 'Auto Summary',
+ name: 'autoSummary',
+ type: 'boolean',
+ default: true
+ },
+ {
+ label: 'Session Id',
+ name: 'sessionId',
+ type: 'string',
+ description: 'If not specified, the first CHAT_MESSAGE_ID will be used as sessionId',
+ default: '',
+ additionalParams: true,
+ optional: true
+ },
+ {
+ label: 'Size',
+ name: 'k',
+ type: 'number',
+ default: '10',
+ description: 'Window of size k to surface the last k back-and-forths to use as memory.'
+ },
+ {
+ label: 'Auto Summary Template',
+ name: 'autoSummaryTemplate',
+ type: 'string',
+ default: 'This is the summary of the following conversation:\n{summary}',
+ additionalParams: true
+ },
+ {
+ label: 'AI Prefix',
+ name: 'aiPrefix',
+ type: 'string',
+ default: 'ai',
+ additionalParams: true
+ },
+ {
+ label: 'Human Prefix',
+ name: 'humanPrefix',
+ type: 'string',
+ default: 'human',
+ additionalParams: true
+ },
+ {
+ label: 'Memory Key',
+ name: 'memoryKey',
+ type: 'string',
+ default: 'chat_history',
+ additionalParams: true
+ },
+ {
+ label: 'Input Key',
+ name: 'inputKey',
+ type: 'string',
+ default: 'input',
+ additionalParams: true
+ },
+ {
+ label: 'Output Key',
+ name: 'outputKey',
+ type: 'string',
+ default: 'text',
+ additionalParams: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const autoSummaryTemplate = nodeData.inputs?.autoSummaryTemplate as string
+ const autoSummary = nodeData.inputs?.autoSummary as boolean
+
+ const k = nodeData.inputs?.k as string
+
+ let zep = await initalizeZep(nodeData, options)
+
+ // hack to support summary
+ let tmpFunc = zep.loadMemoryVariables
+ zep.loadMemoryVariables = async (values) => {
+ let data = await tmpFunc.bind(zep, values)()
+ if (autoSummary && zep.returnMessages && data[zep.memoryKey] && data[zep.memoryKey].length) {
+ const zepClient = await zep.zepClientPromise
+ const memory = await zepClient.memory.getMemory(zep.sessionId, parseInt(k, 10) || 10)
+ if (memory?.summary) {
+ let summary = autoSummaryTemplate.replace(/{summary}/g, memory.summary.content)
+ // eslint-disable-next-line no-console
+ console.log('[ZepMemory] auto summary:', summary)
+ data[zep.memoryKey].unshift(new SystemMessage(summary))
+ }
+ }
+ // for LangChain Zep memory compatibility; otherwise we get "Missing value for input variable chat_history"
+ if (data instanceof Array) {
+ data = {
+ [zep.memoryKey]: data
+ }
+ }
+ return data
+ }
+ return zep
+ }
+
+ async clearSessionMemory(nodeData: INodeData, options: ICommonObject): Promise<void> {
+ const zep = await initalizeZep(nodeData, options)
+ const sessionId = nodeData.inputs?.sessionId as string
+ const chatId = options?.chatId as string
+ options.logger.info(`Clearing Zep memory session ${sessionId ? sessionId : chatId}`)
+ await zep.clear()
+ options.logger.info(`Successfully cleared Zep memory session ${sessionId ? sessionId : chatId}`)
+ }
+}
+
+const initalizeZep = async (nodeData: INodeData, options: ICommonObject): Promise<ZepMemory> => {
+ const baseURL = nodeData.inputs?.baseURL as string
+ const aiPrefix = nodeData.inputs?.aiPrefix as string
+ const humanPrefix = nodeData.inputs?.humanPrefix as string
+ const memoryKey = nodeData.inputs?.memoryKey as string
+ const inputKey = nodeData.inputs?.inputKey as string
+ const sessionId = nodeData.inputs?.sessionId as string
+ const chatId = options?.chatId as string
+
+ let isSessionIdUsingChatMessageId = false
+ if (!sessionId && chatId) isSessionIdUsingChatMessageId = true
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const apiKey = getCredentialParam('apiKey', credentialData, nodeData)
+
+ const obj: ZepMemoryInput & Partial<ZepMemoryExtendedInput> = {
+ baseURL,
+ sessionId: sessionId ? sessionId : chatId,
+ aiPrefix,
+ humanPrefix,
+ returnMessages: true,
+ memoryKey,
+ inputKey
+ }
+ if (apiKey) obj.apiKey = apiKey
+ if (isSessionIdUsingChatMessageId) obj.isSessionIdUsingChatMessageId = true
+
+ return new ZepMemoryExtended(obj)
+}
+
+interface ZepMemoryExtendedInput {
+ isSessionIdUsingChatMessageId: boolean
+}
+
+class ZepMemoryExtended extends ZepMemory {
+ isSessionIdUsingChatMessageId? = false
+
+ constructor(fields: ZepMemoryInput & Partial<ZepMemoryExtendedInput>) {
+ super(fields)
+ this.isSessionIdUsingChatMessageId = fields.isSessionIdUsingChatMessageId
+ }
+}
+
+module.exports = { nodeClass: ZepMemory_Memory }
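The Zep node's `init` wraps `loadMemoryVariables` to prepend an auto-summary message: it captures the original method, delegates to it with the right `this`, then amends the result. The wrapping technique in isolation (all names here are illustrative, not Flowise or LangChain APIs):

```typescript
// Generic sketch of the monkey-patch used above: keep a reference to the
// original async method, call it bound to the instance, then post-process.
type Loader = { loadMemoryVariables: (values: object) => Promise<string[]> }

function withSummary(memory: Loader, summary: string): Loader {
    const original = memory.loadMemoryVariables
    memory.loadMemoryVariables = async (values) => {
        const data = await original.bind(memory)(values)
        // Prepend the summary, mirroring data[zep.memoryKey].unshift(...)
        data.unshift(`This is the summary of the following conversation:\n${summary}`)
        return data
    }
    return memory
}

const base: Loader = { loadMemoryVariables: async () => ['human: hi', 'ai: hello'] }
withSummary(base, 'greeting exchange')
base.loadMemoryVariables({}).then((msgs) => console.log(msgs.length)) // 3
```

Binding the captured method (`original.bind(memory)`) matters: calling it unbound would lose the instance state the real `ZepMemory` implementation depends on.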
diff --git a/packages/components/nodes/memory/ZepMemory/zep.png b/packages/components/nodes/memory/ZepMemory/zep.png
new file mode 100644
index 000000000..2fdb23827
Binary files /dev/null and b/packages/components/nodes/memory/ZepMemory/zep.png differ
diff --git a/packages/components/nodes/prompts/ChatPromptTemplate/ChatPromptTemplate.ts b/packages/components/nodes/prompts/ChatPromptTemplate/ChatPromptTemplate.ts
index c3c4d77f6..c9ec751d8 100644
--- a/packages/components/nodes/prompts/ChatPromptTemplate/ChatPromptTemplate.ts
+++ b/packages/components/nodes/prompts/ChatPromptTemplate/ChatPromptTemplate.ts
@@ -5,6 +5,7 @@ import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemp
class ChatPromptTemplate_Prompts implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,6 +16,7 @@ class ChatPromptTemplate_Prompts implements INode {
constructor() {
this.label = 'Chat Prompt Template'
this.name = 'chatPromptTemplate'
+ this.version = 1.0
this.type = 'ChatPromptTemplate'
this.icon = 'prompt.svg'
this.category = 'Prompts'
@@ -38,12 +40,7 @@ class ChatPromptTemplate_Prompts implements INode {
{
label: 'Format Prompt Values',
name: 'promptValues',
- type: 'string',
- rows: 4,
- placeholder: `{
- "input_language": "English",
- "output_language": "French"
-}`,
+ type: 'json',
optional: true,
acceptVariable: true,
list: true
@@ -63,7 +60,7 @@ class ChatPromptTemplate_Prompts implements INode {
let promptValues: ICommonObject = {}
if (promptValuesStr) {
- promptValues = JSON.parse(promptValuesStr.replace(/\s/g, ''))
+ promptValues = JSON.parse(promptValuesStr)
}
// @ts-ignore
prompt.promptValues = promptValues
diff --git a/packages/components/nodes/prompts/FewShotPromptTemplate/FewShotPromptTemplate.ts b/packages/components/nodes/prompts/FewShotPromptTemplate/FewShotPromptTemplate.ts
index a42a1d088..ed1d3cb21 100644
--- a/packages/components/nodes/prompts/FewShotPromptTemplate/FewShotPromptTemplate.ts
+++ b/packages/components/nodes/prompts/FewShotPromptTemplate/FewShotPromptTemplate.ts
@@ -7,6 +7,7 @@ import { TemplateFormat } from 'langchain/dist/prompts/template'
class FewShotPromptTemplate_Prompts implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -17,6 +18,7 @@ class FewShotPromptTemplate_Prompts implements INode {
constructor() {
this.label = 'Few Shot Prompt Template'
this.name = 'fewShotPromptTemplate'
+ this.version = 1.0
this.type = 'FewShotPromptTemplate'
this.icon = 'prompt.svg'
this.category = 'Prompts'
@@ -86,7 +88,7 @@ class FewShotPromptTemplate_Prompts implements INode {
const examplePrompt = nodeData.inputs?.examplePrompt as PromptTemplate
const inputVariables = getInputVariables(suffix)
- const examples: Example[] = JSON.parse(examplesStr.replace(/\s/g, ''))
+ const examples: Example[] = JSON.parse(examplesStr)
try {
const obj: FewShotPromptTemplateInput = {
diff --git a/packages/components/nodes/prompts/PromptTemplate/PromptTemplate.ts b/packages/components/nodes/prompts/PromptTemplate/PromptTemplate.ts
index f976d64c6..a401e2823 100644
--- a/packages/components/nodes/prompts/PromptTemplate/PromptTemplate.ts
+++ b/packages/components/nodes/prompts/PromptTemplate/PromptTemplate.ts
@@ -5,6 +5,7 @@ import { PromptTemplateInput } from 'langchain/prompts'
class PromptTemplate_Prompts implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,6 +16,7 @@ class PromptTemplate_Prompts implements INode {
constructor() {
this.label = 'Prompt Template'
this.name = 'promptTemplate'
+ this.version = 1.0
this.type = 'PromptTemplate'
this.icon = 'prompt.svg'
this.category = 'Prompts'
@@ -31,12 +33,7 @@ class PromptTemplate_Prompts implements INode {
{
label: 'Format Prompt Values',
name: 'promptValues',
- type: 'string',
- rows: 4,
- placeholder: `{
- "input_language": "English",
- "output_language": "French"
-}`,
+ type: 'json',
optional: true,
acceptVariable: true,
list: true
@@ -50,7 +47,7 @@ class PromptTemplate_Prompts implements INode {
let promptValues: ICommonObject = {}
if (promptValuesStr) {
- promptValues = JSON.parse(promptValuesStr.replace(/\s/g, ''))
+ promptValues = JSON.parse(promptValuesStr)
}
const inputVariables = getInputVariables(template)
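The two `JSON.parse` changes above (here and in FewShotPromptTemplate) drop the `.replace(/\s/g, '')` call. A stand-alone sketch of why that call was a bug: stripping all whitespace before parsing also removes spaces *inside* string values, while `JSON.parse` already tolerates formatting whitespace between tokens.

```typescript
// Sketch (not part of the diff): the old whitespace-stripping mangled values.
const promptValuesStr = `{
    "input_language": "English",
    "greeting": "Hello there"
}`

// Old behaviour: every whitespace character stripped before parsing.
const mangled = JSON.parse(promptValuesStr.replace(/\s/g, ''))
// New behaviour: parse as-is; JSON.parse ignores whitespace between tokens.
const parsed = JSON.parse(promptValuesStr)

console.log(mangled.greeting) // "Hellothere" — the space inside the value is lost
console.log(parsed.greeting) // "Hello there"
```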
diff --git a/packages/components/nodes/retrievers/HydeRetriever/HydeRetriever.ts b/packages/components/nodes/retrievers/HydeRetriever/HydeRetriever.ts
new file mode 100644
index 000000000..2baf677eb
--- /dev/null
+++ b/packages/components/nodes/retrievers/HydeRetriever/HydeRetriever.ts
@@ -0,0 +1,123 @@
+import { VectorStore } from 'langchain/vectorstores/base'
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { HydeRetriever, HydeRetrieverOptions, PromptKey } from 'langchain/retrievers/hyde'
+import { BaseLanguageModel } from 'langchain/base_language'
+import { PromptTemplate } from 'langchain/prompts'
+
+class HydeRetriever_Retrievers implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Hyde Retriever'
+ this.name = 'HydeRetriever'
+ this.version = 1.0
+ this.type = 'HydeRetriever'
+ this.icon = 'hyderetriever.svg'
+ this.category = 'Retrievers'
+ this.description = 'Use HyDE retriever to retrieve from a vector store'
+ this.baseClasses = [this.type, 'BaseRetriever']
+ this.inputs = [
+ {
+ label: 'Language Model',
+ name: 'model',
+ type: 'BaseLanguageModel'
+ },
+ {
+ label: 'Vector Store',
+ name: 'vectorStore',
+ type: 'VectorStore'
+ },
+ {
+ label: 'Prompt Key',
+ name: 'promptKey',
+ type: 'options',
+ options: [
+ {
+ label: 'websearch',
+ name: 'websearch'
+ },
+ {
+ label: 'scifact',
+ name: 'scifact'
+ },
+ {
+ label: 'arguana',
+ name: 'arguana'
+ },
+ {
+ label: 'trec-covid',
+ name: 'trec-covid'
+ },
+ {
+ label: 'fiqa',
+ name: 'fiqa'
+ },
+ {
+ label: 'dbpedia-entity',
+ name: 'dbpedia-entity'
+ },
+ {
+ label: 'trec-news',
+ name: 'trec-news'
+ },
+ {
+ label: 'mr-tydi',
+ name: 'mr-tydi'
+ }
+ ],
+ default: 'websearch'
+ },
+ {
+ label: 'Custom Prompt',
+ name: 'customPrompt',
+ description: 'If custom prompt is used, this will override Prompt Key',
+ placeholder: 'Please write a passage to answer the question\nQuestion: {question}\nPassage:',
+ type: 'string',
+ rows: 4,
+ additionalParams: true,
+ optional: true
+ },
+ {
+ label: 'Top K',
+ name: 'topK',
+ description: 'Number of top results to fetch. Defaults to 4',
+ placeholder: '4',
+ type: 'number',
+ default: 4,
+ additionalParams: true,
+ optional: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData): Promise<any> {
+ const llm = nodeData.inputs?.model as BaseLanguageModel
+ const vectorStore = nodeData.inputs?.vectorStore as VectorStore
+ const promptKey = nodeData.inputs?.promptKey as PromptKey
+ const customPrompt = nodeData.inputs?.customPrompt as string
+ const topK = nodeData.inputs?.topK as string
+ const k = topK ? parseInt(topK, 10) : 4
+
+ const obj: HydeRetrieverOptions = {
+ llm,
+ vectorStore,
+ k
+ }
+
+ if (customPrompt) obj.promptTemplate = PromptTemplate.fromTemplate(customPrompt)
+ else if (promptKey) obj.promptTemplate = promptKey
+
+ const retriever = new HydeRetriever(obj)
+ return retriever
+ }
+}
+
+module.exports = { nodeClass: HydeRetriever_Retrievers }
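The `init` above resolves options in a fixed order: a custom prompt overrides the preset prompt key, and `topK` falls back to 4. A stand-alone sketch of that precedence logic, with simplified stand-in types (not langchain's actual `HydeRetrieverOptions`):

```typescript
// Simplified stand-in for the option object built in init().
interface HydeOptions {
    k: number
    promptTemplate?: string
}

function buildHydeOptions(topK?: string, promptKey?: string, customPrompt?: string): HydeOptions {
    const obj: HydeOptions = { k: topK ? parseInt(topK, 10) : 4 }
    if (customPrompt) obj.promptTemplate = customPrompt // a custom prompt wins
    else if (promptKey) obj.promptTemplate = promptKey // otherwise fall back to the preset key
    return obj
}

console.log(buildHydeOptions(undefined, 'websearch', 'my prompt').promptTemplate) // "my prompt"
console.log(buildHydeOptions('8').k) // 8
```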
diff --git a/packages/components/nodes/retrievers/HydeRetriever/hyderetriever.svg b/packages/components/nodes/retrievers/HydeRetriever/hyderetriever.svg
new file mode 100644
index 000000000..da3a9f207
--- /dev/null
+++ b/packages/components/nodes/retrievers/HydeRetriever/hyderetriever.svg
@@ -0,0 +1,9 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/retrievers/PromptRetriever/PromptRetriever.ts b/packages/components/nodes/retrievers/PromptRetriever/PromptRetriever.ts
new file mode 100644
index 000000000..7ffaa64fa
--- /dev/null
+++ b/packages/components/nodes/retrievers/PromptRetriever/PromptRetriever.ts
@@ -0,0 +1,64 @@
+import { INode, INodeData, INodeParams, PromptRetriever, PromptRetrieverInput } from '../../../src/Interface'
+
+class PromptRetriever_Retrievers implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Prompt Retriever'
+ this.name = 'promptRetriever'
+ this.version = 1.0
+ this.type = 'PromptRetriever'
+ this.icon = 'promptretriever.svg'
+ this.category = 'Retrievers'
+ this.description = 'Store prompt template with name & description to be later queried by MultiPromptChain'
+ this.baseClasses = [this.type]
+ this.inputs = [
+ {
+ label: 'Prompt Name',
+ name: 'name',
+ type: 'string',
+ placeholder: 'physics-qa'
+ },
+ {
+ label: 'Prompt Description',
+ name: 'description',
+ type: 'string',
+ rows: 3,
+ description: 'Description of what the prompt does and when it should be used',
+ placeholder: 'Good for answering questions about physics'
+ },
+ {
+ label: 'Prompt System Message',
+ name: 'systemMessage',
+ type: 'string',
+ rows: 4,
+ placeholder: `You are a very smart physics professor. You are great at answering questions about physics in a concise and easy to understand manner. When you don't know the answer to a question you admit that you don't know.`
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData): Promise<any> {
+ const name = nodeData.inputs?.name as string
+ const description = nodeData.inputs?.description as string
+ const systemMessage = nodeData.inputs?.systemMessage as string
+
+ const obj = {
+ name,
+ description,
+ systemMessage
+ } as PromptRetrieverInput
+
+ const retriever = new PromptRetriever(obj)
+ return retriever
+ }
+}
+
+module.exports = { nodeClass: PromptRetriever_Retrievers }
diff --git a/packages/components/nodes/retrievers/PromptRetriever/promptretriever.svg b/packages/components/nodes/retrievers/PromptRetriever/promptretriever.svg
new file mode 100644
index 000000000..db48e8a51
--- /dev/null
+++ b/packages/components/nodes/retrievers/PromptRetriever/promptretriever.svg
@@ -0,0 +1,8 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/retrievers/VectorStoreRetriever/VectorStoreRetriever.ts b/packages/components/nodes/retrievers/VectorStoreRetriever/VectorStoreRetriever.ts
new file mode 100644
index 000000000..41f665719
--- /dev/null
+++ b/packages/components/nodes/retrievers/VectorStoreRetriever/VectorStoreRetriever.ts
@@ -0,0 +1,63 @@
+import { VectorStore } from 'langchain/vectorstores/base'
+import { INode, INodeData, INodeParams, VectorStoreRetriever, VectorStoreRetrieverInput } from '../../../src/Interface'
+
+class VectorStoreRetriever_Retrievers implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Vector Store Retriever'
+ this.name = 'vectorStoreRetriever'
+ this.version = 1.0
+ this.type = 'VectorStoreRetriever'
+ this.icon = 'vectorretriever.svg'
+ this.category = 'Retrievers'
+ this.description = 'Store vector store as retriever to be later queried by MultiRetrievalQAChain'
+ this.baseClasses = [this.type]
+ this.inputs = [
+ {
+ label: 'Vector Store',
+ name: 'vectorStore',
+ type: 'VectorStore'
+ },
+ {
+ label: 'Retriever Name',
+ name: 'name',
+ type: 'string',
+ placeholder: 'netflix movies'
+ },
+ {
+ label: 'Retriever Description',
+ name: 'description',
+ type: 'string',
+ rows: 3,
+ description: 'Description of when to use the vector store retriever',
+ placeholder: 'Good for answering questions about netflix movies'
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData): Promise<any> {
+ const name = nodeData.inputs?.name as string
+ const description = nodeData.inputs?.description as string
+ const vectorStore = nodeData.inputs?.vectorStore as VectorStore
+
+ const obj = {
+ name,
+ description,
+ vectorStore
+ } as VectorStoreRetrieverInput
+
+ const retriever = new VectorStoreRetriever(obj)
+ return retriever
+ }
+}
+
+module.exports = { nodeClass: VectorStoreRetriever_Retrievers }
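Both retriever nodes above build their input object with an `as` cast. A side note as a stand-alone sketch (simplified interface, not the diff's actual types): `as` suppresses missing-field checking, so a typed literal is the stricter alternative.

```typescript
// Simplified stand-in interface, for illustration only.
interface RetrieverInput {
    name: string
    description: string
}

// With `as`, a forgotten field compiles silently and surfaces only at runtime:
const loose = { name: 'netflix movies' } as RetrieverInput

// A typed literal makes the compiler enforce the full shape:
const strict: RetrieverInput = {
    name: 'netflix movies',
    description: 'Good for answering questions about netflix movies'
}

console.log((loose as any).description) // undefined — the cast hid the omission
console.log(strict.description.length > 0) // true
```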
diff --git a/packages/components/nodes/retrievers/VectorStoreRetriever/vectorretriever.svg b/packages/components/nodes/retrievers/VectorStoreRetriever/vectorretriever.svg
new file mode 100644
index 000000000..da3a9f207
--- /dev/null
+++ b/packages/components/nodes/retrievers/VectorStoreRetriever/vectorretriever.svg
@@ -0,0 +1,9 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/textsplitters/CharacterTextSplitter/CharacterTextSplitter.ts b/packages/components/nodes/textsplitters/CharacterTextSplitter/CharacterTextSplitter.ts
index 90387e8b6..f9427d10a 100644
--- a/packages/components/nodes/textsplitters/CharacterTextSplitter/CharacterTextSplitter.ts
+++ b/packages/components/nodes/textsplitters/CharacterTextSplitter/CharacterTextSplitter.ts
@@ -5,6 +5,7 @@ import { CharacterTextSplitter, CharacterTextSplitterParams } from 'langchain/te
class CharacterTextSplitter_TextSplitters implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,6 +16,7 @@ class CharacterTextSplitter_TextSplitters implements INode {
constructor() {
this.label = 'Character Text Splitter'
this.name = 'characterTextSplitter'
+ this.version = 1.0
this.type = 'CharacterTextSplitter'
this.icon = 'textsplitter.svg'
this.category = 'Text Splitters'
diff --git a/packages/components/nodes/textsplitters/CodeTextSplitter/CodeTextSplitter.ts b/packages/components/nodes/textsplitters/CodeTextSplitter/CodeTextSplitter.ts
new file mode 100644
index 000000000..ed643f330
--- /dev/null
+++ b/packages/components/nodes/textsplitters/CodeTextSplitter/CodeTextSplitter.ts
@@ -0,0 +1,130 @@
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses } from '../../../src/utils'
+import {
+ RecursiveCharacterTextSplitter,
+ RecursiveCharacterTextSplitterParams,
+ SupportedTextSplitterLanguage
+} from 'langchain/text_splitter'
+
+class CodeTextSplitter_TextSplitters implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+ constructor() {
+ this.label = 'Code Text Splitter'
+ this.name = 'codeTextSplitter'
+ this.version = 1.0
+ this.type = 'CodeTextSplitter'
+ this.icon = 'codeTextSplitter.svg'
+ this.category = 'Text Splitters'
+ this.description = `Split documents based on language-specific syntax`
+ this.baseClasses = [this.type, ...getBaseClasses(RecursiveCharacterTextSplitter)]
+ this.inputs = [
+ {
+ label: 'Language',
+ name: 'language',
+ type: 'options',
+ options: [
+ {
+ label: 'cpp',
+ name: 'cpp'
+ },
+ {
+ label: 'go',
+ name: 'go'
+ },
+ {
+ label: 'java',
+ name: 'java'
+ },
+ {
+ label: 'js',
+ name: 'js'
+ },
+ {
+ label: 'php',
+ name: 'php'
+ },
+ {
+ label: 'proto',
+ name: 'proto'
+ },
+ {
+ label: 'python',
+ name: 'python'
+ },
+ {
+ label: 'rst',
+ name: 'rst'
+ },
+ {
+ label: 'ruby',
+ name: 'ruby'
+ },
+ {
+ label: 'rust',
+ name: 'rust'
+ },
+ {
+ label: 'scala',
+ name: 'scala'
+ },
+ {
+ label: 'swift',
+ name: 'swift'
+ },
+ {
+ label: 'markdown',
+ name: 'markdown'
+ },
+ {
+ label: 'latex',
+ name: 'latex'
+ },
+ {
+ label: 'html',
+ name: 'html'
+ },
+ {
+ label: 'sol',
+ name: 'sol'
+ }
+ ]
+ },
+ {
+ label: 'Chunk Size',
+ name: 'chunkSize',
+ type: 'number',
+ default: 1000,
+ optional: true
+ },
+ {
+ label: 'Chunk Overlap',
+ name: 'chunkOverlap',
+ type: 'number',
+ optional: true
+ }
+ ]
+ }
+ async init(nodeData: INodeData): Promise<any> {
+ const chunkSize = nodeData.inputs?.chunkSize as string
+ const chunkOverlap = nodeData.inputs?.chunkOverlap as string
+ const language = nodeData.inputs?.language as SupportedTextSplitterLanguage
+
+ const obj = {} as RecursiveCharacterTextSplitterParams
+
+ if (chunkSize) obj.chunkSize = parseInt(chunkSize, 10)
+ if (chunkOverlap) obj.chunkOverlap = parseInt(chunkOverlap, 10)
+
+ const splitter = RecursiveCharacterTextSplitter.fromLanguage(language, obj)
+
+ return splitter
+ }
+}
+module.exports = { nodeClass: CodeTextSplitter_TextSplitters }
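`RecursiveCharacterTextSplitter.fromLanguage` works by choosing language-specific separators (function and class boundaries, blank lines) before generic whitespace ones. A toy sketch of that idea — the separator tables here are illustrative, not langchain's actual lists:

```typescript
// Toy language-aware splitter: try language-specific separators first,
// then fall back to generic whitespace separators.
const SEPARATORS: Record<string, string[]> = {
    python: ['\ndef ', '\nclass ', '\n\n', '\n', ' '],
    js: ['\nfunction ', '\nclass ', '\n\n', '\n', ' ']
}
const GENERIC = ['\n\n', '\n', ' ']

function splitOnFirstSeparator(text: string, language: string): string[] {
    for (const sep of SEPARATORS[language] ?? GENERIC) {
        if (text.includes(sep)) return text.split(sep).filter(Boolean)
    }
    return [text]
}

const chunks = splitOnFirstSeparator('def a():\n    pass\ndef b():\n    pass', 'python')
console.log(chunks.length) // 2 — split at the second "def"
```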
diff --git a/packages/components/nodes/textsplitters/CodeTextSplitter/codeTextSplitter.svg b/packages/components/nodes/textsplitters/CodeTextSplitter/codeTextSplitter.svg
new file mode 100644
index 000000000..d3b3d188a
--- /dev/null
+++ b/packages/components/nodes/textsplitters/CodeTextSplitter/codeTextSplitter.svg
@@ -0,0 +1,8 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/textsplitters/HtmlToMarkdownTextSplitter/HtmlToMarkdownTextSplitter.ts b/packages/components/nodes/textsplitters/HtmlToMarkdownTextSplitter/HtmlToMarkdownTextSplitter.ts
new file mode 100644
index 000000000..699764e54
--- /dev/null
+++ b/packages/components/nodes/textsplitters/HtmlToMarkdownTextSplitter/HtmlToMarkdownTextSplitter.ts
@@ -0,0 +1,72 @@
+import { INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses } from '../../../src/utils'
+import { MarkdownTextSplitter, MarkdownTextSplitterParams } from 'langchain/text_splitter'
+import { NodeHtmlMarkdown } from 'node-html-markdown'
+
+class HtmlToMarkdownTextSplitter_TextSplitters implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'HtmlToMarkdown Text Splitter'
+ this.name = 'htmlToMarkdownTextSplitter'
+ this.version = 1.0
+ this.type = 'HtmlToMarkdownTextSplitter'
+ this.icon = 'htmlToMarkdownTextSplitter.svg'
+ this.category = 'Text Splitters'
+ this.description = `Converts HTML to Markdown, then splits your content into documents based on the Markdown headers`
+ this.baseClasses = [this.type, ...getBaseClasses(HtmlToMarkdownTextSplitter)]
+ this.inputs = [
+ {
+ label: 'Chunk Size',
+ name: 'chunkSize',
+ type: 'number',
+ default: 1000,
+ optional: true
+ },
+ {
+ label: 'Chunk Overlap',
+ name: 'chunkOverlap',
+ type: 'number',
+ optional: true
+ }
+ ]
+ }
+
+ async init(nodeData: INodeData): Promise<any> {
+ const chunkSize = nodeData.inputs?.chunkSize as string
+ const chunkOverlap = nodeData.inputs?.chunkOverlap as string
+
+ const obj = {} as MarkdownTextSplitterParams
+
+ if (chunkSize) obj.chunkSize = parseInt(chunkSize, 10)
+ if (chunkOverlap) obj.chunkOverlap = parseInt(chunkOverlap, 10)
+
+ const splitter = new HtmlToMarkdownTextSplitter(obj)
+
+ return splitter
+ }
+}
+class HtmlToMarkdownTextSplitter extends MarkdownTextSplitter implements MarkdownTextSplitterParams {
+ constructor(fields?: Partial<MarkdownTextSplitterParams>) {
+ super(fields)
+ }
+ splitText(text: string): Promise<string[]> {
+ return new Promise((resolve) => {
+ const markdown = NodeHtmlMarkdown.translate(text)
+ super.splitText(markdown).then((result) => {
+ resolve(result)
+ })
+ })
+ }
+}
+module.exports = { nodeClass: HtmlToMarkdownTextSplitter_TextSplitters }
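The `splitText` override above converts HTML to Markdown and then delegates to the base splitter, wrapped in a manually constructed Promise. The same flow reads more simply with async/await; a stand-alone sketch with toy stand-ins for `NodeHtmlMarkdown.translate` and the base splitter (NOT the real node-html-markdown or langchain implementations):

```typescript
// Toy stand-ins for illustration only.
const translate = (html: string): string => html.replace(/<[^>]+>/g, '').trim() // crude tag stripper
const baseSplitText = async (text: string): Promise<string[]> => text.split('\n\n')

// Equivalent of the override, using async/await instead of a manual Promise.
async function splitHtml(html: string): Promise<string[]> {
    const markdown = translate(html) // convert first...
    return baseSplitText(markdown) // ...then delegate to the base splitter
}
```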
diff --git a/packages/components/nodes/textsplitters/HtmlToMarkdownTextSplitter/htmlToMarkdownTextSplitter.svg b/packages/components/nodes/textsplitters/HtmlToMarkdownTextSplitter/htmlToMarkdownTextSplitter.svg
new file mode 100644
index 000000000..f7d45d603
--- /dev/null
+++ b/packages/components/nodes/textsplitters/HtmlToMarkdownTextSplitter/htmlToMarkdownTextSplitter.svg
@@ -0,0 +1,6 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/textsplitters/MarkdownTextSplitter/MarkdownTextSplitter.ts b/packages/components/nodes/textsplitters/MarkdownTextSplitter/MarkdownTextSplitter.ts
index 02c37d8d5..0a12845ae 100644
--- a/packages/components/nodes/textsplitters/MarkdownTextSplitter/MarkdownTextSplitter.ts
+++ b/packages/components/nodes/textsplitters/MarkdownTextSplitter/MarkdownTextSplitter.ts
@@ -5,6 +5,7 @@ import { MarkdownTextSplitter, MarkdownTextSplitterParams } from 'langchain/text
class MarkdownTextSplitter_TextSplitters implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,6 +16,7 @@ class MarkdownTextSplitter_TextSplitters implements INode {
constructor() {
this.label = 'Markdown Text Splitter'
this.name = 'markdownTextSplitter'
+ this.version = 1.0
this.type = 'MarkdownTextSplitter'
this.icon = 'markdownTextSplitter.svg'
this.category = 'Text Splitters'
diff --git a/packages/components/nodes/textsplitters/RecursiveCharacterTextSplitter/RecursiveCharacterTextSplitter.ts b/packages/components/nodes/textsplitters/RecursiveCharacterTextSplitter/RecursiveCharacterTextSplitter.ts
index 432b5ca90..dcca70ba2 100644
--- a/packages/components/nodes/textsplitters/RecursiveCharacterTextSplitter/RecursiveCharacterTextSplitter.ts
+++ b/packages/components/nodes/textsplitters/RecursiveCharacterTextSplitter/RecursiveCharacterTextSplitter.ts
@@ -5,6 +5,7 @@ import { RecursiveCharacterTextSplitter, RecursiveCharacterTextSplitterParams }
class RecursiveCharacterTextSplitter_TextSplitters implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,6 +16,7 @@ class RecursiveCharacterTextSplitter_TextSplitters implements INode {
constructor() {
this.label = 'Recursive Character Text Splitter'
this.name = 'recursiveCharacterTextSplitter'
+ this.version = 1.0
this.type = 'RecursiveCharacterTextSplitter'
this.icon = 'textsplitter.svg'
this.category = 'Text Splitters'
diff --git a/packages/components/nodes/textsplitters/TokenTextSplitter/TokenTextSplitter.ts b/packages/components/nodes/textsplitters/TokenTextSplitter/TokenTextSplitter.ts
index 8c8d6abea..0b11eebc9 100644
--- a/packages/components/nodes/textsplitters/TokenTextSplitter/TokenTextSplitter.ts
+++ b/packages/components/nodes/textsplitters/TokenTextSplitter/TokenTextSplitter.ts
@@ -6,6 +6,7 @@ import { TiktokenEncoding } from '@dqbd/tiktoken'
class TokenTextSplitter_TextSplitters implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -16,6 +17,7 @@ class TokenTextSplitter_TextSplitters implements INode {
constructor() {
this.label = 'Token Text Splitter'
this.name = 'tokenTextSplitter'
+ this.version = 1.0
this.type = 'TokenTextSplitter'
this.icon = 'tiktoken.svg'
this.category = 'Text Splitters'
diff --git a/packages/components/nodes/tools/AIPlugin/AIPlugin.ts b/packages/components/nodes/tools/AIPlugin/AIPlugin.ts
index ad21f8dbc..e9c0fa3dc 100644
--- a/packages/components/nodes/tools/AIPlugin/AIPlugin.ts
+++ b/packages/components/nodes/tools/AIPlugin/AIPlugin.ts
@@ -5,6 +5,7 @@ import { getBaseClasses } from '../../../src/utils'
class AIPlugin implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -15,6 +16,7 @@ class AIPlugin implements INode {
constructor() {
this.label = 'AI Plugin'
this.name = 'aiPlugin'
+ this.version = 1.0
this.type = 'AIPlugin'
this.icon = 'aiplugin.svg'
this.category = 'Tools'
diff --git a/packages/components/nodes/tools/BraveSearchAPI/BraveSearchAPI.ts b/packages/components/nodes/tools/BraveSearchAPI/BraveSearchAPI.ts
new file mode 100644
index 000000000..9e9c760d0
--- /dev/null
+++ b/packages/components/nodes/tools/BraveSearchAPI/BraveSearchAPI.ts
@@ -0,0 +1,42 @@
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { BraveSearch } from 'langchain/tools'
+
+class BraveSearchAPI_Tools implements INode {
+ label: string
+ name: string
+ version: number
+ description: string
+ type: string
+ icon: string
+ category: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'BraveSearch API'
+ this.name = 'braveSearchAPI'
+ this.version = 1.0
+ this.type = 'BraveSearchAPI'
+ this.icon = 'brave.svg'
+ this.category = 'Tools'
+ this.description = 'Wrapper around BraveSearch API - a real-time API to access Brave search results'
+ this.inputs = []
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['braveSearchApi']
+ }
+ this.baseClasses = [this.type, ...getBaseClasses(BraveSearch)]
+ }
+
+ async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const braveApiKey = getCredentialParam('braveApiKey', credentialData, nodeData)
+ return new BraveSearch({ apiKey: braveApiKey })
+ }
+}
+
+module.exports = { nodeClass: BraveSearchAPI_Tools }
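The `init` here resolves the API key from stored credential data via `getCredentialData`/`getCredentialParam`. A minimal stand-in sketch of that lookup pattern — the store, IDs, and signatures below are hypothetical, and the real Flowise utils are more involved (e.g. credential decryption):

```typescript
type CommonObject = Record<string, any>

// Hypothetical in-memory credential store, for illustration only.
const credentialStore: Record<string, CommonObject> = {
    'cred-1': { braveApiKey: 'demo-key' }
}

// Simplified stand-in: fetch the stored credential payload by id.
async function getCredentialData(credentialId: string): Promise<CommonObject> {
    return credentialStore[credentialId] ?? {}
}

// Simplified stand-in: pull one named parameter out of the payload.
function getCredentialParam(name: string, credentialData: CommonObject): string {
    return credentialData[name] ?? ''
}
```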
diff --git a/packages/components/nodes/tools/BraveSearchAPI/brave.svg b/packages/components/nodes/tools/BraveSearchAPI/brave.svg
new file mode 100644
index 000000000..0c0c0e86e
--- /dev/null
+++ b/packages/components/nodes/tools/BraveSearchAPI/brave.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/packages/components/nodes/tools/Calculator/Calculator.ts b/packages/components/nodes/tools/Calculator/Calculator.ts
index 85284f0fd..db1e0b2b7 100644
--- a/packages/components/nodes/tools/Calculator/Calculator.ts
+++ b/packages/components/nodes/tools/Calculator/Calculator.ts
@@ -5,6 +5,7 @@ import { Calculator } from 'langchain/tools/calculator'
class Calculator_Tools implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -14,6 +15,7 @@ class Calculator_Tools implements INode {
constructor() {
this.label = 'Calculator'
this.name = 'calculator'
+ this.version = 1.0
this.type = 'Calculator'
this.icon = 'calculator.svg'
this.category = 'Tools'
diff --git a/packages/components/nodes/tools/ChainTool/ChainTool.ts b/packages/components/nodes/tools/ChainTool/ChainTool.ts
index 32e414af7..42b5e6e1e 100644
--- a/packages/components/nodes/tools/ChainTool/ChainTool.ts
+++ b/packages/components/nodes/tools/ChainTool/ChainTool.ts
@@ -1,11 +1,12 @@
import { INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
-import { ChainTool } from 'langchain/tools'
import { BaseChain } from 'langchain/chains'
+import { ChainTool } from './core'
class ChainTool_Tools implements INode {
label: string
name: string
+ version: number
description: string
type: string
icon: string
@@ -16,6 +17,7 @@ class ChainTool_Tools implements INode {
constructor() {
this.label = 'Chain Tool'
this.name = 'chainTool'
+ this.version = 1.0
this.type = 'ChainTool'
this.icon = 'chaintool.svg'
this.category = 'Tools'
diff --git a/packages/components/nodes/tools/ChainTool/chaintool.svg b/packages/components/nodes/tools/ChainTool/chaintool.svg
index c5bd0fbcc..ab76749b4 100644
--- a/packages/components/nodes/tools/ChainTool/chaintool.svg
+++ b/packages/components/nodes/tools/ChainTool/chaintool.svg
@@ -1,4 +1,8 @@
-