Merge branch 'main' into feature/Milvus
# Conflicts:
#	packages/components/package.json
@@ -0,0 +1,13 @@
# These are supported funding model platforms

github: [FlowiseAI] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

@@ -23,9 +23,14 @@ A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Flow**
If applicable, add exported flow in order to help replicate the problem.

**Setup**

- OS: [e.g. iOS, Windows, Linux]
- Installation [e.g. docker, `npx flowise start`, `yarn start`]
- Flowise Version [e.g. 1.2.11]
- OS: [e.g. macOS, Windows, Linux]
- Browser [e.g. chrome, safari]

**Additional context**

@@ -3,7 +3,7 @@ name: Node CI
on:
    push:
        branches:
            - master
            - main

    pull_request:
        branches:

@@ -19,7 +19,8 @@ jobs:
                platform: [ubuntu-latest]
                node-version: [18.15.0]
        runs-on: ${{ matrix.platform }}

        env:
            PUPPETEER_SKIP_DOWNLOAD: true
        steps:
            - uses: actions/checkout@v3
            - name: Use Node.js ${{ matrix.node-version }}

@@ -0,0 +1,20 @@
name: Test Docker Build

on:
    push:
        branches:
            - main

    pull_request:
        branches:
            - '*'

jobs:
    build:
        runs-on: ubuntu-latest
        env:
            PUPPETEER_SKIP_DOWNLOAD: true
        steps:
            - uses: actions/checkout@v3

            - run: docker build --no-cache -t flowise .

@@ -8,6 +8,7 @@
**/yarn.lock

## logs
**/logs
**/*.log

## build

@@ -42,4 +43,4 @@
**/uploads

## compressed
**/*.tgz
**/*.tgz

@@ -0,0 +1,49 @@
<!-- markdownlint-disable MD030 -->

# 贡献者公约行为准则

[English](./CODE_OF_CONDUCT.md) | 中文

## 我们的承诺

为了促进一个开放和友好的环境,我们作为贡献者和维护者承诺,使参与我们的项目和社区的体验对每个人来说都是无骚扰的,无论年龄、体型、残疾、种族、性别认同和表达、经验水平、国籍、个人形象、种族、宗教或性取向如何。

## 我们的标准

有助于创建积极环境的行为示例包括:

- 使用友好和包容性的语言
- 尊重不同的观点和经验
- 优雅地接受建设性的批评
- 关注社区最有利的事情
- 向其他社区成员表达同理心

参与者不可接受的行为示例包括:

- 使用性暗示的语言或图像和不受欢迎的性关注或进展
- 恶作剧、侮辱/贬低的评论和个人或政治攻击
- 公开或私下骚扰
- 未经明确许可发布他人的私人信息,如实际或电子地址
- 在专业环境中可能被合理认为不适当的其他行为

## 我们的责任

项目维护者有责任明确可接受行为的标准,并预期对任何不可接受行为的情况采取适当和公正的纠正措施。

项目维护者有权和责任删除、编辑或拒绝不符合本行为准则的评论、提交、代码、维基编辑、问题和其他贡献,或者临时或永久禁止任何贡献者,如果他们认为其行为不适当、威胁、冒犯或有害。

## 适用范围

本行为准则适用于项目空间和公共空间,当个人代表项目或其社区时。代表项目或社区的示例包括使用官方项目电子邮件地址、通过官方社交媒体账号发布或在线或离线活动中担任指定代表。项目的代表可以由项目维护者进一步定义和澄清。

## 执法

可以通过联系项目团队 hello@flowiseai.com 来报告滥用、骚扰或其他不可接受的行为。所有投诉将经过审核和调查,并将得出视情况认为必要和适当的回应。项目团队有义务对事件举报人保持机密。具体执行政策的更多细节可能会单独发布。

如果项目维护者不诚信地遵守或执行行为准则,可能会面临其他项目领导成员决定的临时或永久的后果。

## 归属

该行为准则的内容来自于[贡献者公约](http://contributor-covenant.org/)1.4 版,可在[http://contributor-covenant.org/version/1/4](http://contributor-covenant.org/version/1/4)上获取。

[主页]: http://contributor-covenant.org

@@ -1,5 +1,7 @@
# Contributor Covenant Code of Conduct

English | [中文](./CODE_OF_CONDUCT-ZH.md)

## Our Pledge

In the interest of fostering an open and welcoming environment, we as

@@ -0,0 +1,155 @@
<!-- markdownlint-disable MD030 -->

# 贡献给 Flowise

[English](./CONTRIBUTING.md) | 中文

我们欢迎任何形式的贡献。

## ⭐ 点赞

点赞并分享 [Github 仓库](https://github.com/FlowiseAI/Flowise)。

## 🙋 问题和回答

在[问题和回答](https://github.com/FlowiseAI/Flowise/discussions/categories/q-a)部分搜索任何问题,如果找不到,可以毫不犹豫地创建一个。这可能会帮助到其他有类似问题的人。

## 🙌 分享 Chatflow

是的!分享你如何使用 Flowise 是一种贡献方式。将你的 Chatflow 导出为 JSON,附上截图并在[展示和分享](https://github.com/FlowiseAI/Flowise/discussions/categories/show-and-tell)部分分享。

## 💡 想法

欢迎各种想法,如新功能、应用集成和区块链网络。在[想法](https://github.com/FlowiseAI/Flowise/discussions/categories/ideas)部分提交。

## 🐞 报告错误

发现问题了吗?[报告它](https://github.com/FlowiseAI/Flowise/issues/new/choose)。

## 👨‍💻 贡献代码

不确定要贡献什么?一些想法:

- 从 Langchain 创建新组件
- 更新现有组件,如扩展功能、修复错误
- 添加新的 Chatflow 想法

### 开发人员

Flowise 在一个单一的单体存储库中有 3 个不同的模块。

- `server`:用于提供 API 逻辑的 Node 后端
- `ui`:React 前端
- `components`:Langchain 组件

#### 先决条件

- 安装 [Yarn v1](https://classic.yarnpkg.com/en/docs/install)
    ```bash
    npm i -g yarn
    ```

#### 逐步指南

1. Fork 官方的 [Flowise Github 仓库](https://github.com/FlowiseAI/Flowise)。

2. 克隆你 fork 的存储库。

3. 创建一个新的分支,参考[指南](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-and-deleting-branches-within-your-repository)。命名约定:

    - 对于功能分支:`feature/<你的新功能>`
    - 对于 bug 修复分支:`bugfix/<你的新bug修复>`。

4. 切换到新创建的分支。

5. 进入存储库文件夹

    ```bash
    cd Flowise
    ```

6. 安装所有模块的依赖项:

    ```bash
    yarn install
    ```

7. 构建所有代码:

    ```bash
    yarn build
    ```

8. 在 [http://localhost:3000](http://localhost:3000) 上启动应用程序

    ```bash
    yarn start
    ```

9. 开发时:

    - 在 `packages/ui` 中创建 `.env` 文件并指定 `PORT`(参考 `.env.example`)
    - 在 `packages/server` 中创建 `.env` 文件并指定 `PORT`(参考 `.env.example`)
    - 运行

    ```bash
    yarn dev
    ```

    对 `packages/ui` 或 `packages/server` 进行的任何更改都将反映在 [http://localhost:8080](http://localhost:8080) 上

    对于 `packages/components` 中进行的更改,再次运行 `yarn build` 以应用更改。

10. 做完所有的更改后,运行以下命令来确保在生产环境中一切正常:

    ```bash
    yarn build
    ```

    和

    ```bash
    yarn start
    ```

11. 提交代码并从指向 [Flowise 主分支](https://github.com/FlowiseAI/Flowise/tree/master) 的分叉分支上提交 Pull Request。

## 🌱 环境变量

Flowise 支持不同的环境变量来配置您的实例。您可以在 `packages/server` 文件夹中的 `.env` 文件中指定以下变量。阅读[更多信息](https://docs.flowiseai.com/environment-variables)

| 变量名 | 描述 | 类型 | 默认值 |
| --- | --- | --- | --- |
| PORT | Flowise 运行的 HTTP 端口 | 数字 | 3000 |
| FLOWISE_USERNAME | 登录用户名 | 字符串 | |
| FLOWISE_PASSWORD | 登录密码 | 字符串 | |
| DEBUG | 打印组件的日志 | 布尔值 | |
| LOG_PATH | 存储日志文件的位置 | 字符串 | `your-path/Flowise/logs` |
| LOG_LEVEL | 日志的不同级别 | 枚举字符串: `error`, `info`, `verbose`, `debug` | `info` |
| APIKEY_PATH | 存储 API 密钥的位置 | 字符串 | `your-path/Flowise/packages/server` |
| TOOL_FUNCTION_BUILTIN_DEP | 用于工具函数的 NodeJS 内置模块 | 字符串 | |
| TOOL_FUNCTION_EXTERNAL_DEP | 用于工具函数的外部模块 | 字符串 | |
| OVERRIDE_DATABASE | 是否使用默认值覆盖当前数据库 | 枚举字符串: `true`, `false` | `true` |
| DATABASE_TYPE | 存储 flowise 数据的数据库类型 | 枚举字符串: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | 数据库保存的位置(当 DATABASE_TYPE 是 sqlite 时) | 字符串 | `your-home-dir/.flowise` |
| DATABASE_HOST | 主机 URL 或 IP 地址(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_PORT | 数据库端口(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_USERNAME | 数据库用户名(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_PASSWORD | 数据库密码(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_NAME | 数据库名称(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |

您也可以在使用 `npx` 时指定环境变量。例如:

```
npx flowise start --PORT=3000 --DEBUG=true
```

## 📖 贡献文档

[Flowise 文档](https://github.com/FlowiseAI/FlowiseDocs)

## 🏷️ Pull Request 流程

当您打开一个 Pull Request 时,FlowiseAI 团队的成员将自动收到通知/指派。您也可以在 [Discord](https://discord.gg/jbaHfsRVBW) 上联系我们。

##

@@ -2,6 +2,8 @@
# Contributing to Flowise

English | [中文](./CONTRIBUTING-ZH.md)

We appreciate any form of contributions.

## ⭐ Star

@@ -42,7 +44,7 @@ Flowise has 3 different modules in a single mono repository.
#### Prerequisite

- Install Yarn
- Install [Yarn v1](https://classic.yarnpkg.com/en/docs/install)
    ```bash
    npm i -g yarn
    ```

@@ -84,7 +86,11 @@ Flowise has 3 different modules in a single mono repository.
    yarn start
    ```

9. For development, run
9. For development:

    - Create `.env` file and specify the `PORT` (refer to `.env.example`) in `packages/ui`
    - Create `.env` file and specify the `PORT` (refer to `.env.example`) in `packages/server`
    - Run

    ```bash
    yarn dev

@@ -110,9 +116,41 @@ Flowise has 3 different modules in a single mono repository.

11. Commit code and submit Pull Request from forked branch pointing to [Flowise master](https://github.com/FlowiseAI/Flowise/tree/master).

## 🌱 Env Variables

Flowise supports different environment variables to configure your instance. You can specify the following variables in the `.env` file inside the `packages/server` folder. Read [more](https://docs.flowiseai.com/environment-variables)

| Variable | Description | Type | Default |
| --- | --- | --- | --- |
| PORT | The HTTP port Flowise runs on | Number | 3000 |
| FLOWISE_USERNAME | Username to login | String | |
| FLOWISE_PASSWORD | Password to login | String | |
| DEBUG | Print logs from components | Boolean | |
| LOG_PATH | Location where log files are stored | String | `your-path/Flowise/logs` |
| LOG_LEVEL | Different levels of logs | Enum String: `error`, `info`, `verbose`, `debug` | `info` |
| APIKEY_PATH | Location where api keys are saved | String | `your-path/Flowise/packages/server` |
| TOOL_FUNCTION_BUILTIN_DEP | NodeJS built-in modules to be used for Tool Function | String | |
| TOOL_FUNCTION_EXTERNAL_DEP | External modules to be used for Tool Function | String | |
| OVERRIDE_DATABASE | Override current database with default | Enum String: `true`, `false` | `true` |
| DATABASE_TYPE | Type of database to store the flowise data | Enum String: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | Location where database is saved (When DATABASE_TYPE is sqlite) | String | `your-home-dir/.flowise` |
| DATABASE_HOST | Host URL or IP address (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PORT | Database port (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_USER | Database username (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PASSWORD | Database password (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_NAME | Database name (When DATABASE_TYPE is not sqlite) | String | |
| PASSPHRASE | Passphrase used to create encryption key | String | `MYPASSPHRASE` |
| SECRETKEY_PATH | Location where encryption key (used to encrypt/decrypt credentials) is saved | String | `your-path/Flowise/packages/server` |

You can also specify the env variables when using `npx`. For example:

```
npx flowise start --PORT=3000 --DEBUG=true
```

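As a rough illustration of how a server might resolve a few of the documented variables with their defaults (a sketch under assumed names, not the actual Flowise implementation):

```typescript
// Illustrative only: resolve a handful of the variables documented above,
// applying the defaults from the table. Not the actual Flowise server code.
function resolveConfig(env: Record<string, string | undefined>) {
    return {
        port: Number(env.PORT ?? '3000'),
        logLevel: env.LOG_LEVEL ?? 'info',
        databaseType: env.DATABASE_TYPE ?? 'sqlite',
        overrideDatabase: (env.OVERRIDE_DATABASE ?? 'true') === 'true'
    }
}

// In a real process this would be called as resolveConfig(process.env).
const config = resolveConfig({ PORT: '8080' })
```

The same pattern extends to the remaining variables; only defaults listed in the table are hard-coded here.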
## 📖 Contribute to Docs

In-Progress
[Flowise Docs](https://github.com/FlowiseAI/FlowiseDocs)

## 🏷️ Pull Request process

Dockerfile
@@ -1,14 +1,24 @@
# Build local monorepo image
# docker build --no-cache -t flowise .

# Run image
# docker run -d -p 3000:3000 flowise

FROM node:18-alpine
RUN apk add --update libc6-compat python3 make g++
# needed for pdfjs-dist
RUN apk add --no-cache build-base cairo-dev pango-dev

# Install Chromium
RUN apk add --no-cache chromium

ENV PUPPETEER_SKIP_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser

WORKDIR /usr/src/packages

# Copy root package.json and lockfile
COPY package.json ./
COPY yarn.lock ./
COPY package.json yarn.loc[k] ./

# Copy components package.json
COPY packages/components/package.json ./packages/components/package.json

@@ -0,0 +1,188 @@
<!-- markdownlint-disable MD030 -->

<img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.png?raw=true"></a>

# Flowise - 轻松构建 LLM 应用程序

[](https://github.com/FlowiseAI/Flowise/releases)
[](https://discord.gg/jbaHfsRVBW)
[](https://twitter.com/FlowiseAI)
[](https://star-history.com/#FlowiseAI/Flowise)
[](https://github.com/FlowiseAI/Flowise/fork)

[English](./README.md) | 中文

<h3>拖放界面构建定制化的LLM流程</h3>
<a href="https://github.com/FlowiseAI/Flowise">
<img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.gif?raw=true"></a>

## ⚡ 快速入门

下载并安装 [NodeJS](https://nodejs.org/en/download) >= 18.15.0

1. 安装 Flowise
    ```bash
    npm install -g flowise
    ```
2. 启动 Flowise

    ```bash
    npx flowise start
    ```

    使用用户名和密码

    ```bash
    npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
    ```

3. 打开 [http://localhost:3000](http://localhost:3000)

## 🐳 Docker

### Docker Compose

1. 进入项目根目录下的 `docker` 文件夹
2. 创建 `.env` 文件并指定 `PORT`(参考 `.env.example`)
3. 运行 `docker-compose up -d`
4. 打开 [http://localhost:3000](http://localhost:3000)
5. 可以通过 `docker-compose stop` 停止容器

### Docker 镜像

1. 本地构建镜像:
    ```bash
    docker build --no-cache -t flowise .
    ```
2. 运行镜像:

    ```bash
    docker run -d --name flowise -p 3000:3000 flowise
    ```

3. 停止镜像:
    ```bash
    docker stop flowise
    ```

## 👨‍💻 开发者

Flowise 在一个单一的代码库中有 3 个不同的模块。

- `server`:用于提供 API 逻辑的 Node 后端
- `ui`:React 前端
- `components`:Langchain 组件

### 先决条件

- 安装 [Yarn v1](https://classic.yarnpkg.com/en/docs/install)
    ```bash
    npm i -g yarn
    ```

### 设置

1. 克隆仓库

    ```bash
    git clone https://github.com/FlowiseAI/Flowise.git
    ```

2. 进入仓库文件夹

    ```bash
    cd Flowise
    ```

3. 安装所有模块的依赖:

    ```bash
    yarn install
    ```

4. 构建所有代码:

    ```bash
    yarn build
    ```

5. 启动应用:

    ```bash
    yarn start
    ```

    现在可以在 [http://localhost:3000](http://localhost:3000) 访问应用

6. 用于开发构建:

    - 在 `packages/ui` 中创建 `.env` 文件并指定 `PORT`(参考 `.env.example`)
    - 在 `packages/server` 中创建 `.env` 文件并指定 `PORT`(参考 `.env.example`)
    - 运行

    ```bash
    yarn dev
    ```

    任何代码更改都会自动重新加载应用程序,访问 [http://localhost:8080](http://localhost:8080)

## 🔒 认证

要启用应用程序级身份验证,在 `packages/server` 的 `.env` 文件中添加 `FLOWISE_USERNAME` 和 `FLOWISE_PASSWORD`:

```
FLOWISE_USERNAME=user
FLOWISE_PASSWORD=1234
```

## 🌱 环境变量

Flowise 支持不同的环境变量来配置您的实例。您可以在 `packages/server` 文件夹中的 `.env` 文件中指定以下变量。了解更多信息,请阅读[文档](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)

## 📖 文档

[Flowise 文档](https://docs.flowiseai.com/)

## 🌐 自托管

### [Railway](https://docs.flowiseai.com/deployment/railway)

[](https://railway.app/template/pn4G8S?referralCode=WVNPD9)

### [Render](https://docs.flowiseai.com/deployment/render)

[](https://docs.flowiseai.com/deployment/render)

### [HuggingFace Spaces](https://docs.flowiseai.com/deployment/hugging-face)

<a href="https://huggingface.co/spaces/FlowiseAI/Flowise"><img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="HuggingFace Spaces"></a>

### [AWS](https://docs.flowiseai.com/deployment/aws)

### [Azure](https://docs.flowiseai.com/deployment/azure)

### [DigitalOcean](https://docs.flowiseai.com/deployment/digital-ocean)

### [GCP](https://docs.flowiseai.com/deployment/gcp)

## 💻 云托管

即将推出

## 🙋 支持

在[讨论区](https://github.com/FlowiseAI/Flowise/discussions)中随时提问、提出问题和请求新功能

## 🙌 贡献

感谢这些了不起的贡献者

<a href="https://github.com/FlowiseAI/Flowise/graphs/contributors">
<img src="https://contrib.rocks/image?repo=FlowiseAI/Flowise" />
</a>

参见[贡献指南](CONTRIBUTING.md)。如果您有任何问题或问题,请在[Discord](https://discord.gg/jbaHfsRVBW)上与我们联系。

## 📄 许可证

此代码库中的源代码在[MIT 许可证](LICENSE.md)下提供。

README.md
@@ -1,14 +1,25 @@
<!-- markdownlint-disable MD030 -->

# Flowise - LangchainJS UI
<img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.png?raw=true"></a>

# Flowise - Build LLM Apps Easily

[](https://github.com/FlowiseAI/Flowise/releases)
[](https://discord.gg/jbaHfsRVBW)
[](https://twitter.com/FlowiseAI)
[](https://star-history.com/#FlowiseAI/Flowise)
[](https://github.com/FlowiseAI/Flowise/fork)

English | [中文](./README-ZH.md)

<h3>Drag & drop UI to build your customized LLM flow</h3>
<a href="https://github.com/FlowiseAI/Flowise">
<img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise.gif?raw=true"></a>

Drag & drop UI to build your customized LLM flow using [LangchainJS](https://github.com/hwchase17/langchainjs)

## ⚡Quick Start

Download and Install [NodeJS](https://nodejs.org/en/download) >= 18.15.0

1. Install Flowise
    ```bash
    npm install -g flowise
    ```

@@ -19,16 +30,41 @@ Drag & drop UI to build your customized LLM flow using [LangchainJS](https://git
    npx flowise start
    ```

    With username & password

    ```bash
    npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
    ```

3. Open [http://localhost:3000](http://localhost:3000)

## 🐳 Docker

### Docker Compose

1. Go to `docker` folder at the root of the project
2. Create `.env` file and specify the `PORT` (refer to `.env.example`)
2. Copy `.env.example` file, paste it into the same location, and rename to `.env`
3. `docker-compose up -d`
4. Open [http://localhost:3000](http://localhost:3000)
5. You can bring the containers down by `docker-compose stop`

### Docker Image

1. Build the image locally:
    ```bash
    docker build --no-cache -t flowise .
    ```
2. Run image:

    ```bash
    docker run -d --name flowise -p 3000:3000 flowise
    ```

3. Stop image:
    ```bash
    docker stop flowise
    ```

## 👨‍💻 Developers

Flowise has 3 different modules in a single mono repository.

@@ -39,7 +75,7 @@ Flowise has 3 different modules in a single mono repository.

### Prerequisite

- Install Yarn
- Install [Yarn v1](https://classic.yarnpkg.com/en/docs/install)
    ```bash
    npm i -g yarn
    ```

@@ -80,31 +116,57 @@ Flowise has 3 different modules in a single mono repository.

6. For development build:

    ```bash
    yarn dev
    ```
    - Create `.env` file and specify the `PORT` (refer to `.env.example`) in `packages/ui`
    - Create `.env` file and specify the `PORT` (refer to `.env.example`) in `packages/server`
    - Run

    ```bash
    yarn dev
    ```

Any code changes will reload the app automatically on [http://localhost:8080](http://localhost:8080)

## 🔒 Authentication

To enable app level authentication, add `USERNAME` and `PASSWORD` to the `.env` file in `packages/server`:
To enable app level authentication, add `FLOWISE_USERNAME` and `FLOWISE_PASSWORD` to the `.env` file in `packages/server`:

```
USERNAME=user
PASSWORD=1234
FLOWISE_USERNAME=user
FLOWISE_PASSWORD=1234
```
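With app-level authentication enabled, API clients send the same credentials. A small sketch of building an HTTP Basic `Authorization` header from those values — illustrative only, since the exact scheme Flowise expects may vary by version:

```typescript
// Sketch: build a Basic auth header from FLOWISE_USERNAME / FLOWISE_PASSWORD.
// A tiny ASCII-only base64 encoder is inlined to keep the example self-contained;
// real code would use Buffer.from(...).toString('base64') in Node.
const B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'

function toBase64(input: string): string {
    let out = ''
    for (let i = 0; i < input.length; i += 3) {
        const a = input.charCodeAt(i)
        const b = i + 1 < input.length ? input.charCodeAt(i + 1) : NaN
        const c = i + 2 < input.length ? input.charCodeAt(i + 2) : NaN
        out += B64[a >> 2]
        out += B64[((a & 3) << 4) | (isNaN(b) ? 0 : b >> 4)]
        out += isNaN(b) ? '=' : B64[((b & 15) << 2) | (isNaN(c) ? 0 : c >> 6)]
        out += isNaN(c) ? '=' : B64[c & 63]
    }
    return out
}

function basicAuthHeader(username: string, password: string): string {
    return `Basic ${toBase64(`${username}:${password}`)}`
}

const header = basicAuthHeader('user', '1234')
// e.g. pass as { Authorization: header } on API requests
```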

## 🌱 Env Variables

Flowise supports different environment variables to configure your instance. You can specify the following variables in the `.env` file inside the `packages/server` folder. Read [more](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)

## 📖 Documentation

Coming soon

## 💻 Cloud Hosted

Coming soon
[Flowise Docs](https://docs.flowiseai.com/)

## 🌐 Self Host

### [Railway](https://docs.flowiseai.com/deployment/railway)

[](https://railway.app/template/pn4G8S?referralCode=WVNPD9)

### [Render](https://docs.flowiseai.com/deployment/render)

[](https://docs.flowiseai.com/deployment/render)

### [HuggingFace Spaces](https://docs.flowiseai.com/deployment/hugging-face)

<a href="https://huggingface.co/spaces/FlowiseAI/Flowise"><img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="HuggingFace Spaces"></a>

### [AWS](https://docs.flowiseai.com/deployment/aws)

### [Azure](https://docs.flowiseai.com/deployment/azure)

### [DigitalOcean](https://docs.flowiseai.com/deployment/digital-ocean)

### [GCP](https://docs.flowiseai.com/deployment/gcp)

## 💻 Cloud Hosted

Coming soon

## 🙋 Support

@@ -113,7 +175,14 @@ Feel free to ask any questions, raise problems, and request new features in [dis

## 🙌 Contributing

Thanks go to these awesome contributors

<a href="https://github.com/FlowiseAI/Flowise/graphs/contributors">
<img src="https://contrib.rocks/image?repo=FlowiseAI/Flowise" />
</a>

See [contributing guide](CONTRIBUTING.md). Reach out to us at [Discord](https://discord.gg/jbaHfsRVBW) if you have any questions or issues.
[](https://star-history.com/#FlowiseAI/Flowise&Date)

## 📄 License

@@ -0,0 +1,36 @@
# npm install -g artillery@latest
# artillery run artillery-load-test.yml
# Refer https://www.artillery.io/docs

config:
    target: http://128.128.128.128:3000 # replace with your url
    phases:
        - duration: 1
          arrivalRate: 1
          rampTo: 2
          name: Warm up phase
        - duration: 1
          arrivalRate: 2
          rampTo: 3
          name: Ramp up load
        - duration: 1
          arrivalRate: 3
          name: Sustained peak load
scenarios:
    - flow:
          - loop:
                - post:
                      url: '/api/v1/prediction/chatflow-id' # replace with your chatflowid
                      json:
                          question: 'hello' # replace with your question
            count: 1 # how many requests each user makes

# Users   __
# 3      /
# 2     /
# 1  __/
#    1 2 3
#    Seconds
# Total Users = 2 + 3 + 3 = 8
# Each making 1 HTTP call
# Over a duration of 3 seconds
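Each virtual user in the load test above POSTs a question to the prediction endpoint. A minimal sketch of building that same request, with the base URL and chatflow id as placeholders just like in the config:

```typescript
// Builds the prediction request the load test sends; illustrative only.
// Base URL and chatflow id are placeholders, as in the config above.
function buildPredictionRequest(baseUrl: string, chatflowId: string, question: string) {
    return {
        url: `${baseUrl}/api/v1/prediction/${chatflowId}`,
        init: {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ question })
        }
    }
}

const req = buildPredictionRequest('http://128.128.128.128:3000', 'chatflow-id', 'hello')
// e.g. fetch(req.url, req.init)
```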
@@ -1 +1,26 @@
PORT=3000
PORT=3000
PASSPHRASE=MYPASSPHRASE # Passphrase used to create encryption key
DATABASE_PATH=/root/.flowise
APIKEY_PATH=/root/.flowise
SECRETKEY_PATH=/root/.flowise
LOG_PATH=/root/.flowise/logs

# DATABASE_TYPE=postgres
# DATABASE_PORT=""
# DATABASE_HOST=""
# DATABASE_NAME="flowise"
# DATABASE_USER=""
# DATABASE_PASSWORD=""
# OVERRIDE_DATABASE=true

# FLOWISE_USERNAME=user
# FLOWISE_PASSWORD=1234
# DEBUG=true
# LOG_LEVEL=debug (error | warn | info | verbose | debug)
# TOOL_FUNCTION_BUILTIN_DEP=crypto,fs
# TOOL_FUNCTION_EXTERNAL_DEP=moment,lodash

# LANGCHAIN_TRACING_V2=true
# LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
# LANGCHAIN_API_KEY=your_api_key
# LANGCHAIN_PROJECT=your_project
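The `.env` files above are plain `KEY=value` lines where `#` starts a comment line. A tiny sketch of parsing that format — illustrative only; real deployments would typically use a library such as dotenv:

```typescript
// Minimal .env-style parser: KEY=value lines; a line starting with '#'
// is treated as a comment. Illustrative, not a full dotenv replacement.
function parseEnv(text: string): Record<string, string> {
    const result: Record<string, string> = {}
    for (const line of text.split('\n')) {
        const trimmed = line.trim()
        if (!trimmed || trimmed.startsWith('#')) continue
        const eq = trimmed.indexOf('=')
        if (eq === -1) continue
        result[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim()
    }
    return result
}

const parsed = parseEnv('PORT=3000\n# DEBUG=true\nLOG_PATH=/root/.flowise/logs')
```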
@@ -4,6 +4,14 @@ USER root

RUN apk add --no-cache git
RUN apk add --no-cache python3 py3-pip make g++
# needed for pdfjs-dist
RUN apk add --no-cache build-base cairo-dev pango-dev

# Install Chromium
RUN apk add --no-cache chromium

ENV PUPPETEER_SKIP_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser

# You can install a specific version like: flowise@1.0.0
RUN npm install -g flowise

@@ -0,0 +1,35 @@
# Flowise Docker Hub Image

Starts Flowise from [DockerHub Image](https://hub.docker.com/repository/docker/flowiseai/flowise/general)

## Usage

1. Create `.env` file and specify the `PORT` (refer to `.env.example`)
2. `docker-compose up -d`
3. Open [http://localhost:3000](http://localhost:3000)
4. You can bring the containers down by `docker-compose stop`

## 🔒 Authentication

1. Create `.env` file and specify the `PORT`, `FLOWISE_USERNAME`, and `FLOWISE_PASSWORD` (refer to `.env.example`)
2. Pass `FLOWISE_USERNAME` and `FLOWISE_PASSWORD` to the `docker-compose.yml` file:
    ```
    environment:
        - PORT=${PORT}
        - FLOWISE_USERNAME=${FLOWISE_USERNAME}
        - FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
    ```
3. `docker-compose up -d`
4. Open [http://localhost:3000](http://localhost:3000)
5. You can bring the containers down by `docker-compose stop`

## 🌱 Env Variables

If you would like to persist your data (flows, logs, apikeys, credentials), set these variables in the `.env` file inside the `docker` folder:

- DATABASE_PATH=/root/.flowise
- APIKEY_PATH=/root/.flowise
- LOG_PATH=/root/.flowise/logs
- SECRETKEY_PATH=/root/.flowise

Flowise also supports different environment variables to configure your instance. Read [more](https://docs.flowiseai.com/environment-variables)

@@ -6,6 +6,15 @@ services:
        restart: always
        environment:
            - PORT=${PORT}
            - PASSPHRASE=${PASSPHRASE}
            - FLOWISE_USERNAME=${FLOWISE_USERNAME}
            - FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
            - DEBUG=${DEBUG}
            - DATABASE_PATH=${DATABASE_PATH}
            - APIKEY_PATH=${APIKEY_PATH}
            - SECRETKEY_PATH=${SECRETKEY_PATH}
            - LOG_LEVEL=${LOG_LEVEL}
            - LOG_PATH=${LOG_PATH}
        ports:
            - '${PORT}:${PORT}'
        volumes:

(binary image added, 524 KiB)

@@ -1,6 +1,6 @@
{
    "name": "flowise",
    "version": "1.2.6",
    "version": "1.3.3",
    "private": true,
    "homepage": "https://flowiseai.com",
    "workspaces": [

@@ -28,7 +28,6 @@
        "*.{js,jsx,ts,tsx,json,md}": "eslint --fix"
    },
    "devDependencies": {
        "turbo": "1.7.4",
        "@babel/preset-env": "^7.19.4",
        "@babel/preset-typescript": "7.18.6",
        "@types/express": "^4.17.13",

@@ -48,6 +47,7 @@
        "pretty-quick": "^3.1.3",
        "rimraf": "^3.0.2",
        "run-script-os": "^1.1.6",
        "turbo": "1.7.4",
        "typescript": "^4.8.4"
    },
    "engines": {

@@ -1 +0,0 @@
DEBUG=true

@@ -0,0 +1,19 @@
<!-- markdownlint-disable MD030 -->

# 流式组件

[English](./README.md) | 中文

Flowise 的应用集成。包含节点和凭据。

![Flowise]()

安装:

```bash
npm i flowise-components
```

## 许可证

此存储库中的源代码在[MIT 许可证](https://github.com/FlowiseAI/Flowise/blob/master/LICENSE.md)下提供。

@@ -2,6 +2,8 @@
# Flowise Components

English | [中文](./README-ZH.md)

Apps integration for Flowise. Contains Nodes and Credentials.

![Flowise]()

@@ -12,14 +14,6 @@ Install:
npm i flowise-components
```

## Debug

To view all the logs, create an `.env` file and add:

```
DEBUG=true
```

## License

Source code in this repository is made available under the [MIT License](https://github.com/FlowiseAI/Flowise/blob/master/LICENSE.md).

@@ -0,0 +1,27 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class AirtableApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Airtable API'
        this.name = 'airtableApi'
        this.version = 1.0
        this.description =
            'Refer to <a target="_blank" href="https://support.airtable.com/docs/creating-and-using-api-keys-and-access-tokens">official guide</a> on how to get accessToken on Airtable'
        this.inputs = [
            {
                label: 'Access Token',
                name: 'accessToken',
                type: 'password',
                placeholder: '<AIRTABLE_ACCESS_TOKEN>'
            }
        ]
    }
}

module.exports = { credClass: AirtableApi }
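The credential files in this diff all follow one pattern: a class implementing `INodeCredential` whose constructor fills in metadata and an `inputs` array, exported as `credClass`. A self-contained sketch of that shape — the interfaces below are reconstructed from how the classes use them, not copied from `packages/components/src/Interface`, and `ExampleApi` is a hypothetical credential:

```typescript
// Interfaces inferred from usage in the credential classes above; illustrative only.
interface INodeParams {
    label: string
    name: string
    type: string
    placeholder?: string
    description?: string
}

interface INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]
}

// Hypothetical credential following the same pattern as AirtableApi above.
class ExampleApi implements INodeCredential {
    label = 'Example API'
    name = 'exampleApi'
    version = 1.0
    inputs: INodeParams[] = [
        {
            label: 'Example Api Key',
            name: 'exampleApiKey',
            type: 'password'
        }
    ]
}

const cred = new ExampleApi()
```

The `type: 'password'` input is what the UI uses to mask the secret field; the server-side loader is assumed to read the exported `credClass` and instantiate it.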

@@ -0,0 +1,23 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class AnthropicApi implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'Anthropic API'
        this.name = 'anthropicApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Anthropic Api Key',
                name: 'anthropicApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: AnthropicApi }

@@ -0,0 +1,26 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class ApifyApiCredential implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Apify API'
        this.name = 'apifyApi'
        this.version = 1.0
        this.description =
            'You can find the Apify API token on your <a target="_blank" href="https://console.apify.com/account#/integrations">Apify account</a> page.'
        this.inputs = [
            {
                label: 'Apify API',
                name: 'apifyApiToken',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: ApifyApiCredential }
@@ -0,0 +1,47 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class AzureOpenAIApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Azure OpenAI API'
        this.name = 'azureOpenAIApi'
        this.version = 1.0
        this.description =
            'Refer to <a target="_blank" href="https://azure.microsoft.com/en-us/products/cognitive-services/openai-service">official guide</a> of how to use Azure OpenAI service'
        this.inputs = [
            {
                label: 'Azure OpenAI Api Key',
                name: 'azureOpenAIApiKey',
                type: 'password',
                description: `Refer to <a target="_blank" href="https://learn.microsoft.com/en-us/azure/cognitive-services/openai/quickstart?tabs=command-line&pivots=rest-api#set-up">official guide</a> on how to create API key on Azure OpenAI`
            },
            {
                label: 'Azure OpenAI Api Instance Name',
                name: 'azureOpenAIApiInstanceName',
                type: 'string',
                placeholder: 'YOUR-INSTANCE-NAME'
            },
            {
                label: 'Azure OpenAI Api Deployment Name',
                name: 'azureOpenAIApiDeploymentName',
                type: 'string',
                placeholder: 'YOUR-DEPLOYMENT-NAME'
            },
            {
                label: 'Azure OpenAI Api Version',
                name: 'azureOpenAIApiVersion',
                type: 'string',
                placeholder: '2023-06-01-preview',
                description:
                    'Description of Supported API Versions. Please refer <a target="_blank" href="https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference#chat-completions">examples</a>'
            }
        ]
    }
}

module.exports = { credClass: AzureOpenAIApi }
@@ -0,0 +1,24 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class BraveSearchApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Brave Search API'
        this.name = 'braveSearchApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'BraveSearch Api Key',
                name: 'braveApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: BraveSearchApi }
@@ -0,0 +1,24 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class ChromaApi implements INodeCredential {
    label: string
    name: string
    description: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'Chroma API'
        this.name = 'chromaApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Chroma Api Key',
                name: 'chromaApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: ChromaApi }
@@ -0,0 +1,23 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class CohereApi implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'Cohere API'
        this.name = 'cohereApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Cohere Api Key',
                name: 'cohereApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: CohereApi }
@@ -0,0 +1,33 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class ConfluenceApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Confluence API'
        this.name = 'confluenceApi'
        this.version = 1.0
        this.description =
            'Refer to <a target="_blank" href="https://support.atlassian.com/confluence-cloud/docs/manage-oauth-access-tokens/">official guide</a> on how to get accessToken on Confluence'
        this.inputs = [
            {
                label: 'Access Token',
                name: 'accessToken',
                type: 'password',
                placeholder: '<CONFLUENCE_ACCESS_TOKEN>'
            },
            {
                label: 'Username',
                name: 'username',
                type: 'string',
                placeholder: '<CONFLUENCE_USERNAME>'
            }
        ]
    }
}

module.exports = { credClass: ConfluenceApi }
@@ -0,0 +1,29 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class DynamodbMemoryApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'DynamodbMemory API'
        this.name = 'dynamodbMemoryApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Access Key',
                name: 'accessKey',
                type: 'password'
            },
            {
                label: 'Secret Access Key',
                name: 'secretAccessKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: DynamodbMemoryApi }
@@ -0,0 +1,27 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class FigmaApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Figma API'
        this.name = 'figmaApi'
        this.version = 1.0
        this.description =
            'Refer to <a target="_blank" href="https://www.figma.com/developers/api#access-tokens">official guide</a> on how to get accessToken on Figma'
        this.inputs = [
            {
                label: 'Access Token',
                name: 'accessToken',
                type: 'password',
                placeholder: '<FIGMA_ACCESS_TOKEN>'
            }
        ]
    }
}

module.exports = { credClass: FigmaApi }
@@ -0,0 +1,27 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class GithubApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Github API'
        this.name = 'githubApi'
        this.version = 1.0
        this.description =
            'Refer to <a target="_blank" href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens">official guide</a> on how to get accessToken on Github'
        this.inputs = [
            {
                label: 'Access Token',
                name: 'accessToken',
                type: 'password',
                placeholder: '<GITHUB_ACCESS_TOKEN>'
            }
        ]
    }
}

module.exports = { credClass: GithubApi }
@@ -0,0 +1,55 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class GoogleVertexAuth implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'Google Vertex Auth'
        this.name = 'googleVertexAuth'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Google Application Credential File Path',
                name: 'googleApplicationCredentialFilePath',
                description:
                    'Path to your google application credential json file. You can also use the credential JSON object (either one)',
                placeholder: 'your-path/application_default_credentials.json',
                type: 'string',
                optional: true
            },
            {
                label: 'Google Credential JSON Object',
                name: 'googleApplicationCredential',
                description: 'JSON object of your google application credential. You can also use the file path (either one)',
                placeholder: `{
    "type": ...,
    "project_id": ...,
    "private_key_id": ...,
    "private_key": ...,
    "client_email": ...,
    "client_id": ...,
    "auth_uri": ...,
    "token_uri": ...,
    "auth_provider_x509_cert_url": ...,
    "client_x509_cert_url": ...
}`,
                type: 'string',
                rows: 4,
                optional: true
            },
            {
                label: 'Project ID',
                name: 'projectID',
                description: 'Project ID of GCP. If not provided, it will be read from the credential file',
                type: 'string',
                optional: true,
                additionalParams: true
            }
        ]
    }
}

module.exports = { credClass: GoogleVertexAuth }
@@ -0,0 +1,31 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class GoogleSearchApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Google Custom Search API'
        this.name = 'googleCustomSearchApi'
        this.version = 1.0
        this.description =
            'Please refer to the <a target="_blank" href="https://console.cloud.google.com/apis/credentials">Google Cloud Console</a> for instructions on how to create an API key, and visit the <a target="_blank" href="https://programmablesearchengine.google.com/controlpanel/create">Search Engine Creation page</a> to learn how to generate your Search Engine ID.'
        this.inputs = [
            {
                label: 'Google Custom Search Api Key',
                name: 'googleCustomSearchApiKey',
                type: 'password'
            },
            {
                label: 'Programmable Search Engine ID',
                name: 'googleCustomSearchApiId',
                type: 'string'
            }
        ]
    }
}

module.exports = { credClass: GoogleSearchApi }
@@ -0,0 +1,23 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class HuggingFaceApi implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'HuggingFace API'
        this.name = 'huggingFaceApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'HuggingFace Api Key',
                name: 'huggingFaceApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: HuggingFaceApi }
@@ -0,0 +1,31 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class MotorheadMemoryApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Motorhead Memory API'
        this.name = 'motorheadMemoryApi'
        this.version = 1.0
        this.description =
            'Refer to <a target="_blank" href="https://docs.getmetal.io/misc-get-keys">official guide</a> on how to create API key and Client ID on Motorhead Memory'
        this.inputs = [
            {
                label: 'Client ID',
                name: 'clientId',
                type: 'string'
            },
            {
                label: 'API Key',
                name: 'apiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: MotorheadMemoryApi }
@@ -0,0 +1,26 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class NotionApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Notion API'
        this.name = 'notionApi'
        this.version = 1.0
        this.description =
            'You can find integration token <a target="_blank" href="https://developers.notion.com/docs/create-a-notion-integration#step-1-create-an-integration">here</a>'
        this.inputs = [
            {
                label: 'Notion Integration Token',
                name: 'notionIntegrationToken',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: NotionApi }
@@ -0,0 +1,23 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class OpenAIApi implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'OpenAI API'
        this.name = 'openAIApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'OpenAI Api Key',
                name: 'openAIApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: OpenAIApi }
@@ -0,0 +1,25 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class OpenAPIAuth implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'OpenAPI Auth Token'
        this.name = 'openAPIAuth'
        this.version = 1.0
        this.inputs = [
            {
                label: 'OpenAPI Token',
                name: 'openAPIToken',
                type: 'password',
                description: 'Auth Token. For example: Bearer <TOKEN>'
            }
        ]
    }
}

module.exports = { credClass: OpenAPIAuth }
@@ -0,0 +1,29 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class PineconeApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Pinecone API'
        this.name = 'pineconeApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Pinecone Api Key',
                name: 'pineconeApiKey',
                type: 'password'
            },
            {
                label: 'Pinecone Environment',
                name: 'pineconeEnv',
                type: 'string'
            }
        ]
    }
}

module.exports = { credClass: PineconeApi }
@@ -0,0 +1,24 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class QdrantApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Qdrant API'
        this.name = 'qdrantApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Qdrant API Key',
                name: 'qdrantApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: QdrantApi }
@@ -0,0 +1,23 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class ReplicateApi implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'Replicate API'
        this.name = 'replicateApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Replicate Api Key',
                name: 'replicateApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: ReplicateApi }
@@ -0,0 +1,24 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class SerpApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Serp API'
        this.name = 'serpApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Serp Api Key',
                name: 'serpApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: SerpApi }
@@ -0,0 +1,24 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class SerperApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Serper API'
        this.name = 'serperApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Serper Api Key',
                name: 'serperApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: SerperApi }
@@ -0,0 +1,31 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class SingleStoreApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'SingleStore API'
        this.name = 'singleStoreApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'User',
                name: 'user',
                type: 'string',
                placeholder: '<SINGLESTORE_USERNAME>'
            },
            {
                label: 'Password',
                name: 'password',
                type: 'password',
                placeholder: '<SINGLESTORE_PASSWORD>'
            }
        ]
    }
}

module.exports = { credClass: SingleStoreApi }
@@ -0,0 +1,24 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class SupabaseApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Supabase API'
        this.name = 'supabaseApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Supabase API Key',
                name: 'supabaseApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: SupabaseApi }
@@ -0,0 +1,34 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class VectaraAPI implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Vectara API'
        this.name = 'vectaraApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Vectara Customer ID',
                name: 'customerID',
                type: 'string'
            },
            {
                label: 'Vectara Corpus ID',
                name: 'corpusID',
                type: 'string'
            },
            {
                label: 'Vectara API Key',
                name: 'apiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: VectaraAPI }
@@ -0,0 +1,24 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class WeaviateApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Weaviate API'
        this.name = 'weaviateApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Weaviate API Key',
                name: 'weaviateApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: WeaviateApi }
@@ -0,0 +1,24 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class ZapierNLAApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Zapier NLA API'
        this.name = 'zapierNLAApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Zapier NLA Api Key',
                name: 'zapierNLAApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: ZapierNLAApi }
@@ -0,0 +1,26 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class ZepMemoryApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Zep Memory API'
        this.name = 'zepMemoryApi'
        this.version = 1.0
        this.description =
            'Refer to <a target="_blank" href="https://docs.getzep.com/deployment/auth/">official guide</a> on how to create API key on Zep'
        this.inputs = [
            {
                label: 'API Key',
                name: 'apiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: ZepMemoryApi }
@@ -0,0 +1,232 @@
import { ICommonObject, INode, INodeData, INodeParams, PromptTemplate } from '../../../src/Interface'
import { AgentExecutor } from 'langchain/agents'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { LoadPyodide, finalSystemPrompt, systemPrompt } from './core'
import { LLMChain } from 'langchain/chains'
import { BaseLanguageModel } from 'langchain/base_language'
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
import axios from 'axios'

class Airtable_Agents implements INode {
    label: string
    name: string
    version: number
    description: string
    type: string
    icon: string
    category: string
    baseClasses: string[]
    credential: INodeParams
    inputs: INodeParams[]

    constructor() {
        this.label = 'Airtable Agent'
        this.name = 'airtableAgent'
        this.version = 1.0
        this.type = 'AgentExecutor'
        this.category = 'Agents'
        this.icon = 'airtable.svg'
        this.description = 'Agent used to answer queries on Airtable table'
        this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
        this.credential = {
            label: 'Connect Credential',
            name: 'credential',
            type: 'credential',
            credentialNames: ['airtableApi']
        }
        this.inputs = [
            {
                label: 'Language Model',
                name: 'model',
                type: 'BaseLanguageModel'
            },
            {
                label: 'Base Id',
                name: 'baseId',
                type: 'string',
                placeholder: 'app11RobdGoX0YNsC',
                description:
                    'If your table URL looks like: https://airtable.com/app11RobdGoX0YNsC/tblJdmvbrgizbYICO/viw9UrP77Id0CE4ee, app11RobdGoX0YNsC is the base id'
            },
            {
                label: 'Table Id',
                name: 'tableId',
                type: 'string',
                placeholder: 'tblJdmvbrgizbYICO',
                description:
                    'If your table URL looks like: https://airtable.com/app11RobdGoX0YNsC/tblJdmvbrgizbYICO/viw9UrP77Id0CE4ee, tblJdmvbrgizbYICO is the table id'
            },
            {
                label: 'Return All',
                name: 'returnAll',
                type: 'boolean',
                default: true,
                additionalParams: true,
                description: 'If all results should be returned or only up to a given limit'
            },
            {
                label: 'Limit',
                name: 'limit',
                type: 'number',
                default: 100,
                additionalParams: true,
                description: 'Number of results to return'
            }
        ]
    }

    async init(): Promise<any> {
        // Not used
        return undefined
    }

    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
        const model = nodeData.inputs?.model as BaseLanguageModel
        const baseId = nodeData.inputs?.baseId as string
        const tableId = nodeData.inputs?.tableId as string
        const returnAll = nodeData.inputs?.returnAll as boolean
        const limit = nodeData.inputs?.limit as string

        const credentialData = await getCredentialData(nodeData.credential ?? '', options)
        const accessToken = getCredentialParam('accessToken', credentialData, nodeData)

        let airtableData: ICommonObject[] = []

        if (returnAll) {
            airtableData = await loadAll(baseId, tableId, accessToken)
        } else {
            airtableData = await loadLimit(limit ? parseInt(limit, 10) : 100, baseId, tableId, accessToken)
        }

        let base64String = Buffer.from(JSON.stringify(airtableData)).toString('base64')

        const loggerHandler = new ConsoleCallbackHandler(options.logger)
        const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)

        const pyodide = await LoadPyodide()

        // First load the records and get the dataframe dictionary of column types
        // For example using titanic.csv: {'PassengerId': 'int64', 'Survived': 'int64', 'Pclass': 'int64', 'Name': 'object', 'Sex': 'object', 'Age': 'float64', 'SibSp': 'int64', 'Parch': 'int64', 'Ticket': 'object', 'Fare': 'float64', 'Cabin': 'object', 'Embarked': 'object'}
        let dataframeColDict = ''
        try {
            const code = `import pandas as pd
import base64
import json

base64_string = "${base64String}"

decoded_data = base64.b64decode(base64_string)

json_data = json.loads(decoded_data)

df = pd.DataFrame(json_data)
my_dict = df.dtypes.astype(str).to_dict()
print(my_dict)
json.dumps(my_dict)`
            dataframeColDict = await pyodide.runPythonAsync(code)
        } catch (error) {
            throw new Error(error)
        }

        // Then tell GPT to come out with ONLY python code
        // For example: len(df), df[df['SibSp'] > 3]['PassengerId'].count()
        let pythonCode = ''
        if (dataframeColDict) {
            const chain = new LLMChain({
                llm: model,
                prompt: PromptTemplate.fromTemplate(systemPrompt),
                verbose: process.env.DEBUG === 'true' ? true : false
            })
            const inputs = {
                dict: dataframeColDict,
                question: input
            }
            const res = await chain.call(inputs, [loggerHandler])
            pythonCode = res?.text
        }

        // Then run the code using Pyodide
        let finalResult = ''
        if (pythonCode) {
            try {
                const code = `import pandas as pd\n${pythonCode}`
                finalResult = await pyodide.runPythonAsync(code)
            } catch (error) {
                throw new Error(`Sorry, I'm unable to find answer for question: "${input}" using the following code: "${pythonCode}"`)
            }
        }

        // Finally, return a complete answer
        if (finalResult) {
            const chain = new LLMChain({
                llm: model,
                prompt: PromptTemplate.fromTemplate(finalSystemPrompt),
                verbose: process.env.DEBUG === 'true' ? true : false
            })
            const inputs = {
                question: input,
                answer: finalResult
            }

            if (options.socketIO && options.socketIOClientId) {
                const result = await chain.call(inputs, [loggerHandler, handler])
                return result?.text
            } else {
                const result = await chain.call(inputs, [loggerHandler])
                return result?.text
            }
        }

        return pythonCode
    }
}

interface AirtableLoaderResponse {
    records: AirtableLoaderPage[]
    offset?: string
}

interface AirtableLoaderPage {
    id: string
    createdTime: string
    fields: ICommonObject
}

const fetchAirtableData = async (url: string, params: ICommonObject, accessToken: string): Promise<AirtableLoaderResponse> => {
    try {
        const headers = {
            Authorization: `Bearer ${accessToken}`,
            'Content-Type': 'application/json',
            Accept: 'application/json'
        }
        const response = await axios.get(url, { params, headers })
        return response.data
    } catch (error) {
        throw new Error(`Failed to fetch ${url} from Airtable: ${error}`)
    }
}

const loadAll = async (baseId: string, tableId: string, accessToken: string): Promise<ICommonObject[]> => {
    const params: ICommonObject = { pageSize: 100 }
    let data: AirtableLoaderResponse
    const returnPages: AirtableLoaderPage[] = []

    do {
        data = await fetchAirtableData(`https://api.airtable.com/v0/${baseId}/${tableId}`, params, accessToken)
        returnPages.push(...data.records)
        params.offset = data.offset
    } while (data.offset !== undefined)

    // Map over the accumulated pages rather than the final response,
    // otherwise only the last page of records would be returned
    return returnPages.map((page) => page.fields)
}

const loadLimit = async (limit: number, baseId: string, tableId: string, accessToken: string): Promise<ICommonObject[]> => {
    const params = { maxRecords: limit }
    const data = await fetchAirtableData(`https://api.airtable.com/v0/${baseId}/${tableId}`, params, accessToken)
    if (data.records.length === 0) {
        return []
    }
    return data.records.map((page) => page.fields)
}

module.exports = { nodeClass: Airtable_Agents }
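`loadAll` pages through the Airtable REST API by threading the response's `offset` cursor into the next request's params until the API stops returning one, accumulating records as it goes. A minimal synchronous sketch of that cursor loop, with the fetcher stubbed so it runs offline (the sample rows are hypothetical):

```typescript
type Page = { records: { fields: Record<string, any> }[]; offset?: string }

// Follow Airtable-style offset cursors until the API stops returning one.
function loadAllPages(fetchPage: (params: Record<string, any>) => Page): Record<string, any>[] {
    const params: Record<string, any> = { pageSize: 100 }
    const collected: { fields: Record<string, any> }[] = []
    let page: Page
    do {
        page = fetchPage(params)
        collected.push(...page.records)
        params.offset = page.offset
    } while (page.offset !== undefined)
    // map over everything collected, not just the last response
    return collected.map((r) => r.fields)
}

// stubbed responses standing in for api.airtable.com
const pages: Page[] = [
    { records: [{ fields: { id: 1 } }, { fields: { id: 2 } }], offset: 'page2' },
    { records: [{ fields: { id: 3 } }] }
]
let call = 0
const rows = loadAllPages(() => pages[call++])
console.log(rows) // → [ { id: 1 }, { id: 2 }, { id: 3 } ]
```

The real helper is async and authenticated; only the cursor-threading logic is shown here.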
@@ -0,0 +1,9 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="256px" height="215px" viewBox="0 0 256 215" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" preserveAspectRatio="xMidYMid">
    <g>
        <path d="M114.25873,2.70101695 L18.8604023,42.1756384 C13.5552723,44.3711638 13.6102328,51.9065311 18.9486282,54.0225085 L114.746142,92.0117514 C123.163769,95.3498757 132.537419,95.3498757 140.9536,92.0117514 L236.75256,54.0225085 C242.08951,51.9065311 242.145916,44.3711638 236.83934,42.1756384 L141.442459,2.70101695 C132.738459,-0.900338983 122.961284,-0.900338983 114.25873,2.70101695" fill="#FFBF00"></path>
        <path d="M136.349071,112.756863 L136.349071,207.659101 C136.349071,212.173089 140.900664,215.263892 145.096461,213.600615 L251.844122,172.166219 C254.281184,171.200072 255.879376,168.845451 255.879376,166.224705 L255.879376,71.3224678 C255.879376,66.8084791 251.327783,63.7176768 247.131986,65.3809537 L140.384325,106.815349 C137.94871,107.781496 136.349071,110.136118 136.349071,112.756863" fill="#26B5F8"></path>
        <path d="M111.422771,117.65355 L79.742409,132.949912 L76.5257763,134.504714 L9.65047684,166.548104 C5.4112904,168.593211 0.000578531073,165.503855 0.000578531073,160.794612 L0.000578531073,71.7210757 C0.000578531073,70.0173017 0.874160452,68.5463864 2.04568588,67.4384994 C2.53454463,66.9481944 3.08848814,66.5446689 3.66412655,66.2250305 C5.26231864,65.2661153 7.54173107,65.0101153 9.47981017,65.7766689 L110.890522,105.957098 C116.045234,108.002206 116.450206,115.225166 111.422771,117.65355" fill="#ED3049"></path>
        <path d="M111.422771,117.65355 L79.742409,132.949912 L2.04568588,67.4384994 C2.53454463,66.9481944 3.08848814,66.5446689 3.66412655,66.2250305 C5.26231864,65.2661153 7.54173107,65.0101153 9.47981017,65.7766689 L110.890522,105.957098 C116.045234,108.002206 116.450206,115.225166 111.422771,117.65355" fill-opacity="0.25" fill="#000000"></path>
    </g>
</svg>

After Width: | Height: | Size: 1.9 KiB
@ -0,0 +1,29 @@
|
|||
import type { PyodideInterface } from 'pyodide'
|
||||
import * as path from 'path'
|
||||
import { getUserHome } from '../../../src/utils'
|
||||
|
||||
let pyodideInstance: PyodideInterface | undefined
|
||||
|
||||
export async function LoadPyodide(): Promise<PyodideInterface> {
|
||||
if (pyodideInstance === undefined) {
|
||||
const { loadPyodide } = await import('pyodide')
|
||||
const obj: any = { packageCacheDir: path.join(getUserHome(), '.flowise', 'pyodideCacheDir') }
|
||||
pyodideInstance = await loadPyodide(obj)
|
||||
await pyodideInstance.loadPackage(['pandas', 'numpy'])
|
||||
}
|
||||
|
||||
return pyodideInstance
|
||||
}
|
||||
|
||||
export const systemPrompt = `You are working with a pandas dataframe in Python. The name of the dataframe is df.
|
||||
|
||||
The columns and data types of the dataframe are given below as a Python dictionary with keys showing column names and values showing the data types.
|
||||
{dict}
|
||||
|
||||
I will ask a question, and you will output the Python code using the pandas dataframe to answer my question. Do not provide any explanations. Do not respond with anything except the output of the code.
|
||||
|
||||
Question: {question}
|
||||
Output Code:`
|
||||
|
||||
export const finalSystemPrompt = `You are given the question: {question}. You have an answer to the question: {answer}. Rephrase the answer into a standalone answer.
|
||||
Standalone Answer:`
|
||||
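The `LoadPyodide` function above follows a lazy async-singleton pattern: the expensive runtime is created on the first call and reused afterwards. A minimal sketch of the pattern (standalone illustration, not Flowise code; `createResource` stands in for the `loadPyodide(...)` call):

```typescript
// Lazily created shared resource, mirroring the `pyodideInstance` cache above.
let cached: string | undefined

async function createResource(): Promise<string> {
    // stand-in for the expensive `loadPyodide(...)` + `loadPackage(...)` work
    return 'pyodide-like resource'
}

async function getResource(): Promise<string> {
    if (cached === undefined) {
        // only the first caller pays the initialization cost
        cached = await createResource()
    }
    return cached
}
```

One caveat of caching the resolved value, as `LoadPyodide` does: two concurrent first calls can both trigger the load before either finishes. Caching the promise itself instead of the resolved instance would close that race.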
|
|
@ -3,10 +3,12 @@ import { BaseChatModel } from 'langchain/chat_models/base'
|
|||
import { AutoGPT } from 'langchain/experimental/autogpt'
|
||||
import { Tool } from 'langchain/tools'
|
||||
import { VectorStoreRetriever } from 'langchain/vectorstores/base'
|
||||
import { flatten } from 'lodash'
|
||||
|
||||
class AutoGPT_Agents implements INode {
|
||||
label: string
|
||||
name: string
|
||||
version: number
|
||||
description: string
|
||||
type: string
|
||||
icon: string
|
||||
|
|
@ -17,6 +19,7 @@ class AutoGPT_Agents implements INode {
|
|||
constructor() {
|
||||
this.label = 'AutoGPT'
|
||||
this.name = 'autoGPT'
|
||||
this.version = 1.0
|
||||
this.type = 'AutoGPT'
|
||||
this.category = 'Agents'
|
||||
this.icon = 'autogpt.png'
|
||||
|
|
@ -67,7 +70,7 @@ class AutoGPT_Agents implements INode {
|
|||
const model = nodeData.inputs?.model as BaseChatModel
|
||||
const vectorStoreRetriever = nodeData.inputs?.vectorStoreRetriever as VectorStoreRetriever
|
||||
let tools = nodeData.inputs?.tools as Tool[]
|
||||
tools = tools.flat()
|
||||
tools = flatten(tools)
|
||||
const aiName = (nodeData.inputs?.aiName as string) || 'AutoGPT'
|
||||
const aiRole = (nodeData.inputs?.aiRole as string) || 'Assistant'
|
||||
const maxLoop = nodeData.inputs?.maxLoop as string
|
||||
|
|
@ -89,7 +92,6 @@ class AutoGPT_Agents implements INode {
|
|||
const res = await executor.run([input])
|
||||
return res || 'I have completed all my tasks.'
|
||||
} catch (e) {
|
||||
console.error(e)
|
||||
throw new Error(e)
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -6,6 +6,7 @@ import { VectorStore } from 'langchain/vectorstores'
|
|||
class BabyAGI_Agents implements INode {
|
||||
label: string
|
||||
name: string
|
||||
version: number
|
||||
description: string
|
||||
type: string
|
||||
icon: string
|
||||
|
|
@ -16,6 +17,7 @@ class BabyAGI_Agents implements INode {
|
|||
constructor() {
|
||||
this.label = 'BabyAGI'
|
||||
this.name = 'babyAGI'
|
||||
this.version = 1.0
|
||||
this.type = 'BabyAGI'
|
||||
this.category = 'Agents'
|
||||
this.icon = 'babyagi.jpg'
|
||||
|
|
@ -45,8 +47,9 @@ class BabyAGI_Agents implements INode {
|
|||
const model = nodeData.inputs?.model as BaseChatModel
|
||||
const vectorStore = nodeData.inputs?.vectorStore as VectorStore
|
||||
const taskLoop = nodeData.inputs?.taskLoop as string
|
||||
const k = (vectorStore as any)?.k ?? 4
|
||||
|
||||
const babyAgi = BabyAGI.fromLLM(model, vectorStore, parseInt(taskLoop, 10))
|
||||
const babyAgi = BabyAGI.fromLLM(model, vectorStore, parseInt(taskLoop, 10), k)
|
||||
return babyAgi
|
||||
}
|
||||
|
||||
|
|
|
|||
|
|
@ -154,18 +154,22 @@ export class BabyAGI {
|
|||
|
||||
maxIterations = 3
|
||||
|
||||
topK = 4
|
||||
|
||||
constructor(
|
||||
taskCreationChain: TaskCreationChain,
|
||||
taskPrioritizationChain: TaskPrioritizationChain,
|
||||
executionChain: ExecutionChain,
|
||||
vectorStore: VectorStore,
|
||||
maxIterations: number
|
||||
maxIterations: number,
|
||||
topK: number
|
||||
) {
|
||||
this.taskCreationChain = taskCreationChain
|
||||
this.taskPrioritizationChain = taskPrioritizationChain
|
||||
this.executionChain = executionChain
|
||||
this.vectorStore = vectorStore
|
||||
this.maxIterations = maxIterations
|
||||
this.topK = topK
|
||||
}
|
||||
|
||||
addTask(task: Task) {
|
||||
|
|
@ -219,7 +223,7 @@ export class BabyAGI {
|
|||
this.printNextTask(task)
|
||||
|
||||
// Step 2: Execute the task
|
||||
const result = await executeTask(this.vectorStore, this.executionChain, objective, task.task_name)
|
||||
const result = await executeTask(this.vectorStore, this.executionChain, objective, task.task_name, this.topK)
|
||||
const thisTaskId = task.task_id
|
||||
finalResult = result
|
||||
this.printTaskResult(result)
|
||||
|
|
@ -257,10 +261,10 @@ export class BabyAGI {
|
|||
return finalResult
|
||||
}
|
||||
|
||||
static fromLLM(llm: BaseChatModel, vectorstore: VectorStore, maxIterations = 3): BabyAGI {
|
||||
static fromLLM(llm: BaseChatModel, vectorstore: VectorStore, maxIterations = 3, topK = 4): BabyAGI {
|
||||
const taskCreationChain = TaskCreationChain.from_llm(llm)
|
||||
const taskPrioritizationChain = TaskPrioritizationChain.from_llm(llm)
|
||||
const executionChain = ExecutionChain.from_llm(llm)
|
||||
return new BabyAGI(taskCreationChain, taskPrioritizationChain, executionChain, vectorstore, maxIterations)
|
||||
return new BabyAGI(taskCreationChain, taskPrioritizationChain, executionChain, vectorstore, maxIterations, topK)
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@ -0,0 +1,164 @@
|
|||
import { ICommonObject, INode, INodeData, INodeParams, PromptTemplate } from '../../../src/Interface'
|
||||
import { AgentExecutor } from 'langchain/agents'
|
||||
import { getBaseClasses } from '../../../src/utils'
|
||||
import { LoadPyodide, finalSystemPrompt, systemPrompt } from './core'
|
||||
import { LLMChain } from 'langchain/chains'
|
||||
import { BaseLanguageModel } from 'langchain/base_language'
|
||||
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
|
||||
|
||||
class CSV_Agents implements INode {
|
||||
label: string
|
||||
name: string
|
||||
version: number
|
||||
description: string
|
||||
type: string
|
||||
icon: string
|
||||
category: string
|
||||
baseClasses: string[]
|
||||
inputs: INodeParams[]
|
||||
|
||||
constructor() {
|
||||
this.label = 'CSV Agent'
|
||||
this.name = 'csvAgent'
|
||||
this.version = 1.0
|
||||
this.type = 'AgentExecutor'
|
||||
this.category = 'Agents'
|
||||
this.icon = 'csvagent.png'
|
||||
this.description = 'Agent used to answer queries on CSV data'
|
||||
this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
|
||||
this.inputs = [
|
||||
{
|
||||
label: 'Csv File',
|
||||
name: 'csvFile',
|
||||
type: 'file',
|
||||
fileType: '.csv'
|
||||
},
|
||||
{
|
||||
label: 'Language Model',
|
||||
name: 'model',
|
||||
type: 'BaseLanguageModel'
|
||||
},
|
||||
{
|
||||
label: 'System Message',
|
||||
name: 'systemMessagePrompt',
|
||||
type: 'string',
|
||||
rows: 4,
|
||||
additionalParams: true,
|
||||
optional: true,
|
||||
placeholder:
|
||||
'I want you to act as a document that I am having a conversation with. Your name is "AI Assistant". You will provide me with answers from the given info. If the answer is not included, say exactly "Hmm, I am not sure." and stop after that. Refuse to answer any question not about the info. Never break character.'
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
async init(): Promise<any> {
|
||||
// Not used
|
||||
return undefined
|
||||
}
|
||||
|
||||
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
|
||||
const csvFileBase64 = nodeData.inputs?.csvFile as string
|
||||
const model = nodeData.inputs?.model as BaseLanguageModel
|
||||
const systemMessagePrompt = nodeData.inputs?.systemMessagePrompt as string
|
||||
|
||||
const loggerHandler = new ConsoleCallbackHandler(options.logger)
|
||||
const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
|
||||
|
||||
let files: string[] = []
|
||||
|
||||
if (csvFileBase64.startsWith('[') && csvFileBase64.endsWith(']')) {
|
||||
files = JSON.parse(csvFileBase64)
|
||||
} else {
|
||||
files = [csvFileBase64]
|
||||
}
|
||||
|
||||
let base64String = ''
|
||||
|
||||
for (const file of files) {
|
||||
const splitDataURI = file.split(',')
|
||||
splitDataURI.pop()
|
||||
base64String = splitDataURI.pop() ?? ''
|
||||
}
|
||||
|
||||
const pyodide = await LoadPyodide()
|
||||
|
||||
// First load the csv file and get the dataframe dictionary of column types
|
||||
// For example using titanic.csv: {'PassengerId': 'int64', 'Survived': 'int64', 'Pclass': 'int64', 'Name': 'object', 'Sex': 'object', 'Age': 'float64', 'SibSp': 'int64', 'Parch': 'int64', 'Ticket': 'object', 'Fare': 'float64', 'Cabin': 'object', 'Embarked': 'object'}
|
||||
let dataframeColDict = ''
|
||||
try {
|
||||
const code = `import pandas as pd
|
||||
import base64
|
||||
from io import StringIO
|
||||
import json
|
||||
|
||||
base64_string = "${base64String}"
|
||||
|
||||
decoded_data = base64.b64decode(base64_string)
|
||||
|
||||
csv_data = StringIO(decoded_data.decode('utf-8'))
|
||||
|
||||
df = pd.read_csv(csv_data)
|
||||
my_dict = df.dtypes.astype(str).to_dict()
|
||||
print(my_dict)
|
||||
json.dumps(my_dict)`
|
||||
dataframeColDict = await pyodide.runPythonAsync(code)
|
||||
} catch (error) {
|
||||
throw new Error(error)
|
||||
}
|
||||
|
||||
// Then ask the LLM to respond with ONLY Python code
|
||||
// For example: len(df), df[df['SibSp'] > 3]['PassengerId'].count()
|
||||
let pythonCode = ''
|
||||
if (dataframeColDict) {
|
||||
const chain = new LLMChain({
|
||||
llm: model,
|
||||
prompt: PromptTemplate.fromTemplate(systemPrompt),
|
||||
verbose: process.env.DEBUG === 'true' ? true : false
|
||||
})
|
||||
const inputs = {
|
||||
dict: dataframeColDict,
|
||||
question: input
|
||||
}
|
||||
const res = await chain.call(inputs, [loggerHandler])
|
||||
pythonCode = res?.text
|
||||
}
|
||||
|
||||
// Then run the code using Pyodide
|
||||
let finalResult = ''
|
||||
if (pythonCode) {
|
||||
try {
|
||||
const code = `import pandas as pd\n${pythonCode}`
|
||||
finalResult = await pyodide.runPythonAsync(code)
|
||||
} catch (error) {
|
||||
throw new Error(`Sorry, I'm unable to find an answer for the question: "${input}" using the following code: "${pythonCode}"`)
|
||||
}
|
||||
}
|
||||
|
||||
// Finally, return a complete answer
|
||||
if (finalResult) {
|
||||
const chain = new LLMChain({
|
||||
llm: model,
|
||||
prompt: PromptTemplate.fromTemplate(
|
||||
systemMessagePrompt ? `${systemMessagePrompt}\n${finalSystemPrompt}` : finalSystemPrompt
|
||||
),
|
||||
verbose: process.env.DEBUG === 'true' ? true : false
|
||||
})
|
||||
const inputs = {
|
||||
question: input,
|
||||
answer: finalResult
|
||||
}
|
||||
|
||||
if (options.socketIO && options.socketIOClientId) {
|
||||
const result = await chain.call(inputs, [loggerHandler, handler])
|
||||
return result?.text
|
||||
} else {
|
||||
const result = await chain.call(inputs, [loggerHandler])
|
||||
return result?.text
|
||||
}
|
||||
}
|
||||
|
||||
return pythonCode
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = { nodeClass: CSV_Agents }
|
||||
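The double `pop()` in the CSV agent's `run` method above makes more sense given how the upload appears to be encoded. A hedged sketch of that extraction, assuming Flowise stores uploads as `data:<mime>;base64,<payload>,filename:<name>` (the assumed trailing filename segment is why the code pops twice):

```typescript
// Isolate the base64 payload from an assumed Flowise-style data URI:
// "data:<mime>;base64,<payload>,filename:<name>"
function extractBase64(dataUri: string): string {
    const parts = dataUri.split(',')
    parts.pop()              // drop the assumed trailing ",filename:..." segment
    return parts.pop() ?? '' // the base64 payload itself
}

const payload = Buffer.from('a,b\n1,2\n').toString('base64')
const sample = `data:text/csv;base64,${payload},filename:demo.csv`
console.log(extractBase64(sample) === payload) // true
```

Since the base64 alphabet contains no commas, splitting on `,` cleanly separates the payload from the surrounding segments.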
|
|
@ -0,0 +1,29 @@
|
|||
import type { PyodideInterface } from 'pyodide'
|
||||
import * as path from 'path'
|
||||
import { getUserHome } from '../../../src/utils'
|
||||
|
||||
let pyodideInstance: PyodideInterface | undefined
|
||||
|
||||
export async function LoadPyodide(): Promise<PyodideInterface> {
|
||||
if (pyodideInstance === undefined) {
|
||||
const { loadPyodide } = await import('pyodide')
|
||||
const obj: any = { packageCacheDir: path.join(getUserHome(), '.flowise', 'pyodideCacheDir') }
|
||||
pyodideInstance = await loadPyodide(obj)
|
||||
await pyodideInstance.loadPackage(['pandas', 'numpy'])
|
||||
}
|
||||
|
||||
return pyodideInstance
|
||||
}
|
||||
|
||||
export const systemPrompt = `You are working with a pandas dataframe in Python. The name of the dataframe is df.
|
||||
|
||||
The columns and data types of the dataframe are given below as a Python dictionary with keys showing column names and values showing the data types.
|
||||
{dict}
|
||||
|
||||
I will ask a question, and you will output the Python code using the pandas dataframe to answer my question. Do not provide any explanations. Do not respond with anything except the output of the code.
|
||||
|
||||
Question: {question}
|
||||
Output Code:`
|
||||
|
||||
export const finalSystemPrompt = `You are given the question: {question}. You have an answer to the question: {answer}. Rephrase the answer into a standalone answer.
|
||||
Standalone Answer:`
|
||||
|
After Width: | Height: | Size: 2.0 KiB |
|
|
@ -1,14 +1,23 @@
|
|||
import { ICommonObject, IMessage, INode, INodeData, INodeParams } from '../../../src/Interface'
|
||||
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
|
||||
import { initializeAgentExecutorWithOptions, AgentExecutor, InitializeAgentExecutorOptions } from 'langchain/agents'
|
||||
import { Tool } from 'langchain/tools'
|
||||
import { BaseChatMemory, ChatMessageHistory } from 'langchain/memory'
|
||||
import { getBaseClasses } from '../../../src/utils'
|
||||
import { AIChatMessage, HumanChatMessage } from 'langchain/schema'
|
||||
import { BaseChatMemory } from 'langchain/memory'
|
||||
import { getBaseClasses, mapChatHistory } from '../../../src/utils'
|
||||
import { BaseLanguageModel } from 'langchain/base_language'
|
||||
import { flatten } from 'lodash'
|
||||
|
||||
const DEFAULT_PREFIX = `Assistant is a large language model trained by OpenAI.
|
||||
|
||||
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
|
||||
|
||||
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
|
||||
|
||||
Overall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.`
|
||||
|
||||
class ConversationalAgent_Agents implements INode {
|
||||
label: string
|
||||
name: string
|
||||
version: number
|
||||
description: string
|
||||
type: string
|
||||
icon: string
|
||||
|
|
@ -19,6 +28,7 @@ class ConversationalAgent_Agents implements INode {
|
|||
constructor() {
|
||||
this.label = 'Conversational Agent'
|
||||
this.name = 'conversationalAgent'
|
||||
this.version = 1.0
|
||||
this.type = 'AgentExecutor'
|
||||
this.category = 'Agents'
|
||||
this.icon = 'agent.svg'
|
||||
|
|
@ -46,14 +56,7 @@ class ConversationalAgent_Agents implements INode {
|
|||
name: 'systemMessage',
|
||||
type: 'string',
|
||||
rows: 4,
|
||||
optional: true,
|
||||
additionalParams: true
|
||||
},
|
||||
{
|
||||
label: 'Human Message',
|
||||
name: 'humanMessage',
|
||||
type: 'string',
|
||||
rows: 4,
|
||||
default: DEFAULT_PREFIX,
|
||||
optional: true,
|
||||
additionalParams: true
|
||||
}
|
||||
|
|
@ -63,9 +66,8 @@ class ConversationalAgent_Agents implements INode {
|
|||
async init(nodeData: INodeData): Promise<any> {
|
||||
const model = nodeData.inputs?.model as BaseLanguageModel
|
||||
let tools = nodeData.inputs?.tools as Tool[]
|
||||
tools = tools.flat()
|
||||
tools = flatten(tools)
|
||||
const memory = nodeData.inputs?.memory as BaseChatMemory
|
||||
const humanMessage = nodeData.inputs?.humanMessage as string
|
||||
const systemMessage = nodeData.inputs?.systemMessage as string
|
||||
|
||||
const obj: InitializeAgentExecutorOptions = {
|
||||
|
|
@ -74,9 +76,6 @@ class ConversationalAgent_Agents implements INode {
|
|||
}
|
||||
|
||||
const agentArgs: any = {}
|
||||
if (humanMessage) {
|
||||
agentArgs.humanMessage = humanMessage
|
||||
}
|
||||
if (systemMessage) {
|
||||
agentArgs.systemMessage = systemMessage
|
||||
}
|
||||
|
|
@ -93,19 +92,10 @@ class ConversationalAgent_Agents implements INode {
|
|||
const memory = nodeData.inputs?.memory as BaseChatMemory
|
||||
|
||||
if (options && options.chatHistory) {
|
||||
const chatHistory = []
|
||||
const histories: IMessage[] = options.chatHistory
|
||||
|
||||
for (const message of histories) {
|
||||
if (message.type === 'apiMessage') {
|
||||
chatHistory.push(new AIChatMessage(message.message))
|
||||
} else if (message.type === 'userMessage') {
|
||||
chatHistory.push(new HumanChatMessage(message.message))
|
||||
}
|
||||
}
|
||||
memory.chatHistory = new ChatMessageHistory(chatHistory)
|
||||
memory.chatHistory = mapChatHistory(options)
|
||||
executor.memory = memory
|
||||
}
|
||||
|
||||
const result = await executor.call({ input })
|
||||
|
||||
return result?.output
|
||||
|
|
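The diff above replaces an inline loop over `apiMessage`/`userMessage` entries with a shared `mapChatHistory` utility. A simplified sketch of what such a helper could look like, reconstructed from the loop it replaces (types reduced to strings for illustration; the actual Flowise helper returns a LangChain `ChatMessageHistory`):

```typescript
// Simplified message shape, matching the IMessage usage in the removed loop.
interface IMessage {
    type: 'apiMessage' | 'userMessage'
    message: string
}

// Map stored chat history into role-labeled entries, one per message.
function mapChatHistory(options: { chatHistory?: IMessage[] }): string[] {
    const history: string[] = []
    for (const m of options.chatHistory ?? []) {
        history.push(m.type === 'apiMessage' ? `AI: ${m.message}` : `Human: ${m.message}`)
    }
    return history
}
```

Centralizing this mapping means every agent node converts history identically instead of repeating the loop.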
|
|||
|
|
@ -0,0 +1,101 @@
|
|||
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
|
||||
import { initializeAgentExecutorWithOptions, AgentExecutor } from 'langchain/agents'
|
||||
import { getBaseClasses, mapChatHistory } from '../../../src/utils'
|
||||
import { flatten } from 'lodash'
|
||||
import { BaseChatMemory } from 'langchain/memory'
|
||||
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
|
||||
|
||||
const defaultMessage = `Do your best to answer the questions. Feel free to use any tools available to look up relevant information, but only if necessary.`
|
||||
|
||||
class ConversationalRetrievalAgent_Agents implements INode {
|
||||
label: string
|
||||
name: string
|
||||
version: number
|
||||
description: string
|
||||
type: string
|
||||
icon: string
|
||||
category: string
|
||||
baseClasses: string[]
|
||||
inputs: INodeParams[]
|
||||
|
||||
constructor() {
|
||||
this.label = 'Conversational Retrieval Agent'
|
||||
this.name = 'conversationalRetrievalAgent'
|
||||
this.version = 1.0
|
||||
this.type = 'AgentExecutor'
|
||||
this.category = 'Agents'
|
||||
this.icon = 'agent.svg'
|
||||
this.description = `An agent optimized for retrieval during conversation, answering questions based on past dialogue, all using OpenAI's Function Calling`
|
||||
this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
|
||||
this.inputs = [
|
||||
{
|
||||
label: 'Allowed Tools',
|
||||
name: 'tools',
|
||||
type: 'Tool',
|
||||
list: true
|
||||
},
|
||||
{
|
||||
label: 'Memory',
|
||||
name: 'memory',
|
||||
type: 'BaseChatMemory'
|
||||
},
|
||||
{
|
||||
label: 'OpenAI Chat Model',
|
||||
name: 'model',
|
||||
type: 'ChatOpenAI'
|
||||
},
|
||||
{
|
||||
label: 'System Message',
|
||||
name: 'systemMessage',
|
||||
type: 'string',
|
||||
default: defaultMessage,
|
||||
rows: 4,
|
||||
optional: true,
|
||||
additionalParams: true
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
async init(nodeData: INodeData): Promise<any> {
|
||||
const model = nodeData.inputs?.model
|
||||
const memory = nodeData.inputs?.memory as BaseChatMemory
|
||||
const systemMessage = nodeData.inputs?.systemMessage as string
|
||||
|
||||
let tools = nodeData.inputs?.tools
|
||||
tools = flatten(tools)
|
||||
|
||||
const executor = await initializeAgentExecutorWithOptions(tools, model, {
|
||||
agentType: 'openai-functions',
|
||||
verbose: process.env.DEBUG === 'true' ? true : false,
|
||||
agentArgs: {
|
||||
prefix: systemMessage ?? defaultMessage
|
||||
},
|
||||
returnIntermediateSteps: true
|
||||
})
|
||||
executor.memory = memory
|
||||
return executor
|
||||
}
|
||||
|
||||
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
|
||||
const executor = nodeData.instance as AgentExecutor
|
||||
|
||||
if (executor.memory) {
|
||||
;(executor.memory as any).memoryKey = 'chat_history'
|
||||
;(executor.memory as any).outputKey = 'output'
|
||||
;(executor.memory as any).chatHistory = mapChatHistory(options)
|
||||
}
|
||||
|
||||
const loggerHandler = new ConsoleCallbackHandler(options.logger)
|
||||
|
||||
if (options.socketIO && options.socketIOClientId) {
|
||||
const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
|
||||
const result = await executor.call({ input }, [loggerHandler, handler])
|
||||
return result?.output
|
||||
} else {
|
||||
const result = await executor.call({ input }, [loggerHandler])
|
||||
return result?.output
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = { nodeClass: ConversationalRetrievalAgent_Agents }
|
||||
|
|
@ -0,0 +1,9 @@
|
|||
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-robot" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
|
||||
<path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
|
||||
<path d="M7 7h10a2 2 0 0 1 2 2v1l1 1v3l-1 1v3a2 2 0 0 1 -2 2h-10a2 2 0 0 1 -2 -2v-3l-1 -1v-3l1 -1v-1a2 2 0 0 1 2 -2z"></path>
|
||||
<path d="M10 16h4"></path>
|
||||
<circle cx="8.5" cy="11.5" r=".5" fill="currentColor"></circle>
|
||||
<circle cx="15.5" cy="11.5" r=".5" fill="currentColor"></circle>
|
||||
<path d="M9 7l-1 -4"></path>
|
||||
<path d="M15 7l1 -4"></path>
|
||||
</svg>
|
||||
|
After Width: | Height: | Size: 650 B |
|
|
@ -3,10 +3,12 @@ import { initializeAgentExecutorWithOptions, AgentExecutor } from 'langchain/age
|
|||
import { getBaseClasses } from '../../../src/utils'
|
||||
import { Tool } from 'langchain/tools'
|
||||
import { BaseLanguageModel } from 'langchain/base_language'
|
||||
import { flatten } from 'lodash'
|
||||
|
||||
class MRKLAgentChat_Agents implements INode {
|
||||
label: string
|
||||
name: string
|
||||
version: number
|
||||
description: string
|
||||
type: string
|
||||
icon: string
|
||||
|
|
@ -17,6 +19,7 @@ class MRKLAgentChat_Agents implements INode {
|
|||
constructor() {
|
||||
this.label = 'MRKL Agent for Chat Models'
|
||||
this.name = 'mrklAgentChat'
|
||||
this.version = 1.0
|
||||
this.type = 'AgentExecutor'
|
||||
this.category = 'Agents'
|
||||
this.icon = 'agent.svg'
|
||||
|
|
@ -40,7 +43,7 @@ class MRKLAgentChat_Agents implements INode {
|
|||
async init(nodeData: INodeData): Promise<any> {
|
||||
const model = nodeData.inputs?.model as BaseLanguageModel
|
||||
let tools = nodeData.inputs?.tools as Tool[]
|
||||
tools = tools.flat()
|
||||
tools = flatten(tools)
|
||||
const executor = await initializeAgentExecutorWithOptions(tools, model, {
|
||||
agentType: 'chat-zero-shot-react-description',
|
||||
verbose: process.env.DEBUG === 'true' ? true : false
|
||||
|
|
|
|||
|
|
@ -3,10 +3,12 @@ import { initializeAgentExecutorWithOptions, AgentExecutor } from 'langchain/age
|
|||
import { Tool } from 'langchain/tools'
|
||||
import { getBaseClasses } from '../../../src/utils'
|
||||
import { BaseLanguageModel } from 'langchain/base_language'
|
||||
import { flatten } from 'lodash'
|
||||
|
||||
class MRKLAgentLLM_Agents implements INode {
|
||||
label: string
|
||||
name: string
|
||||
version: number
|
||||
description: string
|
||||
type: string
|
||||
icon: string
|
||||
|
|
@ -17,6 +19,7 @@ class MRKLAgentLLM_Agents implements INode {
|
|||
constructor() {
|
||||
this.label = 'MRKL Agent for LLMs'
|
||||
this.name = 'mrklAgentLLM'
|
||||
this.version = 1.0
|
||||
this.type = 'AgentExecutor'
|
||||
this.category = 'Agents'
|
||||
this.icon = 'agent.svg'
|
||||
|
|
@ -40,7 +43,7 @@ class MRKLAgentLLM_Agents implements INode {
|
|||
async init(nodeData: INodeData): Promise<any> {
|
||||
const model = nodeData.inputs?.model as BaseLanguageModel
|
||||
let tools = nodeData.inputs?.tools as Tool[]
|
||||
tools = tools.flat()
|
||||
tools = flatten(tools)
|
||||
|
||||
const executor = await initializeAgentExecutorWithOptions(tools, model, {
|
||||
agentType: 'zero-shot-react-description',
|
||||
|
|
|
|||
|
|
@ -0,0 +1,101 @@
|
|||
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
|
||||
import { initializeAgentExecutorWithOptions, AgentExecutor } from 'langchain/agents'
|
||||
import { getBaseClasses, mapChatHistory } from '../../../src/utils'
|
||||
import { BaseLanguageModel } from 'langchain/base_language'
|
||||
import { flatten } from 'lodash'
|
||||
import { BaseChatMemory } from 'langchain/memory'
|
||||
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
|
||||
|
||||
class OpenAIFunctionAgent_Agents implements INode {
|
||||
label: string
|
||||
name: string
|
||||
version: number
|
||||
description: string
|
||||
type: string
|
||||
icon: string
|
||||
category: string
|
||||
baseClasses: string[]
|
||||
inputs: INodeParams[]
|
||||
|
||||
constructor() {
|
||||
this.label = 'OpenAI Function Agent'
|
||||
this.name = 'openAIFunctionAgent'
|
||||
this.version = 1.0
|
||||
this.type = 'AgentExecutor'
|
||||
this.category = 'Agents'
|
||||
this.icon = 'openai.png'
|
||||
this.description = `An agent that uses OpenAI's Function Calling functionality to pick the tool and args to call`
|
||||
this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
|
||||
this.inputs = [
|
||||
{
|
||||
label: 'Allowed Tools',
|
||||
name: 'tools',
|
||||
type: 'Tool',
|
||||
list: true
|
||||
},
|
||||
{
|
||||
label: 'Memory',
|
||||
name: 'memory',
|
||||
type: 'BaseChatMemory'
|
||||
},
|
||||
{
|
||||
label: 'OpenAI Chat Model',
|
||||
name: 'model',
|
||||
description:
|
||||
'Only works with gpt-3.5-turbo-0613 and gpt-4-0613. Refer <a target="_blank" href="https://platform.openai.com/docs/guides/gpt/function-calling">docs</a> for more info',
|
||||
type: 'BaseChatModel'
|
||||
},
|
||||
{
|
||||
label: 'System Message',
|
||||
name: 'systemMessage',
|
||||
type: 'string',
|
||||
rows: 4,
|
||||
optional: true,
|
||||
additionalParams: true
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
async init(nodeData: INodeData): Promise<any> {
|
||||
const model = nodeData.inputs?.model as BaseLanguageModel
|
||||
const memory = nodeData.inputs?.memory as BaseChatMemory
|
||||
const systemMessage = nodeData.inputs?.systemMessage as string
|
||||
|
||||
let tools = nodeData.inputs?.tools
|
||||
tools = flatten(tools)
|
||||
|
||||
const executor = await initializeAgentExecutorWithOptions(tools, model, {
|
||||
agentType: 'openai-functions',
|
||||
verbose: process.env.DEBUG === 'true' ? true : false,
|
||||
agentArgs: {
|
||||
prefix: systemMessage ?? `You are a helpful AI assistant.`
|
||||
}
|
||||
})
|
||||
if (memory) executor.memory = memory
|
||||
|
||||
return executor
|
||||
}
|
||||
|
||||
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
|
||||
const executor = nodeData.instance as AgentExecutor
|
||||
const memory = nodeData.inputs?.memory as BaseChatMemory
|
||||
|
||||
if (options && options.chatHistory) {
|
||||
memory.chatHistory = mapChatHistory(options)
|
||||
executor.memory = memory
|
||||
}
|
||||
|
||||
const loggerHandler = new ConsoleCallbackHandler(options.logger)
|
||||
|
||||
if (options.socketIO && options.socketIOClientId) {
|
||||
const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
|
||||
const result = await executor.run(input, [loggerHandler, handler])
|
||||
return result
|
||||
} else {
|
||||
const result = await executor.run(input, [loggerHandler])
|
||||
return result
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = { nodeClass: OpenAIFunctionAgent_Agents }
|
||||
|
After Width: | Height: | Size: 3.9 KiB |
|
|
@ -0,0 +1,134 @@
|
|||
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
|
||||
import { APIChain } from 'langchain/chains'
|
||||
import { getBaseClasses } from '../../../src/utils'
|
||||
import { BaseLanguageModel } from 'langchain/base_language'
|
||||
import { PromptTemplate } from 'langchain/prompts'
|
||||
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
|
||||
|
||||
export const API_URL_RAW_PROMPT_TEMPLATE = `You are given the below API Documentation:
|
||||
{api_docs}
|
||||
Using this documentation, generate the full API url to call for answering the user question.
|
||||
You should build the API url in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.
|
||||
|
||||
Question:{question}
|
||||
API url:`
|
||||
|
||||
export const API_RESPONSE_RAW_PROMPT_TEMPLATE =
|
||||
'Given this {api_response} response for {api_url}, use the given response to answer this {question}'
|
||||
|
||||
class GETApiChain_Chains implements INode {
|
||||
label: string
|
||||
name: string
|
||||
version: number
|
||||
type: string
|
||||
icon: string
|
||||
category: string
|
||||
baseClasses: string[]
|
||||
description: string
|
||||
inputs: INodeParams[]
|
||||
|
||||
constructor() {
|
||||
this.label = 'GET API Chain'
|
||||
this.name = 'getApiChain'
|
||||
this.version = 1.0
|
||||
this.type = 'GETApiChain'
|
||||
this.icon = 'apichain.svg'
|
||||
this.category = 'Chains'
|
||||
this.description = 'Chain to run queries against GET API'
|
||||
this.baseClasses = [this.type, ...getBaseClasses(APIChain)]
|
||||
this.inputs = [
|
||||
{
|
||||
label: 'Language Model',
|
||||
name: 'model',
|
||||
type: 'BaseLanguageModel'
|
||||
},
|
||||
{
|
||||
label: 'API Documentation',
|
||||
name: 'apiDocs',
|
||||
type: 'string',
|
||||
description:
|
||||
'Description of how API works. Please refer to more <a target="_blank" href="https://github.com/hwchase17/langchain/blob/master/langchain/chains/api/open_meteo_docs.py">examples</a>',
|
||||
rows: 4
|
||||
},
|
||||
{
|
||||
label: 'Headers',
|
||||
name: 'headers',
|
||||
type: 'json',
|
||||
additionalParams: true,
|
||||
optional: true
|
||||
},
|
||||
{
|
||||
label: 'URL Prompt',
|
||||
name: 'urlPrompt',
|
||||
type: 'string',
|
||||
description: 'Prompt used to tell LLMs how to construct the URL. Must contain {api_docs} and {question}',
|
||||
default: API_URL_RAW_PROMPT_TEMPLATE,
|
||||
rows: 4,
|
||||
additionalParams: true
|
||||
},
|
||||
{
|
||||
label: 'Answer Prompt',
|
||||
name: 'ansPrompt',
|
||||
type: 'string',
|
||||
description:
|
||||
'Prompt used to tell LLMs how to return the API response. Must contain {api_response}, {api_url}, and {question}',
|
||||
default: API_RESPONSE_RAW_PROMPT_TEMPLATE,
|
||||
rows: 4,
|
||||
additionalParams: true
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
async init(nodeData: INodeData): Promise<any> {
|
||||
const model = nodeData.inputs?.model as BaseLanguageModel
|
||||
const apiDocs = nodeData.inputs?.apiDocs as string
|
||||
const headers = nodeData.inputs?.headers as string
|
||||
const urlPrompt = nodeData.inputs?.urlPrompt as string
|
||||
const ansPrompt = nodeData.inputs?.ansPrompt as string
|
||||
|
||||
const chain = await getAPIChain(apiDocs, model, headers, urlPrompt, ansPrompt)
|
||||
return chain
|
||||
}
|
||||
|
||||
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
|
||||
const model = nodeData.inputs?.model as BaseLanguageModel
|
||||
const apiDocs = nodeData.inputs?.apiDocs as string
|
||||
const headers = nodeData.inputs?.headers as string
|
||||
const urlPrompt = nodeData.inputs?.urlPrompt as string
|
||||
const ansPrompt = nodeData.inputs?.ansPrompt as string
|
||||
|
||||
const chain = await getAPIChain(apiDocs, model, headers, urlPrompt, ansPrompt)
|
||||
const loggerHandler = new ConsoleCallbackHandler(options.logger)
|
||||
|
||||
if (options.socketIO && options.socketIOClientId) {
|
||||
const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId, 2)
|
||||
const res = await chain.run(input, [loggerHandler, handler])
|
||||
return res
|
||||
} else {
|
||||
const res = await chain.run(input, [loggerHandler])
|
||||
return res
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
const getAPIChain = async (documents: string, llm: BaseLanguageModel, headers: string, urlPrompt: string, ansPrompt: string) => {
|
||||
const apiUrlPrompt = new PromptTemplate({
|
||||
inputVariables: ['api_docs', 'question'],
|
||||
template: urlPrompt ? urlPrompt : API_URL_RAW_PROMPT_TEMPLATE
|
||||
})
|
||||
|
||||
const apiResponsePrompt = new PromptTemplate({
|
||||
inputVariables: ['api_docs', 'question', 'api_url', 'api_response'],
|
||||
template: ansPrompt ? ansPrompt : API_RESPONSE_RAW_PROMPT_TEMPLATE
|
||||
})
|
||||
|
||||
const chain = APIChain.fromLLMAndAPIDocs(llm, documents, {
|
||||
apiUrlPrompt,
|
||||
apiResponsePrompt,
|
||||
verbose: process.env.DEBUG === 'true' ? true : false,
|
||||
headers: typeof headers === 'object' ? headers : headers ? JSON.parse(headers) : {}
|
||||
})
|
||||
return chain
|
||||
}
|
||||
|
||||
module.exports = { nodeClass: GETApiChain_Chains }
|
||||
|
|
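The GET and POST chain nodes above accept the Headers input either as an already-parsed object or as a JSON string. A minimal standalone sketch of that normalization, with a hypothetical helper name of our own choosing:

```typescript
// Hypothetical helper mirroring the headers handling in getAPIChain above:
// pass objects through, parse non-empty JSON strings, default to {}.
function normalizeHeaders(headers: unknown): Record<string, string> {
    if (headers && typeof headers === 'object') return headers as Record<string, string>
    return typeof headers === 'string' && headers ? JSON.parse(headers) : {}
}

console.log(normalizeHeaders('{"Authorization":"Bearer abc"}').Authorization) // Bearer abc
console.log(Object.keys(normalizeHeaders(undefined)).length) // 0
```

Note that a malformed JSON string would make `JSON.parse` throw, so in practice the Headers field must be valid JSON or left empty.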
@ -0,0 +1,100 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { APIChain, createOpenAPIChain } from 'langchain/chains'
import { getBaseClasses } from '../../../src/utils'
import { ChatOpenAI } from 'langchain/chat_models/openai'
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'

class OpenApiChain_Chains implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
    baseClasses: string[]
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'OpenAPI Chain'
        this.name = 'openApiChain'
        this.version = 1.0
        this.type = 'OpenAPIChain'
        this.icon = 'openapi.png'
        this.category = 'Chains'
        this.description = 'Chain that automatically selects and calls APIs based only on an OpenAPI spec'
        this.baseClasses = [this.type, ...getBaseClasses(APIChain)]
        this.inputs = [
            {
                label: 'ChatOpenAI Model',
                name: 'model',
                type: 'ChatOpenAI'
            },
            {
                label: 'YAML Link',
                name: 'yamlLink',
                type: 'string',
                placeholder: 'https://api.speak.com/openapi.yaml',
                description: 'If a YAML link is provided, the uploaded YAML file will be ignored and the YAML link will be used instead'
            },
            {
                label: 'YAML File',
                name: 'yamlFile',
                type: 'file',
                fileType: '.yaml',
                description: 'If a YAML link is provided, the uploaded YAML file will be ignored and the YAML link will be used instead'
            },
            {
                label: 'Headers',
                name: 'headers',
                type: 'json',
                additionalParams: true,
                optional: true
            }
        ]
    }

    async init(nodeData: INodeData): Promise<any> {
        return await initChain(nodeData)
    }

    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
        const chain = await initChain(nodeData)
        const loggerHandler = new ConsoleCallbackHandler(options.logger)

        if (options.socketIO && options.socketIOClientId) {
            const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
            const res = await chain.run(input, [loggerHandler, handler])
            return res
        } else {
            const res = await chain.run(input, [loggerHandler])
            return res
        }
    }
}

const initChain = async (nodeData: INodeData) => {
    const model = nodeData.inputs?.model as ChatOpenAI
    const headers = nodeData.inputs?.headers as string
    const yamlLink = nodeData.inputs?.yamlLink as string
    const yamlFileBase64 = nodeData.inputs?.yamlFile as string

    let yamlString = ''

    if (yamlLink) {
        yamlString = yamlLink
    } else {
        const splitDataURI = yamlFileBase64.split(',')
        splitDataURI.pop()
        const bf = Buffer.from(splitDataURI.pop() || '', 'base64')
        yamlString = bf.toString('utf-8')
    }

    return await createOpenAPIChain(yamlString, {
        llm: model,
        headers: typeof headers === 'object' ? headers : headers ? JSON.parse(headers) : {},
        verbose: process.env.DEBUG === 'true' ? true : false
    })
}

module.exports = { nodeClass: OpenApiChain_Chains }
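The `initChain` function above decodes an uploaded YAML file from a base64 data URI by calling `pop()` twice: once to drop a trailing metadata segment and once to take the base64 payload. A standalone sketch of that decoding step; the `,filename:...` suffix format shown here is an assumption for illustration, as is the helper name:

```typescript
// Sketch of the data-URI handling in initChain above. We assume uploads carry
// a trailing ",filename:..." segment, so the first pop() discards it and the
// second pop() yields the base64 payload.
function decodeYamlDataURI(dataURI: string): string {
    const parts = dataURI.split(',')
    parts.pop() // drop trailing metadata segment
    return Buffer.from(parts.pop() || '', 'base64').toString('utf-8')
}

const uri =
    'data:application/x-yaml;base64,' + Buffer.from('openapi: 3.0.0').toString('base64') + ',filename:spec.yaml'
console.log(decodeYamlDataURI(uri)) // openapi: 3.0.0
```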
@ -0,0 +1,123 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import { BaseLanguageModel } from 'langchain/base_language'
import { PromptTemplate } from 'langchain/prompts'
import { API_RESPONSE_RAW_PROMPT_TEMPLATE, API_URL_RAW_PROMPT_TEMPLATE, APIChain } from './postCore'
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'

class POSTApiChain_Chains implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
    baseClasses: string[]
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'POST API Chain'
        this.name = 'postApiChain'
        this.version = 1.0
        this.type = 'POSTApiChain'
        this.icon = 'apichain.svg'
        this.category = 'Chains'
        this.description = 'Chain to run queries against POST API'
        this.baseClasses = [this.type, ...getBaseClasses(APIChain)]
        this.inputs = [
            {
                label: 'Language Model',
                name: 'model',
                type: 'BaseLanguageModel'
            },
            {
                label: 'API Documentation',
                name: 'apiDocs',
                type: 'string',
                description:
                    'Description of how the API works. Please refer to more <a target="_blank" href="https://github.com/hwchase17/langchain/blob/master/langchain/chains/api/open_meteo_docs.py">examples</a>',
                rows: 4
            },
            {
                label: 'Headers',
                name: 'headers',
                type: 'json',
                additionalParams: true,
                optional: true
            },
            {
                label: 'URL Prompt',
                name: 'urlPrompt',
                type: 'string',
                description: 'Prompt used to tell LLMs how to construct the URL. Must contain {api_docs} and {question}',
                default: API_URL_RAW_PROMPT_TEMPLATE,
                rows: 4,
                additionalParams: true
            },
            {
                label: 'Answer Prompt',
                name: 'ansPrompt',
                type: 'string',
                description:
                    'Prompt used to tell LLMs how to return the API response. Must contain {api_response}, {api_url}, and {question}',
                default: API_RESPONSE_RAW_PROMPT_TEMPLATE,
                rows: 4,
                additionalParams: true
            }
        ]
    }

    async init(nodeData: INodeData): Promise<any> {
        const model = nodeData.inputs?.model as BaseLanguageModel
        const apiDocs = nodeData.inputs?.apiDocs as string
        const headers = nodeData.inputs?.headers as string
        const urlPrompt = nodeData.inputs?.urlPrompt as string
        const ansPrompt = nodeData.inputs?.ansPrompt as string

        const chain = await getAPIChain(apiDocs, model, headers, urlPrompt, ansPrompt)
        return chain
    }

    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
        const model = nodeData.inputs?.model as BaseLanguageModel
        const apiDocs = nodeData.inputs?.apiDocs as string
        const headers = nodeData.inputs?.headers as string
        const urlPrompt = nodeData.inputs?.urlPrompt as string
        const ansPrompt = nodeData.inputs?.ansPrompt as string

        const chain = await getAPIChain(apiDocs, model, headers, urlPrompt, ansPrompt)
        const loggerHandler = new ConsoleCallbackHandler(options.logger)

        if (options.socketIO && options.socketIOClientId) {
            const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId, 2)
            const res = await chain.run(input, [loggerHandler, handler])
            return res
        } else {
            const res = await chain.run(input, [loggerHandler])
            return res
        }
    }
}

const getAPIChain = async (documents: string, llm: BaseLanguageModel, headers: string, urlPrompt: string, ansPrompt: string) => {
    const apiUrlPrompt = new PromptTemplate({
        inputVariables: ['api_docs', 'question'],
        template: urlPrompt ? urlPrompt : API_URL_RAW_PROMPT_TEMPLATE
    })

    const apiResponsePrompt = new PromptTemplate({
        inputVariables: ['api_docs', 'question', 'api_url_body', 'api_response'],
        template: ansPrompt ? ansPrompt : API_RESPONSE_RAW_PROMPT_TEMPLATE
    })

    const chain = APIChain.fromLLMAndAPIDocs(llm, documents, {
        apiUrlPrompt,
        apiResponsePrompt,
        verbose: process.env.DEBUG === 'true' ? true : false,
        headers: typeof headers === 'object' ? headers : headers ? JSON.parse(headers) : {}
    })
    return chain
}

module.exports = { nodeClass: POSTApiChain_Chains }
@ -0,0 +1,3 @@
<?xml version="1.0" encoding="utf-8"?><svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" viewBox="0 0 122.88 92.96" style="enable-background:new 0 0 122.88 92.96" xml:space="preserve"><style type="text/css"><![CDATA[
.st0{fill-rule:evenodd;clip-rule:evenodd;}
]]></style><g><path class="st0" d="M100.09,6.87l2.5,3.3c0.66,0.87,0.49,2.12-0.38,2.77l-2.66,2.02c0.48,1.29,0.79,2.65,0.92,4.06l3.03,0.41 c1.08,0.15,1.84,1.15,1.69,2.23l-0.56,4.1c-0.15,1.08-1.15,1.84-2.23,1.69l-3.3-0.45c-0.59,1.28-1.34,2.46-2.22,3.52l1.85,2.43 c0.66,0.87,0.49,2.12-0.38,2.78l-3.3,2.5c-0.87,0.66-2.12,0.49-2.78-0.38l-2.02-2.66c-1.29,0.48-2.66,0.79-4.06,0.92l-0.41,3.03 c-0.15,1.08-1.15,1.84-2.23,1.69l-4.1-0.56c-1.08-0.15-1.84-1.15-1.69-2.23l0.45-3.3c-1.28-0.59-2.46-1.34-3.52-2.23l-2.43,1.85 c-0.87,0.66-2.12,0.49-2.78-0.38l-2.5-3.3c-0.66-0.87-0.49-2.12,0.38-2.77l2.66-2.02c-0.48-1.29-0.79-2.65-0.92-4.06l-3.03-0.41 c-1.08-0.15-1.84-1.15-1.69-2.23l0.56-4.1c0.15-1.08,1.15-1.84,2.23-1.69l3.3,0.45c0.59-1.28,1.34-2.46,2.23-3.52L70.84,7.9 c-0.66-0.87-0.49-2.12,0.38-2.78l3.3-2.5c0.87-0.66,2.12-0.49,2.78,0.38l2.02,2.66c1.29-0.48,2.66-0.79,4.06-0.92l0.41-3.02 c0.15-1.08,1.15-1.84,2.23-1.69l4.1,0.56c1.08,0.15,1.84,1.15,1.69,2.23l-0.45,3.3c1.28,0.59,2.46,1.34,3.52,2.23l2.43-1.85 C98.19,5.83,99.44,6,100.09,6.87L100.09,6.87L100.09,6.87z M55.71,13.75c-0.23,0.02-0.46,0.04-0.69,0.06 c-5.63,0.54-11.1,2.59-15.62,6.1c-5.23,4.05-9.2,10.11-10.73,18.14l-0.48,2.51L25.69,41c-2.45,0.43-4.64,1.02-6.56,1.77 c-1.86,0.72-3.52,1.61-4.97,2.66c-1.16,0.84-2.16,1.78-3.01,2.8c-2.63,3.15-3.85,7.1-3.82,11.1c0.03,4.06,1.35,8.16,3.79,11.53 c0.91,1.25,1.96,2.4,3.16,3.4l0.03,0.02l-2.68,7.13c-0.71-0.47-1.4-0.98-2.04-1.52c-1.68-1.4-3.15-2.99-4.4-4.72 C1.84,70.57,0.04,64.95,0,59.35c-0.04-5.66,1.72-11.29,5.52-15.85c1.23-1.48,2.68-2.84,4.34-4.04c1.93-1.4,4.14-2.58,6.64-3.55 c1.72-0.67,3.56-1.23,5.5-1.68c2.2-8.74,6.89-15.47,12.92-20.14c5.64-4.37,12.43-6.92,19.42-7.59c2.13-0.21,4.29-0.24,6.43-0.09 c-0.47,0.25-0.91,0.53-1.33,0.85l-0.03,0.02c-1.93,1.47-3.32,3.7-3.68,6.3L55.71,13.75L55.71,13.75z M43.85,87.38H31.99l-1.7,5.58 H19.6l12.75-33.87h11.46l12.7,33.87H45.55L43.85,87.38L43.85,87.38z M41.63,80.04l-3.7-12.17l-3.71,12.17H41.63L41.63,80.04z 
M59.78,59.09h17.41c3.79,0,6.64,0.9,8.52,2.7c1.88,1.8,2.83,4.38,2.83,7.71c0,3.42-1.04,6.1-3.09,8.03 c-2.06,1.93-5.21,2.89-9.43,2.89h-5.74v12.54h-10.5V59.09L59.78,59.09z M70.28,73.56h2.58c2.03,0,3.46-0.35,4.28-1.06 c0.82-0.7,1.23-1.6,1.23-2.7c0-1.06-0.36-1.96-1.07-2.7c-0.71-0.74-2.05-1.11-4.02-1.11h-3V73.56L70.28,73.56z M92.77,59.09h10.5 v33.87h-10.5V59.09L92.77,59.09z M112.01,31.74c1.07,0.83,2.09,1.77,3.07,2.82c1.07,1.15,2.08,2.45,3.03,3.9 c3.2,4.92,4.84,11.49,4.77,17.92c-0.07,6.31-1.77,12.59-5.25,17.21c-0.84,1.11-1.77,2.15-2.78,3.12V62.15 c0.43-1.87,0.65-3.84,0.67-5.83c0.06-5.07-1.18-10.16-3.59-13.86c-0.69-1.07-1.45-2.03-2.25-2.89c-0.65-0.7-1.34-1.34-2.05-1.9 c0.07-0.3,0.13-0.6,0.17-0.9c0.08-0.62,0.11-1.25,0.07-1.88c0.82-0.32,1.58-0.75,2.26-1.27l0.03-0.02 C110.86,33.06,111.48,32.44,112.01,31.74L112.01,31.74z M85.89,12.37c4.45,0.61,7.57,4.71,6.96,9.17 c-0.61,4.45-4.71,7.57-9.17,6.96c-4.45-0.61-7.57-4.71-6.96-9.17S81.44,11.76,85.89,12.37L85.89,12.37L85.89,12.37z"/></g></svg>
After Width: | Height: | Size: 3.2 KiB |
After Width: | Height: | Size: 24 KiB |
@ -0,0 +1,162 @@
import { BaseLanguageModel } from 'langchain/base_language'
import { CallbackManagerForChainRun } from 'langchain/callbacks'
import { BaseChain, ChainInputs, LLMChain, SerializedAPIChain } from 'langchain/chains'
import { BasePromptTemplate, PromptTemplate } from 'langchain/prompts'
import { ChainValues } from 'langchain/schema'
import fetch from 'node-fetch'

export const API_URL_RAW_PROMPT_TEMPLATE = `You are given the below API Documentation:
{api_docs}
Using this documentation, generate a json string with two keys: "url" and "data".
The value of "url" should be a string, which is the API url to call for answering the user question.
The value of "data" should be a dictionary of key-value pairs you want to POST to the url as a JSON body.
Be careful to always use double quotes for strings in the json string.
You should build the json string in order to get a response that is as short as possible, while still getting the necessary information to answer the question. Pay attention to deliberately exclude any unnecessary pieces of data in the API call.

Question:{question}
json string:`

export const API_RESPONSE_RAW_PROMPT_TEMPLATE = `${API_URL_RAW_PROMPT_TEMPLATE} {api_url_body}

Here is the response from the API:

{api_response}

Summarize this response to answer the original question.

Summary:`

const defaultApiUrlPrompt = new PromptTemplate({
    inputVariables: ['api_docs', 'question'],
    template: API_URL_RAW_PROMPT_TEMPLATE
})

const defaultApiResponsePrompt = new PromptTemplate({
    inputVariables: ['api_docs', 'question', 'api_url_body', 'api_response'],
    template: API_RESPONSE_RAW_PROMPT_TEMPLATE
})

export interface APIChainInput extends Omit<ChainInputs, 'memory'> {
    apiAnswerChain: LLMChain
    apiRequestChain: LLMChain
    apiDocs: string
    inputKey?: string
    headers?: Record<string, string>
    /** Key to use for output, defaults to `output` */
    outputKey?: string
}

export type APIChainOptions = {
    headers?: Record<string, string>
    apiUrlPrompt?: BasePromptTemplate
    apiResponsePrompt?: BasePromptTemplate
}

export class APIChain extends BaseChain implements APIChainInput {
    apiAnswerChain: LLMChain

    apiRequestChain: LLMChain

    apiDocs: string

    headers = {}

    inputKey = 'question'

    outputKey = 'output'

    get inputKeys() {
        return [this.inputKey]
    }

    get outputKeys() {
        return [this.outputKey]
    }

    constructor(fields: APIChainInput) {
        super(fields)
        this.apiRequestChain = fields.apiRequestChain
        this.apiAnswerChain = fields.apiAnswerChain
        this.apiDocs = fields.apiDocs
        this.inputKey = fields.inputKey ?? this.inputKey
        this.outputKey = fields.outputKey ?? this.outputKey
        this.headers = fields.headers ?? this.headers
    }

    /** @ignore */
    async _call(values: ChainValues, runManager?: CallbackManagerForChainRun): Promise<ChainValues> {
        try {
            const question: string = values[this.inputKey]

            const api_url_body = await this.apiRequestChain.predict({ question, api_docs: this.apiDocs }, runManager?.getChild())

            const { url, data } = JSON.parse(api_url_body)

            const res = await fetch(url, {
                method: 'POST',
                headers: this.headers,
                body: JSON.stringify(data)
            })

            const api_response = await res.text()

            const answer = await this.apiAnswerChain.predict(
                { question, api_docs: this.apiDocs, api_url_body, api_response },
                runManager?.getChild()
            )

            return { [this.outputKey]: answer }
        } catch (error) {
            return { [this.outputKey]: error }
        }
    }

    _chainType() {
        return 'api_chain' as const
    }

    static async deserialize(data: SerializedAPIChain) {
        const { api_request_chain, api_answer_chain, api_docs } = data

        if (!api_request_chain) {
            throw new Error('LLMChain must have api_request_chain')
        }
        if (!api_answer_chain) {
            throw new Error('LLMChain must have api_answer_chain')
        }
        if (!api_docs) {
            throw new Error('LLMChain must have api_docs')
        }

        return new APIChain({
            apiAnswerChain: await LLMChain.deserialize(api_answer_chain),
            apiRequestChain: await LLMChain.deserialize(api_request_chain),
            apiDocs: api_docs
        })
    }

    serialize(): SerializedAPIChain {
        return {
            _type: this._chainType(),
            api_answer_chain: this.apiAnswerChain.serialize(),
            api_request_chain: this.apiRequestChain.serialize(),
            api_docs: this.apiDocs
        }
    }

    static fromLLMAndAPIDocs(
        llm: BaseLanguageModel,
        apiDocs: string,
        options: APIChainOptions & Omit<APIChainInput, 'apiAnswerChain' | 'apiRequestChain' | 'apiDocs'> = {}
    ): APIChain {
        const { apiUrlPrompt = defaultApiUrlPrompt, apiResponsePrompt = defaultApiResponsePrompt } = options
        const apiRequestChain = new LLMChain({ prompt: apiUrlPrompt, llm })
        const apiAnswerChain = new LLMChain({ prompt: apiResponsePrompt, llm })
        return new this({
            apiAnswerChain,
            apiRequestChain,
            apiDocs,
            ...options
        })
    }
}
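In `_call` above, the request chain is prompted (via `API_URL_RAW_PROMPT_TEMPLATE`) to emit a JSON string with `"url"` and `"data"` keys, which is then parsed and POSTed. A sketch of just the parsing step, with a simulated model output and a helper name of our own:

```typescript
// Hypothetical helper isolating the parse step of _call above. JSON.parse
// throws on malformed output, which _call catches and returns as the answer.
function parseRequestSpec(api_url_body: string): { url: string; data: Record<string, unknown> } {
    return JSON.parse(api_url_body)
}

const spec = parseRequestSpec('{"url": "https://example.com/search", "data": {"q": "weather"}}')
console.log(spec.url) // https://example.com/search
```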
@ -1,16 +1,19 @@
import { ICommonObject, IMessage, INode, INodeData, INodeParams } from '../../../src/Interface'
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { ConversationChain } from 'langchain/chains'
import { getBaseClasses } from '../../../src/utils'
import { getBaseClasses, mapChatHistory } from '../../../src/utils'
import { ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate } from 'langchain/prompts'
import { BufferMemory, ChatMessageHistory } from 'langchain/memory'
import { BufferMemory } from 'langchain/memory'
import { BaseChatModel } from 'langchain/chat_models/base'
import { AIChatMessage, HumanChatMessage } from 'langchain/schema'
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
import { flatten } from 'lodash'
import { Document } from 'langchain/document'

const systemMessage = `The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.`
let systemMessage = `The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.`

class ConversationChain_Chains implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
@ -21,6 +24,7 @@ class ConversationChain_Chains implements INode {
    constructor() {
        this.label = 'Conversation Chain'
        this.name = 'conversationChain'
        this.version = 1.0
        this.type = 'ConversationChain'
        this.icon = 'chain.svg'
        this.category = 'Chains'
@ -37,6 +41,15 @@ class ConversationChain_Chains implements INode {
                name: 'memory',
                type: 'BaseMemory'
            },
            {
                label: 'Document',
                name: 'document',
                type: 'Document',
                description:
                    'Include the whole document in the context window. If you get a maximum context length error, please use a model with a larger context window like Claude 100k or GPT-4 32k',
                optional: true,
                list: true
            },
            {
                label: 'System Message',
                name: 'systemMessagePrompt',
@ -53,10 +66,28 @@ class ConversationChain_Chains implements INode {
        const model = nodeData.inputs?.model as BaseChatModel
        const memory = nodeData.inputs?.memory as BufferMemory
        const prompt = nodeData.inputs?.systemMessagePrompt as string
        const docs = nodeData.inputs?.document as Document[]

        const flattenDocs = docs && docs.length ? flatten(docs) : []
        const finalDocs = []
        for (let i = 0; i < flattenDocs.length; i += 1) {
            finalDocs.push(new Document(flattenDocs[i]))
        }

        let finalText = ''
        for (let i = 0; i < finalDocs.length; i += 1) {
            finalText += finalDocs[i].pageContent
        }

        const replaceChar: string[] = ['{', '}']
        for (const char of replaceChar) finalText = finalText.replaceAll(char, '')

        if (finalText) systemMessage = `${systemMessage}\nThe AI has the following context:\n${finalText}`

        const obj: any = {
            llm: model,
            memory
            memory,
            verbose: process.env.DEBUG === 'true' ? true : false
        }

        const chatPrompt = ChatPromptTemplate.fromPromptMessages([
@ -75,22 +106,20 @@ class ConversationChain_Chains implements INode {
        const memory = nodeData.inputs?.memory as BufferMemory

        if (options && options.chatHistory) {
            const chatHistory = []
            const histories: IMessage[] = options.chatHistory

            for (const message of histories) {
                if (message.type === 'apiMessage') {
                    chatHistory.push(new AIChatMessage(message.message))
                } else if (message.type === 'userMessage') {
                    chatHistory.push(new HumanChatMessage(message.message))
                }
            }
            memory.chatHistory = new ChatMessageHistory(chatHistory)
            memory.chatHistory = mapChatHistory(options)
            chain.memory = memory
        }

        const res = await chain.call({ input })
        return res?.response
        const loggerHandler = new ConsoleCallbackHandler(options.logger)

        if (options.socketIO && options.socketIOClientId) {
            const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
            const res = await chain.call({ input }, [loggerHandler, handler])
            return res?.response
        } else {
            const res = await chain.call({ input }, [loggerHandler])
            return res?.response
        }
    }
}
@ -1,26 +1,25 @@
import { BaseLanguageModel } from 'langchain/base_language'
import { ICommonObject, IMessage, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import { ConversationalRetrievalQAChain } from 'langchain/chains'
import { BaseRetriever } from 'langchain/schema'

const default_qa_template = `Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Helpful Answer:`

const qa_template = `Use the following pieces of context to answer the question at the end.

{context}

Question: {question}
Helpful Answer:`
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses, mapChatHistory } from '../../../src/utils'
import { ConversationalRetrievalQAChain, QAChainParams } from 'langchain/chains'
import { BaseRetriever } from 'langchain/schema/retriever'
import { BufferMemory, BufferMemoryInput } from 'langchain/memory'
import { PromptTemplate } from 'langchain/prompts'
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
import {
    default_map_reduce_template,
    default_qa_template,
    qa_template,
    map_reduce_template,
    CUSTOM_QUESTION_GENERATOR_CHAIN_PROMPT,
    refine_question_template,
    refine_template
} from './prompts'

class ConversationalRetrievalQAChain_Chains implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
@ -31,6 +30,7 @@ class ConversationalRetrievalQAChain_Chains implements INode {
    constructor() {
        this.label = 'Conversational Retrieval QA Chain'
        this.name = 'conversationalRetrievalQAChain'
        this.version = 1.0
        this.type = 'ConversationalRetrievalQAChain'
        this.icon = 'chain.svg'
        this.category = 'Chains'
@ -47,6 +47,19 @@ class ConversationalRetrievalQAChain_Chains implements INode {
                name: 'vectorStoreRetriever',
                type: 'BaseRetriever'
            },
            {
                label: 'Memory',
                name: 'memory',
                type: 'BaseMemory',
                optional: true,
                description: 'If left empty, a default BufferMemory will be used'
            },
            {
                label: 'Return Source Documents',
                name: 'returnSourceDocuments',
                type: 'boolean',
                optional: true
            },
            {
                label: 'System Message',
                name: 'systemMessagePrompt',
@ -56,6 +69,31 @@ class ConversationalRetrievalQAChain_Chains implements INode {
                optional: true,
                placeholder:
                    'I want you to act as a document that I am having a conversation with. Your name is "AI Assistant". You will provide me with answers from the given info. If the answer is not included, say exactly "Hmm, I am not sure." and stop after that. Refuse to answer any question not about the info. Never break character.'
            },
            {
                label: 'Chain Option',
                name: 'chainOption',
                type: 'options',
                options: [
                    {
                        label: 'MapReduceDocumentsChain',
                        name: 'map_reduce',
                        description:
                            'Suitable for QA tasks over larger documents; it can run the preprocessing step in parallel, reducing the running time'
                    },
                    {
                        label: 'RefineDocumentsChain',
                        name: 'refine',
                        description: 'Suitable for QA tasks over a large number of documents.'
                    },
                    {
                        label: 'StuffDocumentsChain',
                        name: 'stuff',
                        description: 'Suitable for QA tasks over a small number of documents.'
                    }
                ],
                additionalParams: true,
                optional: true
            }
        ]
    }
@ -64,35 +102,112 @@ class ConversationalRetrievalQAChain_Chains implements INode {
|
|||
const model = nodeData.inputs?.model as BaseLanguageModel
|
||||
const vectorStoreRetriever = nodeData.inputs?.vectorStoreRetriever as BaseRetriever
|
||||
const systemMessagePrompt = nodeData.inputs?.systemMessagePrompt as string
|
||||
const returnSourceDocuments = nodeData.inputs?.returnSourceDocuments as boolean
|
||||
const chainOption = nodeData.inputs?.chainOption as string
|
||||
const externalMemory = nodeData.inputs?.memory
|
||||
|
||||
const chain = ConversationalRetrievalQAChain.fromLLM(model, vectorStoreRetriever, {
|
||||
const obj: any = {
|
||||
verbose: process.env.DEBUG === 'true' ? true : false,
|
||||
qaTemplate: systemMessagePrompt ? `${systemMessagePrompt}\n${qa_template}` : default_qa_template
|
||||
})
|
||||
questionGeneratorChainOptions: {
|
||||
template: CUSTOM_QUESTION_GENERATOR_CHAIN_PROMPT
|
||||
}
|
||||
}
|
||||
|
||||
if (returnSourceDocuments) obj.returnSourceDocuments = returnSourceDocuments
|
||||
|
||||
if (chainOption === 'map_reduce') {
|
||||
obj.qaChainOptions = {
|
||||
type: 'map_reduce',
|
||||
combinePrompt: PromptTemplate.fromTemplate(
|
||||
systemMessagePrompt ? `${systemMessagePrompt}\n${map_reduce_template}` : default_map_reduce_template
|
||||
)
|
||||
} as QAChainParams
|
||||
} else if (chainOption === 'refine') {
|
||||
const qprompt = new PromptTemplate({
|
||||
inputVariables: ['context', 'question'],
|
||||
template: refine_question_template(systemMessagePrompt)
|
||||
})
|
||||
const rprompt = new PromptTemplate({
|
||||
                inputVariables: ['context', 'question', 'existing_answer'],
                template: refine_template
            })
            obj.qaChainOptions = {
                type: 'refine',
                questionPrompt: qprompt,
                refinePrompt: rprompt
            } as QAChainParams
        } else {
            obj.qaChainOptions = {
                type: 'stuff',
                prompt: PromptTemplate.fromTemplate(systemMessagePrompt ? `${systemMessagePrompt}\n${qa_template}` : default_qa_template)
            } as QAChainParams
        }

        if (externalMemory) {
            externalMemory.memoryKey = 'chat_history'
            externalMemory.inputKey = 'question'
            externalMemory.outputKey = 'text'
            externalMemory.returnMessages = true
            if (chainOption === 'refine') externalMemory.outputKey = 'output_text'
            obj.memory = externalMemory
        } else {
            const fields: BufferMemoryInput = {
                memoryKey: 'chat_history',
                inputKey: 'question',
                outputKey: 'text',
                returnMessages: true
            }
            if (chainOption === 'refine') fields.outputKey = 'output_text'
            obj.memory = new BufferMemory(fields)
        }

        const chain = ConversationalRetrievalQAChain.fromLLM(model, vectorStoreRetriever, obj)
        return chain
    }

    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | ICommonObject> {
        const chain = nodeData.instance as ConversationalRetrievalQAChain
        let chatHistory = ''
        const returnSourceDocuments = nodeData.inputs?.returnSourceDocuments as boolean
        const chainOption = nodeData.inputs?.chainOption as string

        if (options && options.chatHistory) {
            const histories: IMessage[] = options.chatHistory
            chatHistory = histories
                .map((item) => {
                    return item.message
                })
                .join('')
        let model = nodeData.inputs?.model

        // Temporary fix: https://github.com/hwchase17/langchainjs/issues/754
        model.streaming = false
        chain.questionGeneratorChain.llm = model

        const obj = { question: input }

        if (options && options.chatHistory && chain.memory) {
            ;(chain.memory as any).chatHistory = mapChatHistory(options)
        }

        const obj = {
            question: input,
            chat_history: chatHistory ? chatHistory : []
        const loggerHandler = new ConsoleCallbackHandler(options.logger)

        if (options.socketIO && options.socketIOClientId) {
            const handler = new CustomChainHandler(
                options.socketIO,
                options.socketIOClientId,
                chainOption === 'refine' ? 4 : undefined,
                returnSourceDocuments
            )
            const res = await chain.call(obj, [loggerHandler, handler])
            if (chainOption === 'refine') {
                if (res.output_text && res.sourceDocuments) {
                    return {
                        text: res.output_text,
                        sourceDocuments: res.sourceDocuments
                    }
                }
                return res?.output_text
            }
            if (res.text && res.sourceDocuments) return res
            return res?.text
        } else {
            const res = await chain.call(obj, [loggerHandler])
            if (res.text && res.sourceDocuments) return res
            return res?.text
        }

        const res = await chain.call(obj)

        return res?.text
    }
}
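The memory block above routes the chain's answer through a different output key depending on the chain option: the refine chain writes its result under `output_text`, every other option under `text`. A minimal standalone sketch of just that selection (the `buildMemoryFields` helper name is illustrative, not part of the codebase):

```typescript
// Sketch of the BufferMemory field selection shown above.
interface MemoryFields {
    memoryKey: string
    inputKey: string
    outputKey: string
    returnMessages: boolean
}

function buildMemoryFields(chainOption?: string): MemoryFields {
    const fields: MemoryFields = {
        memoryKey: 'chat_history',
        inputKey: 'question',
        outputKey: 'text',
        returnMessages: true
    }
    // The refine chain emits its answer under `output_text` instead of `text`
    if (chainOption === 'refine') fields.outputKey = 'output_text'
    return fields
}

console.log(buildMemoryFields().outputKey) // text
console.log(buildMemoryFields('refine').outputKey) // output_text
```

Keeping the memory's `outputKey` in sync with the chain's result key is what lets the same `BufferMemory` serve both the stuff and refine variants.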
@@ -0,0 +1,64 @@
export const default_qa_template = `Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Helpful Answer:`

export const qa_template = `Use the following pieces of context to answer the question at the end.

{context}

Question: {question}
Helpful Answer:`

export const default_map_reduce_template = `Given the following extracted parts of a long document and a question, create a final answer.
If you don't know the answer, just say that you don't know. Don't try to make up an answer.

{summaries}

Question: {question}
Helpful Answer:`

export const map_reduce_template = `Given the following extracted parts of a long document and a question, create a final answer.

{summaries}

Question: {question}
Helpful Answer:`

export const refine_question_template = (sysPrompt?: string) => {
    let returnPrompt = ''
    if (sysPrompt)
        returnPrompt = `Context information is below.
---------------------
{context}
---------------------
Given the context information and not prior knowledge, ${sysPrompt}
Answer the question: {question}.
Answer:`
    if (!sysPrompt)
        returnPrompt = `Context information is below.
---------------------
{context}
---------------------
Given the context information and not prior knowledge, answer the question: {question}.
Answer:`
    return returnPrompt
}

export const refine_template = `The original question is as follows: {question}
We have provided an existing answer: {existing_answer}
We have the opportunity to refine the existing answer (only if needed) with some more context below.
------------
{context}
------------
Given the new context, refine the original answer to better answer the question.
If you can't find answer from the context, return the original answer.`

export const CUSTOM_QUESTION_GENERATOR_CHAIN_PROMPT = `Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, answer in the same language as the follow up question. include it in the standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:`
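The `refine_question_template` factory above only chooses between two near-identical strings: with a system prompt it splices the caller's instruction into the template, otherwise it falls back to the default "answer the question" wording. A condensed sketch of that branch (shortened template text, hypothetical function name):

```typescript
// Condensed version of refine_question_template: same branching behavior,
// with the shared header factored out. Template text is abbreviated.
function refineQuestionTemplate(sysPrompt?: string): string {
    const header = 'Context information is below.\n---------------------\n{context}\n---------------------\n'
    if (sysPrompt) {
        // The caller-supplied system prompt replaces the default instruction sentence
        return `${header}Given the context information and not prior knowledge, ${sysPrompt}\nAnswer the question: {question}.\nAnswer:`
    }
    return `${header}Given the context information and not prior knowledge, answer the question: {question}.\nAnswer:`
}

console.log(refineQuestionTemplate('answer like a pirate,').includes('pirate')) // true
```

Note that `{context}` and `{question}` are left as literal placeholders here: they are filled in later by LangChain's prompt templating, not by this function.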
@@ -1,11 +1,13 @@
import { ICommonObject, INode, INodeData, INodeOutputsValue, INodeParams } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import { getBaseClasses, handleEscapeCharacters } from '../../../src/utils'
import { LLMChain } from 'langchain/chains'
import { BaseLanguageModel } from 'langchain/base_language'
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'

class LLMChain_Chains implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
@@ -17,6 +19,7 @@ class LLMChain_Chains implements INode {
    constructor() {
        this.label = 'LLM Chain'
        this.name = 'llmChain'
        this.version = 1.0
        this.type = 'LLMChain'
        this.icon = 'chain.svg'
        this.category = 'Chains'
@@ -50,12 +53,12 @@ class LLMChain_Chains implements INode {
            {
                label: 'Output Prediction',
                name: 'outputPrediction',
                baseClasses: ['string']
                baseClasses: ['string', 'json']
            }
        ]
    }

    async init(nodeData: INodeData, input: string): Promise<any> {
    async init(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
        const model = nodeData.inputs?.model as BaseLanguageModel
        const prompt = nodeData.inputs?.prompt
        const output = nodeData.outputs?.output as string
@@ -67,21 +70,25 @@ class LLMChain_Chains implements INode {
        } else if (output === 'outputPrediction') {
            const chain = new LLMChain({ llm: model, prompt, verbose: process.env.DEBUG === 'true' ? true : false })
            const inputVariables = chain.prompt.inputVariables as string[] // ["product"]
            const res = await runPrediction(inputVariables, chain, input, promptValues)
            const res = await runPrediction(inputVariables, chain, input, promptValues, options)
            // eslint-disable-next-line no-console
            console.log('\x1b[92m\x1b[1m\n*****OUTPUT PREDICTION*****\n\x1b[0m\x1b[0m')
            // eslint-disable-next-line no-console
            console.log(res)
            return res
            /**
             * Apply string transformation to convert special chars:
             * FROM: hello i am ben\n\n\thow are you?
             * TO: hello i am benFLOWISE_NEWLINEFLOWISE_NEWLINEFLOWISE_TABhow are you?
             */
            return handleEscapeCharacters(res, false)
        }
    }

    async run(nodeData: INodeData, input: string): Promise<string> {
    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
        const inputVariables = nodeData.instance.prompt.inputVariables as string[] // ["product"]
        const chain = nodeData.instance as LLMChain
        const promptValues = nodeData.inputs?.prompt.promptValues as ICommonObject

        const res = await runPrediction(inputVariables, chain, input, promptValues)
        const res = await runPrediction(inputVariables, chain, input, promptValues, options)
        // eslint-disable-next-line no-console
        console.log('\x1b[93m\x1b[1m\n*****FINAL RESULT*****\n\x1b[0m\x1b[0m')
        // eslint-disable-next-line no-console
@@ -90,11 +97,26 @@ class LLMChain_Chains implements INode {
    }
}

const runPrediction = async (inputVariables: string[], chain: LLMChain, input: string, promptValues: ICommonObject) => {
    if (inputVariables.length === 1) {
        const res = await chain.run(input)
        return res
    } else if (inputVariables.length > 1) {
const runPrediction = async (
    inputVariables: string[],
    chain: LLMChain,
    input: string,
    promptValuesRaw: ICommonObject,
    options: ICommonObject
) => {
    const loggerHandler = new ConsoleCallbackHandler(options.logger)
    const isStreaming = options.socketIO && options.socketIOClientId
    const socketIO = isStreaming ? options.socketIO : undefined
    const socketIOClientId = isStreaming ? options.socketIOClientId : ''

    /**
     * Apply string transformation to reverse converted special chars:
     * FROM: { "value": "hello i am benFLOWISE_NEWLINEFLOWISE_NEWLINEFLOWISE_TABhow are you?" }
     * TO: { "value": "hello i am ben\n\n\thow are you?" }
     */
    const promptValues = handleEscapeCharacters(promptValuesRaw, true)

    if (promptValues && inputVariables.length > 0) {
        let seen: string[] = []

        for (const variable of inputVariables) {
@@ -106,11 +128,15 @@ const runPrediction = async (inputVariables: string[], chain: LLMChain, input: s
        if (seen.length === 0) {
            // All inputVariables have fixed values specified
            const options = {
                ...promptValues
            const options = { ...promptValues }
            if (isStreaming) {
                const handler = new CustomChainHandler(socketIO, socketIOClientId)
                const res = await chain.call(options, [loggerHandler, handler])
                return res?.text
            } else {
                const res = await chain.call(options, [loggerHandler])
                return res?.text
            }
            const res = await chain.call(options)
            return res?.text
        } else if (seen.length === 1) {
            // If one inputVariable is not specified, use input (user's question) as value
            const lastValue = seen.pop()
@@ -119,14 +145,26 @@ const runPrediction = async (inputVariables: string[], chain: LLMChain, input: s
                ...promptValues,
                [lastValue]: input
            }
            const res = await chain.call(options)
            return res?.text
            if (isStreaming) {
                const handler = new CustomChainHandler(socketIO, socketIOClientId)
                const res = await chain.call(options, [loggerHandler, handler])
                return res?.text
            } else {
                const res = await chain.call(options, [loggerHandler])
                return res?.text
            }
        } else {
            throw new Error(`Please provide Prompt Values for: ${seen.join(', ')}`)
        }
    } else {
        const res = await chain.run(input)
        return res
        if (isStreaming) {
            const handler = new CustomChainHandler(socketIO, socketIOClientId)
            const res = await chain.run(input, [loggerHandler, handler])
            return res
        } else {
            const res = await chain.run(input, [loggerHandler])
            return res
        }
    }
}
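The comments above describe a round-trip string transformation: newlines and tabs are encoded as `FLOWISE_NEWLINE` / `FLOWISE_TAB` markers in one direction and decoded in the other. `handleEscapeCharacters` itself lives in `src/utils` and also handles object values; this standalone sketch only illustrates the string transformation the comments document, and is an assumption about its internals, not the actual implementation:

```typescript
// Sketch of the escape round-trip: reverse=false encodes control characters
// into markers, reverse=true decodes markers back.
function escapeSpecialChars(value: string, reverse: boolean): string {
    if (reverse) {
        // Markers back to real control characters
        return value.split('FLOWISE_NEWLINE').join('\n').split('FLOWISE_TAB').join('\t')
    }
    return value.split('\n').join('FLOWISE_NEWLINE').split('\t').join('FLOWISE_TAB')
}

const escaped = escapeSpecialChars('hello i am ben\n\n\thow are you?', false)
console.log(escaped) // hello i am benFLOWISE_NEWLINEFLOWISE_NEWLINEFLOWISE_TABhow are you?
console.log(escapeSpecialChars(escaped, true) === 'hello i am ben\n\n\thow are you?') // true
```

The encoding keeps control characters from being mangled while prompt values travel through the flow as JSON; they are restored just before the chain runs.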
@@ -0,0 +1,82 @@
import { BaseLanguageModel } from 'langchain/base_language'
import { ICommonObject, INode, INodeData, INodeParams, PromptRetriever } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import { MultiPromptChain } from 'langchain/chains'
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'

class MultiPromptChain_Chains implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
    baseClasses: string[]
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Multi Prompt Chain'
        this.name = 'multiPromptChain'
        this.version = 1.0
        this.type = 'MultiPromptChain'
        this.icon = 'chain.svg'
        this.category = 'Chains'
        this.description = 'Chain automatically picks an appropriate prompt from multiple prompt templates'
        this.baseClasses = [this.type, ...getBaseClasses(MultiPromptChain)]
        this.inputs = [
            {
                label: 'Language Model',
                name: 'model',
                type: 'BaseLanguageModel'
            },
            {
                label: 'Prompt Retriever',
                name: 'promptRetriever',
                type: 'PromptRetriever',
                list: true
            }
        ]
    }

    async init(nodeData: INodeData): Promise<any> {
        const model = nodeData.inputs?.model as BaseLanguageModel
        const promptRetriever = nodeData.inputs?.promptRetriever as PromptRetriever[]
        const promptNames = []
        const promptDescriptions = []
        const promptTemplates = []

        for (const prompt of promptRetriever) {
            promptNames.push(prompt.name)
            promptDescriptions.push(prompt.description)
            promptTemplates.push(prompt.systemMessage)
        }

        const chain = MultiPromptChain.fromLLMAndPrompts(model, {
            promptNames,
            promptDescriptions,
            promptTemplates,
            llmChainOpts: { verbose: process.env.DEBUG === 'true' ? true : false }
        })

        return chain
    }

    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
        const chain = nodeData.instance as MultiPromptChain
        const obj = { input }

        const loggerHandler = new ConsoleCallbackHandler(options.logger)

        if (options.socketIO && options.socketIOClientId) {
            const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId, 2)
            const res = await chain.call(obj, [loggerHandler, handler])
            return res?.text
        } else {
            const res = await chain.call(obj, [loggerHandler])
            return res?.text
        }
    }
}

module.exports = { nodeClass: MultiPromptChain_Chains }
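The `init` method above unzips the list of prompt retrievers into three parallel arrays, which is the shape `MultiPromptChain.fromLLMAndPrompts` expects. A standalone sketch of that step (the interface and helper name are illustrative):

```typescript
// Sketch of the prompt-config collection: one retriever entry yields one
// element in each of the three parallel arrays.
interface PromptRetrieverLike {
    name: string
    description: string
    systemMessage: string
}

function collectPromptConfig(retrievers: PromptRetrieverLike[]) {
    const promptNames: string[] = []
    const promptDescriptions: string[] = []
    const promptTemplates: string[] = []
    for (const prompt of retrievers) {
        promptNames.push(prompt.name)
        promptDescriptions.push(prompt.description)
        promptTemplates.push(prompt.systemMessage)
    }
    return { promptNames, promptDescriptions, promptTemplates }
}

const cfg = collectPromptConfig([
    { name: 'physics', description: 'Good for physics questions', systemMessage: 'You are a physicist.' },
    { name: 'history', description: 'Good for history questions', systemMessage: 'You are a historian.' }
])
console.log(cfg.promptNames) // [ 'physics', 'history' ]
```

The descriptions are what the router LLM sees when deciding which prompt to dispatch a question to, so they should discriminate clearly between the templates.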
@@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-dna" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
    <path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
    <path d="M14.828 14.828a4 4 0 1 0 -5.656 -5.656a4 4 0 0 0 5.656 5.656z"></path>
    <path d="M9.172 20.485a4 4 0 1 0 -5.657 -5.657"></path>
    <path d="M14.828 3.515a4 4 0 0 0 5.657 5.657"></path>
</svg>
After Width: | Height: | Size: 489 B
@@ -0,0 +1,92 @@
import { BaseLanguageModel } from 'langchain/base_language'
import { ICommonObject, INode, INodeData, INodeParams, VectorStoreRetriever } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import { MultiRetrievalQAChain } from 'langchain/chains'
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'

class MultiRetrievalQAChain_Chains implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
    baseClasses: string[]
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Multi Retrieval QA Chain'
        this.name = 'multiRetrievalQAChain'
        this.version = 1.0
        this.type = 'MultiRetrievalQAChain'
        this.icon = 'chain.svg'
        this.category = 'Chains'
        this.description = 'QA Chain that automatically picks an appropriate vector store from multiple retrievers'
        this.baseClasses = [this.type, ...getBaseClasses(MultiRetrievalQAChain)]
        this.inputs = [
            {
                label: 'Language Model',
                name: 'model',
                type: 'BaseLanguageModel'
            },
            {
                label: 'Vector Store Retriever',
                name: 'vectorStoreRetriever',
                type: 'VectorStoreRetriever',
                list: true
            },
            {
                label: 'Return Source Documents',
                name: 'returnSourceDocuments',
                type: 'boolean',
                optional: true
            }
        ]
    }

    async init(nodeData: INodeData): Promise<any> {
        const model = nodeData.inputs?.model as BaseLanguageModel
        const vectorStoreRetriever = nodeData.inputs?.vectorStoreRetriever as VectorStoreRetriever[]
        const returnSourceDocuments = nodeData.inputs?.returnSourceDocuments as boolean

        const retrieverNames = []
        const retrieverDescriptions = []
        const retrievers = []

        for (const vs of vectorStoreRetriever) {
            retrieverNames.push(vs.name)
            retrieverDescriptions.push(vs.description)
            retrievers.push(vs.vectorStore.asRetriever((vs.vectorStore as any).k ?? 4))
        }

        const chain = MultiRetrievalQAChain.fromLLMAndRetrievers(model, {
            retrieverNames,
            retrieverDescriptions,
            retrievers,
            retrievalQAChainOpts: { verbose: process.env.DEBUG === 'true' ? true : false, returnSourceDocuments }
        })
        return chain
    }

    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | ICommonObject> {
        const chain = nodeData.instance as MultiRetrievalQAChain
        const returnSourceDocuments = nodeData.inputs?.returnSourceDocuments as boolean

        const obj = { input }
        const loggerHandler = new ConsoleCallbackHandler(options.logger)

        if (options.socketIO && options.socketIOClientId) {
            const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId, 2, returnSourceDocuments)
            const res = await chain.call(obj, [loggerHandler, handler])
            if (res.text && res.sourceDocuments) return res
            return res?.text
        } else {
            const res = await chain.call(obj, [loggerHandler])
            if (res.text && res.sourceDocuments) return res
            return res?.text
        }
    }
}

module.exports = { nodeClass: MultiRetrievalQAChain_Chains }
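The `run` method above returns the whole result object only when source documents were both requested and actually present; otherwise the caller gets just the answer text. A standalone sketch of that shaping rule (types and function name are illustrative):

```typescript
// Sketch of the result shaping used by the QA chain run methods:
// return the full object when source documents ride along, else just the text.
interface ChainResult {
    text?: string
    sourceDocuments?: unknown[]
}

function shapeResult(res: ChainResult): string | ChainResult {
    if (res.text && res.sourceDocuments) return res
    return res?.text ?? ''
}

console.log(shapeResult({ text: 'answer' })) // answer
console.log(typeof shapeResult({ text: 'answer', sourceDocuments: [{}] })) // object
```

This is why the node's `run` signature widens to `Promise<string | ICommonObject>`: the return shape depends on the `returnSourceDocuments` toggle at runtime.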
@@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-dna" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
    <path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
    <path d="M14.828 14.828a4 4 0 1 0 -5.656 -5.656a4 4 0 0 0 5.656 5.656z"></path>
    <path d="M9.172 20.485a4 4 0 1 0 -5.657 -5.657"></path>
    <path d="M14.828 3.515a4 4 0 0 0 5.657 5.657"></path>
</svg>
After Width: | Height: | Size: 489 B
@@ -1,12 +1,14 @@
import { INode, INodeData, INodeParams } from '../../../src/Interface'
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { RetrievalQAChain } from 'langchain/chains'
import { BaseRetriever } from 'langchain/schema'
import { BaseRetriever } from 'langchain/schema/retriever'
import { getBaseClasses } from '../../../src/utils'
import { BaseLanguageModel } from 'langchain/base_language'
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'

class RetrievalQAChain_Chains implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
@@ -17,6 +19,7 @@ class RetrievalQAChain_Chains implements INode {
    constructor() {
        this.label = 'Retrieval QA Chain'
        this.name = 'retrievalQAChain'
        this.version = 1.0
        this.type = 'RetrievalQAChain'
        this.icon = 'chain.svg'
        this.category = 'Chains'
@@ -44,13 +47,21 @@ class RetrievalQAChain_Chains implements INode {
        return chain
    }

    async run(nodeData: INodeData, input: string): Promise<string> {
    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
        const chain = nodeData.instance as RetrievalQAChain
        const obj = {
            query: input
        }
        const res = await chain.call(obj)
        return res?.text
        const loggerHandler = new ConsoleCallbackHandler(options.logger)

        if (options.socketIO && options.socketIOClientId) {
            const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
            const res = await chain.call(obj, [loggerHandler, handler])
            return res?.text
        } else {
            const res = await chain.call(obj, [loggerHandler])
            return res?.text
        }
    }
}
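Every `run` method in this change follows the same callback pattern: always attach the console logger, and add the socket streaming handler only when a socket.io client id is present. A standalone sketch of that selection with placeholder handler shapes (the real `ConsoleCallbackHandler` and `CustomChainHandler` live in `src/handler`; these stand-ins are assumptions for illustration):

```typescript
// Sketch of the callback selection repeated across the run methods.
type Callback = { name: string }

function selectCallbacks(socketIO?: object, socketIOClientId?: string): Callback[] {
    const loggerHandler: Callback = { name: 'consoleLogger' }
    // Streaming handler is only attached when a connected socket client exists
    if (socketIO && socketIOClientId) {
        const streamingHandler: Callback = { name: 'customChainHandler' }
        return [loggerHandler, streamingHandler]
    }
    return [loggerHandler]
}

console.log(selectCallbacks().length) // 1
console.log(selectCallbacks({}, 'client-1').length) // 2
```

Centralizing this choice per call (rather than per chain) is what lets the same chain instance serve both streaming UI requests and plain API requests.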
@@ -1,13 +1,18 @@
import { INode, INodeData, INodeParams } from '../../../src/Interface'
import { SqlDatabaseChain, SqlDatabaseChainInput } from 'langchain/chains'
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { SqlDatabaseChain, SqlDatabaseChainInput } from 'langchain/chains/sql_db'
import { getBaseClasses } from '../../../src/utils'
import { DataSource } from 'typeorm'
import { SqlDatabase } from 'langchain/sql_db'
import { BaseLanguageModel } from 'langchain/base_language'
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'
import { DataSourceOptions } from 'typeorm/data-source'

type DatabaseType = 'sqlite' | 'postgres' | 'mssql' | 'mysql'

class SqlDatabaseChain_Chains implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
@@ -18,6 +23,7 @@ class SqlDatabaseChain_Chains implements INode {
    constructor() {
        this.label = 'Sql Database Chain'
        this.name = 'sqlDatabaseChain'
        this.version = 1.0
        this.type = 'SqlDatabaseChain'
        this.icon = 'sqlchain.svg'
        this.category = 'Chains'
@@ -35,46 +41,73 @@ class SqlDatabaseChain_Chains implements INode {
                type: 'options',
                options: [
                    {
                        label: 'SQlite',
                        label: 'SQLite',
                        name: 'sqlite'
                    },
                    {
                        label: 'PostgreSQL',
                        name: 'postgres'
                    },
                    {
                        label: 'MSSQL',
                        name: 'mssql'
                    },
                    {
                        label: 'MySQL',
                        name: 'mysql'
                    }
                ],
                default: 'sqlite'
            },
            {
                label: 'Database File Path',
                name: 'dbFilePath',
                label: 'Connection string or file path (sqlite only)',
                name: 'url',
                type: 'string',
                placeholder: 'C:/Users/chinook.db'
                placeholder: '127.0.0.1:5432/chinook'
            }
        ]
    }

    async init(nodeData: INodeData): Promise<any> {
        const databaseType = nodeData.inputs?.database as 'sqlite'
        const databaseType = nodeData.inputs?.database as DatabaseType
        const model = nodeData.inputs?.model as BaseLanguageModel
        const dbFilePath = nodeData.inputs?.dbFilePath
        const url = nodeData.inputs?.url

        const chain = await getSQLDBChain(databaseType, dbFilePath, model)
        const chain = await getSQLDBChain(databaseType, url, model)
        return chain
    }

    async run(nodeData: INodeData, input: string): Promise<string> {
        const databaseType = nodeData.inputs?.database as 'sqlite'
    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
        const databaseType = nodeData.inputs?.database as DatabaseType
        const model = nodeData.inputs?.model as BaseLanguageModel
        const dbFilePath = nodeData.inputs?.dbFilePath
        const url = nodeData.inputs?.url

        const chain = await getSQLDBChain(databaseType, dbFilePath, model)
        const res = await chain.run(input)
        return res
        const chain = await getSQLDBChain(databaseType, url, model)
        const loggerHandler = new ConsoleCallbackHandler(options.logger)

        if (options.socketIO && options.socketIOClientId) {
            const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId, 2)
            const res = await chain.run(input, [loggerHandler, handler])
            return res
        } else {
            const res = await chain.run(input, [loggerHandler])
            return res
        }
    }
}

const getSQLDBChain = async (databaseType: 'sqlite', dbFilePath: string, llm: BaseLanguageModel) => {
    const datasource = new DataSource({
        type: databaseType,
        database: dbFilePath
    })
const getSQLDBChain = async (databaseType: DatabaseType, url: string, llm: BaseLanguageModel) => {
    const datasource = new DataSource(
        databaseType === 'sqlite'
            ? {
                  type: databaseType,
                  database: url
              }
            : ({
                  type: databaseType,
                  url: url
              } as DataSourceOptions)
    )

    const db = await SqlDatabase.fromDataSourceParams({
        appDataSource: datasource
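The `getSQLDBChain` change above builds TypeORM `DataSource` options two ways: SQLite treats the value as a `database` file path, while every other engine treats it as a connection `url`. A standalone sketch of just that branch (the helper name is illustrative, and the real code casts the second arm to TypeORM's `DataSourceOptions`):

```typescript
// Sketch of the connection-options branch: sqlite wants a file path under
// `database`, server databases want a connection string under `url`.
type DatabaseType = 'sqlite' | 'postgres' | 'mssql' | 'mysql'

function buildDataSourceOptions(databaseType: DatabaseType, url: string) {
    return databaseType === 'sqlite'
        ? { type: databaseType, database: url }
        : { type: databaseType, url }
}

console.log(buildDataSourceOptions('sqlite', '/tmp/chinook.db'))
console.log(buildDataSourceOptions('postgres', 'postgres://127.0.0.1:5432/chinook'))
```

Reusing one text input for both meanings keeps the node UI simple, at the cost of the slightly awkward "connection string or file path" label.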
@@ -1,12 +1,14 @@
import { INode, INodeData, INodeParams } from '../../../src/Interface'
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import { VectorDBQAChain } from 'langchain/chains'
import { BaseLanguageModel } from 'langchain/base_language'
import { VectorStore } from 'langchain/vectorstores'
import { ConsoleCallbackHandler, CustomChainHandler } from '../../../src/handler'

class VectorDBQAChain_Chains implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
@@ -17,6 +19,7 @@ class VectorDBQAChain_Chains implements INode {
    constructor() {
        this.label = 'VectorDB QA Chain'
        this.name = 'vectorDBQAChain'
        this.version = 1.0
        this.type = 'VectorDBQAChain'
        this.icon = 'chain.svg'
        this.category = 'Chains'
@@ -40,17 +43,29 @@ class VectorDBQAChain_Chains implements INode {
        const model = nodeData.inputs?.model as BaseLanguageModel
        const vectorStore = nodeData.inputs?.vectorStore as VectorStore

        const chain = VectorDBQAChain.fromLLM(model, vectorStore, { verbose: process.env.DEBUG === 'true' ? true : false })
        const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
            k: (vectorStore as any)?.k ?? 4,
            verbose: process.env.DEBUG === 'true' ? true : false
        })
        return chain
    }

    async run(nodeData: INodeData, input: string): Promise<string> {
    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string> {
        const chain = nodeData.instance as VectorDBQAChain
        const obj = {
            query: input
        }
        const res = await chain.call(obj)
        return res?.text

        const loggerHandler = new ConsoleCallbackHandler(options.logger)

        if (options.socketIO && options.socketIOClientId) {
            const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
            const res = await chain.call(obj, [loggerHandler, handler])
            return res?.text
        } else {
            const res = await chain.call(obj, [loggerHandler])
            return res?.text
        }
    }
}
@@ -1,5 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-brand-azure" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
    <path stroke="none" d="M0 0h24v24H0z" fill="none"></path>
    <path d="M6 7.5l-4 9.5h4l6 -15z"></path>
    <path d="M22 20l-7 -15l-3 7l4 5l-8 3z"></path>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 48 48" width="96px" height="96px"><path fill="#035bda" d="M46 40L29.317 10.852 22.808 23.96 34.267 37.24 13 39.655zM13.092 18.182L2 36.896 11.442 35.947 28.033 5.678z"/></svg>
Before Width: | Height: | Size: 392 B After Width: | Height: | Size: 229 B
|
|
@ -1,32 +1,36 @@
|
|||
import { OpenAIBaseInput } from 'langchain/dist/types/openai-types'
|
||||
import { INode, INodeData, INodeParams } from '../../../src/Interface'
|
||||
import { getBaseClasses } from '../../../src/utils'
|
||||
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
|
||||
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
|
||||
import { AzureOpenAIInput, ChatOpenAI } from 'langchain/chat_models/openai'
|
||||
|
||||
class AzureChatOpenAI_ChatModels implements INode {
|
||||
label: string
|
||||
name: string
|
||||
version: number
|
||||
type: string
|
||||
icon: string
|
||||
category: string
|
||||
description: string
|
||||
baseClasses: string[]
|
||||
credential: INodeParams
|
||||
inputs: INodeParams[]
|
||||
|
||||
constructor() {
|
||||
this.label = 'Azure ChatOpenAI'
|
||||
this.name = 'azureChatOpenAI'
|
||||
this.version = 1.0
|
||||
this.type = 'AzureChatOpenAI'
|
||||
this.icon = 'Azure.svg'
|
||||
this.category = 'Chat Models'
|
||||
this.description = 'Wrapper around Azure OpenAI large language models that use the Chat endpoint'
|
||||
this.baseClasses = [this.type, ...getBaseClasses(ChatOpenAI)]
|
||||
this.credential = {
|
||||
label: 'Connect Credential',
|
||||
name: 'credential',
|
||||
type: 'credential',
|
||||
credentialNames: ['azureOpenAIApi']
|
||||
}
|
||||
this.inputs = [
|
||||
{
|
||||
label: 'Azure OpenAI Api Key',
|
||||
name: 'azureOpenAIApiKey',
|
||||
type: 'password'
|
||||
},
|
||||
{
|
||||
label: 'Model Name',
|
||||
name: 'modelName',
|
||||
|
|
@ -43,6 +47,10 @@ class AzureChatOpenAI_ChatModels implements INode {
|
|||
{
|
||||
label: 'gpt-35-turbo',
|
||||
name: 'gpt-35-turbo'
|
||||
},
|
||||
{
|
||||
label: 'gpt-35-turbo-16k',
|
||||
name: 'gpt-35-turbo-16k'
|
||||
}
|
||||
],
|
||||
default: 'gpt-35-turbo',
|
||||
|
|
@ -52,37 +60,15 @@ class AzureChatOpenAI_ChatModels implements INode {
|
|||
label: 'Temperature',
|
||||
name: 'temperature',
|
||||
type: 'number',
|
||||
step: 0.1,
|
||||
default: 0.9,
|
||||
optional: true
|
||||
},
|
||||
{
|
||||
label: 'Azure OpenAI Api Instance Name',
|
||||
name: 'azureOpenAIApiInstanceName',
|
||||
type: 'string',
|
||||
placeholder: 'YOUR-INSTANCE-NAME'
|
||||
},
|
||||
{
|
||||
label: 'Azure OpenAI Api Deployment Name',
|
||||
name: 'azureOpenAIApiDeploymentName',
|
||||
type: 'string',
|
||||
                placeholder: 'YOUR-DEPLOYMENT-NAME'
            },
            {
                label: 'Azure OpenAI Api Version',
                name: 'azureOpenAIApiVersion',
                type: 'options',
                options: [
                    {
                        label: '2023-03-15-preview',
                        name: '2023-03-15-preview'
                    }
                ],
                default: '2023-03-15-preview'
            },
            {
                label: 'Max Tokens',
                name: 'maxTokens',
                type: 'number',
                step: 1,
                optional: true,
                additionalParams: true
            },
@@ -90,6 +76,7 @@ class AzureChatOpenAI_ChatModels implements INode {
                label: 'Frequency Penalty',
                name: 'frequencyPenalty',
                type: 'number',
                step: 0.1,
                optional: true,
                additionalParams: true
            },
@@ -97,6 +84,7 @@ class AzureChatOpenAI_ChatModels implements INode {
                label: 'Presence Penalty',
                name: 'presencePenalty',
                type: 'number',
                step: 0.1,
                optional: true,
                additionalParams: true
            },
@@ -104,36 +92,41 @@ class AzureChatOpenAI_ChatModels implements INode {
                label: 'Timeout',
                name: 'timeout',
                type: 'number',
                step: 1,
                optional: true,
                additionalParams: true
            }
        ]
    }

    async init(nodeData: INodeData): Promise<any> {
        const azureOpenAIApiKey = nodeData.inputs?.azureOpenAIApiKey as string
    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
        const modelName = nodeData.inputs?.modelName as string
        const temperature = nodeData.inputs?.temperature as string
        const azureOpenAIApiInstanceName = nodeData.inputs?.azureOpenAIApiInstanceName as string
        const azureOpenAIApiDeploymentName = nodeData.inputs?.azureOpenAIApiDeploymentName as string
        const azureOpenAIApiVersion = nodeData.inputs?.azureOpenAIApiVersion as string
        const maxTokens = nodeData.inputs?.maxTokens as string
        const frequencyPenalty = nodeData.inputs?.frequencyPenalty as string
        const presencePenalty = nodeData.inputs?.presencePenalty as string
        const timeout = nodeData.inputs?.timeout as string
        const streaming = nodeData.inputs?.streaming as boolean

        const credentialData = await getCredentialData(nodeData.credential ?? '', options)
        const azureOpenAIApiKey = getCredentialParam('azureOpenAIApiKey', credentialData, nodeData)
        const azureOpenAIApiInstanceName = getCredentialParam('azureOpenAIApiInstanceName', credentialData, nodeData)
        const azureOpenAIApiDeploymentName = getCredentialParam('azureOpenAIApiDeploymentName', credentialData, nodeData)
        const azureOpenAIApiVersion = getCredentialParam('azureOpenAIApiVersion', credentialData, nodeData)

        const obj: Partial<AzureOpenAIInput> & Partial<OpenAIBaseInput> = {
            temperature: parseInt(temperature, 10),
            temperature: parseFloat(temperature),
            modelName,
            azureOpenAIApiKey,
            azureOpenAIApiInstanceName,
            azureOpenAIApiDeploymentName,
            azureOpenAIApiVersion
            azureOpenAIApiVersion,
            streaming: streaming ?? true
        }

        if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
        if (frequencyPenalty) obj.frequencyPenalty = parseInt(frequencyPenalty, 10)
        if (presencePenalty) obj.presencePenalty = parseInt(presencePenalty, 10)
        if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
        if (presencePenalty) obj.presencePenalty = parseFloat(presencePenalty)
        if (timeout) obj.timeout = parseInt(timeout, 10)

        const model = new ChatOpenAI(obj)
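A recurring fix in these hunks swaps `parseInt` for `parseFloat` when reading fractional node inputs. Flowise node inputs arrive as strings, and an integer parse silently truncates values such as a 0.9 temperature down to 0. A minimal standalone illustration (plain TypeScript, no Flowise types):

```typescript
// Node inputs arrive as strings; parseInt truncates fractional values.
const temperature = '0.9'

const truncated = parseInt(temperature, 10) // integer parse drops the fraction
const parsed = parseFloat(temperature)      // keeps the fractional part

console.log(truncated) // 0
console.log(parsed)    // 0.9
```

The same reasoning applies to topP, frequencyPenalty, and presencePenalty, which are all fractional; only integer-valued inputs such as maxTokens and timeout legitimately stay on `parseInt`.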
@@ -1,36 +1,50 @@
import { INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { AnthropicInput, ChatAnthropic } from 'langchain/chat_models/anthropic'

class ChatAnthropic_ChatModels implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
    description: string
    baseClasses: string[]
    credential: INodeParams
    inputs: INodeParams[]

    constructor() {
        this.label = 'ChatAnthropic'
        this.name = 'chatAnthropic'
        this.version = 1.0
        this.type = 'ChatAnthropic'
        this.icon = 'chatAnthropic.png'
        this.category = 'Chat Models'
        this.description = 'Wrapper around ChatAnthropic large language models that use the Chat endpoint'
        this.baseClasses = [this.type, ...getBaseClasses(ChatAnthropic)]
        this.credential = {
            label: 'Connect Credential',
            name: 'credential',
            type: 'credential',
            credentialNames: ['anthropicApi']
        }
        this.inputs = [
            {
                label: 'ChatAnthropic Api Key',
                name: 'anthropicApiKey',
                type: 'password'
            },
            {
                label: 'Model Name',
                name: 'modelName',
                type: 'options',
                options: [
                    {
                        label: 'claude-2',
                        name: 'claude-2',
                        description: 'Claude 2 latest major version, automatically get updates to the model as they are released'
                    },
                    {
                        label: 'claude-instant-1',
                        name: 'claude-instant-1',
                        description: 'Claude Instant latest major version, automatically get updates to the model as they are released'
                    },
                    {
                        label: 'claude-v1',
                        name: 'claude-v1'
@@ -76,13 +90,14 @@ class ChatAnthropic_ChatModels implements INode {
                        name: 'claude-instant-v1.1-100k'
                    }
                ],
                default: 'claude-v1',
                default: 'claude-2',
                optional: true
            },
            {
                label: 'Temperature',
                name: 'temperature',
                type: 'number',
                step: 0.1,
                default: 0.9,
                optional: true
            },
@@ -90,6 +105,7 @@ class ChatAnthropic_ChatModels implements INode {
                label: 'Max Tokens',
                name: 'maxTokensToSample',
                type: 'number',
                step: 1,
                optional: true,
                additionalParams: true
            },
@@ -97,6 +113,7 @@ class ChatAnthropic_ChatModels implements INode {
                label: 'Top P',
                name: 'topP',
                type: 'number',
                step: 0.1,
                optional: true,
                additionalParams: true
            },
@@ -104,29 +121,34 @@ class ChatAnthropic_ChatModels implements INode {
                label: 'Top K',
                name: 'topK',
                type: 'number',
                step: 0.1,
                optional: true,
                additionalParams: true
            }
        ]
    }

    async init(nodeData: INodeData): Promise<any> {
    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
        const temperature = nodeData.inputs?.temperature as string
        const modelName = nodeData.inputs?.modelName as string
        const anthropicApiKey = nodeData.inputs?.anthropicApiKey as string
        const maxTokensToSample = nodeData.inputs?.maxTokensToSample as string
        const topP = nodeData.inputs?.topP as string
        const topK = nodeData.inputs?.topK as string
        const streaming = nodeData.inputs?.streaming as boolean

        const credentialData = await getCredentialData(nodeData.credential ?? '', options)
        const anthropicApiKey = getCredentialParam('anthropicApiKey', credentialData, nodeData)

        const obj: Partial<AnthropicInput> & { anthropicApiKey?: string } = {
            temperature: parseInt(temperature, 10),
            temperature: parseFloat(temperature),
            modelName,
            anthropicApiKey
            anthropicApiKey,
            streaming: streaming ?? true
        }

        if (maxTokensToSample) obj.maxTokensToSample = parseInt(maxTokensToSample, 10)
        if (topP) obj.topP = parseInt(topP, 10)
        if (topK) obj.topK = parseInt(topK, 10)
        if (topP) obj.topP = parseFloat(topP)
        if (topK) obj.topK = parseFloat(topK)

        const model = new ChatAnthropic(obj)
        return model
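The `streaming: streaming ?? true` lines introduced above rely on nullish coalescing: the default applies only when the input is `null` or `undefined`, so an explicit `false` chosen in the UI is preserved (a plain `||` would silently override it). A small sketch:

```typescript
// `??` falls back only on null/undefined, unlike `||`, which also
// falls back on any falsy value such as an explicit `false`.
function resolveStreaming(streaming?: boolean): boolean {
    return streaming ?? true
}

console.log(resolveStreaming(undefined)) // true  (default applied)
console.log(resolveStreaming(false))     // false (explicit choice kept)
console.log(resolveStreaming(true))      // true
```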
@@ -0,0 +1,126 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { HFInput, HuggingFaceInference } from './core'

class ChatHuggingFace_ChatModels implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
    description: string
    baseClasses: string[]
    credential: INodeParams
    inputs: INodeParams[]

    constructor() {
        this.label = 'ChatHuggingFace'
        this.name = 'chatHuggingFace'
        this.version = 1.0
        this.type = 'ChatHuggingFace'
        this.icon = 'huggingface.png'
        this.category = 'Chat Models'
        this.description = 'Wrapper around HuggingFace large language models'
        this.baseClasses = [this.type, 'BaseChatModel', ...getBaseClasses(HuggingFaceInference)]
        this.credential = {
            label: 'Connect Credential',
            name: 'credential',
            type: 'credential',
            credentialNames: ['huggingFaceApi']
        }
        this.inputs = [
            {
                label: 'Model',
                name: 'model',
                type: 'string',
                description: 'If using own inference endpoint, leave this blank',
                placeholder: 'gpt2',
                optional: true
            },
            {
                label: 'Endpoint',
                name: 'endpoint',
                type: 'string',
                placeholder: 'https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2',
                description: 'Using your own inference endpoint',
                optional: true
            },
            {
                label: 'Temperature',
                name: 'temperature',
                type: 'number',
                step: 0.1,
                description: 'Temperature parameter may not apply to certain model. Please check available model parameters',
                optional: true,
                additionalParams: true
            },
            {
                label: 'Max Tokens',
                name: 'maxTokens',
                type: 'number',
                step: 1,
                description: 'Max Tokens parameter may not apply to certain model. Please check available model parameters',
                optional: true,
                additionalParams: true
            },
            {
                label: 'Top Probability',
                name: 'topP',
                type: 'number',
                step: 0.1,
                description: 'Top Probability parameter may not apply to certain model. Please check available model parameters',
                optional: true,
                additionalParams: true
            },
            {
                label: 'Top K',
                name: 'hfTopK',
                type: 'number',
                step: 0.1,
                description: 'Top K parameter may not apply to certain model. Please check available model parameters',
                optional: true,
                additionalParams: true
            },
            {
                label: 'Frequency Penalty',
                name: 'frequencyPenalty',
                type: 'number',
                step: 0.1,
                description: 'Frequency Penalty parameter may not apply to certain model. Please check available model parameters',
                optional: true,
                additionalParams: true
            }
        ]
    }

    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
        const model = nodeData.inputs?.model as string
        const temperature = nodeData.inputs?.temperature as string
        const maxTokens = nodeData.inputs?.maxTokens as string
        const topP = nodeData.inputs?.topP as string
        const hfTopK = nodeData.inputs?.hfTopK as string
        const frequencyPenalty = nodeData.inputs?.frequencyPenalty as string
        const endpoint = nodeData.inputs?.endpoint as string

        const credentialData = await getCredentialData(nodeData.credential ?? '', options)
        const huggingFaceApiKey = getCredentialParam('huggingFaceApiKey', credentialData, nodeData)

        const obj: Partial<HFInput> = {
            model,
            apiKey: huggingFaceApiKey
        }

        if (temperature) obj.temperature = parseFloat(temperature)
        if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
        if (topP) obj.topP = parseFloat(topP)
        if (hfTopK) obj.topK = parseFloat(hfTopK)
        if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
        if (endpoint) obj.endpoint = endpoint

        const huggingFace = new HuggingFaceInference(obj)
        return huggingFace
    }
}

module.exports = { nodeClass: ChatHuggingFace_ChatModels }
@@ -0,0 +1,113 @@
import { getEnvironmentVariable } from '../../../src/utils'
import { LLM, BaseLLMParams } from 'langchain/llms/base'

export interface HFInput {
    /** Model to use */
    model: string

    /** Sampling temperature to use */
    temperature?: number

    /**
     * Maximum number of tokens to generate in the completion.
     */
    maxTokens?: number

    /** Total probability mass of tokens to consider at each step */
    topP?: number

    /** Integer to define the top tokens considered within the sample operation to create new text. */
    topK?: number

    /** Penalizes repeated tokens according to frequency */
    frequencyPenalty?: number

    /** API key to use. */
    apiKey?: string

    /** Private endpoint to use. */
    endpoint?: string
}

export class HuggingFaceInference extends LLM implements HFInput {
    get lc_secrets(): { [key: string]: string } | undefined {
        return {
            apiKey: 'HUGGINGFACEHUB_API_KEY'
        }
    }

    model = 'gpt2'

    temperature: number | undefined = undefined

    maxTokens: number | undefined = undefined

    topP: number | undefined = undefined

    topK: number | undefined = undefined

    frequencyPenalty: number | undefined = undefined

    apiKey: string | undefined = undefined

    endpoint: string | undefined = undefined

    constructor(fields?: Partial<HFInput> & BaseLLMParams) {
        super(fields ?? {})

        this.model = fields?.model ?? this.model
        this.temperature = fields?.temperature ?? this.temperature
        this.maxTokens = fields?.maxTokens ?? this.maxTokens
        this.topP = fields?.topP ?? this.topP
        this.topK = fields?.topK ?? this.topK
        this.frequencyPenalty = fields?.frequencyPenalty ?? this.frequencyPenalty
        this.endpoint = fields?.endpoint ?? ''
        this.apiKey = fields?.apiKey ?? getEnvironmentVariable('HUGGINGFACEHUB_API_KEY')
        if (!this.apiKey) {
            throw new Error(
                'Please set an API key for HuggingFace Hub in the environment variable HUGGINGFACEHUB_API_KEY or in the apiKey field of the HuggingFaceInference constructor.'
            )
        }
    }

    _llmType() {
        return 'hf'
    }

    /** @ignore */
    async _call(prompt: string, options: this['ParsedCallOptions']): Promise<string> {
        const { HfInference } = await HuggingFaceInference.imports()
        const hf = new HfInference(this.apiKey)
        const obj: any = {
            parameters: {
                // make it behave similar to openai, returning only the generated text
                return_full_text: false,
                temperature: this.temperature,
                max_new_tokens: this.maxTokens,
                top_p: this.topP,
                top_k: this.topK,
                repetition_penalty: this.frequencyPenalty
            },
            inputs: prompt
        }
        if (this.endpoint) {
            hf.endpoint(this.endpoint)
        } else {
            obj.model = this.model
        }
        const res = await this.caller.callWithOptions({ signal: options.signal }, hf.textGeneration.bind(hf), obj)
        return res.generated_text
    }

    /** @ignore */
    static async imports(): Promise<{
        HfInference: typeof import('@huggingface/inference').HfInference
    }> {
        try {
            const { HfInference } = await import('@huggingface/inference')
            return { HfInference }
        } catch (e) {
            throw new Error('Please install huggingface as a dependency with, e.g. `yarn add @huggingface/inference`')
        }
    }
}
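`_call` in the core wrapper above maps its camelCase settings onto the snake_case parameter names the Hugging Face text-generation endpoint expects, and attaches `model` only when no private endpoint is configured, since an endpoint URL already implies the model. A standalone sketch of that request-shaping step with simplified types and no network call (`buildPayload` and `Settings` are illustrative names, not Flowise or langchain APIs):

```typescript
// Shape a text-generation payload from wrapper settings.
// Mirrors the structure built in `_call` above; nothing is sent.
interface Settings {
    model: string
    endpoint?: string
    temperature?: number
    maxTokens?: number
}

function buildPayload(prompt: string, s: Settings): Record<string, unknown> {
    const payload: Record<string, unknown> = {
        parameters: {
            return_full_text: false, // return only the generated text, OpenAI-style
            temperature: s.temperature,
            max_new_tokens: s.maxTokens
        },
        inputs: prompt
    }
    // With a private endpoint the model is implied by the URL,
    // so `model` is only set for the public serverless API.
    if (!s.endpoint) payload.model = s.model
    return payload
}

console.log(buildPayload('Hello', { model: 'gpt2', temperature: 0.7 }))
```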

After Width: | Height: | Size: 118 KiB
@@ -6,6 +6,7 @@ import { OpenAIChatInput } from 'langchain/chat_models/openai'
class ChatLocalAI_ChatModels implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
@@ -16,6 +17,7 @@ class ChatLocalAI_ChatModels implements INode {
    constructor() {
        this.label = 'ChatLocalAI'
        this.name = 'chatLocalAI'
        this.version = 1.0
        this.type = 'ChatLocalAI'
        this.icon = 'localai.png'
        this.category = 'Chat Models'
@@ -38,6 +40,7 @@ class ChatLocalAI_ChatModels implements INode {
                label: 'Temperature',
                name: 'temperature',
                type: 'number',
                step: 0.1,
                default: 0.9,
                optional: true
            },
@@ -45,6 +48,7 @@ class ChatLocalAI_ChatModels implements INode {
                label: 'Max Tokens',
                name: 'maxTokens',
                type: 'number',
                step: 1,
                optional: true,
                additionalParams: true
            },
@@ -52,6 +56,7 @@ class ChatLocalAI_ChatModels implements INode {
                label: 'Top Probability',
                name: 'topP',
                type: 'number',
                step: 0.1,
                optional: true,
                additionalParams: true
            },
@@ -59,6 +64,7 @@ class ChatLocalAI_ChatModels implements INode {
                label: 'Timeout',
                name: 'timeout',
                type: 'number',
                step: 1,
                optional: true,
                additionalParams: true
            }
@@ -74,13 +80,13 @@ class ChatLocalAI_ChatModels implements INode {
        const basePath = nodeData.inputs?.basePath as string

        const obj: Partial<OpenAIChatInput> & { openAIApiKey?: string } = {
            temperature: parseInt(temperature, 10),
            temperature: parseFloat(temperature),
            modelName,
            openAIApiKey: 'sk-'
        }

        if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
        if (topP) obj.topP = parseInt(topP, 10)
        if (topP) obj.topP = parseFloat(topP)
        if (timeout) obj.timeout = parseInt(timeout, 10)

        const model = new OpenAIChat(obj, { basePath })
@@ -1,31 +1,35 @@
import { INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { ChatOpenAI, OpenAIChatInput } from 'langchain/chat_models/openai'

class ChatOpenAI_ChatModels implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
    description: string
    baseClasses: string[]
    credential: INodeParams
    inputs: INodeParams[]

    constructor() {
        this.label = 'ChatOpenAI'
        this.name = 'chatOpenAI'
        this.version = 1.0
        this.type = 'ChatOpenAI'
        this.icon = 'openai.png'
        this.category = 'Chat Models'
        this.description = 'Wrapper around OpenAI large language models that use the Chat endpoint'
        this.baseClasses = [this.type, ...getBaseClasses(ChatOpenAI)]
        this.credential = {
            label: 'Connect Credential',
            name: 'credential',
            type: 'credential',
            credentialNames: ['openAIApi']
        }
        this.inputs = [
            {
                label: 'OpenAI Api Key',
                name: 'openAIApiKey',
                type: 'password'
            },
            {
                label: 'Model Name',
                name: 'modelName',
@@ -36,20 +40,32 @@ class ChatOpenAI_ChatModels implements INode {
                        name: 'gpt-4'
                    },
                    {
                        label: 'gpt-4-0314',
                        name: 'gpt-4-0314'
                        label: 'gpt-4-0613',
                        name: 'gpt-4-0613'
                    },
                    {
                        label: 'gpt-4-32k-0314',
                        name: 'gpt-4-32k-0314'
                        label: 'gpt-4-32k',
                        name: 'gpt-4-32k'
                    },
                    {
                        label: 'gpt-4-32k-0613',
                        name: 'gpt-4-32k-0613'
                    },
                    {
                        label: 'gpt-3.5-turbo',
                        name: 'gpt-3.5-turbo'
                    },
                    {
                        label: 'gpt-3.5-turbo-0301',
                        name: 'gpt-3.5-turbo-0301'
                        label: 'gpt-3.5-turbo-0613',
                        name: 'gpt-3.5-turbo-0613'
                    },
                    {
                        label: 'gpt-3.5-turbo-16k',
                        name: 'gpt-3.5-turbo-16k'
                    },
                    {
                        label: 'gpt-3.5-turbo-16k-0613',
                        name: 'gpt-3.5-turbo-16k-0613'
                    }
                ],
                default: 'gpt-3.5-turbo',
@@ -59,6 +75,7 @@ class ChatOpenAI_ChatModels implements INode {
                label: 'Temperature',
                name: 'temperature',
                type: 'number',
                step: 0.1,
                default: 0.9,
                optional: true
            },
@@ -66,6 +83,7 @@ class ChatOpenAI_ChatModels implements INode {
                label: 'Max Tokens',
                name: 'maxTokens',
                type: 'number',
                step: 1,
                optional: true,
                additionalParams: true
            },
@@ -73,6 +91,7 @@ class ChatOpenAI_ChatModels implements INode {
                label: 'Top Probability',
                name: 'topP',
                type: 'number',
                step: 0.1,
                optional: true,
                additionalParams: true
            },
@@ -80,6 +99,7 @@ class ChatOpenAI_ChatModels implements INode {
                label: 'Frequency Penalty',
                name: 'frequencyPenalty',
                type: 'number',
                step: 0.1,
                optional: true,
                additionalParams: true
            },
@@ -87,6 +107,7 @@ class ChatOpenAI_ChatModels implements INode {
                label: 'Presence Penalty',
                name: 'presencePenalty',
                type: 'number',
                step: 0.1,
                optional: true,
                additionalParams: true
            },
@@ -94,35 +115,68 @@ class ChatOpenAI_ChatModels implements INode {
                label: 'Timeout',
                name: 'timeout',
                type: 'number',
                step: 1,
                optional: true,
                additionalParams: true
            },
            {
                label: 'BasePath',
                name: 'basepath',
                type: 'string',
                optional: true,
                additionalParams: true
            },
            {
                label: 'BaseOptions',
                name: 'baseOptions',
                type: 'json',
                optional: true,
                additionalParams: true
            }
        ]
    }

    async init(nodeData: INodeData): Promise<any> {
    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
        const temperature = nodeData.inputs?.temperature as string
        const modelName = nodeData.inputs?.modelName as string
        const openAIApiKey = nodeData.inputs?.openAIApiKey as string
        const maxTokens = nodeData.inputs?.maxTokens as string
        const topP = nodeData.inputs?.topP as string
        const frequencyPenalty = nodeData.inputs?.frequencyPenalty as string
        const presencePenalty = nodeData.inputs?.presencePenalty as string
        const timeout = nodeData.inputs?.timeout as string
        const streaming = nodeData.inputs?.streaming as boolean
        const basePath = nodeData.inputs?.basepath as string
        const baseOptions = nodeData.inputs?.baseOptions

        const credentialData = await getCredentialData(nodeData.credential ?? '', options)
        const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, nodeData)

        const obj: Partial<OpenAIChatInput> & { openAIApiKey?: string } = {
            temperature: parseInt(temperature, 10),
            temperature: parseFloat(temperature),
            modelName,
            openAIApiKey
            openAIApiKey,
            streaming: streaming ?? true
        }

        if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
        if (topP) obj.topP = parseInt(topP, 10)
        if (frequencyPenalty) obj.frequencyPenalty = parseInt(frequencyPenalty, 10)
        if (presencePenalty) obj.presencePenalty = parseInt(presencePenalty, 10)
        if (topP) obj.topP = parseFloat(topP)
        if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
        if (presencePenalty) obj.presencePenalty = parseFloat(presencePenalty)
        if (timeout) obj.timeout = parseInt(timeout, 10)

        const model = new ChatOpenAI(obj)
        let parsedBaseOptions: any | undefined = undefined

        if (baseOptions) {
            try {
                parsedBaseOptions = typeof baseOptions === 'object' ? baseOptions : JSON.parse(baseOptions)
            } catch (exception) {
                throw new Error("Invalid JSON in the ChatOpenAI's BaseOptions: " + exception)
            }
        }
        const model = new ChatOpenAI(obj, {
            basePath,
            baseOptions: parsedBaseOptions
        })
        return model
    }
}
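The `BaseOptions` input handled above may arrive either as an already-parsed object or as a JSON string typed into the UI, hence the `typeof` check before `JSON.parse` and the loud failure on malformed JSON. The same normalization as a standalone helper (`parseJsonInput` is an illustrative name, not part of Flowise):

```typescript
// Accept either a parsed object or a JSON string; reject invalid JSON loudly.
// `parseJsonInput` is an illustrative helper, not a Flowise API.
function parseJsonInput(input: unknown): Record<string, unknown> | undefined {
    if (input === undefined || input === null || input === '') return undefined
    if (typeof input === 'object') return input as Record<string, unknown>
    try {
        return JSON.parse(input as string)
    } catch (exception) {
        throw new Error('Invalid JSON in BaseOptions: ' + exception)
    }
}

console.log(parseJsonInput('{"defaultHeaders":{"x-api-version":"1"}}'))
console.log(parseJsonInput(undefined))
```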
@@ -0,0 +1,115 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { ChatGoogleVertexAI, GoogleVertexAIChatInput } from 'langchain/chat_models/googlevertexai'
import { GoogleAuthOptions } from 'google-auth-library'

class GoogleVertexAI_ChatModels implements INode {
    label: string
    name: string
    version: number
    type: string
    icon: string
    category: string
    description: string
    baseClasses: string[]
    credential: INodeParams
    inputs: INodeParams[]

    constructor() {
        this.label = 'ChatGoogleVertexAI'
        this.name = 'chatGoogleVertexAI'
        this.version = 1.0
        this.type = 'ChatGoogleVertexAI'
        this.icon = 'vertexai.svg'
        this.category = 'Chat Models'
        this.description = 'Wrapper around VertexAI large language models that use the Chat endpoint'
        this.baseClasses = [this.type, ...getBaseClasses(ChatGoogleVertexAI)]
        this.credential = {
            label: 'Connect Credential',
            name: 'credential',
            type: 'credential',
            credentialNames: ['googleVertexAuth']
        }
        this.inputs = [
            {
                label: 'Model Name',
                name: 'modelName',
                type: 'options',
                options: [
                    {
                        label: 'chat-bison',
                        name: 'chat-bison'
                    },
                    {
                        label: 'codechat-bison',
                        name: 'codechat-bison'
                    }
                ],
                default: 'chat-bison',
                optional: true
            },
            {
                label: 'Temperature',
                name: 'temperature',
                type: 'number',
                step: 0.1,
                default: 0.9,
                optional: true
            },
            {
                label: 'Max Output Tokens',
                name: 'maxOutputTokens',
                type: 'number',
                step: 1,
                optional: true,
                additionalParams: true
            },
            {
                label: 'Top Probability',
                name: 'topP',
                type: 'number',
                step: 0.1,
                optional: true,
                additionalParams: true
            }
        ]
    }

    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
        const credentialData = await getCredentialData(nodeData.credential ?? '', options)
        const googleApplicationCredentialFilePath = getCredentialParam('googleApplicationCredentialFilePath', credentialData, nodeData)
        const googleApplicationCredential = getCredentialParam('googleApplicationCredential', credentialData, nodeData)
        const projectID = getCredentialParam('projectID', credentialData, nodeData)

        if (!googleApplicationCredentialFilePath && !googleApplicationCredential)
            throw new Error('Please specify your Google Application Credential')
        if (googleApplicationCredentialFilePath && googleApplicationCredential)
            throw new Error('Please use either Google Application Credential File Path or Google Credential JSON Object')

        const authOptions: GoogleAuthOptions = {}
        if (googleApplicationCredentialFilePath && !googleApplicationCredential) authOptions.keyFile = googleApplicationCredentialFilePath
        else if (!googleApplicationCredentialFilePath && googleApplicationCredential)
            authOptions.credentials = JSON.parse(googleApplicationCredential)

        if (projectID) authOptions.projectId = projectID

        const temperature = nodeData.inputs?.temperature as string
        const modelName = nodeData.inputs?.modelName as string
        const maxOutputTokens = nodeData.inputs?.maxOutputTokens as string
        const topP = nodeData.inputs?.topP as string

        const obj: Partial<GoogleVertexAIChatInput> = {
            temperature: parseFloat(temperature),
            model: modelName,
            authOptions
        }

        if (maxOutputTokens) obj.maxOutputTokens = parseInt(maxOutputTokens, 10)
        if (topP) obj.topP = parseFloat(topP)

        const model = new ChatGoogleVertexAI(obj)
        return model
    }
}

module.exports = { nodeClass: GoogleVertexAI_ChatModels }
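The Vertex AI node accepts either a key-file path or an inline credential JSON object, and rejects both-at-once as well as neither. That mutual-exclusion check can be isolated into a small, testable function (`checkVertexCredential` is an illustrative helper mirroring the checks above, not a Flowise API):

```typescript
// Exactly one of the two credential forms must be supplied.
// `checkVertexCredential` is illustrative, not part of Flowise.
function checkVertexCredential(filePath?: string, credentialJson?: string): 'keyFile' | 'credentials' {
    if (!filePath && !credentialJson) throw new Error('Please specify your Google Application Credential')
    if (filePath && credentialJson)
        throw new Error('Please use either Google Application Credential File Path or Google Credential JSON Object')
    return filePath ? 'keyFile' : 'credentials'
}

console.log(checkVertexCredential('/path/key.json', undefined)) // keyFile
console.log(checkVertexCredential(undefined, '{"client_email":"svc@example.iam"}')) // credentials
```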
@@ -0,0 +1,2 @@
<!-- from https://cloud.google.com/icons-->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="24px" height="24px"><path d="M20,13.89A.77.77,0,0,0,19,13.73l-7,5.14v.22a.72.72,0,1,1,0,1.43v0a.74.74,0,0,0,.45-.15l7.41-5.47A.76.76,0,0,0,20,13.89Z" style="fill:#669df6"/><path d="M12,20.52a.72.72,0,0,1,0-1.43h0v-.22L5,13.73a.76.76,0,0,0-1,.16.74.74,0,0,0,.16,1l7.41,5.47a.73.73,0,0,0,.44.15v0Z" style="fill:#aecbfa"/><path d="M12,18.34a1.47,1.47,0,1,0,1.47,1.47A1.47,1.47,0,0,0,12,18.34Zm0,2.18a.72.72,0,1,1,.72-.71A.71.71,0,0,1,12,20.52Z" style="fill:#4285f4"/><path d="M6,6.11a.76.76,0,0,1-.75-.75V3.48a.76.76,0,1,1,1.51,0V5.36A.76.76,0,0,1,6,6.11Z" style="fill:#aecbfa"/><circle cx="5.98" cy="12" r="0.76" style="fill:#aecbfa"/><circle cx="5.98" cy="9.79" r="0.76" style="fill:#aecbfa"/><circle cx="5.98" cy="7.57" r="0.76" style="fill:#aecbfa"/><path d="M18,8.31a.76.76,0,0,1-.75-.76V5.67a.75.75,0,1,1,1.5,0V7.55A.75.75,0,0,1,18,8.31Z" style="fill:#4285f4"/><circle cx="18.02" cy="12.01" r="0.76" style="fill:#4285f4"/><circle cx="18.02" cy="9.76" r="0.76" style="fill:#4285f4"/><circle cx="18.02" cy="3.48" r="0.76" style="fill:#4285f4"/><path d="M12,15a.76.76,0,0,1-.75-.75V12.34a.76.76,0,0,1,1.51,0v1.89A.76.76,0,0,1,12,15Z" style="fill:#669df6"/><circle cx="12" cy="16.45" r="0.76" style="fill:#669df6"/><circle cx="12" cy="10.14" r="0.76" style="fill:#669df6"/><circle cx="12" cy="7.92" r="0.76" style="fill:#669df6"/><path d="M15,10.54a.76.76,0,0,1-.75-.75V7.91a.76.76,0,1,1,1.51,0V9.79A.76.76,0,0,1,15,10.54Z" style="fill:#4285f4"/><circle cx="15.01" cy="5.69" r="0.76" style="fill:#4285f4"/><circle cx="15.01" cy="14.19" r="0.76" style="fill:#4285f4"/><circle cx="15.01" cy="11.97" r="0.76" style="fill:#4285f4"/><circle cx="8.99" cy="14.19" r="0.76" style="fill:#aecbfa"/><circle cx="8.99" cy="7.92" r="0.76" style="fill:#aecbfa"/><circle cx="8.99" cy="5.69" r="0.76" style="fill:#aecbfa"/><path d="M9,12.73A.76.76,0,0,1,8.24,12V10.1a.75.75,0,1,1,1.5,0V12A.75.75,0,0,1,9,12.73Z" style="fill:#aecbfa"/></svg>

After Width: | Height: | Size: 2.0 KiB
@@ -0,0 +1,200 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { TextSplitter } from 'langchain/text_splitter'
import { BaseDocumentLoader } from 'langchain/document_loaders/base'
import { Document } from 'langchain/document'
import axios, { AxiosRequestConfig } from 'axios'

class API_DocumentLoaders implements INode {
    label: string
    name: string
    version: number
    description: string
    type: string
    icon: string
    category: string
    baseClasses: string[]
    inputs?: INodeParams[]

    constructor() {
        this.label = 'API Loader'
        this.name = 'apiLoader'
        this.version = 1.0
        this.type = 'Document'
        this.icon = 'api-loader.png'
        this.category = 'Document Loaders'
        this.description = `Load data from an API`
        this.baseClasses = [this.type]
        this.inputs = [
            {
                label: 'Text Splitter',
                name: 'textSplitter',
                type: 'TextSplitter',
                optional: true
            },
            {
                label: 'Method',
                name: 'method',
                type: 'options',
                options: [
                    {
                        label: 'GET',
                        name: 'GET'
                    },
                    {
                        label: 'POST',
                        name: 'POST'
                    }
                ]
            },
            {
                label: 'URL',
                name: 'url',
                type: 'string'
            },
            {
                label: 'Headers',
                name: 'headers',
                type: 'json',
                additionalParams: true,
                optional: true
            },
            {
                label: 'Body',
                name: 'body',
                type: 'json',
                description:
                    'JSON body for the POST request. If not specified, agent will try to figure out itself from AIPlugin if provided',
                additionalParams: true,
                optional: true
            }
        ]
    }
    async init(nodeData: INodeData): Promise<any> {
        const headers = nodeData.inputs?.headers as string
        const url = nodeData.inputs?.url as string
        const body = nodeData.inputs?.body as string
        const method = nodeData.inputs?.method as string
        const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
        const metadata = nodeData.inputs?.metadata

        const options: ApiLoaderParams = {
            url,
            method
        }

        if (headers) {
            const parsedHeaders = typeof headers === 'object' ? headers : JSON.parse(headers)
            options.headers = parsedHeaders
        }

        if (body) {
            const parsedBody = typeof body === 'object' ? body : JSON.parse(body)
            options.body = parsedBody
        }

        const loader = new ApiLoader(options)

        let docs = []

        if (textSplitter) {
            docs = await loader.loadAndSplit(textSplitter)
        } else {
            docs = await loader.load()
        }

        if (metadata) {
            const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
            let finaldocs = []
            for (const doc of docs) {
                const newdoc = {
                    ...doc,
                    metadata: {
                        ...doc.metadata,
                        ...parsedMetadata
                    }
                }
                finaldocs.push(newdoc)
            }
            return finaldocs
        }

        return docs
    }
}
|
||||
|
||||
interface ApiLoaderParams {
|
||||
url: string
|
||||
method: string
|
||||
headers?: ICommonObject
|
||||
body?: ICommonObject
|
||||
}
|
||||
|
||||
class ApiLoader extends BaseDocumentLoader {
|
||||
public readonly url: string
|
||||
|
||||
public readonly headers?: ICommonObject
|
||||
|
||||
public readonly body?: ICommonObject
|
||||
|
||||
public readonly method: string
|
||||
|
||||
constructor({ url, headers, body, method }: ApiLoaderParams) {
|
||||
super()
|
||||
this.url = url
|
||||
this.headers = headers
|
||||
this.body = body
|
||||
this.method = method
|
||||
}
|
||||
|
||||
public async load(): Promise<Document[]> {
|
||||
if (this.method === 'POST') {
|
||||
return this.executePostRequest(this.url, this.headers, this.body)
|
||||
} else {
|
||||
return this.executeGetRequest(this.url, this.headers)
|
||||
}
|
||||
}
|
||||
|
||||
protected async executeGetRequest(url: string, headers?: ICommonObject): Promise<Document[]> {
|
||||
try {
|
||||
const config: AxiosRequestConfig = {}
|
||||
if (headers) {
|
||||
config.headers = headers
|
||||
}
|
||||
const response = await axios.get(url, config)
|
||||
const responseJsonString = JSON.stringify(response.data, null, 2)
|
||||
const doc = new Document({
|
||||
pageContent: responseJsonString,
|
||||
metadata: {
|
||||
url
|
||||
}
|
||||
})
|
||||
return [doc]
|
||||
} catch (error) {
|
||||
throw new Error(`Failed to fetch ${url}: ${error}`)
|
||||
}
|
||||
}
|
||||
|
||||
protected async executePostRequest(url: string, headers?: ICommonObject, body?: ICommonObject): Promise<Document[]> {
|
||||
try {
|
||||
const config: AxiosRequestConfig = {}
|
||||
if (headers) {
|
||||
config.headers = headers
|
||||
}
|
||||
const response = await axios.post(url, body ?? {}, config)
|
||||
const responseJsonString = JSON.stringify(response.data, null, 2)
|
||||
const doc = new Document({
|
||||
pageContent: responseJsonString,
|
||||
metadata: {
|
||||
url
|
||||
}
|
||||
})
|
||||
return [doc]
|
||||
} catch (error) {
|
||||
throw new Error(`Failed to post ${url}: ${error}`)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
module.exports = {
|
||||
nodeClass: API_DocumentLoaders
|
||||
}
|
||||
|
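The API and Airtable loaders in this PR repeat the same post-load step: user-supplied metadata (a JSON string or an object) is spread over each document's own metadata, so user keys win on conflict. A minimal standalone sketch of that merge, assuming a simplified `Doc` shape standing in for langchain's `Document` (the `applyMetadata` helper is illustrative, not part of the nodes):

```typescript
// Simplified stand-in for langchain's Document (assumption for this sketch)
interface Doc {
    pageContent: string
    metadata: Record<string, unknown>
}

// Mirrors the merge step both loaders perform in init(): parse the user
// metadata if it arrived as a JSON string, then spread it over each
// document's existing metadata (user keys override on conflict).
function applyMetadata(docs: Doc[], metadata: string | Record<string, unknown>): Doc[] {
    const parsed = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
    return docs.map((doc) => ({
        ...doc,
        metadata: { ...doc.metadata, ...parsed }
    }))
}

const merged = applyMetadata(
    [{ pageContent: '{ "ok": true }', metadata: { url: 'https://example.com' } }],
    '{"source": "api"}'
)
console.log(merged[0].metadata) // { url: 'https://example.com', source: 'api' }
```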
After Width: | Height: | Size: 1.4 KiB

@ -0,0 +1,230 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { TextSplitter } from 'langchain/text_splitter'
import { BaseDocumentLoader } from 'langchain/document_loaders/base'
import { Document } from 'langchain/document'
import axios from 'axios'
import { getCredentialData, getCredentialParam } from '../../../src/utils'

class Airtable_DocumentLoaders implements INode {
    label: string
    name: string
    version: number
    description: string
    type: string
    icon: string
    category: string
    baseClasses: string[]
    credential: INodeParams
    inputs?: INodeParams[]

    constructor() {
        this.label = 'Airtable'
        this.name = 'airtable'
        this.version = 1.0
        this.type = 'Document'
        this.icon = 'airtable.svg'
        this.category = 'Document Loaders'
        this.description = `Load data from an Airtable table`
        this.baseClasses = [this.type]
        this.credential = {
            label: 'Connect Credential',
            name: 'credential',
            type: 'credential',
            credentialNames: ['airtableApi']
        }
        this.inputs = [
            {
                label: 'Text Splitter',
                name: 'textSplitter',
                type: 'TextSplitter',
                optional: true
            },
            {
                label: 'Base Id',
                name: 'baseId',
                type: 'string',
                placeholder: 'app11RobdGoX0YNsC',
                description:
                    'If your table URL looks like: https://airtable.com/app11RobdGoX0YNsC/tblJdmvbrgizbYICO/viw9UrP77Id0CE4ee, app11RobdGoX0YNsC is the base id'
            },
            {
                label: 'Table Id',
                name: 'tableId',
                type: 'string',
                placeholder: 'tblJdmvbrgizbYICO',
                description:
                    'If your table URL looks like: https://airtable.com/app11RobdGoX0YNsC/tblJdmvbrgizbYICO/viw9UrP77Id0CE4ee, tblJdmvbrgizbYICO is the table id'
            },
            {
                label: 'Return All',
                name: 'returnAll',
                type: 'boolean',
                default: true,
                additionalParams: true,
                description: 'If all results should be returned or only up to a given limit'
            },
            {
                label: 'Limit',
                name: 'limit',
                type: 'number',
                default: 100,
                additionalParams: true,
                description: 'Number of results to return'
            },
            {
                label: 'Metadata',
                name: 'metadata',
                type: 'json',
                optional: true,
                additionalParams: true
            }
        ]
    }

    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
        const baseId = nodeData.inputs?.baseId as string
        const tableId = nodeData.inputs?.tableId as string
        const returnAll = nodeData.inputs?.returnAll as boolean
        const limit = nodeData.inputs?.limit as string
        const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
        const metadata = nodeData.inputs?.metadata

        const credentialData = await getCredentialData(nodeData.credential ?? '', options)
        const accessToken = getCredentialParam('accessToken', credentialData, nodeData)

        const airtableOptions: AirtableLoaderParams = {
            baseId,
            tableId,
            returnAll,
            accessToken,
            limit: limit ? parseInt(limit, 10) : 100
        }

        const loader = new AirtableLoader(airtableOptions)

        let docs = []

        if (textSplitter) {
            docs = await loader.loadAndSplit(textSplitter)
        } else {
            docs = await loader.load()
        }

        if (metadata) {
            const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
            let finaldocs = []
            for (const doc of docs) {
                const newdoc = {
                    ...doc,
                    metadata: {
                        ...doc.metadata,
                        ...parsedMetadata
                    }
                }
                finaldocs.push(newdoc)
            }
            return finaldocs
        }

        return docs
    }
}

interface AirtableLoaderParams {
    baseId: string
    tableId: string
    accessToken: string
    limit?: number
    returnAll?: boolean
}

interface AirtableLoaderResponse {
    records: AirtableLoaderPage[]
    offset?: string
}

interface AirtableLoaderPage {
    id: string
    createdTime: string
    fields: ICommonObject
}

class AirtableLoader extends BaseDocumentLoader {
    public readonly baseId: string

    public readonly tableId: string

    public readonly accessToken: string

    public readonly limit: number

    public readonly returnAll: boolean

    constructor({ baseId, tableId, accessToken, limit = 100, returnAll = false }: AirtableLoaderParams) {
        super()
        this.baseId = baseId
        this.tableId = tableId
        this.accessToken = accessToken
        this.limit = limit
        this.returnAll = returnAll
    }

    public async load(): Promise<Document[]> {
        if (this.returnAll) {
            return this.loadAll()
        }
        return this.loadLimit()
    }

    protected async fetchAirtableData(url: string, params: ICommonObject): Promise<AirtableLoaderResponse> {
        try {
            const headers = {
                Authorization: `Bearer ${this.accessToken}`,
                'Content-Type': 'application/json',
                Accept: 'application/json'
            }
            const response = await axios.get(url, { params, headers })
            return response.data
        } catch (error) {
            throw new Error(`Failed to fetch ${url} from Airtable: ${error}`)
        }
    }

    private createDocumentFromPage(page: AirtableLoaderPage): Document {
        // Generate the record URL
        const pageUrl = `https://api.airtable.com/v0/${this.baseId}/${this.tableId}/${page.id}`

        // Return a langchain document
        return new Document({
            pageContent: JSON.stringify(page.fields, null, 2),
            metadata: {
                url: pageUrl
            }
        })
    }

    private async loadLimit(): Promise<Document[]> {
        const params = { maxRecords: this.limit }
        const data = await this.fetchAirtableData(`https://api.airtable.com/v0/${this.baseId}/${this.tableId}`, params)
        if (data.records.length === 0) {
            return []
        }
        return data.records.map((page) => this.createDocumentFromPage(page))
    }

    private async loadAll(): Promise<Document[]> {
        const params: ICommonObject = { pageSize: 100 }
        let data: AirtableLoaderResponse
        let returnPages: AirtableLoaderPage[] = []

        do {
            data = await this.fetchAirtableData(`https://api.airtable.com/v0/${this.baseId}/${this.tableId}`, params)
            returnPages.push.apply(returnPages, data.records)
            params.offset = data.offset
        } while (data.offset !== undefined)
        return returnPages.map((page) => this.createDocumentFromPage(page))
    }
}

module.exports = {
    nodeClass: Airtable_DocumentLoaders
}
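The `loadAll()` method above pages through Airtable's list-records endpoint by replaying the `offset` token each response returns until no token comes back. That do/while shape can be exercised in isolation with the HTTP call stubbed out; a hedged sketch (the stub data below is invented for illustration, and the shapes are trimmed to what the loop needs):

```typescript
// Shapes trimmed to what the pagination loop needs (real Airtable
// records also carry createdTime and fields).
interface Page { id: string }
interface Resp { records: Page[]; offset?: string }

// Stub standing in for fetchAirtableData(): Airtable returns an `offset`
// token on every page except the last.
const stubPages: Resp[] = [
    { records: [{ id: 'rec1' }, { id: 'rec2' }], offset: 'a' },
    { records: [{ id: 'rec3' }], offset: 'b' },
    { records: [{ id: 'rec4' }] }
]
async function fetchPage(params: { offset?: string }): Promise<Resp> {
    if (params.offset === 'a') return stubPages[1]
    if (params.offset === 'b') return stubPages[2]
    return stubPages[0]
}

// Same do/while shape as loadAll(): keep requesting, feeding each
// response's offset back in, until no offset is returned.
async function loadAll(): Promise<Page[]> {
    const params: { offset?: string } = {}
    const all: Page[] = []
    let data: Resp
    do {
        data = await fetchPage(params)
        all.push(...data.records)
        params.offset = data.offset
    } while (data.offset !== undefined)
    return all
}

loadAll().then((pages) => console.log(pages.map((p) => p.id))) // [ 'rec1', 'rec2', 'rec3', 'rec4' ]
```

One design note: because the loop keys purely off `offset`, it makes exactly one extra-free request per page and terminates as soon as Airtable omits the token, regardless of how many records the last page holds.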
@ -0,0 +1,9 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="256px" height="215px" viewBox="0 0 256 215" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" preserveAspectRatio="xMidYMid">
    <g>
        <path d="M114.25873,2.70101695 L18.8604023,42.1756384 C13.5552723,44.3711638 13.6102328,51.9065311 18.9486282,54.0225085 L114.746142,92.0117514 C123.163769,95.3498757 132.537419,95.3498757 140.9536,92.0117514 L236.75256,54.0225085 C242.08951,51.9065311 242.145916,44.3711638 236.83934,42.1756384 L141.442459,2.70101695 C132.738459,-0.900338983 122.961284,-0.900338983 114.25873,2.70101695" fill="#FFBF00"></path>
        <path d="M136.349071,112.756863 L136.349071,207.659101 C136.349071,212.173089 140.900664,215.263892 145.096461,213.600615 L251.844122,172.166219 C254.281184,171.200072 255.879376,168.845451 255.879376,166.224705 L255.879376,71.3224678 C255.879376,66.8084791 251.327783,63.7176768 247.131986,65.3809537 L140.384325,106.815349 C137.94871,107.781496 136.349071,110.136118 136.349071,112.756863" fill="#26B5F8"></path>
        <path d="M111.422771,117.65355 L79.742409,132.949912 L76.5257763,134.504714 L9.65047684,166.548104 C5.4112904,168.593211 0.000578531073,165.503855 0.000578531073,160.794612 L0.000578531073,71.7210757 C0.000578531073,70.0173017 0.874160452,68.5463864 2.04568588,67.4384994 C2.53454463,66.9481944 3.08848814,66.5446689 3.66412655,66.2250305 C5.26231864,65.2661153 7.54173107,65.0101153 9.47981017,65.7766689 L110.890522,105.957098 C116.045234,108.002206 116.450206,115.225166 111.422771,117.65355" fill="#ED3049"></path>
        <path d="M111.422771,117.65355 L79.742409,132.949912 L2.04568588,67.4384994 C2.53454463,66.9481944 3.08848814,66.5446689 3.66412655,66.2250305 C5.26231864,65.2661153 7.54173107,65.0101153 9.47981017,65.7766689 L110.890522,105.957098 C116.045234,108.002206 116.450206,115.225166 111.422771,117.65355" fill-opacity="0.25" fill="#000000"></path>
    </g>
</svg>

After Width: | Height: | Size: 1.9 KiB

@ -0,0 +1,139 @@
import { INode, INodeData, INodeParams, ICommonObject } from '../../../src/Interface'
import { getCredentialData, getCredentialParam } from '../../../src/utils'
import { TextSplitter } from 'langchain/text_splitter'
import { ApifyDatasetLoader } from 'langchain/document_loaders/web/apify_dataset'
import { Document } from 'langchain/document'

class ApifyWebsiteContentCrawler_DocumentLoaders implements INode {
    label: string
    name: string
    description: string
    type: string
    icon: string
    version: number
    category: string
    baseClasses: string[]
    inputs: INodeParams[]
    credential: INodeParams

    constructor() {
        this.label = 'Apify Website Content Crawler'
        this.name = 'apifyWebsiteContentCrawler'
        this.type = 'Document'
        this.icon = 'apify-symbol-transparent.svg'
        this.version = 1.0
        this.category = 'Document Loaders'
        this.description = 'Load data from Apify Website Content Crawler'
        this.baseClasses = [this.type]
        this.inputs = [
            {
                label: 'Start URLs',
                name: 'urls',
                type: 'string',
                description: 'One or more URLs of pages where the crawler will start, separated by commas.',
                placeholder: 'https://js.langchain.com/docs/'
            },
            {
                label: 'Crawler type',
                type: 'options',
                name: 'crawlerType',
                options: [
                    {
                        label: 'Headless web browser (Chrome+Playwright)',
                        name: 'playwright:chrome'
                    },
                    {
                        label: 'Stealthy web browser (Firefox+Playwright)',
                        name: 'playwright:firefox'
                    },
                    {
                        label: 'Raw HTTP client (Cheerio)',
                        name: 'cheerio'
                    },
                    {
                        label: 'Raw HTTP client with JavaScript execution (JSDOM) [experimental]',
                        name: 'jsdom'
                    }
                ],
                description:
                    'Select the crawling engine, see <a target="_blank" href="https://apify.com/apify/website-content-crawler#crawling">documentation</a> for additional information.',
                default: 'playwright:firefox'
            },
            {
                label: 'Max crawling depth',
                name: 'maxCrawlDepth',
                type: 'number',
                optional: true,
                default: 1
            },
            {
                label: 'Max crawl pages',
                name: 'maxCrawlPages',
                type: 'number',
                optional: true,
                default: 3
            },
            {
                label: 'Additional input',
                name: 'additionalInput',
                type: 'json',
                default: JSON.stringify({}),
                description:
                    'For additional input options for the crawler see <a target="_blank" href="https://apify.com/apify/website-content-crawler/input-schema">documentation</a>.',
                optional: true
            },
            {
                label: 'Text Splitter',
                name: 'textSplitter',
                type: 'TextSplitter',
                optional: true
            }
        ]
        this.credential = {
            label: 'Connect Apify API',
            name: 'credential',
            type: 'credential',
            credentialNames: ['apifyApi']
        }
    }

    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
        const textSplitter = nodeData.inputs?.textSplitter as TextSplitter

        // Get input options and merge with additional input
        const urls = nodeData.inputs?.urls as string
        const crawlerType = nodeData.inputs?.crawlerType as string
        const maxCrawlDepth = nodeData.inputs?.maxCrawlDepth as string
        const maxCrawlPages = nodeData.inputs?.maxCrawlPages as string
        const additionalInput =
            typeof nodeData.inputs?.additionalInput === 'object'
                ? nodeData.inputs?.additionalInput
                : JSON.parse(nodeData.inputs?.additionalInput as string)
        const input = {
            startUrls: urls.split(',').map((url) => ({ url: url.trim() })),
            crawlerType,
            maxCrawlDepth: parseInt(maxCrawlDepth, 10),
            maxCrawlPages: parseInt(maxCrawlPages, 10),
            ...additionalInput
        }

        // Get Apify API token from credential data
        const credentialData = await getCredentialData(nodeData.credential ?? '', options)
        const apifyApiToken = getCredentialParam('apifyApiToken', credentialData, nodeData)

        const loader = await ApifyDatasetLoader.fromActorCall('apify/website-content-crawler', input, {
            datasetMappingFunction: (item) =>
                new Document({
                    pageContent: (item.text || '') as string,
                    metadata: { source: item.url }
                }),
            clientOptions: {
                token: apifyApiToken
            }
        })

        return textSplitter ? loader.loadAndSplit(textSplitter) : loader.load()
    }
}

module.exports = { nodeClass: ApifyWebsiteContentCrawler_DocumentLoaders }
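The Start URLs input arrives as a single comma-separated string, and `init` turns it into the `startUrls` array of `{ url }` objects the actor call expects. A standalone sketch of that transformation (the `toStartUrls` helper name is illustrative and not part of the node itself):

```typescript
// Illustrative helper mirroring the startUrls transformation in init():
// split on commas, trim surrounding whitespace, wrap each URL in an object.
function toStartUrls(urls: string): { url: string }[] {
    return urls.split(',').map((url) => ({ url: url.trim() }))
}

console.log(toStartUrls('https://js.langchain.com/docs/, https://flowiseai.com'))
// [ { url: 'https://js.langchain.com/docs/' }, { url: 'https://flowiseai.com' } ]
```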
@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><defs><style>.cls-1{fill:none;}.cls-2{fill:#97d700;}.cls-3{fill:#71c5e8;}.cls-4{fill:#ff9013;}</style></defs><g id="Trmplate"><rect class="cls-1" width="512" height="512"/><path class="cls-2" d="M163.14,152.65a36.06,36.06,0,0,0-30.77,40.67v0l21.34,152.33,89.74-204.23Z"/><path class="cls-3" d="M379.69,279.56l-8.38-117.1a36.12,36.12,0,0,0-38.53-33.36,17.61,17.61,0,0,0-2.4.26l-34.63,4.79,76.08,170.57A35.94,35.94,0,0,0,379.69,279.56Z"/><path class="cls-4" d="M186.43,382.69a35.88,35.88,0,0,0,18-2.63l130.65-55.13L273,185.65Z"/></g></svg>

After Width: | Height: | Size: 599 B