* Adding DeepSeek R1 Distill to Groq
* Added max_tokens to groq.ts and chatGroqLlamaindex.ts; updated the Groq models, removing outdated entries and adding new ones such as compound-beta (see the sketch below)
* Patched OpenAI typo in ChatGroq_LlamaIndex.ts
* Patching groq llamaindex
* Patched pnpm lint error
* Removed redundant image
* Update ChatGroq_LlamaIndex.ts
---------
Co-authored-by: Henry Heng <henryheng@flowiseai.com>
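A minimal sketch of what passing max_tokens through to the Groq chat model can look like with @langchain/groq; the input names and parsing here are illustrative, not the exact Flowise node code:

```typescript
import { ChatGroq } from '@langchain/groq'

// Hypothetical node inputs; in the Flowise UI these arrive as strings
const modelName = 'compound-beta'
const maxTokens = '2048'

const model = new ChatGroq({
    apiKey: process.env.GROQ_API_KEY,
    model: modelName,
    temperature: 0.7,
    // Only forward max_tokens when the user set it; parse because UI inputs are strings
    ...(maxTokens ? { maxTokens: parseInt(maxTokens, 10) } : {})
})
```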
* fix: 400 Bad Request errors from Gemini API when converting tool schemas (MCP tools).
* Update FlowiseChatGoogleGenerativeAI.ts
---------
Co-authored-by: Henry Heng <henryheng@flowiseai.com>
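The 400 Bad Request errors in the Gemini fix above typically come from JSON Schema keywords in MCP tool definitions that Gemini's function-declaration format rejects. A hedged sketch of that kind of sanitization, with an assumed keyword list rather than the one used in FlowiseChatGoogleGenerativeAI.ts:

```typescript
// Keywords commonly present in MCP tool JSON Schemas that Gemini's
// function-declaration schema rejects; the exact list here is an assumption.
const UNSUPPORTED_KEYS = ['$schema', 'additionalProperties', '$defs']

function sanitizeToolSchema(schema: any): any {
    if (Array.isArray(schema)) return schema.map(sanitizeToolSchema)
    if (schema === null || typeof schema !== 'object') return schema

    const cleaned: Record<string, any> = {}
    for (const [key, value] of Object.entries(schema)) {
        if (UNSUPPORTED_KEYS.includes(key)) continue
        cleaned[key] = sanitizeToolSchema(value)
    }
    return cleaned
}
```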
* Fix(FlowiseChatGoogleGenerativeAI): Prevent "parts must not be empty" API error in Sequential Agents
* Fix: Update pnpm-lock.yaml to resolve CI issues
* Convert the 'function' and 'tool' roles to 'function' (see the sketch below)
* remove comment
---------
Co-authored-by: Henry <hzj94@hotmail.com>
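A minimal sketch of the two message-shaping steps above, assuming simplified content types: drop contents with empty parts (the source of the "parts must not be empty" error) and map the 'tool'/'function' roles down to 'function':

```typescript
interface Part { text?: string; functionCall?: unknown; functionResponse?: unknown }
interface Content { role: string; parts: Part[] }

function normalizeContents(contents: Content[]): Content[] {
    return (
        contents
            // Gemini rejects requests containing a content entry with no parts
            .filter((c) => Array.isArray(c.parts) && c.parts.length > 0)
            // Collapse the 'tool' and 'function' roles into the single 'function' role
            .map((c) => (c.role === 'tool' || c.role === 'function' ? { ...c, role: 'function' } : c))
    )
}
```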
* Support cached system instructions for Google GenAI (see the sketch below)
* format code
* Update FlowiseGoogleAICacheManager.ts
---------
Co-authored-by: Henry Heng <henryheng@flowiseai.com>
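Cached system instructions for Google GenAI generally go through GoogleAICacheManager from @google/generative-ai/server; a hedged sketch with placeholder model and TTL, not the actual FlowiseGoogleAICacheManager code:

```typescript
import { GoogleAICacheManager } from '@google/generative-ai/server'

// Store a system instruction once and reuse it on later requests via cachedContent.
async function cacheSystemInstruction(apiKey: string, model: string, systemInstruction: string) {
    const cacheManager = new GoogleAICacheManager(apiKey)
    const cache = await cacheManager.create({
        model, // e.g. 'models/gemini-1.5-flash-001' (placeholder)
        systemInstruction,
        ttlSeconds: 3600 // placeholder TTL
    })
    return cache.name // pass this as cachedContent on subsequent requests
}
```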
* fix: update label to "NVIDIA NIM API Key"
* test: update tag from ":latest" to ":1.8.0-rtx"
* test: add image URL path "nvcr.io/nim/"
* fix/nvidia-nim-2 (#4208)
* fix: update nim-container-manager
* feat: add "DeepSeek R1 Distill Llama 8B"
* fix/nidia-nim-3 (#4209)
* chore: add error message "NVIDIA NIM is not installed"
* chore: standardize NVIDIA NGC API Key
* chore: capitalize Nvidia to NVIDIA
* chore: generalize error message for chat models
* fix/nvidia-nim-4-yau (#4212)
* test: nimRelaxMemConstraints and hostPort
* test: add logger for hostPort and nimRelaxMemConstraints
* test: nim-container-manager version 1.0.9
* test: parseInt nimRelaxMemConstraints
* test: update nim-container-manager version to 1.0.10
* chore: update nim-container-manager version to 1.0.11
* Update start-container behaviour: show existing containers and let the user choose whether to reuse one or start a new one
* Go back to the previous step when clicking Start New so the user can change the port number
* Update condition for showing existing container dialog
* Fix Start New on a different port not working
* Update get container controller
* Update again
* fix: generalize error message for chat models
* Update getContainer controller
* Fix incorrect image check in getContainer controller
* Update existing container dialog text
* Fix styles in the existing container dialog for NVIDIA NIM
---------
Co-authored-by: chungyau97 <chungyau97@gmail.com>
Co-authored-by: Ong Chung Yau <33013947+chungyau97@users.noreply.github.com>
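A hedged sketch of the kind of checks the getContainer controller and nimRelaxMemConstraints handling above imply; the types and helper are hypothetical, not the nim-container-manager API:

```typescript
// Hypothetical shapes only; the real controller talks to nim-container-manager.
interface ContainerInfo { name: string; image: string; hostPort: number }

const NIM_IMAGE_PREFIX = 'nvcr.io/nim/'

// Match an already-running container against the expected NIM image so the UI
// can offer to reuse it instead of pulling and starting a new one.
function findExistingNimContainer(containers: ContainerInfo[], imageTag: string): ContainerInfo | undefined {
    // Compare the full image reference, not just the tag, so unrelated containers
    // that happen to share a tag like ':1.8.0-rtx' are not matched.
    return containers.find((c) => c.image.startsWith(NIM_IMAGE_PREFIX) && c.image.endsWith(`:${imageTag}`))
}

// nimRelaxMemConstraints arrives as a string from the environment/UI, hence parseInt
const relaxMemConstraints = parseInt(process.env.NIM_RELAX_MEM_CONSTRAINTS ?? '0', 10)
```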
* add nim container setup
* Check whether the image or container already exists before pulling (see the sketch below)
* update NIM dialog
* update chat nvidia api key
* update nim container version
* update nim container version
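For the image/container existence check above, a minimal sketch using plain Docker CLI calls; the real setup goes through the NIM container tooling, so treat this only as the general idea:

```typescript
import { execSync } from 'node:child_process'

// Returns true if the image is already present locally, so the pull can be skipped.
function imageExists(imageRef: string): boolean {
    try {
        execSync(`docker image inspect ${imageRef}`, { stdio: 'ignore' })
        return true
    } catch {
        return false
    }
}

function pullIfMissing(imageRef: string): void {
    if (imageExists(imageRef)) return
    execSync(`docker pull ${imageRef}`, { stdio: 'inherit' })
}
```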
* Update AzureChatOpenAI.ts - corrected reasoning description and default
- Description for reasoning effort only mentioned o1. Added o3.
- Changed reasoning effort default to medium as this is OpenAI's default / what users will most likely expect (https://platform.openai.com/docs/guides/reasoning)
* Update ChatOpenAI.ts - corrected reasoning description and default
- Description for reasoning effort only mentioned o1. Added o3.
- Changed reasoning effort default to medium as this is OpenAI's default / what users will most likely expect (https://platform.openai.com/docs/guides/reasoning)
* Update models.json - add specific model ID for o3-mini
- Added o3-mini-2025-01-31
- Updated "o3-mini" label to "o3-mini (latest)"
This allows the user to choose a specific model ID and avoid the risk of unexpected behavior if the "o3-mini" alias is updated.
* Refactor ChatOpenAI_ChatModels to include stopSequence parameter
* lint fix
* Stop sequence string is now split on commas (see the sketch below)
* Update ChatOpenAI.ts
---------
Co-authored-by: Henry Heng <henryheng@flowiseai.com>
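A hedged sketch of the comma-split stop sequence and the 'medium' reasoning-effort default, assuming the stop and reasoningEffort fields exposed by recent @langchain/openai versions:

```typescript
import { ChatOpenAI } from '@langchain/openai'

// Comma-separated UI input, e.g. "###, END" -> ['###', 'END']
const stopSequence = '###, END'
const stop = stopSequence
    .split(',')
    .map((s) => s.trim())
    .filter((s) => s.length > 0)

const model = new ChatOpenAI({
    model: 'gpt-4o-mini',
    temperature: 0.7,
    stop
})

// For o-series reasoning models, 'medium' mirrors OpenAI's documented default
const reasoningModel = new ChatOpenAI({
    model: 'o3-mini-2025-01-31',
    reasoningEffort: 'medium'
})
```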
* feat: Add Alibaba API credential and ChatAlibabaTongyi node
* lint fix
* Add chatAlibabaTongyi model to models.json and chat models (see the sketch below)
---------
Co-authored-by: Henry Heng <henryheng@flowiseai.com>
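A minimal sketch of instantiating the new node's underlying model, assuming the ChatAlibabaTongyi class from @langchain/community; the model name is a placeholder:

```typescript
import { ChatAlibabaTongyi } from '@langchain/community/chat_models/alibaba_tongyi'

const model = new ChatAlibabaTongyi({
    alibabaApiKey: process.env.ALIBABA_API_KEY,
    model: 'qwen-turbo', // placeholder model name
    temperature: 0.7
})
```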
* Add proxy URL param to the OpenAI chat model (see the sketch below)
* Refactor: add proxy URL to the OpenAI chat model
* Update ChatOpenAI.ts
---------
Co-authored-by: Henry Heng <henryheng@flowiseai.com>
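A minimal sketch of the proxy URL param, assuming it is forwarded as the OpenAI client's baseURL via the configuration field of @langchain/openai; the URL and variable names are placeholders:

```typescript
import { ChatOpenAI } from '@langchain/openai'

// Hypothetical node input; when set, requests go through the proxy instead of api.openai.com
const proxyUrl = 'https://my-openai-proxy.example.com/v1'

const model = new ChatOpenAI({
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o-mini',
    ...(proxyUrl ? { configuration: { baseURL: proxyUrl } } : {})
})
```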
* updates to loader to support file upload
* adding a todo
* upgrade llamaindex
* update groq icon
* update azure models
* update llamaindex version
---------
Co-authored-by: Henry <hzj94@hotmail.com>