Nvidia adds new vision, speech and language capabilities to ChatRTX – a free local chatbot for PCs with Nvidia RTX graphics cards

Nvidia has expanded the capabilities of Chat with RTX (now ChatRTX) by adding support for additional Large Language Models (LLMs), as well as vision and speech support, in the latest release. The free local chatbot does not need to send data online when working with user information, which helps keep conversations confidential.

Different LLMs suit different purposes: general conversation, scientific articles, essay writing, and so on. Some LLMs are also trained to strictly filter their output, which can make certain models more or less useful depending on the task. Previously, the available TensorRT-LLM models were the default Mistral model and Llama 2. Nvidia has now opened ChatRTX to two new models, Gemma and ChatGLM3.

Gemma is an LLM developed by Google DeepMind and released in 2024 as two- and seven-billion-parameter models. The models are heavily tuned to filter out sensitive speech, personal information, and risky or dangerous responses.

ChatGLM3 is an LLM developed by Zhipu AI and Tsinghua KEG, released in 2023 as a six-billion-parameter model. The Chinese-English bilingual model was designed to be a strong, open-source competitor to OpenAI’s ChatGPT.

Nvidia added OpenAI’s CLIP neural network to ChatRTX to automatically recognize images and associate corresponding text with them. For example, if CLIP receives an image showing a television presenter, broadcast cameras, and lights in a room, it will label the image as a photo of a television studio. ChatRTX can thus work with images without manual content labeling. OpenAI’s Whisper speech recognition system has also been added, allowing users to speak their prompts instead of typing them.
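Nvidia has not published the details of how ChatRTX wires these components together, but the two building blocks themselves are easy to sketch. The Python snippet below shows zero-shot image labeling with CLIP (via the Hugging Face transformers library) and speech-to-text with the open-source whisper package; the checkpoint names, candidate label list, and file paths are illustrative assumptions, not details taken from ChatRTX.

```python
# A minimal sketch of the two techniques described above, not Nvidia's actual
# integration: CLIP zero-shot image labeling and Whisper transcription.
# Model names, labels, and file paths below are illustrative assumptions.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
import whisper

# --- Image understanding: score an image against candidate text labels ---
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # hypothetical input image
labels = ["a television studio", "a living room", "an outdoor park"]

inputs = clip_processor(text=labels, images=image, return_tensors="pt", padding=True)
logits = clip_model(**inputs).logits_per_image   # similarity of the image to each label
probs = logits.softmax(dim=1)[0]
print("CLIP label:", labels[int(probs.argmax())])  # e.g. "a television studio"

# --- Speech input: transcribe a spoken prompt to text ---
stt = whisper.load_model("base")
result = stt.transcribe("spoken_prompt.wav")     # hypothetical recording
print("Transcribed prompt:", result["text"])
```

In a local assistant of this kind, the resulting label and transcript would simply be passed on to the selected LLM as text, which is why neither manual image labeling nor typed prompts are required.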

Readers who want to try a local chatbot can download Nvidia ChatRTX at https://www.nvidia.com/en-us/ai-on-rtx/chatrtx/, provided they meet the following requirements. The chatbot runs on PCs equipped with an Nvidia GeForce RTX 30 or 40 Series graphics card (Ampere or Ada architecture) with at least 8 GB of VRAM. Those who don’t have a compatible card can purchase a suitable Nvidia card on Amazon. The other system requirements are Windows 11, 16 GB of RAM, at least 35 GB of free storage space, and driver version 535.11 or later.
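For readers unsure whether their machine qualifies, the GPU-related requirements can be checked before downloading with a quick query of the standard nvidia-smi tool. The sketch below only covers VRAM and driver version; the parsing and messages are this article’s assumptions, not an official Nvidia utility, and the RAM and storage checks are left out.

```python
# Rough pre-flight check for the GPU-related ChatRTX requirements listed
# above (at least 8 GB of VRAM, driver 535.11 or later). It queries the
# standard nvidia-smi tool; the parsing and wording are illustrative only.
import shutil
import subprocess

def check_chatrtx_gpu() -> str:
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found - is the Nvidia driver installed?"
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,driver_version",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    name, vram_mib, driver = [f.strip() for f in out.splitlines()[0].split(",")]
    issues = []
    if int(vram_mib) < 8 * 1024:
        issues.append(f"only {vram_mib} MiB of VRAM (8 GB required)")
    major, minor = (int(x) for x in driver.split(".")[:2])
    if (major, minor) < (535, 11):
        issues.append(f"driver {driver} is older than 535.11")
    return f"{name}: " + ("; ".join(issues) if issues else "meets the GPU requirements")

if __name__ == "__main__":
    print(check_chatrtx_gpu())
```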
