Nvidia releases “Chat with RTX,” an AI chatbot that runs locally on PCs


Nvidia has released “Chat with RTX,” a new AI chatbot that runs locally on Windows PCs equipped with Nvidia GeForce RTX graphics cards. The chatbot allows users to personalise it by feeding their documents, text files, PDFs, and YouTube videos to power its responses.
The chatbot, available as a demo on Windows, leverages GPU acceleration and Nvidia’s RAG (retrieval-augmented generation) technology to quickly scan local files pointed to it and provide users with fast, contextually relevant answers to natural language questions without needing internet connectivity. For example, users could ask, “What was that restaurant in Vegas my wife told me about?” and Chat with RTX would search the user’s local files to find the answer.
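The retrieval step described above can be sketched in a few lines. This is a toy illustration only: Chat with RTX uses vector embeddings and Nvidia's TensorRT-LLM under the hood, not the simple keyword-overlap scorer below, and the sample documents are invented for the example.

```python
import re

def tokenize(text):
    """Lowercase word tokens from a string."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def retrieve(question, chunks):
    """Return the local text chunk sharing the most words with the question."""
    q = tokenize(question)
    return max(chunks, key=lambda c: len(q & tokenize(c)))

# Hypothetical local files, stood in for by short strings.
chunks = [
    "Meeting notes: project deadline moved to Friday.",
    "My wife recommended the Peppermill restaurant in Las Vegas.",
    "Recipe: mix flour, eggs and milk for pancakes.",
]
best = retrieve("What was that restaurant in Vegas my wife told me about?", chunks)
```

In a real RAG pipeline the retrieved chunk would then be passed to the language model as context, grounding its answer in the user's own files.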
Chat with RTX lets users “quickly, easily connect local files on a PC as a dataset to an open-source large language model like Mistral or Llama 2,” says the company.
Chat with RTX supports various file formats, such as .txt, .pdf, and .doc, as well as YouTube video transcripts. The tool can quickly index the contents of these files to create its knowledge database. Users can ask Chat with RTX to summarise YouTube videos or documents based on the indexed information.
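The indexing step can be sketched as splitting each document into fixed-size chunks keyed by source file. This is an illustrative assumption, not Nvidia's implementation, which also parses .pdf and .doc files and YouTube transcripts rather than plain strings.

```python
def build_index(docs, chunk_words=50):
    """Map each document name to a list of fixed-size word chunks."""
    index = {}
    for name, text in docs.items():
        words = text.split()
        index[name] = [
            " ".join(words[i:i + chunk_words])
            for i in range(0, len(words), chunk_words)
        ]
    return index

# Hypothetical document contents for demonstration.
docs = {"notes.txt": "alpha beta gamma delta epsilon zeta"}
index = build_index(docs, chunk_words=2)
```

Chunking keeps each retrievable unit small enough to fit alongside the question in the model's context window.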
The chatbot responds quickly because processing happens locally on the PC, using the AI capabilities of Nvidia GeForce RTX 30- or 40-series graphics cards, and sensitive user data stays on the device rather than being sent to the cloud. Some limitations exist, such as a lack of context carryover between questions, which means it does not handle follow-ups well, but Nvidia hopes to continue refining Chat with RTX over time.
The Chat with RTX demo app is free to download for GeForce RTX PC owners meeting the minimum hardware requirements: Windows 10/11, an RTX 30- or 40-series GPU with 8GB+ of VRAM, and the latest Nvidia drivers.
Nvidia is also sponsoring a developer contest to create RTX-accelerated, generative AI Windows apps and plug-ins using its TensorRT-LLM framework that Chat with RTX demonstrates. This could bring more local AI capabilities to RTX PCs.


