NVIDIA NeMo Guardrails aims to keep chatbot AIs like ChatGPT from going off the rails with their responses.


Over the past few months, we’ve read about the many different chatbots now available, from ChatGPT to Bing Chat to Bard. However, the large language models behind these chatbots have raised concerns about both the truthfulness of their responses and their tendency to occasionally produce unusual ones.

Today, NVIDIA announced a new open source platform designed to keep chatbot responses accurate and on topic. It is called, appropriately, NeMo Guardrails. It allows software developers to set safeguards on the types of responses that chatbots generate.

The program will have three different types of guardrails:

  • Topical guardrails prevent apps from getting into unwanted areas. For example, they prevent customer service assistants from answering questions about the weather.
  • Safety guardrails ensure that apps respond with accurate, appropriate information. They can filter out unwanted language and enforce that citations come only from credible sources.
  • Security guardrails restrict apps to only making connections to external third-party applications that are known to be safe.
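
To make the idea concrete, here is a minimal sketch of a topical guardrail in plain Python. This is not NVIDIA's actual NeMo Guardrails API (which uses its own configuration language); the blocked-topic list and function name are purely illustrative, showing only the general concept of intercepting off-topic requests before they reach the chatbot.

```python
# Toy example of a "topical guardrail": a pre-filter that steers a
# customer-service bot away from off-topic questions. This is NOT the
# NeMo Guardrails API; BLOCKED_TOPICS and topical_guardrail are
# hypothetical names for illustration only.

BLOCKED_TOPICS = {
    "weather": ["weather", "forecast", "temperature"],
}

def topical_guardrail(user_message: str):
    """Return a refusal message if the input touches a blocked topic,
    or None if the message may pass through to the underlying model."""
    lowered = user_message.lower()
    for topic, keywords in BLOCKED_TOPICS.items():
        if any(word in lowered for word in keywords):
            return f"Sorry, I can't help with questions about the {topic}."
    return None  # on-topic: let the chatbot answer normally

# A weather question gets deflected; a support question passes through.
print(topical_guardrail("What's the forecast for tomorrow?"))
print(topical_guardrail("How do I reset my password?"))
```

A real system would use the model itself (or an embedding-based classifier) rather than keyword matching to decide whether a request is in scope, but the control flow is the same: check the message against the rail before the chatbot ever sees it.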

Software developers can learn more about NeMo Guardrails on NVIDIA’s technical blog.


