
NVIDIA

NVIDIA AI Foundation Endpoints give users easy access to NVIDIA-hosted API endpoints for NVIDIA AI Foundation Models such as Mixtral 8x7B, Llama 2, and Stable Diffusion. These models, available in the NVIDIA NGC catalog, are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.

With NVIDIA AI Foundation Endpoints, you can get quick results from a fully accelerated stack running on NVIDIA DGX Cloud. Once customized, these models can be deployed anywhere with enterprise-grade security, stability, and support using NVIDIA AI Enterprise.

These models can be easily accessed via the langchain-nvidia-ai-endpoints package, as shown below.

Installation

pip install -U langchain-nvidia-ai-endpoints
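
To confirm the package is installed, you can check its version with the standard library (a minimal sanity check; it uses only importlib.metadata and the distribution name from the command above):

from importlib.metadata import version

# Print the installed version of the integration package.
print(version("langchain-nvidia-ai-endpoints"))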

Setup and Authentication

  • Create a free NVIDIA NGC account.
  • Navigate to Catalog > AI Foundation Models > (Model with API endpoint).
  • Select the API option and generate a key; it will start with nvapi-. Export it as NVIDIA_API_KEY:

export NVIDIA_API_KEY=nvapi-XXXXXXXXXXXXXXXXXXXXXXXXXX
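
If you prefer to supply the key from Python (for example in a notebook), a minimal sketch using only the standard library is shown below; the client reads NVIDIA_API_KEY from the environment when it is created, just as with the export above.

import getpass
import os

# Prompt for the key only if it is not already set in the environment.
if not os.environ.get("NVIDIA_API_KEY"):
    os.environ["NVIDIA_API_KEY"] = getpass.getpass("Enter your NVIDIA API key: ")

With the key in place, the quickstart below creates a chat model and sends a single prompt.
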
from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Create a client for the hosted Mixtral 8x7B endpoint and send a single prompt.
llm = ChatNVIDIA(model="mixtral_8x7b")
result = llm.invoke("Write a ballad about LangChain.")
print(result.content)
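
Because ChatNVIDIA implements LangChain's standard chat model interface, it composes with prompts and output parsers like any other chat model. The sketch below is illustrative; it assumes langchain-core is available (it is installed alongside the integration package) and reuses the mixtral_8x7b model from the quickstart.

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Compose a prompt -> model -> parser chain with the LCEL pipe syntax.
prompt = ChatPromptTemplate.from_template("Write a short poem about {topic}.")
llm = ChatNVIDIA(model="mixtral_8x7b")
chain = prompt | llm | StrOutputParser()

# Stream tokens as they are generated instead of waiting for the full reply.
for chunk in chain.stream({"topic": "LangChain"}):
    print(chunk, end="", flush=True)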

Using NVIDIA AI Foundation Endpoints

A selection of NVIDIA AI Foundation Models is supported directly in LangChain through familiar APIs.

The currently supported models can be found in NGC.
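
Chat completion is not the only model type; the package also exposes embedding models through the standard LangChain embeddings interface. The sketch below uses the NVIDIAEmbeddings class from langchain-nvidia-ai-endpoints; the model identifier shown is an assumption and should be checked against the models listed in NGC.

from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

# The model name here is illustrative; use an embedding model listed in NGC.
embedder = NVIDIAEmbeddings(model="nvolveqa_40k")

# Embed a query and a small batch of documents for retrieval.
query_vector = embedder.embed_query("What is NVIDIA DGX Cloud?")
doc_vectors = embedder.embed_documents([
    "NVIDIA AI Foundation Endpoints are hosted API endpoints.",
    "LangChain integrates with them via langchain-nvidia-ai-endpoints.",
])
print(len(query_vector), len(doc_vectors))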

The following examples may help you get started: