
AWS

The LangChain integrations related to the Amazon Web Services (AWS) platform.

LLMs

Bedrock

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.

See a usage example.

from langchain_community.llms.bedrock import Bedrock
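A minimal usage sketch follows. The model_id, region, and request-body shape are assumptions (they vary by provider and model), and the live call requires AWS credentials with Bedrock access, so it is shown commented out:

```python
import json

# With credentials configured, usage looks roughly like:
# from langchain_community.llms.bedrock import Bedrock
# llm = Bedrock(model_id="anthropic.claude-v2", region_name="us-east-1")
# print(llm.invoke("Tell me a joke"))

def build_claude_body(prompt: str, max_tokens: int = 256) -> str:
    # Under the hood, each Bedrock model takes provider-specific JSON;
    # Anthropic Claude models expect a Human/Assistant-framed prompt.
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })
```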

Amazon API Gateway

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.

API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.

See a usage example.

from langchain_community.llms import AmazonAPIGateway

SageMaker Endpoint

Amazon SageMaker is a fully managed service for building, training, and deploying machine learning (ML) models with managed infrastructure, tools, and workflows.

We use SageMaker to host a model and expose it as a SageMaker endpoint.

See a usage example.

from langchain_community.llms import SagemakerEndpoint
from langchain_community.llms.sagemaker_endpoint import LLMContentHandler
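The `LLMContentHandler` defines how a prompt is serialized for the endpoint and how the response is parsed back. A sketch of the two transforms as plain functions; the payload keys below assume a Hugging Face text-generation container and may differ for your deployed model:

```python
import json

def transform_input(prompt: str, model_kwargs: dict) -> bytes:
    # Serialize the prompt and generation parameters as the endpoint's
    # request body (key names are an assumption about the container).
    return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

def transform_output(output: bytes) -> str:
    # Parse the endpoint's JSON response and extract the generated text.
    return json.loads(output.decode("utf-8"))[0]["generated_text"]
```

In a real handler these become the `transform_input` and `transform_output` methods of an `LLMContentHandler` subclass passed to `SagemakerEndpoint`.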

Chat models

Bedrock Chat

See a usage example.

from langchain_community.chat_models import BedrockChat

Text Embedding Models

Bedrock

See a usage example.

from langchain_community.embeddings import BedrockEmbeddings

SageMaker Endpoint

See a usage example.

from langchain_community.embeddings import SagemakerEndpointEmbeddings
from langchain_community.llms.sagemaker_endpoint import ContentHandlerBase

Chains

Amazon Comprehend Moderation Chain

Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover valuable insights and connections in text.

We need to install the boto3 and nltk libraries.

pip install boto3 nltk

See a usage example.

from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain

Document loaders

AWS S3 Directory and File

Amazon Simple Storage Service (Amazon S3) is an object storage service.

See a usage example for S3DirectoryLoader.

See a usage example for S3FileLoader.

from langchain_community.document_loaders import S3DirectoryLoader, S3FileLoader

Amazon Textract

Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents.

See a usage example.

from langchain_community.document_loaders import AmazonTextractPDFLoader

Memory

AWS DynamoDB

AWS DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.

We have to configure the AWS CLI.

We need to install the boto3 library.

pip install boto3

See a usage example.

from langchain.memory import DynamoDBChatMessageHistory
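A hedged sketch of usage follows. The table and session names are hypothetical, the live calls need AWS credentials and an existing table, and the stored item shape shown is an assumption about how messages are serialized:

```python
# With a DynamoDB table whose partition key is "SessionId" (assumed name):
# from langchain.memory import DynamoDBChatMessageHistory
# history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="user-1")
# history.add_user_message("hi!")

def to_item(role: str, content: str) -> dict:
    # Messages are persisted as typed dicts roughly of this shape
    # (an assumption for illustration, not the library's exact schema).
    return {"type": role, "data": {"content": content}}
```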

Retrievers

Amazon Kendra

Amazon Kendra is an intelligent search service provided by Amazon Web Services (AWS). It utilizes advanced natural language processing (NLP) and machine learning algorithms to enable powerful search capabilities across various data sources within an organization. Kendra is designed to help users find the information they need quickly and accurately, improving productivity and decision-making.

With Kendra, we can search across a wide range of content types, including documents, FAQs, knowledge bases, manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and contextual meanings to provide highly relevant search results.

We need to install the boto3 library.

pip install boto3

See a usage example.

from langchain.retrievers import AmazonKendraRetriever

Amazon Bedrock (Knowledge Bases)

Knowledge Bases for Amazon Bedrock is an Amazon Web Services (AWS) offering that lets you quickly build RAG applications by using your private data to customize foundation model responses.

We need to install the boto3 library.

pip install boto3

See a usage example.

from langchain.retrievers import AmazonKnowledgeBasesRetriever

Vector stores

Amazon OpenSearch Service

Amazon OpenSearch Service performs interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open source, distributed search and analytics suite derived from Elasticsearch. Amazon OpenSearch Service offers the latest versions of OpenSearch, support for many versions of Elasticsearch, as well as visualization capabilities powered by OpenSearch Dashboards and Kibana.

We need to install several python libraries.

pip install boto3 requests requests-aws4auth

See a usage example.

from langchain_community.vectorstores import OpenSearchVectorSearch
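A sketch of the k-NN index mapping that vector search on OpenSearch relies on; the field name and embedding dimension are assumptions (the dimension must match your embedding model):

```python
# Minimal OpenSearch k-NN index body (field name and dimension assumed):
index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "vector_field": {"type": "knn_vector", "dimension": 1536},
        }
    },
}

# With credentials and an embeddings object, OpenSearchVectorSearch can
# create and query such an index, e.g. (requires langchain_community):
# docsearch = OpenSearchVectorSearch.from_texts(
#     texts, embeddings, opensearch_url="https://...")
```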

Tools

AWS Lambda

AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It lets developers build and run applications and services without provisioning or managing servers. This serverless architecture enables you to focus on writing and deploying code, while AWS automatically takes care of scaling, patching, and managing the infrastructure required to run your applications.

We need to install the boto3 python library.

pip install boto3

See a usage example.
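A hedged sketch of what the tool does underneath: it invokes a named Lambda function with a JSON payload. The function name and payload key below are hypothetical, and the live call (which needs credentials) is shown commented out:

```python
import json

def build_invoke_payload(tool_input: str) -> bytes:
    # Pass the agent's input through as the event body
    # (the "body" key is an assumption for illustration).
    return json.dumps({"body": tool_input}).encode("utf-8")

# With AWS credentials configured, the invocation via boto3 looks like:
# import boto3
# client = boto3.client("lambda", region_name="us-east-1")
# resp = client.invoke(FunctionName="my-function",  # hypothetical name
#                      Payload=build_invoke_payload("send the report"))
```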

Callbacks

SageMaker Tracking

Amazon SageMaker is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models.

Amazon SageMaker Experiments is a capability of Amazon SageMaker that lets you organize, track, compare and evaluate ML experiments and model versions.

We need to install several python libraries.

pip install google-search-results sagemaker

See a usage example.

from langchain.callbacks import SageMakerCallbackHandler