OpenAI
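OpenAIEmbeddings authenticates via the OPENAI_API_KEY environment variable. A minimal setup sketch (the key value shown is a placeholder):

```python
import os

# OpenAIEmbeddings reads the OPENAI_API_KEY environment variable.
# Set it here if it isn't already present (placeholder value shown).
os.environ.setdefault("OPENAI_API_KEY", "sk-your-key-here")
```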

Let’s load the OpenAIEmbeddings class.

from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
query_result[:5]
[-0.003186025367556387,
0.011071979803637493,
-0.004020420763285827,
-0.011658221276953042,
-0.0010534035786864363]
doc_result = embeddings.embed_documents([text])
doc_result[0][:5]
[-0.003186025367556387,
0.011071979803637493,
-0.004020420763285827,
-0.011658221276953042,
-0.0010534035786864363]
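As shown above, embed_query and embed_documents return the same vector for identical text, so their cosine similarity is 1.0. A quick sketch in plain Python (the vector below is just the truncated prefix printed above, for illustration):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

v = [-0.003186, 0.011072, -0.004020, -0.011658, -0.001053]
print(cosine_similarity(v, v))  # identical vectors score 1.0
```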

Let’s load the OpenAIEmbeddings class with an explicitly specified model (here, the second-generation text-embedding-ada-002). Note: first-generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001) are not recommended — see OpenAI's embeddings documentation.

from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
text = "This is a test document."
query_result = embeddings.embed_query(text)
query_result[:5]
[0.004452846988523035,
0.034550655976098514,
-0.015029939040690051,
0.03827273883655212,
0.005785414075152477]
doc_result = embeddings.embed_documents([text])
doc_result[0][:5]
[0.004452846988523035,
0.034550655976098514,
-0.015029939040690051,
0.03827273883655212,
0.005785414075152477]
import os

# If you are behind an explicit proxy, set the OPENAI_PROXY environment
# variable so requests are routed through it.
os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"