Hugging Face Hub
The Hugging Face Hub is an online platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, where people can easily collaborate and build ML together.
This example showcases how to connect to the Hugging Face Hub and use
different models.
Installation and Setup
To use this integration, you should have the huggingface_hub Python package installed.
!pip install huggingface_hub
# get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-token
from getpass import getpass
HUGGINGFACEHUB_API_TOKEN = getpass()
········
import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN
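If you prefer not to set an environment variable, the token can also be passed directly to the LLM wrapper. A minimal sketch, assuming the huggingfacehub_api_token parameter of HuggingFaceHub in your installed version of langchain_community:
from langchain_community.llms import HuggingFaceHub

# Pass the token explicitly instead of reading HUGGINGFACEHUB_API_TOKEN
# from the environment (parameter name assumed from langchain_community).
llm = HuggingFaceHub(
    repo_id="google/flan-t5-xxl",
    huggingfacehub_api_token=HUGGINGFACEHUB_API_TOKEN,
)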
Prepare Examples
from langchain_community.llms import HuggingFaceHub
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
question = "Who won the FIFA World Cup in the year 1994? "
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
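To check exactly what will be sent to the model, you can render the template yourself before wiring it into a chain:
# Render the template to inspect the final prompt string
print(prompt.format(question=question))
# Question: Who won the FIFA World Cup in the year 1994?
# Answer: Let's think step by step.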
Examples
Below are some examples of models you can access through the
Hugging Face Hub integration.
Flan, by Google
repo_id = "google/flan-t5-xxl" # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options
llm = HuggingFaceHub(
repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
The FIFA World Cup was held in the year 1994. West Germany won the FIFA World Cup in 1994
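(Note that the answer above is incorrect: Brazil won the 1994 FIFA World Cup. Model outputs should always be verified.)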
Dolly, by Databricks
See the Databricks organization page for a list of available models.
repo_id = "databricks/dolly-v2-3b"
llm = HuggingFaceHub(
repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
First of all, the world cup was won by the Germany. Then the Argentina won the world cup in 2022. So, the Argentina won the world cup in 1994.
Question: Who
Camel, by Writer
See Writer's organization page for a list of available models.
repo_id = "Writer/camel-5b-hf" # See https://huggingface.co/Writer for other options
llm = HuggingFaceHub(
repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
XGen, by Salesforce
See the model page on Hugging Face for more information.
repo_id = "Salesforce/xgen-7b-8k-base"
llm = HuggingFaceHub(
repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
Falcon, by Technology Innovation Institute (TII)
See the model page on Hugging Face for more information.
repo_id = "tiiuae/falcon-40b"
llm = HuggingFaceHub(
repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
InternLM-Chat, by Shanghai AI Laboratory
See the model page on Hugging Face for more information.
repo_id = "internlm/internlm-chat-7b"
llm = HuggingFaceHub(
repo_id=repo_id, model_kwargs={"max_length": 128, "temperature": 0.8}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
Qwen, by Alibaba Cloud
Tongyi Qianwen-7B (Qwen-7B) is a model with a scale of 7 billion parameters in the Tongyi Qianwen large model series developed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model trained on ultra-large-scale pre-training data.
See more information on Hugging Face or on GitHub.
See here for a larger example of the LangChain integration with Qwen.
repo_id = "Qwen/Qwen-7B"
llm = HuggingFaceHub(
repo_id=repo_id, model_kwargs={"max_length": 128, "temperature": 0.5}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))
Yi series models, by 01.ai
The Yi series models are large language models trained from scratch by developers at 01.ai. The first public release contains two bilingual (English/Chinese) base models with parameter sizes of 6B (Yi-6B) and 34B (Yi-34B). Both are trained with a 4K sequence length, which can be extended to 32K during inference time. Yi-6B-200K and Yi-34B-200K are base models with a 200K context length.
Here we test the Yi-34B model.
repo_id = "01-ai/Yi-34B"
llm = HuggingFaceHub(
repo_id=repo_id, model_kwargs={"max_length": 128, "temperature": 0.5}
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question))