Titan Takeoff

TitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.

Our inference server, Titan Takeoff, enables deployment of LLMs locally on your hardware in a single command. Most generative model architectures are supported, such as Falcon, Llama 2, GPT-2, T5, and many more.

Installation

To get started with Titan Takeoff, all you need is Docker and Python installed on your local system. If you wish to use the server with GPU support, you will need to install Docker with CUDA support.

For Mac and Windows users, make sure you have the Docker daemon running! You can check this by running docker ps in your terminal. To start the daemon, open the Docker Desktop app.

Run the following command to install the Iris CLI, which will enable you to run the Takeoff server:

!pip install titan-iris

Choose a Model

Takeoff supports many of the most powerful generative text models, such as Falcon, MPT, and Llama. See the supported models page for more information. For information about using your own models, see the custom models documentation.

Going forward in this demo we will be using the Falcon 7B Instruct model. This is a good open-source model that is trained to follow instructions and is small enough to run inference with, even on a CPU.

Taking off

Models are referred to by their model ID on Hugging Face. Takeoff uses port 8000 by default, but it can be configured to use another port. There is also support for running on an Nvidia GPU by specifying cuda for the device flag.

To start the takeoff server, run:

iris takeoff --model tiiuae/falcon-7b-instruct --device cpu
iris takeoff --model tiiuae/falcon-7b-instruct --device cuda # Nvidia GPU required
iris takeoff --model tiiuae/falcon-7b-instruct --device cpu --port 5000 # run on port 5000 (default: 8000)

You will then be directed to a login page, where you will need to create an account to proceed. After logging in, run the command displayed onscreen to check whether the server is ready. Once it is ready, you can start using the Takeoff integration.

To shut down the server, run the following command. You will be presented with options for which Takeoff server to shut down, in case you have multiple servers running.

iris takeoff --shutdown # shutdown the server

Inferencing your model

To access your LLM, use the TitanTakeoff LLM wrapper:

from langchain_community.llms import TitanTakeoff

llm = TitanTakeoff(
    base_url="http://localhost:8000", generate_max_length=128, temperature=1.0
)

prompt = "What is the largest planet in the solar system?"

llm(prompt)

No parameters are required by default, but a base_url pointing to the URL where Takeoff is running can be specified, and generation parameters can be supplied.
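
For example, if you started the server on port 5000 as in the earlier command, a minimal sketch (the port and prompt here are illustrative assumptions) would point the wrapper at that address:

from langchain_community.llms import TitanTakeoff

# Assumes Takeoff was started with --port 5000; adjust base_url to wherever
# your server is listening (the default is http://localhost:8000).
llm = TitanTakeoff(base_url="http://localhost:5000")

llm("What is the largest moon of Jupiter?")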

Streaming

Streaming is also supported via the streaming flag:

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = TitanTakeoff(
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), streaming=True
)

prompt = "What is the capital of France?"

llm(prompt)

Integration with LLMChain

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

llm = TitanTakeoff()

template = "What is the capital of {country}"

prompt = PromptTemplate(template=template, input_variables=["country"])

llm_chain = LLMChain(llm=llm, prompt=prompt)

generated = llm_chain.run(country="Belgium")
print(generated)
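
The same chain can be reused for a batch of inputs. As a sketch (the country values are arbitrary examples), LLMChain.apply runs the chain once per input dictionary and returns each generation under the default "text" output key:

# Run the chain over several inputs; each dict supplies the {country} variable.
results = llm_chain.apply([{"country": "France"}, {"country": "Germany"}])
for result in results:
    print(result["text"])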