Running OSS Large Language Models locally with Ollama


1 min read

Ollama is a really easy and sleek tool for running OSS large language models. Download the Ollama app and run it. Once it is running, it spins up an API, and you can use the CLI to install models and ask your questions.
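As a quick sketch of what that API looks like, you can query it directly with curl once the app is running. This assumes the default port 11434 and that the llama2 model is already pulled:

```shell
# send a one-off generation request to the local Ollama API
# (default port 11434; "stream": false returns a single JSON response)
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

This is handy if you want to script against the model rather than use the interactive CLI.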

You can look up all the available models here. As an example, let us install llama2 (Meta's newer model, which is on par with OpenAI's ChatGPT) and run it.

# install llama2 model
ollama pull llama2

# run it
ollama run llama2

Now you have a command prompt >>> where you can type your question and get answers, all running on your local machine. Here is an example:

>>> give me a short tip to write a journal
Sure, here's a short tip for writing in a journal:

"Start small and be consistent. Write down one thing you're grateful for each day, or describe your
thoughts and feelings in a short paragraph. As you continue to write, you'll find that it becomes easier
to express yourself and explore your thoughts and emotions."

If you are looking for a GUI, there are plenty of options available, and they are outlined in the Ollama repo readme.