
How to use DeepSeek Locally with Ollama in Ubuntu 24.04

In recent news, we have seen DeepSeek overthrow ChatGPT from its long-sustained reign. DeepSeek is free to use; you can simply head over to deepseek.com. However, that's not nearly as fun as hosting it on your own local machine.

In this tutorial I will guide you through running DeepSeek locally with Ollama on your Ubuntu 24.04 machine. In addition, I'll show you how to set up a ChatGPT-like UI for interacting with it.

What is DeepSeek?

DeepSeek is an open-source AI Large Language Model (LLM), a kind of natural language processor. It takes text input from you and responds to whatever you ask it.

Official Website: deepseek.com

It will answer general questions, programming and math questions, and pretty much anything else you throw at it.

What is Ollama?

Ollama is a fully open-source tool for running LLMs locally on your own system. If your PC is capable enough to run an LLM, Ollama lets you do it directly on your own hardware.

Official Website: ollama.com

We will use it to install the DeepSeek model locally on our PC.

Prerequisites

Before we begin the tutorial, make sure you have the following ready.

  • High-speed internet connection, ideally ~50 Mbps (not mandatory; on a slower connection you just need more patience)
  • Ubuntu 24.04 (if you're experienced, you can follow a similar approach on other operating systems)
  • 16 GB RAM (I personally recommend 32 GB, because more is better; you can verify your specs with the commands below)
  • Basic understanding of the Linux terminal
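
If you want to double-check your machine before starting, these standard commands (available on any stock Ubuntu install) report the relevant specs:

free -h         # total RAM is shown in the "Mem:" row
lsb_release -d  # confirms the Ubuntu release
nproc           # number of CPU cores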

Step #1: Install Git and Python

Ubuntu comes with Python preinstalled, but Git isn't installed by default. So let us make sure both are in place before we proceed.

Run the code below in a new terminal.

# if you are doing this on a server make sure you know what you're doing
sudo apt update && sudo apt upgrade -y

After that, run the following command.

sudo apt install git python3 -y
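
To confirm both tools are in place, print their versions; each command should output a version string instead of an error:

git --version
python3 --version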


Step #2: Install Ollama and install DeepSeek

If the previous command executed successfully, let us move on with Step #2. Run the following command in the terminal.

# FREE ADVICE: please don't run commands blindly from the internet
curl -fsSL https://ollama.com/install.sh | sh

It's very important to verify that you're not executing something malicious. Please check that the above command matches the one on the official page: ollama.com/download.
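
Once the script finishes, you can sanity-check the install. On Ubuntu the official script typically registers Ollama as a systemd service, so both of these should succeed:

ollama --version        # prints the installed Ollama version
systemctl status ollama # the background service should show "active (running)"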

Step #3: Install DeepSeek R1 Model

Now that we have installed Ollama, let us go ahead and install the LLM itself. The model I'm using for this tutorial is the 7b variant, which is fairly decent. Larger variants exist, but they demand much more RAM and GPU power.

ollama run deepseek-r1:7b

When you run the above command, Ollama downloads the model weights (several gigabytes, so give it time) and then drops you into an interactive chat prompt.
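
If 16 GB of RAM is a stretch for your machine, smaller distilled variants are published under the same name. The exact tags may change, so check ollama.com/library/deepseek-r1; for example:

# lighter variant, needs far less RAM than 7b
ollama run deepseek-r1:1.5b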

You can skip Step #4 if you don't want a web interface; simply run this command whenever you want to chat with the model.

ollama run deepseek-r1:7b
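
Inside the interactive session, type a question and press Enter; /bye exits. You can also pass a prompt as an argument for a one-shot answer without entering the chat:

ollama run deepseek-r1:7b "Explain what a virtual environment is in one sentence."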

But if you are feeling a bit fancy and want to go the extra mile for a Web UI, follow along.

Step #4: Install Open WebUI

First, let us install python3-venv; this lets us isolate the Python packages for this specific purpose in a virtual environment.

sudo apt install python3-venv

After that let us create a virtual environment.

python3 -m venv ~/openwebui # create the virtual environment
source ~/openwebui/bin/activate # load/activate the python environment

After running the above commands, your shell prompt should be prefixed with (openwebui), which tells you the virtual environment is active.

After that, run the following command in the same terminal. It will take some time to complete.

pip install open-webui

Once it is installed, run the command below to start the server.

open-webui serve
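
By default the server listens on port 8080. If that port is already taken, the open-webui CLI accepts a port flag (flag name taken from its --help output; double-check on your installed version):

open-webui serve --port 3000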

Once the server finishes starting up, navigate to http://localhost:8080 (the startup log also shows 0.0.0.0:8080) in your browser. You'll be prompted to sign up, and then you'll be ready to do some chatting.

Once you sign up, you will land on a ChatGPT-like chat screen. Select the DeepSeek model from the drop-down, and you can chat with your local DeepSeek instance without any problem.
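
Under the hood, Open WebUI talks to the Ollama server over its local HTTP API, which listens on http://localhost:11434 by default. You can query that same API directly if you ever want to script against your local model:

# one-off request to Ollama's generate endpoint
curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:7b", "prompt": "Hello", "stream": false}'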

This concludes our tutorial on using DeepSeek locally with Ollama.
