Mastering DeepSeek Setup with Ollama: A Beginner’s Guide

Running DeepSeek with Ollama is easier than you think. This guide will show you how to set up this powerful AI model on your own server.

Understanding DeepSeek and Ollama

DeepSeek is a large language model (LLM) known for its accuracy and affordability. When combined with Ollama, it allows you to create a personal AI agent either on a private server or a local machine. This setup is ideal for anyone looking to harness AI capabilities without relying on third-party services.

Ollama acts as a platform that facilitates the deployment of various LLMs, including DeepSeek. Its main advantage is providing users with full control over their AI agent, making it a great choice for developers and businesses alike.

Prerequisites for Setting Up DeepSeek

Before diving into the setup process, it’s crucial to ensure your system meets the necessary requirements. Here’s what you need:

  • A Linux VPS platform or a Linux personal computer.
  • A minimum of 16 GB of RAM, 12 GB of storage space, and 4 CPU cores.
  • Popular Linux distributions like Ubuntu or CentOS, preferably the latest versions.

If your current system doesn’t meet these specifications, consider upgrading or purchasing a service from Hostinger. Their KVM 4 plan offers 4 CPU cores, 16 GB of RAM, and 200 GB of NVMe SSD storage, starting at $9.99/month.
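Before proceeding, you can quickly check whether your machine meets these minimums from the terminal. This is a small sketch using standard Linux tools (`nproc`, `free`, `df`); the thresholds in the comments are the ones suggested above.

```shell
# Rough self-check against this guide's suggested minimums
cores=$(nproc)
ram_gb=$(free -g | awk '/^Mem:/ {print $2}')
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')
echo "CPU cores: ${cores} (recommended: 4+)"
echo "RAM: ${ram_gb} GB (recommended: 16+)"
echo "Free disk: ${disk_gb} GB (recommended: 12+)"
```

If any value falls short, expect slow responses or failed model loads rather than a hard error.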

Step-by-Step Guide to Install Ollama and DeepSeek

1. Install Ollama

First, you need to install Ollama on your Linux system. If you’re using Hostinger, you can select the corresponding template during onboarding or through the hPanel’s Operating System menu.

For manual installation, connect to your server via SSH using a tool like PuTTY or Terminal. If you’re installing locally, simply open your system’s terminal. Here are the steps:

  • Update your server’s package lists using: sudo apt update
  • Install external dependencies: sudo apt install python3 python3-pip python3-venv
  • Download and set up Ollama: curl -fsSL https://ollama.com/install.sh | sh
  • Create a Python virtual environment: python3 -m venv ~/ollama-webui && source ~/ollama-webui/bin/activate
  • Set up the Ollama web UI: pip install open-webui
  • Start a virtual terminal using Screen: screen -S Ollama
  • Run Ollama: open-webui serve

Access the web interface in your browser by entering your VPS IP address followed by port 8080, for example http://your-vps-ip:8080.
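To confirm the web UI is actually up before opening a browser, you can probe the port with curl. This is a hypothetical check, not part of the official setup; replace the placeholder address with your VPS IP (127.0.0.1 works when testing on a local machine).

```shell
# Check whether the web UI answers on port 8080 (assumed address below)
VPS_IP="127.0.0.1"
code=$(curl -s -o /dev/null -w "%{http_code}" "http://${VPS_IP}:8080" || true)
if [ "$code" = "200" ]; then
  echo "Web UI is up"
else
  echo "Web UI not reachable yet (HTTP code: $code)"
fi
```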

2. Set up DeepSeek

Once Ollama is running, it’s time to set up DeepSeek. Return to your host system’s command-line interface and press Ctrl + C to stop the web UI (the Ollama service itself keeps running in the background). If you’re using Hostinger’s OS template, connect via SSH using the Browser terminal.

Download the DeepSeek R1 model by running: ollama run deepseek-r1:7b

Wait for the download to complete and verify the configuration by running: ollama list

If DeepSeek appears in the list, restart the web interface with: open-webui serve
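The check above can also be scripted. This is a hypothetical verification step that wraps `ollama list` so you get a clear message either way before restarting the interface:

```shell
# Confirm the model appears in Ollama's local list before restarting the UI
if ollama list 2>/dev/null | grep -q "deepseek-r1"; then
  echo "DeepSeek is ready - restart the UI with: open-webui serve"
else
  echo "Model not found - download it with: ollama run deepseek-r1:7b"
fi
```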

3. Test DeepSeek on Ollama

Log into the Ollama GUI and create a new account. This will be your default administrator for the AI agent. Interact with DeepSeek to ensure it responds correctly. For instance, you might ask, “Explain what VPS hosting is.”

4. Fine-tune Ollama

Customize the chat settings to match your specific use case. Access the settings by clicking the slider icon on the top right of the Ollama dashboard. Adjust parameters like:

  • Stream chat response – Enables real-time responses.
  • Seed – Sets consistent responses for specific prompts.
  • Temperature – Adjusts creativity level.
  • Reasoning effort – Controls how deeply the model reasons before answering.
  • Top K – Reduces nonsensical outputs.
  • Max tokens – Sets the length of responses.

For fiction writing, a higher Temperature might be beneficial, whereas for coding, a lower Temperature and higher Reasoning effort might be ideal. Explore our Ollama GUI tutorial for more detailed instructions.
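The same parameters can also be set outside the GUI through Ollama’s REST API, which listens on port 11434 by default (`seed`, `temperature`, `top_k`, and `num_predict` are the API-level names; `num_predict` corresponds to Max tokens). As a sketch, the snippet below only writes the request payload to a file and prints the curl command you would run against a live server; the values shown are illustrative, not recommendations.

```shell
# Write an example /api/generate payload mapping the GUI settings to
# Ollama's option names (values here are illustrative)
cat > /tmp/deepseek-request.json <<'EOF'
{
  "model": "deepseek-r1:7b",
  "prompt": "Explain what VPS hosting is.",
  "stream": true,
  "options": {
    "seed": 42,
    "temperature": 0.3,
    "top_k": 40,
    "num_predict": 512
  }
}
EOF
echo "To send it: curl http://localhost:11434/api/generate -d @/tmp/deepseek-request.json"
```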

If you’re looking for a reliable platform to host your AI experiments, consider using Hostinger. With their robust VPS options, setting up and running Ollama becomes a seamless experience for both beginners and seasoned developers.

Conclusion

By following these steps, you can successfully set up DeepSeek on Ollama, creating a personalized AI agent on your server. Ensure your system meets the necessary specifications and make use of Hostinger’s resources for a smoother experience. With the right configurations, your AI agent will be ready to tackle various tasks, from content creation to complex problem-solving.

How to Run DeepSeek with Ollama FAQ

What are the requirements for running DeepSeek with Ollama?

To run DeepSeek with Ollama, a Linux host system with a minimum of a 4-core CPU, 16 GB of RAM, and 12 GB of storage space is required. Popular distros like Ubuntu and CentOS are recommended, and always aim for the latest OS versions to avoid issues.

What is DeepSeek, and why should I use it with Ollama?

DeepSeek is a cost-effective and efficient LLM, comparable to models like OpenAI’s o1. Using it with Ollama allows for an affordable AI agent setup, leveraging its powerful capabilities.

Can I run DeepSeek with Ollama on any version of Ubuntu?

While technically possible on any Ubuntu version, it’s advisable to use newer releases such as Ubuntu 20.04, 22.04, or 24.04 to prevent compatibility issues.
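If you are unsure which release you are on, you can check it in one command; reading `/etc/os-release` works on Ubuntu and most modern distributions:

```shell
# Show the distribution name and version before installing
grep -E "^(NAME|VERSION_ID)=" /etc/os-release
```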

Aris Sentika is a Content Writer specializing in Linux and WordPress development. He has a passion for networking, front-end web development, and server administration. By combining his IT and writing experience, Aris creates content that helps people easily understand complex technical topics to start their online journey. Follow him on LinkedIn.
