Setting up Ollama in the CLI
Before using Ollama in the CLI, make sure you’ve installed it on your system successfully. To verify, open your terminal and run the following command:
ollama --version
If Ollama is installed correctly, this prints the installed version in the form `ollama version is x.y.z`.
Next, familiarize yourself with Ollama's essential commands, which the rest of this tutorial builds on.
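For quick reference, here are the core subcommands you'll use most often (run `ollama help` for the full list):

```shell
ollama pull <model>      # download a model from the library
ollama run <model>       # run a model (interactive if no prompt is given)
ollama list              # list models installed locally
ollama ps                # show currently loaded models
ollama show <model>      # print a model's details
ollama rm <model>        # delete a local model
ollama serve             # start the Ollama server
```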
Essential usage of Ollama in the CLI
This section will cover the primary usage of the Ollama CLI, from interacting with models to saving model outputs to files.
Running models
To start using models in Ollama, you first need to download the desired model using the pull command. For example, to pull Llama3.2, execute the following:
ollama pull llama3.2
Wait for the download to complete; the time may vary depending on the model’s file size.
If you’re unsure which model to download, check out the Ollama official model library. It provides important details for each model, including customization options, language support, and recommended use cases.
After pulling the model, you can run it with a predefined prompt like this:
ollama run llama3.2 "Explain the basics of machine learning."
Alternatively, run the model without a prompt to start an interactive session:
ollama run llama3.2
In this mode, you can enter your queries or instructions, and the model will generate responses. You can also ask follow-up questions to gain deeper insights or clarify a previous response.
When you’re done interacting with the model, type:
/bye
This will exit the session and return you to the regular terminal interface.
Learn how to create effective AI prompts to improve your results and interactions with Ollama models.
Training models
While pre-trained open-source models like Llama 3.2 perform well for general tasks such as content generation, they may not always fit specific use cases. To improve a model's accuracy on a particular topic, you can supply it with relevant information during a session, a lightweight form of "training" that works through the conversation context.
Note, however, that this only affects the model's short-term context: the information is retained only during the active conversation. When you quit the session and start a new one, the model won't remember what you taught it. Permanently changing a model's behavior requires fine-tuning, which is beyond the scope of the CLI commands covered here.
To train the model, start an interactive session. Then, initiate training by typing a prompt like:
Hey, I want you to learn about [topic]. Can I train you on this?
The model will typically respond affirmatively and invite you to share the information.
You can then provide basic information about the topic in your next messages to give the model context.
To continue the training and provide more information, ask the model to prompt you with questions about the topic. For example:
Can you ask me a few questions about [topic] to help you understand it better?
Once the model has enough context on the subject, you can end the training and test if the model retains this knowledge.
Prompting and logging responses to files
In Ollama, you can ask the model to perform tasks using the contents of a file, such as summarizing text or analyzing information. This is especially useful for long documents, as it eliminates the need to copy and paste text when instructing the model.
For example, if you have a file named input.txt containing the information you want to summarize, you can run the following:
ollama run llama3.2 "Summarize the content of this file in 50 words." < input.txt
The model will read the file's contents and generate a summary.
Ollama also lets you log model responses to a file, making it easier to review or refine them later. Here’s an example of asking the model a question and saving the output to a file:
ollama run llama3.2 "Tell me about renewable energy." > output.txt
This saves the model's response in output.txt.
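You can also combine both redirections to read the prompt's input from one file and log the response to another (the filenames here are just examples):

```shell
# Summarize input.txt and write the model's answer to summary.txt
ollama run llama3.2 "Summarize the content of this file in 50 words." < input.txt > summary.txt
```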
Advanced usage of Ollama in the CLI
Now that you understand the essentials, let’s explore more advanced uses of Ollama through the CLI.
Creating custom models
With the Ollama CLI, you can create a custom model tailored to your specific needs.
To do so, create a Modelfile, which is the blueprint for your custom model. It defines key settings such as the base model to build on, parameters to adjust, and how the model should respond to prompts.
Follow these steps to create a custom model in Ollama:
- Create a new Modelfile
- Customize the Modelfile
- Create and run the custom model
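As a minimal sketch, a Modelfile for a custom assistant based on Llama 3.2 might look like this (the persona and parameter values are just examples):

```
# Modelfile
FROM llama3.2

# Lower temperature for more focused, deterministic answers
PARAMETER temperature 0.3

# Define the model's persona and default behavior
SYSTEM "You are a concise technical assistant. Answer in plain English."
```

Build and run it with `ollama create my-assistant -f Modelfile` followed by `ollama run my-assistant` (the name my-assistant is an example; choose your own).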
Automating tasks with scripts
Automating repetitive tasks in Ollama can save time and ensure workflow consistency. By using bash scripts, you can execute commands automatically. Meanwhile, with cron jobs, you can schedule tasks to run at specific times.
Here’s how to get started:
Create and run bash scripts
You can create a bash script that executes Ollama commands. Here’s how:
- Open a text editor and create a new file named ollama-script.sh
- Add the necessary Ollama commands inside the script
- Make the script executable by giving it the correct permissions
- Run the script directly from the terminal
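Putting those steps together, a minimal version might look like this (the prompt and log filename are assumptions for illustration):

```shell
# 1. Create ollama-script.sh with the commands to automate
cat > ollama-script.sh <<'EOF'
#!/bin/bash
# Run a fixed prompt and append the response, with a timestamp, to a log file
echo "--- $(date) ---" >> ollama-log.txt
ollama run llama3.2 "Summarize today's most important AI trends in 100 words." >> ollama-log.txt
EOF

# 2. Make the script executable
chmod +x ollama-script.sh
```

Run it with `./ollama-script.sh`; each run appends a new timestamped entry to ollama-log.txt.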
Set up cron jobs to automate tasks
You can combine your script with a cron job to automate tasks like running models regularly. Here’s how to set up a cron job to run Ollama scripts automatically:
- Open the crontab editor
- Add a line specifying the schedule and the script you want to run
- Save and exit the editor after adding the cron job
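For example, this crontab entry (assuming the script lives at /home/user/ollama-script.sh) runs it every day at 8 AM and captures any errors in a log:

```
# m h  dom mon dow  command
0 8 * * * /home/user/ollama-script.sh >> /home/user/cron.log 2>&1
```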
Common use cases for the CLI
Here are some real-world examples of using Ollama’s CLI.
Text generation
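For instance, you can draft content directly from the terminal and save it for later editing (the prompt and filename are illustrative):

```shell
# Draft a blog post outline and save it to a file
ollama run llama3.2 "Write an outline for a blog post about container security." > outline.txt
```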
Data processing, analysis, and prediction
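For example, you can pipe a data file into a model and ask it to describe patterns (sales.csv is a hypothetical file):

```shell
# Ask the model to analyze a CSV and summarize any trends it finds
ollama run llama3.2 "Analyze this sales data and describe any trends you see." < sales.csv
```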
Integration with external tools
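Because Ollama also serves a local REST API (on port 11434 by default), other tools can call it with standard utilities like curl. A minimal sketch:

```shell
# Query the local Ollama API from any tool that can make HTTP requests
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Explain DNS in one sentence.",
  "stream": false
}'
```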
Conclusion
In this article, you’ve learned the essentials of using Ollama via CLI, including running commands, interacting with models, and logging model responses to files.
Using the command-line interface, you can also perform more advanced tasks, such as creating new models based on existing ones, automating complex workflows with scripts and cron jobs, and integrating Ollama with external tools.
We encourage you to explore Ollama’s customization features to unlock its full potential and enhance your AI projects. If you have any questions or would like to share your experience using Ollama in the CLI, feel free to use the comment box below.
Ollama CLI tutorial FAQ
What can I do with the CLI version of Ollama?
You can download and run models, chat with them interactively, redirect prompts and responses to and from files, build custom models with a Modelfile, and automate workflows with scripts and cron jobs, all covered in this tutorial.
How do I install models for Ollama in the CLI?
Use the pull command, for example `ollama pull llama3.2`. Browse the official Ollama model library to find available models and their tags.
Can I use multimodal models in the CLI version?
Yes. Vision-capable models such as LLaVA can be pulled and run like any other model (for example, `ollama run llava`), and you can include an image file path in your prompt for the model to analyze.