Posey's Tips & Tricks
Do-It-Yourself AI, Part 2: Setting It Up
Now that we've gotten the basics out of the way, it's time to start bringing an LLM to your machine.
In my previous blog post, I explained that you can use DeepSeek-R1 as the basis for building your own AI chatbot that runs locally on your own hardware, for free. Now, I want to continue the discussion by walking you through the process of setting up and using DeepSeek-R1.
Before I get started, I want to remind you that although I am using DeepSeek-R1 as the basis for this blog series, there are other models available. Depending on what you are trying to achieve, some of those alternatives may work better than DeepSeek-R1. For example, there are models that are specifically designed to help with writing code.
To get started, the first thing that you will need to do is install a free tool called Ollama. Ollama is just an engine that simplifies the process of getting large language models up and running. It works with several different large language models, not just DeepSeek-R1; any of the LLMs listed on its Models page can be downloaded for free and hosted by Ollama. Ollama runs on Windows, Linux and macOS. You can download Ollama here.
Installing Ollama on Windows is a really simple process. As you can see in Figure 1, Ollama uses a GUI-based installer that doesn't require you to do anything beyond just clicking Install. In fact, when the installation process completes, the installer simply closes on its own. The first time that I installed Ollama, I assumed that the installer had crashed, but that was not the case. The installer simply closes without telling you that the installation process has been completed.
[Click on image for larger view.] Figure 1. Ollama includes a simple setup wizard.
Once you have Ollama installed, the next thing that you will need to do is download a DeepSeek-R1 model (or an alternative model). The easiest way to do this is to open PowerShell and enter this command:
ollama pull deepseek-r1
This command downloads the 7 billion parameter model, which consumes roughly 4.7 GB of disk space. You can see what this looks like in Figure 2.
[Click on image for larger view.] Figure 2. This is how you download the default DeepSeek-R1 model.
If you want to download one of the other models, you simply need to append a colon and the desired parameter count to the model name. The list below outlines the available parameter counts and the corresponding model sizes:
Parameters Size
1.5 Billion 1.1 GB
7 Billion 4.7 GB
8 Billion 4.9 GB
14 Billion 9 GB
32 Billion 20 GB
70 Billion 43 GB
671 Billion 404 GB
So as an example, if I wanted to download the 14 billion parameter model, the command that I would use is:
ollama pull deepseek-r1:14b
[Click on image for larger view.] Figure 3. This is how you download an alternative model.
Keep in mind that some of the models require vast hardware resources. You may need to experiment with downloading various models to see which ones run well on your hardware. If you need a reminder as to which models you have downloaded to your system, just type:
ollama list
As a general rule, I recommend choosing a model whose size is smaller than your PC's video RAM, although you can run any model that is smaller than the total amount of free RAM on the machine (you aren't limited to using only video RAM).
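If you want to sanity-check that rule before pulling anything, you can compare the sizes in the table above against your available video memory. Here is a minimal sketch in Python; the sizes are the approximate download sizes listed above (actual memory use at run time will be somewhat higher), and the helper function is just an illustration, not part of Ollama:

```python
from typing import Optional

# Approximate download sizes (in GB) for the DeepSeek-R1 variants,
# taken from the table above.
MODEL_SIZES_GB = {
    "1.5b": 1.1,
    "7b": 4.7,
    "8b": 4.9,
    "14b": 9.0,
    "32b": 20.0,
    "70b": 43.0,
    "671b": 404.0,
}

def largest_fitting_model(vram_gb: float) -> Optional[str]:
    """Return the tag of the largest model whose download size fits in vram_gb."""
    candidates = {tag: size for tag, size in MODEL_SIZES_GB.items() if size < vram_gb}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

if __name__ == "__main__":
    # With an 8 GB GPU, the 8 billion parameter model is the largest that fits.
    print(largest_fitting_model(8))  # prints 8b
```

Remember that this is only a rough guide; as noted above, a model that spills past video RAM into system RAM will still run, just more slowly.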
As a matter of context, the screen captures within this article were made on a Microsoft Surface Laptop Studio 2 containing a 13th-generation Intel Core i7 CPU, an Nvidia GeForce RTX 4060 GPU with 8 GB of video memory and 64 GB of RAM. The 14 billion parameter model runs well on this machine, but I have yet to attempt running the 32 billion parameter model.
Now that you have downloaded one or more models, it's time to test DeepSeek-R1 by giving it a query. If you are using the default model, just open PowerShell and type this command:
ollama run deepseek-r1
If you have downloaded one of the alternative DeepSeek-R1 models and prefer to use it, then simply append a colon and the parameter count. As an example, to run the 14 billion parameter model that I downloaded earlier, you would enter:
ollama run deepseek-r1:14b
Upon entering the run command, it can take anywhere from a few seconds to a few minutes for Ollama to load the model. Once the model has been loaded, you can enter your query in the space provided. You can see what the query prompt looks like in Figure 4.
[Click on image for larger view.] Figure 4. This is what the DeepSeek-R1 query prompt looks like.
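Typing queries at the interactive prompt is fine for experimentation, but Ollama also exposes a local REST API (at http://localhost:11434 by default) that accepts the same model names, which means you can script your queries. The sketch below shows one way to do this from Python, assuming Ollama is running and the default deepseek-r1 model has been pulled; the build_payload and ask helpers are my own illustrations, not part of Ollama:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot (non-chat) generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Assemble the request body for Ollama's /api/generate endpoint."""
    # stream=False asks Ollama to return one complete JSON object
    # rather than a stream of partial tokens.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a single prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, a call such as ask("deepseek-r1", "Why is the sky blue?") returns the model's full reply as a string.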
Now that I have shown you how to get Ollama and DeepSeek-R1 up and running, it's time to put PowerShell to work in making the models more useful. In the next blog post in this series, I will show you how to filter out the reasoning text, so that you are presented only with an answer to your question.
About the Author
Brien Posey is a 22-time Microsoft MVP with decades of IT experience. As a freelance writer, Posey has written thousands of articles and contributed to several dozen books on a wide variety of IT topics. Prior to going freelance, Posey was a CIO for a national chain of hospitals and health care facilities. He has also served as a network administrator for some of the country's largest insurance companies and for the Department of Defense at Fort Knox. In addition to his continued work in IT, Posey has spent the last several years actively training as a commercial scientist-astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space. You can follow his spaceflight training on his Web site.