I have another post about getting Ollama and Open WebUI up and running on Unraid with Docker, so why am I writing one up for Windows 10/11? Well, my Unraid setup didn't have a GPU; it was a CPU-only solution, and responses were taking minutes. That response time was just unacceptable from a usability perspective, so I decided to set everything up on my new desktop and put it to good use.
Step 1 – Install Docker and Open WebUI
You might be asking: hey, I thought we were installing Ollama? You will use Open WebUI to interface with Ollama. This step is optional, since you can use Ollama through the command-line interface, but using a web interface feels much more natural.
I was originally looking for a way to install Open WebUI without using Docker, but the only method I found was very roundabout and wasn't worth the extra effort. Overall, these tools will help you use Ollama, and installing them first makes the rest of the setup easier.
If you want to use Open WebUI, I recommend using Docker.
Step 1a – Install Docker
Head over to the Docker website and find the installer that works for your system.
After the install, you will need to reboot Windows.
Note: Docker asks for a work email (which I think is slimy) and asks you to create an account. You do not need to create an account or use your work email.
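Once Windows is back up, it's worth verifying the install from a command prompt. These are just the standard Docker CLI checks, nothing specific to this guide:

docker --version
docker run hello-world

The hello-world container prints a short confirmation message and exits, which tells you the Docker engine is working.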
Step 1b – Install Open WebUI
The next step is to install Open WebUI. There are a couple of different ways to install it depending on your configuration; check out the how-to-install section of the project readme: https://github.com/open-webui/open-webui?tab=readme-ov-file#how-to-install-
If you are following this guide, you can use this command in the standard Windows command prompt:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
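A quick sanity check that the container actually came up (this is just the standard docker ps command):

docker ps --filter "name=open-webui"

Note that the -p 3000:8080 part of the run command maps the container's internal port 8080 to port 3000 on your machine, which is why you will browse to port 3000 in Step 4.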
Step 2 – Install Ollama
Head over to the Ollama website, download the Windows installer, and run it.
The installer walks you through a simple installation screen followed by a progress screen.
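Once the installer finishes, Ollama runs as a background service. You can sanity-check it from the command prompt; both of these are standard Ollama CLI commands:

ollama --version
ollama list

The list command will come back empty for now since we haven't downloaded a model yet, which is the next step.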
Step 3 – Download Ollama model
While you are on the Ollama website, check out their different models that you can download and use.
Copy the command for the model you want, paste it into your Windows command prompt, and run it; Ollama will then download the model for you.
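For example, each model page gives you a one-liner. Llama 3 is used here purely as an illustration; substitute whichever model you picked:

ollama run llama3

The run command downloads the model if it isn't already present and then drops you into an interactive chat in the terminal; if you only want to download it for use in Open WebUI later, ollama pull llama3 does that without starting a chat session.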

If you are looking for recommendations on which model to use, check out my previous article where I tested 6 models and ranked them based on their answers. This post may inspire me to do another comprehensive test.
Step 4 – Launch Open WebUI and use your new chatbot
Open up your web browser and navigate to
http://localhost:3000/
or open Docker Desktop, go to the Containers section, and click the port listed next to the open-webui container, which will open the link for you.
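If the page loads but no models show up, Open WebUI may not be reaching Ollama. A quick way to confirm Ollama's API is listening (this endpoint is part of Ollama's standard REST API, and curl ships with Windows 10/11) is:

curl http://localhost:11434/api/tags

That should return a JSON list of the models you downloaded in Step 3.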

Congrats, you now have your own ChatGPT clone set up.
Conclusion
This was very easy to set up, and the performance was much better than I expected, especially after using the CPU-only setup on my Unraid server. While ChatGPT continues to be the gold standard in terms of performance and capabilities, these Large Language Models (LLMs) are gaining more and more capabilities with each release. I highly recommend setting up your own and testing it out. Drop a comment and let me know which LLM is your favorite!