How and why to run Ollama on your own Mac?
The reason is simple: because you can. M1 or later Macs with 16GB of RAM or more are powerful enough to run useful models on device without sending any data…
Running Ollama locally is straightforward: the server listens on http://localhost:11434, and you can connect to it from your Python scripts or other tools. If you plan to expose your machine to the internet, though, you should add basic protections…
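As a minimal sketch of that connection, the snippet below calls Ollama's local REST API from Python with the `requests` library. The model name "llama3" and the 120-second timeout are assumptions for illustration; substitute whatever model you have already pulled with `ollama pull`.

```python
import requests

# Default address of a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to the local Ollama API and return the reply.

    Assumes the model has already been pulled (e.g. `ollama pull llama3`).
    """
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,  # local generation can take a while on larger models
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask("In one sentence, why run a model locally?"))
```

Because everything stays on localhost, no prompt or response ever leaves the machine; only when you forward that port to the outside world do the protections mentioned above become necessary.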
Lightweight, private, and customizable retrieval-augmented chatbot running entirely on your Mac. Based on the excellent work by pruthvirajcyn and his Medium article.

⚙️ About This Project

This is my personal…