  1. ollama - Reddit

    Stop ollama from running in GPU I need to run ollama and whisper simultaneously. As I have only 4GB of VRAM, I am thinking of running whisper on the GPU and ollama on the CPU. How do I force ollama to stop …
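
    A common way to do this, assuming a reasonably recent Ollama build (which exposes the num_gpu option) and an NVIDIA card, is either to hide the GPU from the server process or to ask for zero layers to be offloaded for a given request. A rough sketch; the model name and prompt are placeholders:

      # Hide all NVIDIA GPUs from the server so inference falls back to the CPU
      CUDA_VISIBLE_DEVICES="" ollama serve

      # Or keep the GPU visible but offload zero layers for one request,
      # leaving the VRAM free for whisper
      curl http://localhost:11434/api/generate -d '{
        "model": "llama2",
        "prompt": "hello",
        "options": { "num_gpu": 0 }
      }'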

  2. Local Ollama Text to Speech? : r/robotics - Reddit

    Apr 8, 2024 · Yes, I was able to run it on an RPi. Ollama works great. Mistral and some of the smaller models work. Llava takes a bit of time, but works. For text to speech, you’ll have to run an API from …

  3. Ollama GPU Support : r/ollama - Reddit

    I've just installed Ollama in my system and chatted with it a little. Unfortunately, the response time is very slow even for lightweight models like…
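
    Before tuning anything, it is worth checking whether the model is actually loaded onto the GPU at all. Assuming a fairly recent Ollama (which ships the ps subcommand) and an NVIDIA card:

      # Show loaded models and whether they sit on CPU, GPU, or a split of both
      ollama ps

      # Watch GPU memory and utilization while a prompt is being answered
      watch -n 1 nvidia-smi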

  4. How to make Ollama faster with an integrated GPU? : r/ollama - Reddit

    Mar 8, 2024 · How to make Ollama faster with an integrated GPU? I decided to try out ollama after watching a YouTube video. The ability to run LLMs locally, which could give output faster, amused …

  5. How to Uninstall models? : r/ollama - Reddit

    Jan 10, 2024 · To get rid of the model I needed to install Ollama again and then run "ollama rm llama2". It should be transparent where it installs, so that I can remove it later.
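
    The relevant commands are the standard CLI subcommands; the model blobs themselves usually live under ~/.ollama/models, or under the ollama service user's home directory on a Linux systemd install:

      # List downloaded models and their sizes
      ollama list

      # Remove a model you no longer need
      ollama rm llama2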

  6. Multiple GPU's supported? : r/ollama - Reddit

    Mar 15, 2024 · Multiple GPU's supported? I’m running Ollama on an ubuntu server with an AMD Threadripper CPU and a single GeForce 4070. I have 2 more PCI slots and was wondering if there …
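
    Ollama can split a model across multiple CUDA devices when one card's VRAM is not enough, so extra GPUs are generally picked up automatically; device selection itself is handled with the standard CUDA_VISIBLE_DEVICES variable rather than anything Ollama-specific. A sketch, assuming NVIDIA cards:

      # List the GPUs visible to the driver
      nvidia-smi -L

      # Restrict the server to the first two GPUs only
      CUDA_VISIBLE_DEVICES=0,1 ollama serve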

  7. How to manually install a model? : r/ollama - Reddit

    Apr 11, 2024 · I'm currently downloading Mixtral 8x22b via torrent. Until now, I've always run ollama run somemodel:xb (or pull). So once those >200GB of glorious…
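
    Once the GGUF file has finished downloading out-of-band, the usual route is a Modelfile that points at it plus ollama create; the file and model names below are placeholders:

      # Minimal Modelfile referencing a locally downloaded GGUF (path is hypothetical)
      echo 'FROM ./mixtral-8x22b.Q4_K_M.gguf' > Modelfile

      # Register it as a local model, then run it like any pulled model
      ollama create mixtral-local -f Modelfile
      ollama run mixtral-local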

  8. How does Ollama handle not having enough Vram? : r/ollama - Reddit

    How does Ollama handle not having enough Vram? I have been running phi3:3.8b on my GTX 1650 4GB and it's been great. I was just wondering if I were to use a more complex model, let's say …

  9. Request for Stop command for Ollama Server : r/ollama - Reddit

    Feb 15, 2024 · Ok, so ollama doesn't have a stop or exit command. We have to manually kill the process, and this is not very useful, especially because the server respawns immediately. So there …
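
    The respawning happens because the Linux install script registers Ollama as a systemd service, so the supervisor restarts whatever you kill. Stopping the unit is the cleaner route (on macOS you quit the menu-bar app instead):

      # Stop the server without systemd restarting it
      sudo systemctl stop ollama

      # Optionally keep it from starting again at boot
      sudo systemctl disable ollama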

  10. Dockerized Ollama doesn't use GPU even though it's available

    [SOLVED] - see update comment Hi :) Ollama was using the GPU when I initially set it up (this was quite a few months ago), but recently I noticed the inference speed was low so I started to troubleshoot. …
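
    The usual checklist here is the NVIDIA Container Toolkit on the host plus the --gpus flag on the container; the run command below mirrors the upstream Docker instructions, so treat the volume and container names as adjustable:

      # Confirm the container runtime can see the GPU at all
      docker run --rm --gpus=all ubuntu nvidia-smi

      # Start Ollama with GPU access
      docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama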