Anyone here running their own LOCAL AI Assistant?

Arjun

IPCT Contributor
Feb 26, 2017
I'm running Ollama locally on the host, with Home Assistant in a VirtualBox VM. Gave it a snarky personality where it questions my commands. I use it mostly for text-to-speech: the hourly chime, NWS alerts, lightning alerts, etc.
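In case anyone wants to try something similar, here's a rough Python sketch of pulling a snarky one-liner out of the local Ollama HTTP API (default port 11434) so you can hand it to whatever TTS you use. The model name and prompt wording are just examples, not what I'm actually running:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port


def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def snarky_announcement(event: str, model: str = "gemma:2b") -> str:
    """Ask the local model to phrase an event as a one-line snarky announcement."""
    payload = build_payload(
        model,
        "You are a snarky home assistant. Announce this event in one short "
        f"sentence: {event}",
    )
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# e.g. feed the result to your TTS integration:
# say(snarky_announcement("NWS lightning alert within 10 miles"))
```

The `stream: False` flag makes Ollama return one JSON object instead of a token stream, which keeps the TTS hand-off simple.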

 
Going to have to try this. Is this on a Raspberry Pi?
 
Running it in Oracle VirtualBox on a Lenovo ThinkCentre M900.

Download Ollama ==> Host computer running Ubuntu 24.04

Yes, Ollama runs on Raspberry Pi 4 and 5 (64-bit OS recommended). It's best suited to smaller models (1B-3B parameters) like TinyLlama or Gemma 2B for usable performance. The 8GB RAM Pi is recommended for better performance and larger models.

Key Details for Running Ollama on RPi:
  • Supported Models: TinyLlama, Phi-3, Gemma 2B, and LLaVA.
  • Performance: Performance is limited; expect slower token generation compared to desktop/GPU setups.
  • Requirements: A 64-bit OS (like Raspberry Pi OS 64-bit) is required.
  • Installation: Use the standard Linux install command: curl -fsSL https://ollama.com/install.sh | sh
  • Optimization: Using an SSD instead of an SD card is strongly recommended for faster model loading.
Recommended Models for RPi:
  • ollama run tinyllama (1.1B)
  • ollama run gemma:2b (2B)
  • ollama run phi4-mini (3.8B)
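To roughly map the RAM guidance above to a model choice, something like this (the GB cutoffs are my own guesses, not official numbers):

```python
def pick_model(ram_gb: float) -> str:
    """Suggest a small Ollama model for a Raspberry Pi based on its RAM.

    Thresholds are rough assumptions; the model names come from the
    recommendations above.
    """
    if ram_gb < 4:
        return "tinyllama"   # 1.1B, fits comfortably in 2 GB
    if ram_gb < 8:
        return "gemma:2b"    # 2B, workable on a 4 GB Pi
    return "phi4-mini"       # 3.8B, really wants the 8 GB model
```

Whichever you pick, `ollama run <name>` pulls the model on first use.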

Ollama ==> HA running in a VirtualBox VM.

 