Apple’s Shortcut Automations allow your iPhone or Mac to react to events like receiving a message. By combining this with a locally hosted LLM (via Ollama), you can build a private AI auto-reply system that runs entirely on your local network. In this guide, we’ll configure:

- A local LLM using Ollama
- A message-triggered Shortcut Automation
- Sender-based filtering
- Automatic AI-generated replies (within Apple’s security limits)

## 1. Why Use Shortcut Automations Instead of Manual Shortcuts?

Automations let Shortcuts run automatically when an event occurs, for example:

- When a message is received
- When a specific person messages you
- When you arrive at a location
- At a specific time

For AI auto-replies, message-based automations are ideal.

## 2. Install and Run a Local LLM with Ollama

Install Ollama:

```bash
brew install ollama
```

Start the server (it listens on port 11434 by default):

```bash
ollama serve
```

Verify the installation:

```bash
ollama list
```

## 3. Pull a Lightweight Model

For message replies, small models work best:

```bash
ollama pull tinyllama
```

Or for b...
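Once a model is pulled, it’s worth sanity-checking generation over Ollama’s HTTP API before wiring anything into a Shortcut. A minimal sketch, assuming the server is running on the default port 11434 and you pulled `tinyllama` as above (the prompt text is just an illustration):

```bash
# Ask the local Ollama server for a reply suggestion.
# Assumes `ollama serve` is running on localhost:11434
# and that the tinyllama model has been pulled.
curl http://localhost:11434/api/generate \
  -d '{
    "model": "tinyllama",
    "prompt": "Write a brief, friendly reply to this text: \"Are we still on for lunch?\"",
    "stream": false
  }'
```

With `"stream": false`, the endpoint returns a single JSON object whose `response` field holds the generated text, which is what the Shortcut will eventually read.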
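Inside the automation itself, the Shortcut can call this same endpoint with the built-in Get Contents of URL action (POST with a JSON request body), then read the `response` field with Get Dictionary Value. The shell sketch below simulates that round trip under the same assumptions; the `MESSAGE` variable stands in for the Shortcut’s incoming-message variable, and the `python3` one-liner plays the role of the dictionary lookup:

```bash
#!/bin/sh
# Simulate the Shortcut's round trip: POST the incoming message to
# Ollama, then extract just the generated reply from the JSON response.
# MESSAGE stands in for the Shortcut's message-content variable.
MESSAGE="Are we still on for lunch?"
curl -s http://localhost:11434/api/generate \
  -d "{\"model\": \"tinyllama\",
       \"prompt\": \"Reply briefly and politely to this text message: ${MESSAGE}\",
       \"stream\": false}" \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["response"])'
```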