Agentic Sidekick provides a natural-language interface to Vesta.
A native macOS AI application with multiple backends, vision analysis, LaTeX rendering, code blocks with syntax highlighting, and local inference on Apple Silicon.
brew install --cask scouzi1966/afm/vesta-mac
*With public AI access disabled in Settings, all inference runs locally on Apple Silicon.
Built-in Foundation Models framework. Completely private, zero setup, runs on Neural Engine.
Run MLX and GGUF models locally. Explore and download them directly from HuggingFace with an in-app model browser.
Analyze images, diagrams, and documents with local Vision Language Models. Drag and drop or paste.
Kokoro text-to-speech with 45+ voices. WhisperKit speech-to-text. All on-device via MLX & CoreML.
Generate images with FLUX and Stable Diffusion. Create videos with Wan2.2 and HunyuanVideo via HuggingFace.
33+ tools via Model Context Protocol. Automate Vesta from Claude Code, scripts, or any MCP client.
Connect to the broader AI ecosystem through HuggingFace Inference API and OpenAI-compatible providers like OpenRouter, Cerebras, Groq, Together AI, and more — with 31 providers pre-configured out of the box.
HuggingFace
OpenAI
OpenRouter
Cerebras
Groq
+ 26 more providers
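Because these providers all speak the same OpenAI-compatible chat-completions API, switching between them usually means changing only the base URL and model name. A minimal sketch of that idea (the endpoint paths, key, and model name below are illustrative, not taken from Vesta's configuration):

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build an OpenAI-compatible /chat/completions request.

    The same request shape works against OpenAI, OpenRouter, Groq,
    Cerebras, and other compatible providers.
    """
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(body)

# Same code, different provider: only the base URL and model change.
url, headers, body = build_chat_request(
    "https://openrouter.ai/api/v1", "sk-...", "meta-llama/llama-3.1-8b-instruct", "Hello"
)
```

This uniformity is what lets a client pre-configure dozens of providers behind one code path.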
Vesta exposes 33+ tools over TCP using the Model Context Protocol. Automate workflows from Claude Code, scripts, or any MCP-compatible client.
Live captures driven via MCP remote control
Clean, dark-themed native macOS interface with conversation sidebar, backend selector, and integrated settings panel. Built with SwiftUI for macOS 15+.
Chat powered by Apple's Foundation Models framework. Fully private, running entirely on Apple Silicon's Neural Engine.
Full highlight.js integration with 20+ languages. Swift, Python, JavaScript, Rust, and more — rendered beautifully with copy support.
Qwen3-VL-4B running locally on Apple Silicon via MLX framework. Beautiful analogy + Python simulation with syntax highlighting. No internet required.
Analyze images with local VLMs. Inline image display with rich markdown descriptions — breed identification, scene analysis, and more.
Full LaTeX math rendering via KaTeX. Display and inline equations, Taylor series, matrices, integrals — publication-quality typography.
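As an example of the markup this renders, a display-mode Taylor series in standard LaTeX:

```latex
e^{x} = \sum_{n=0}^{\infty} \frac{x^{n}}{n!}
      = 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \cdots
```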
Full GFM support with styled tables, strikethrough, task lists, and more. Combined with LaTeX math in the same conversation.
Cyberpunk microfiction with rich formatting — headers, blockquotes, inline code, and a Swift greeting function. All from a local 4-bit model.
VLM analyzes a complex AWS 3-tier architecture diagram — identifying services, tiers, and infrastructure components from a single screenshot.
Point the VLM at a cookbook page — it identifies the recipe, lists ingredients, and suggests healthier modifications. Practical AI for everyday use.
Switch between Apple Intelligence, MLX, llama.cpp, OpenAI, and HuggingFace with a single click. Each backend optimized for different use cases.
Run Qwen, Llama, Gemma, Phi, and more locally. Unified memory, Metal GPU acceleration, 4-bit quantization. Your data never leaves your Mac.
Model management, temperature, top-p, repetition penalty, token limits, and TTS configuration — all in one panel.
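Temperature and top-p control how the next token is chosen during generation. A minimal sketch of the standard computation these knobs refer to (not Vesta's actual implementation): temperature rescales the logits before the softmax, and top-p keeps only the smallest set of tokens whose cumulative probability reaches the threshold.

```python
import math
import random

def sample_next(logits: dict[str, float], temperature: float = 1.0,
                top_p: float = 1.0) -> str:
    """Temperature-scaled softmax followed by nucleus (top-p) filtering."""
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(l - m) for tok, l in scaled.items()}
    total = sum(exps.values())
    probs = sorted(((tok, e / total) for tok, e in exps.items()),
                   key=lambda kv: kv[1], reverse=True)
    # Keep the smallest prefix whose cumulative probability >= top_p.
    kept, cum = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize over the survivors and sample.
    z = sum(p for _, p in kept)
    r = random.random() * z
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]
```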
GGUF model library, vision/text mode toggle, generation parameters, context size, and WhisperKit STT configuration.
Connect to OpenAI, OpenRouter, Cerebras, Groq, and 26+ providers. Search 115+ models across US, Canadian, and international endpoints.
Chat, Vision, Image generation, Editing, Transcription, and Video — six tabs of cloud AI capabilities with auto provider selection.
The unified settings inspector — toggle backends, load models, adjust parameters, and see status at a glance across all five engines.
Quick-access pills for Image generation, Video, Vision, Search, Software Maps, and more — one-click access to cloud AI capabilities.