The AI-powered assistant for Mac.

No-subscription access to the latest models — through a single interface designed for productivity.

Download
Evaluate for free; buy a license for continued use.

One app, multiple models.

Use the latest models that power ChatGPT Plus, Claude Pro, and Gemini Advanced — as well as any local model — from a single app.

Cloud models:
- GPT-4o (128K context)
- GPT-4o mini (128K context)
- o1-preview (128K context)
- o1-mini (128K context)
- Claude 3.5 Sonnet (new, 200K context)
- Claude 3.5 Haiku (new, 200K context)
- Gemini 1.5 Pro (1M context)
- Gemini 1.5 Flash (1M context)
- grok-beta (new, 128K context)

Local models, which you run on your own device:
- Llama 3.2 90B (128K context)
- Llama 3.2 11B (128K context)
- Llama 3.2 3B (128K context)
- Llama 3.2 1B (128K context)
- qwen-2.5-coder (128K context)
- Mistral 8b (8K context)

Designed for productivity.

Meticulously crafted to help you stay in flow.

Spot the difference!

See how IntelliBar compares to similar products.

Use multiple models: IntelliBar ✓ · Raycast Pro ✓ · ChatGPT Plus ×
Use local models: IntelliBar ✓ · Raycast Pro × · ChatGPT Plus ×
Ask about selection: IntelliBar ✓ · Raycast Pro ✓ · ChatGPT Plus ×
Power user features: IntelliBar ✓ · Raycast Pro ✓ · ChatGPT Plus ×

Funding model
IntelliBar: 100% user-funded
Raycast Pro: venture-funded
ChatGPT Plus: venture-funded

Data handling
IntelliBar: 🧑‍💻 → 🧠 (messages go straight to models)
Raycast Pro: 🧑‍💻 → 📡 → 🧠 (messages go to a Raycast server before they go to models)
ChatGPT Plus: 🧑‍💻 → 📡 → 🧠 (messages go to a server before they go to models)

Pricing
IntelliBar: $29 once (discounted from $49), plus usage paid to model providers, not us, based on how much you use
Raycast Pro: $16/month, or $8/month without access to advanced models
ChatGPT Plus: $20/month

Committed to privacy.

IntelliBar sends your prompts directly to the model provider's servers, and the model provider sends back the results.

Chats are stored locally
Your chat history is stored on your device, not in an online account, so it can't leak if someone hacks your account.
No middleman
IntelliBar talks directly to your chosen AI model provider and never passes your data through intermediaries. Your data doesn't leave your device except to go to that provider.
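To make this concrete, here is a minimal sketch (not IntelliBar's actual code) of what a direct provider call looks like, assuming the OpenAI Chat Completions API and an API key you supply yourself; the function and model names are illustrative only.

```swift
import Foundation

// Illustrative only: one chat request sent straight from your Mac to the
// provider (api.openai.com here), with no intermediate server.
struct ChatMessage: Codable { let role: String; let content: String }
struct ChatRequest: Codable { let model: String; let messages: [ChatMessage] }

func askRemoteModel(prompt: String, apiKey: String) async throws -> Data {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(
        ChatRequest(model: "gpt-4o", messages: [ChatMessage(role: "user", content: prompt)])
    )
    // The prompt goes directly to the provider; the reply comes straight back.
    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}
```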
No training on your data
IntelliBar talks to remote models through their APIs, and data sent via API is generally not used for training. Refer to each model provider's policy for details.
Support for local models
For questions that you don't want to send to a cloud provider at all, you can use local models; then no data leaves your device.
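For comparison, a local-model request can stay entirely on the machine. A minimal sketch, assuming a local Ollama server on its default port (IntelliBar's actual local-model integration may differ):

```swift
import Foundation

// Illustrative only: querying a model served locally (Ollama's default
// endpoint at http://localhost:11434). The request never leaves the device.
struct LocalPrompt: Codable { let model: String; let prompt: String; let stream: Bool }

func askLocalModel(prompt: String) async throws -> Data {
    var request = URLRequest(url: URL(string: "http://localhost:11434/api/generate")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(
        LocalPrompt(model: "llama3.2", prompt: prompt, stream: false)
    )
    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}
```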

Trusted by people like you.

Don't just take our word for it. Here's what the IntelliBar community is saying.