Plexi Assistant is a Retrieval-Augmented Generation (RAG) chatbot designed to answer questions strictly based on your provided course materials. Rather than giving generic AI responses, Plexi grounds its answers in the specific subject notes, syllabuses, and documents you load.

This guide explains how to configure your AI endpoints, load your study materials, and effectively use the assistant.


1. Configuring Your AI Provider

Plexi is model-agnostic, allowing you to use either cloud-based AI providers or your own self-hosted local models. When you launch Plexi Assistant for the first time, you will be prompted to configure your endpoint on the Onboarding screen.

Provider Quick Links

Choose your preferred provider from the table below to jump to its specific setup instructions:

| Provider | Description | Setup Guide |
| --- | --- | --- |
| Gemini (Google) | Fast, free tier available, OpenAI-compatible. | Jump to Gemini Setup |
| OpenAI | Industry standard (GPT-4o, GPT-4o-mini). | Jump to OpenAI Setup |
| Mistral | Robust European models. | Jump to Mistral Setup |
| Groq | Ultra-fast inference using LPU hardware. | Jump to Groq Setup |
| OpenRouter | API aggregator offering access to many models. | Jump to OpenRouter Setup |
| Custom (Self-Hosted) | Use other providers, or run models locally via Ollama, LM Studio, etc. | Jump to Custom Setup |
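In practice, "model-agnostic" means every provider listed above reduces to the same three values: a base URL, a model name, and an API key, because each of them exposes an OpenAI-compatible chat-completions endpoint. The sketch below is illustrative (the dictionary and function names are not Plexi internals); the URLs are each provider's publicly documented OpenAI-compatible base:

```python
# Illustrative sketch of why one client can serve every provider:
# each exposes an OpenAI-compatible API at a known base URL.
# Names and structure here are examples, not Plexi's actual config.
OPENAI_COMPATIBLE_BASES = {
    "openai":     "https://api.openai.com/v1",
    "gemini":     "https://generativelanguage.googleapis.com/v1beta/openai",
    "mistral":    "https://api.mistral.ai/v1",
    "groq":       "https://api.groq.com/openai/v1",
    "openrouter": "https://openrouter.ai/api/v1",
    "custom":     "http://localhost:11434/v1",  # e.g. a local Ollama server
}

def chat_completions_url(provider: str) -> str:
    """Resolve the chat-completions URL for a configured provider."""
    return OPENAI_COMPATIBLE_BASES[provider].rstrip("/") + "/chat/completions"
```

For example, `chat_completions_url("groq")` yields `https://api.groq.com/openai/v1/chat/completions`; the `custom` entry works the same way once a local server is running.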

Gemini (Google)

To set up Gemini using Google's OpenAI-compatible endpoint:

  1. Open Google AI Studio and sign in.

  2. Click Create API key → Add a name for your key → Select or create a project → Create Key.

  3. In the Plexi Assistant Setup Screen, set the Provider dropdown to Gemini (Google).

  4. Select a model from the Model dropdown (e.g., gemini-2.0-flash).

  5. Paste your API key into the API Key field.

  6. Setup is complete. You can now load your course materials and start asking questions.
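Under the hood, the model and key from steps 3-5 are enough to form a standard chat-completions request against Google's documented OpenAI-compatibility endpoint. A minimal stdlib-only sketch (the function name, system-prompt wording, and retrieved-context wrapper are illustrative, not Plexi's actual code):

```python
# Sketch: assembling an OpenAI-style request for Gemini's
# OpenAI-compatible endpoint. The base URL is Google's documented
# compatibility endpoint; helper names and prompt text are examples.
import json

GEMINI_OPENAI_BASE = "https://generativelanguage.googleapis.com/v1beta/openai"

def build_chat_request(api_key: str, model: str, question: str, context: str):
    """Build the URL, headers, and JSON body for one grounded query.

    `context` stands in for the retrieved course-material passages
    that a RAG assistant injects before the user's question.
    """
    url = f"{GEMINI_OPENAI_BASE}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer ONLY from the provided course material:\n" + context},
            {"role": "user", "content": question},
        ],
    }
    return url, headers, json.dumps(body)

# Example: the key from Google AI Studio and the model chosen in step 4.
url, headers, payload = build_chat_request(
    "YOUR_API_KEY", "gemini-2.0-flash",
    "What is covered in week 1?",
    "Week 1: introduction to linear algebra.")
```

The same request shape works for every provider in the quick-links table; only the base URL, model name, and key change.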


OpenAI