EphemerAl

EphemerAl: A Simple, Self-Hosted Chat Interface for Local AI with Ollama That Accepts Documents and Images

EphemerAl is a lightweight, open-source web interface for interacting with Google’s Gemma 3 LLM locally on your own hardware. I designed it for my day job to keep our team’s sensitive info off cloud services and to give staff a modern AI experience without the per-user cost of equivalent capabilities online. All responses are generated by Gemma 3 (12B or 27B) and informed by any documents, images, or follow-up queries you add during a conversation. Gemma 3 4B will work if you’re hardware-limited and want to try it out, but I found it was more Ai than AI.
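Under the hood, a front end like this talks to Ollama’s local REST API, which accepts attached images as base64 strings in the chat request body. Here is a minimal sketch of building such a request; the helper name and defaults are my own illustration, not EphemerAl’s actual code, though the endpoint and field names follow Ollama’s documented /api/chat format:

```python
import base64

# Ollama's default local endpoint (assumption: default install, no port changes)
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_payload(prompt, image_paths=(), model="gemma3:12b"):
    """Build an Ollama /api/chat request body with optional attached images."""
    images = []
    for path in image_paths:
        with open(path, "rb") as f:
            # Ollama expects raw image bytes encoded as base64 strings
            images.append(base64.b64encode(f.read()).decode("ascii"))
    message = {"role": "user", "content": prompt}
    if images:
        message["images"] = images
    return {"model": model, "messages": [message], "stream": False}
```

POSTing that dictionary as JSON to a running Ollama instance returns the model’s reply; the web interface’s job is largely to manage this loop per conversation.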

While it wasn’t built for broad distribution, I’m sharing this generalized version in case it helps others looking for a local-only, account-free, multimodal LLM interface, whether as an operational tool, a staff learning environment, or bragging rights when friends visit on your home network.

View the full source code on GitHub

Screenshot: EphemerAl, a Docker-based, self-hosted AI assistant for local LLM document Q&A and image analysis using Ollama


Core Features

This tool offers a straightforward set of capabilities for working with local LLMs.

Technical Stack

EphemerAl is built on a simple application stack chosen for reliability and ease of use.
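For a rough picture of how a stack like this fits together, here is a hypothetical Docker Compose sketch pairing Ollama with a web front end. The service names, ports, and volume names are illustrative assumptions, not EphemerAl’s actual compose file:

```yaml
# Illustrative only -- not EphemerAl's real configuration.
services:
  ollama:
    image: ollama/ollama            # serves Gemma 3 on its default port
    ports:
      - "11434:11434"
    volumes:
      - ollama_models:/root/.ollama # persist downloaded model weights
  webui:
    build: .                        # the chat front end
    ports:
      - "8080:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
volumes:
  ollama_models:
```

The key design point is that the model server and the interface are separate containers, so the weights persist in a named volume while the front end stays stateless.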

System Requirements

To run this interface effectively, the following specifications are recommended.

Deployment

Refer to the System Deployment Guide for detailed, step-by-step instructions, including copy-paste commands suitable for beginners.

For automatic startup, use the provided PowerShell script.
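If you’d rather roll your own autostart, a scheduled task along these lines launches WSL (and with it any containers configured to restart) at logon. This is a sketch under my own assumptions, not the repository’s actual script; the task name and trigger are illustrative:

```powershell
# Illustrative only: register a logon task that starts WSL so the
# Docker stack inside it comes back up automatically.
$action  = New-ScheduledTaskAction -Execute "wsl.exe"
$trigger = New-ScheduledTaskTrigger -AtLogOn
Register-ScheduledTask -TaskName "StartEphemerAl" -Action $action -Trigger $trigger
```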

Accessing the EphemerAl website

Stopping the Application

Execute the following in an Administrator PowerShell window:

wsl --shutdown

To restart, run wsl again, or simply reboot the system if you have the startup script installed.

Known Issues

These may be addressed in future updates if needed.

Support

This project is provided as a resource for the community as-is. I hope it solves a problem or provides value outside my environment.

If you run into issues, consider submitting the error details, including screenshots and relevant system files, to an AI assistant for guidance. That isn’t meant as snark; it’s amazing how well the big reasoning models can troubleshoot.

License

MIT (at least the parts of this stack that are mine to license)