AI Functionality (experimental)
Starting with version 6.1.1, the TagSpaces desktop application allows you to connect to Ollama as an external AI service provider. Ollama is locally installed software that runs offline and enables you to run AI models (LLMs, large language models) directly on your computer.
TagSpaces does not have its own AI engine or models but relies entirely on external AI software like Ollama for these features. All AI features are disabled by default.
Prerequisites
- A modern PC with a recent Nvidia/AMD graphics card or a Mac with an Apple Silicon processor. Ollama also works on regular CPUs, but performance may be slower.
- Download and install the Ollama software.
- At least 10 GB of free hard drive space for LLMs.
AI Configuration
In the settings, a new tab called AI is available. Here, you can add AI engines and manage their models.
Adding an AI Engine
After installing Ollama, you can add it by clicking the Add AI Engine button in TagSpaces.
If everything is set up correctly, you'll see a new section labeled "Ollama" with a green indicator, meaning Ollama is running in the background. If there are connection issues, the indicator will turn red.
By default, the configuration assumes the Ollama service is running on your local machine at http://localhost:11434. However, it can also run on other computers in your network. You can add multiple Ollama configurations using the Add AI Engine button. Select the desired engine in the Default AI Engine dropdown.
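If the indicator stays red, it can help to check outside of TagSpaces whether the Ollama endpoint is reachable at all. Below is a minimal sketch (not TagSpaces code) that queries Ollama's version endpoint, assuming the default local address:

```typescript
// ollama-check.ts: minimal reachability check for an Ollama endpoint (sketch).
// The endpoint URL is an assumption; adjust it if Ollama runs on another machine.
const OLLAMA_URL = "http://localhost:11434";

async function checkOllama(baseUrl: string): Promise<void> {
  try {
    // GET /api/version returns a small JSON object such as {"version":"..."}
    const res = await fetch(`${baseUrl}/api/version`);
    if (!res.ok) {
      throw new Error(`Unexpected HTTP status ${res.status}`);
    }
    const data = (await res.json()) as { version: string };
    console.log(`Ollama is reachable, version ${data.version}`);
  } catch (err) {
    console.error("Ollama is not reachable:", err);
  }
}

checkOllama(OLLAMA_URL);
```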
Downloading Models
The next step is to download a suitable model (LLM). This can be done via the Default model for text-based tasks dropdown.
In the "Example installable AI models" section, you can choose a suitable model. For text-based tasks, we suggest the llama3.2 model. For image-based tasks (e.g., image description), you can use llava or llama3.2-vision. While llama3.2-vision delivers better results, it is significantly slower.
Once you select a model, a progress dialog will show the download status. Note that models are typically several gigabytes in size, so patience is required.
If you can't find the desired model, you can install additional ones using Ollama's command-line tool.
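For example, running `ollama pull mistral` in a terminal downloads the mistral model from the Ollama library. To verify afterwards which models are installed locally, you can query Ollama's GET /api/tags endpoint, as in this small sketch (the endpoint address is assumed to be the local default):

```typescript
// list-models.ts: list the models installed in a local Ollama instance (sketch).
const OLLAMA_URL = "http://localhost:11434";

interface OllamaModel {
  name: string; // e.g. "llama3.2:latest"
  size: number; // size on disk in bytes
}

async function listModels(baseUrl: string): Promise<void> {
  // GET /api/tags returns {"models": [...]} with the locally installed models
  const res = await fetch(`${baseUrl}/api/tags`);
  const data = (await res.json()) as { models: OllamaModel[] };
  for (const m of data.models) {
    console.log(`${m.name} (${(m.size / 1e9).toFixed(1)} GB)`);
  }
}

listModels(OLLAMA_URL).catch(console.error);
```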
AI Chat in Folders
If configured correctly, a new button will appear next to the perspective switcher in the folder area. Clicking it opens the AI Chat tab in the folder properties.
At the top of the tab, select the model for the AI chat. At the bottom, use the AI prompt to ask questions.
You can maintain a separate chat for each folder, enabling chat-based research or project organization directly in folders.
All chat history, including dropped images, is saved in the ai folder within the .ts subfolder of the current folder. Deleting the ai folder or the current folder will delete the chat history.
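Technically, a folder chat corresponds to requests against Ollama's chat endpoint (POST /api/chat). The following sketch is not TagSpaces' internal code; it only shows what a single question and answer could look like, with the model name and endpoint as assumptions:

```typescript
// folder-chat.ts: send a single chat message to an Ollama model (sketch).
const OLLAMA_URL = "http://localhost:11434";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

async function ask(model: string, messages: ChatMessage[]): Promise<string> {
  // POST /api/chat with stream:false returns the complete answer in one response
  const res = await fetch(`${OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages, stream: false }),
  });
  const data = (await res.json()) as { message: ChatMessage };
  return data.message.content;
}

// Example: a question as it could be typed into the AI prompt of a folder
ask("llama3.2", [{ role: "user", content: "Summarize the goals of this project." }])
  .then((answer) => console.log(answer))
  .catch(console.error);
```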
AI Features for Files
AI-related file features are part of the PRO version. These functionalities are currently available:
- Generate image descriptions for JPG and PNG files (see the sketch after this list).
- Generate tags from image descriptions.
- Generate tags for images in JPG and PNG formats.
- Generate PDF descriptions based on content.
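To illustrate how an image description can be produced with a vision-capable model, here is a generic sketch that sends a base64-encoded image to Ollama's POST /api/generate endpoint using llava. It is not the implementation used by TagSpaces; the file path, prompt, and endpoint are assumptions:

```typescript
// describe-image.ts: generate a description for a JPG/PNG file with a vision model (sketch).
import { readFile } from "node:fs/promises";

const OLLAMA_URL = "http://localhost:11434";

async function describeImage(filePath: string): Promise<string> {
  // Vision models such as llava accept base64-encoded images in the "images" array
  const imageBase64 = (await readFile(filePath)).toString("base64");
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llava",
      prompt: "Describe this image in one short paragraph.",
      images: [imageBase64],
      stream: false,
    }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}

// Hypothetical file path used only for illustration
describeImage("./photos/example.jpg")
  .then((description) => console.log(description))
  .catch(console.error);
```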
Upcoming AI Features
The following features are on the development roadmap for TagSpaces.
- Text-based file summarization
- Batch processing for multiple files
- Integration with additional AI engines
- Extraction of dominant colors in images
- Translation of generated content
- Language configuration for generated content
Any feedback and new ideas are welcome in our forum.