project-proposal-2025

Multi-Mind

Abstract

Multi-Mind is a collaborative AI aggregation platform that lets users interact with multiple large language and image-generation models, including OpenAI’s GPT, a self-hosted SDXL, and Meta’s LLaMA. The platform emphasizes high availability, interoperability, and scalability, enabling users to seamlessly switch between AI models, upload files, visualize outputs, and share sessions in real time. Multi-Mind also integrates with external services such as void-tech.cn, a community-driven technical site operated by the author, so that AI workflows can be embedded in discussions and generated content can be shared directly to the community.

Author

Peiyan Lu

s4933335

Functionality

If fully developed, Multi-Mind will:

Scope

The MVP of Multi-Mind will support:

Quality Attributes

Interoperability

Multi-Mind’s core function is to serve as a unified interface for disparate AI systems. Each backend (OpenAI, LLaMA, SDXL) uses different APIs and response formats. Multi-Mind defines a shared adapter interface to interact with these models and normalize their outputs for consistent UI display.
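The adapter idea above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the `ModelAdapter` interface, the `ModelOutput` shape, and the `EchoAdapter` stand-in backend are all hypothetical names chosen for the example.

```python
# Sketch of a shared adapter interface that normalizes backend outputs.
# All names here (ModelAdapter, ModelOutput, EchoAdapter) are illustrative.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class ModelOutput:
    """Normalized result shown in the UI, regardless of backend."""
    backend: str   # e.g. "openai", "llama", "sdxl"
    kind: str      # "text" or "image"
    content: str   # text body, or a URL/path for image results


class ModelAdapter(ABC):
    """Each backend implements this; the UI only ever sees ModelOutput."""

    @abstractmethod
    def send(self, prompt: str) -> ModelOutput: ...


class EchoAdapter(ModelAdapter):
    """Stand-in backend used here only to illustrate the contract."""

    def send(self, prompt: str) -> ModelOutput:
        return ModelOutput(backend="echo", kind="text", content=prompt.upper())


out = EchoAdapter().send("hello")
```

A real OpenAI or SDXL adapter would make an HTTP call in `send` and map the provider-specific response fields into `ModelOutput`, so the UI rendering code never branches on the backend.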

It also integrates with an external user-driven community platform, void-tech.cn, supporting bi-directional interaction: AI-generated content can be shared to discussion threads, and site content can trigger AI responses.
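The share-to-thread direction might look like the sketch below. The payload fields and the idea of posting JSON to a thread endpoint are assumptions for illustration; void-tech.cn's actual API is not documented here.

```python
# Sketch of serializing an AI result for posting into a discussion thread.
# The payload shape (thread_id, source_model, body) is hypothetical.
import json


def build_share_payload(thread_id: int, model: str, content: str) -> str:
    """Package an AI-generated result as JSON for a community thread post."""
    return json.dumps({
        "thread_id": thread_id,
        "source_model": model,  # which backend produced the content
        "body": content,
    })


payload = build_share_payload(42, "gpt", "Here is the generated summary.")
```

The reverse direction (site content triggering AI responses) would be the mirror image: a webhook from the community site delivering thread text as a prompt to one of the adapters.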

Availability

As a collaborative tool, Multi-Mind should remain available and responsive even when individual backend APIs fail or the system is under heavy load. Non-blocking calls, retry strategies, and asynchronous processing are critical.
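One way to realize the retry strategy is exponential backoff around a backend call, sketched below. The `call_with_retry` wrapper and the simulated flaky backend are illustrative; a production version would use asynchronous I/O, jitter, and per-backend failure policies.

```python
# Sketch of retry-with-exponential-backoff around a flaky backend call.
# Names and parameters here are illustrative, not the project's real code.
import time


def call_with_retry(fn, retries=3, base_delay=0.01):
    """Invoke fn, retrying on ConnectionError with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # out of retries: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...


# Simulated backend that fails twice, then succeeds.
calls = {"n": 0}


def flaky_backend():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("backend unavailable")
    return "ok"


result = call_with_retry(flaky_backend)
```

Because the wrapper re-raises after the final attempt, callers can distinguish a transient blip (absorbed silently) from a sustained outage (propagated, so the UI can fall back to another model).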

Scalability

The platform will support many simultaneous users submitting prompts to different backends. Since model API calls are stateless, the service can scale horizontally. Chat history and uploaded files will be persisted and retrieved on demand.
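The stateless property can be illustrated as below: each request fetches session state from a persistent store and writes it back, so no state lives in any one worker process and any replica can serve any user. The in-memory dict is a stand-in for the real database.

```python
# Sketch of stateless request handling: session state is read from and
# written back to a store on every request, so workers are interchangeable.
# history_store is an in-memory stand-in for a persistent database.
history_store: dict[str, list[str]] = {}


def handle_prompt(user_id: str, prompt: str) -> list[str]:
    """Load the user's history, append the new prompt, persist, and return it."""
    history = history_store.get(user_id, [])
    history = history + [prompt]          # no per-process state retained
    history_store[user_id] = history      # persist before responding
    return history


handle_prompt("u1", "hello")
session = handle_prompt("u1", "draw a cat")
```

Swapping the dict for a shared database (or object storage for uploaded files) is what makes horizontal scaling safe: a load balancer can route consecutive requests from the same user to different replicas without losing the session.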

Evaluation

The project will be evaluated via the following methods: