Typing Mind
Ideas & feature requests for the Typing Mind License Version, including the web app (typingmind.com), the macOS app & the Setapp version. Bug reports? Contact us at support@typingmind.com for better support.
Showing Trending posts in All Categories
UXUI Improvement (126)
Chat Management/Interactions (106)
AI Models (92)
Prompts (18)
AI Agents (32)
Upload Files (17)
Plugins (45)
Macapp/SetApp (11)
Sync/Backup (20)
Account Management (6)
Billing/Usage Management (13)
Documentation (4)
Integrations (17)
Audio (15)
Text-to-Speech (7)
Speech-to-Text (7)
Privacy (4)
Others (12)
Multi-model responses at the same time
I would like to get responses from Claude, Gemini, and GPT at the same time and pick the one that best suits my needs, all on one page. This is especially useful for creative tasks, since some models are better at certain topics than others. Having this option would make TypingMind a far more compelling product than going directly to each provider's own client.
6 · planned · 67
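The fan-out the request above describes can be sketched as one prompt sent to several models concurrently. This is a hypothetical illustration, not TypingMind's implementation: `call_model` is a stub standing in for real provider API calls, and the model names are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(model, prompt):
    # Stub standing in for a real chat-completion request to each provider.
    return f"[{model}] answer to: {prompt}"

def ask_all(models, prompt):
    # Each request runs in its own thread, so a slow provider
    # doesn't delay the answers from the others.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(call_model, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}

answers = ask_all(["claude", "gemini", "gpt"], "Write a haiku about rain")
for model, text in answers.items():
    print(model, "->", text)
```

The UI would then render the returned dict side by side so the user can pick the best answer.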
Native RAG Support
Simplified workflow to create and use knowledge stacks (RAG). I would like to be able to create a knowledge stack (RAG) using the TypingMind interface: I would specify a local folder or local PDFs on my computer, and TypingMind would use either OpenAI's embedding model or a local embedding model to build the stack. Then, when creating a chat, I would be able to call upon a local knowledge stack. The knowledge stack and files should stay local, since a knowledge stack can contain large files, and everything should be done through TypingMind's UI without setting up endpoints. Currently, TypingMind supports RAG, but only through a complicated API. It also shouldn't just feed the whole document in as input; that would consume a lot of tokens, as with agent system instructions.
5 · planned · 36
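The index-then-retrieve workflow described above can be sketched in a few lines. This is a minimal illustration, not TypingMind's code: `embed` is a stand-in bag-of-words function where a real knowledge stack would call an embedding model (e.g. OpenAI's or a local one), and `build_stack`/`retrieve` are hypothetical names.

```python
import math

def embed(text):
    # Stand-in embedding: word counts over a tiny fixed vocabulary.
    # A real implementation would call an embedding model instead.
    vocab = ["typingmind", "rag", "knowledge", "stack", "pdf", "local", "token"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def build_stack(chunks):
    # Embed each document chunk once; the vectors stay on disk locally.
    return [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(stack, query, k=2):
    # Return the top-k chunks most similar to the query. Only these
    # chunks are prepended to the chat prompt, instead of the whole
    # document, which is what saves tokens.
    qv = embed(query)
    ranked = sorted(stack, key=lambda item: cosine(item[1], qv), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

stack = build_stack([
    "TypingMind stores the knowledge stack in a local folder",
    "PDF chunks are embedded once and cached",
    "Unrelated note about something else entirely",
])
print(retrieve(stack, "where is the knowledge stack stored locally", k=1))
```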
Chat with Epub files
When will other file formats be supported? EPUB files are often smaller to upload and perhaps even faster to process.
1 · planned · 30
Improve Add Custom Model flow: make it easy to import models from Open Router, Together AI, local model, etc.
[Original title: "Auto-populate models from model aggregators (OpenRouter, Together, ...)"] Instead of creating each OpenRouter/Together model by hand in the model selection menu, group each aggregator in its own subfolder and dynamically retrieve the compatible models (with possible filters: vision/function/context/license...). Those aggregators' API keys should be saved persistently in TM (as the OpenAI and Claude keys are now).
2 · planned · 17
Select a different model for the chat title and keywords generation
It would be awesome to be able to select a different model for title and keyword generation instead of using the model of the conversation. For example, when chatting with Opus, it would be cheaper and faster to use Haiku to generate the chat title and keywords.
3 · planned · 11
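The routing the request above asks for can be sketched as follows. Everything here is hypothetical: the model names and the `generate` stub stand in for real API calls, and `make_title` is an illustrative helper, not TypingMind's API.

```python
# Route auxiliary generations (title, keywords) to a cheaper model
# than the conversation's main model.
MAIN_MODEL = "claude-3-opus"
CHEAP_MODEL = "claude-3-haiku"

def generate(model, prompt):
    # Stub standing in for a real chat-completion call.
    return f"{model}:{prompt[:20]}"

def make_title(conversation_text, title_model=CHEAP_MODEL):
    # The title never needs the main model's quality or cost.
    prompt = "Give a short title for this chat: " + conversation_text
    return generate(title_model, prompt)

print(make_title("User asked about RAG pipelines..."))
```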
User plugin marketplace
I am excited to see what plugins and extensions people are creating.
3 · planned · 40
Add support for voice input for TypingMind Mac App
Add support for voice input in the TypingMind Mac app using OpenAI Whisper and ElevenLabs.
4 · planned · 17
Conversation mode / Hands-free voice input
Add the necessary tooling (with OpenAI Whisper in both directions) to support a fully conversational mode, with no interruptions or button presses. I think we are close to getting there, but right now you need ElevenLabs or other workarounds to connect the pieces.
8 · planned · 56
Enhancement of Text Processing for AI Readability
Requesting an update to the ElevenLabs API integration to improve handling of generated content. Currently, when AI voices read markdown text from TypingMind aloud, the markdown characters are read out as well, hurting the user experience. A feature that lets the API ignore or interpret markdown symbols appropriately would significantly enhance the service. Thank you.
4 · planned · 8
Support multiple functions per plugin
Support multiple functions per plugin. This would avoid having to create and maintain a large number of plugins. OpenAI appears to support this (see https://openai.com/blog/function-calling-and-other-api-updates > Example in Step), but if you try to define an array of functions in TM, you get an error.
5 · planned · 14
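For context on what the post above is asking for: the OpenAI Chat Completions API accepts a `tools` array with several function definitions in one request, so a plugin bundling multiple functions would submit one entry per function. The function names and schemas below are illustrative, not TypingMind's.

```python
# Two function definitions carried in a single request's "tools" array.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_forecast",
            "description": "Multi-day forecast for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "days": {"type": "integer"},
                },
                "required": ["city"],
            },
        },
    },
]

request_body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Weather in Paris this week?"}],
    "tools": tools,  # multiple functions per request is valid per the API
}
print([t["function"]["name"] for t in request_body["tools"]])
```

The model then picks whichever function fits the user's message, which is why one plugin exposing several related functions is workable on the API side.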
Powered by Canny