Simplified workflow to create and use knowledge stacks (RAG).

I would like to be able to create a knowledge stack (RAG) directly through the TypingMind interface. I would point it at a local folder or local PDFs on my computer, and TypingMind would use either OpenAI's embedding model or a local embedding model to build the RAG. Then, when starting a chat, I could call upon a local knowledge stack.

The knowledge stack and its files should stay local, since a knowledge stack can contain large files. Everything should be done through TypingMind's UI, without setting up endpoints or similar.

TypingMind currently supports RAG, but only through a complicated API; it should all work locally through the interface. It also shouldn't simply feed the whole document in as input, since that would consume a lot of tokens, much like large agent system instructions do.
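To illustrate what this workflow would do under the hood, here is a minimal sketch of the index-then-retrieve idea: chunk documents once, embed each chunk, and at chat time pull only the most relevant chunks instead of pasting whole files into the prompt. Everything here is a hypothetical stand-in, not TypingMind's actual implementation; the toy hashing embedder stands in for a real model such as OpenAI's embeddings or a local one.

```python
import math
import re

def embed(text, dim=64):
    # Toy bag-of-words hashing embedder. A real knowledge stack would
    # call an embedding model (OpenAI's or a local one); this stand-in
    # just makes the sketch self-contained.
    vec = [0.0] * dim
    for word in re.findall(r"[a-z0-9]+", text.lower()):
        vec[hash(word) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already unit-normalized, so the dot product is the
    # cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class KnowledgeStack:
    """Hypothetical in-memory knowledge stack: index document chunks
    once, then retrieve only the top-k relevant chunks per query."""

    def __init__(self):
        self.chunks = []  # list of (chunk_text, embedding) pairs

    def add_document(self, text, chunk_size=50):
        # Split the document into fixed-size word chunks and embed each.
        words = text.split()
        for i in range(0, len(words), chunk_size):
            chunk = " ".join(words[i:i + chunk_size])
            self.chunks.append((chunk, embed(chunk)))

    def retrieve(self, query, k=2):
        # Rank all chunks by similarity to the query; return the top k.
        q = embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(q, c[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

# Usage: index two small documents, then retrieve for a chat query.
stack = KnowledgeStack()
stack.add_document("Cats are small domesticated carnivores. They purr.")
stack.add_document("The solar system has eight planets orbiting the sun.")
print(stack.retrieve("how many planets orbit the sun", k=1))
```

Only the retrieved chunk (a few dozen words) would be sent to the chat model, which is what keeps token usage low compared with inputting the whole document.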