It would be extremely useful to be able to selectively regenerate model responses when using multiple models in parallel. Often one or two models give a satisfactory response on the first try while the others need another attempt, but currently regeneration applies to all models at once: we lose the good responses we wanted to keep, and the overall cost of the chat climbs because replies that were already fine get regenerated needlessly. Obviously the prompt could not be edited when regenerating selectively, but this would let us fine-tune responses on a model-by-model basis and reduce the total tokens used per chat. Please consider this.