Certain thinking/reasoning models (e.g. Perplexity models, DeepSeek R1 via DeepInfra, Qwen via DeepInfra) have their reasoning process permanently exposed. Their responses begin with the entire reasoning process bracketed between <think> and </think>, and only after this block does the actual user-intended response begin. Unlike the reasoning output of certain Grok or Claude models, this text cannot be hidden or collapsed. Please address this!
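
For what it's worth, a client-side fix could be as simple as splitting the response on those tags before rendering, so the reasoning can go into a collapsible element. Here is a minimal TypeScript sketch, assuming the reasoning is always a leading <think>…</think> block; the function name `splitThinkBlock` and the sample `modelOutput` are purely illustrative:

```typescript
interface ParsedResponse {
  reasoning: string | null; // content between <think> and </think>, if present
  answer: string;           // everything after the closing tag (or the full text)
}

function splitThinkBlock(raw: string): ParsedResponse {
  // Match a leading <think>...</think> block; the "s" flag lets "." span newlines.
  const match = raw.match(/^\s*<think>([\s\S]*?)<\/think>\s*/);
  if (!match) {
    return { reasoning: null, answer: raw };
  }
  return {
    reasoning: match[1].trim(),
    answer: raw.slice(match[0].length),
  };
}

// Hypothetical example of a response from one of these models:
const modelOutput =
  "<think>The user asks for 2 + 2; this is simple arithmetic.</think>2 + 2 = 4.";

const { reasoning, answer } = splitThinkBlock(modelOutput);
console.log(answer);    // "2 + 2 = 4." — shown as the normal reply
console.log(reasoning); // the chain of thought, ready for a collapsible panel
```

Rendering `answer` as the message body and tucking `reasoning` behind a "show reasoning" toggle would match how the collapsible Grok/Claude reasoning already behaves.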