Currently, DeepSeek-R1's reasoning process is handled differently depending on the API provider:

## Azure Foundry & Fireworks API

- Model responses include `<think>` tags in the main response.
- These tags need to be parsed and separated into a dedicated "Thinking..." block.

## OpenRouter

- `<think>` tags are not included in responses.
- An alternative method is needed to capture and display reasoning tokens in the "Thinking..." block.

## DeepSeek Direct API

TypingMind already supports "Thinking..." blocks via the DeepSeek API (see the release notes at https://docs.typingmind.com/chat-models-settings/use-with-deepseek-ai), but many users hesitate to route their data through DeepSeek's servers because of privacy and security concerns.

## Feature request

Implement proper parsing and display of DeepSeek-R1's reasoning process in a dedicated "Thinking..." block, with provider-specific handling:

- Parse `<think>` tags from Azure Foundry and Fireworks API responses (a minimal parsing sketch follows below).
- Develop an alternative method to capture reasoning from OpenRouter responses.

This would create a consistent user experience for viewing model reasoning across all providers, similar to how DeepSeek's reasoning is already displayed.

## Suggested implementation priority

1. Azure Foundry: highest priority, because its free API access would encourage more users to try DeepSeek-R1 through TypingMind.
2. Fireworks API: second priority, because its superior token throughput provides a better user experience.
3. OpenRouter: lower priority, due to slower token generation speed and more complex implementation requirements.
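
For the Azure Foundry / Fireworks case, here is a minimal sketch of what the tag handling could look like. It assumes the full completion is available as a single string and that the model emits at most one `<think>...</think>` block; the names `parseThinkTags` and `ParsedCompletion` are hypothetical, not part of any existing TypingMind API.

```typescript
// Minimal sketch: split a completion into "thinking" and "answer" parts,
// assuming the provider returns the <think> block inline in the main text
// (as described above for Azure Foundry and Fireworks).

interface ParsedCompletion {
  thinking: string | null; // content of the <think>...</think> block, if present
  answer: string;          // remaining visible answer text
}

function parseThinkTags(raw: string): ParsedCompletion {
  // Non-greedy match so only the first <think>...</think> pair is captured;
  // [\s\S] lets the reasoning text span multiple lines.
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) {
    return { thinking: null, answer: raw.trim() };
  }
  const thinking = match[1].trim();
  const answer = raw.replace(match[0], "").trim();
  return { thinking, answer };
}

// Example usage with a hypothetical response:
const sample = "<think>The user asks for 2+2, so I add the numbers.</think>4";
const parsed = parseThinkTags(sample);
console.log(parsed.thinking); // "The user asks for 2+2, so I add the numbers."
console.log(parsed.answer);   // "4"
```

In a streaming UI the same split would have to run incrementally (e.g. buffer tokens into the "Thinking..." block until `</think>` arrives), but the separation logic itself stays the same.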