I use GPT-4-32k through OpenRouter a lot, and not seeing usage info is a slight annoyance, especially since token costs add up quickly with that model.
I know it would be hard to get the exact cost of each prompt, but an estimate would be great.
Would it be possible to show the tokens used next to each prompt/reply, along with the estimated cost?
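For what it's worth, the estimate itself is simple once the token counts are known: multiply prompt and completion tokens by their per-1K rates and sum. Here's a minimal sketch; the rates below are placeholders I've filled in as an assumption, not official pricing, and `estimate_cost` is just an illustrative helper name.

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_rate_per_1k: float, completion_rate_per_1k: float) -> float:
    """Estimate the USD cost of a single request from its token counts.

    Rates are expressed in dollars per 1,000 tokens, matching how most
    providers publish pricing.
    """
    return (prompt_tokens / 1000) * prompt_rate_per_1k \
         + (completion_tokens / 1000) * completion_rate_per_1k

# Example with assumed rates of $0.06/1K prompt and $0.12/1K completion:
cost = estimate_cost(prompt_tokens=1500, completion_tokens=500,
                     prompt_rate_per_1k=0.06, completion_rate_per_1k=0.12)
print(f"~${cost:.2f}")  # 1500/1000 * 0.06 + 500/1000 * 0.12 = 0.15
```

Even a rough number like this next to each reply would make it much easier to keep an eye on spend.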