Right now, the only way to use custom system instructions is to set up a custom agent. The problem is that once you use a custom agent, you're locked into that agent's parameters (temperature, max tokens, etc.) and can't dynamically switch models or modify those parameters mid-conversation. What's missing is the ability to define system instructions per model, not just per agent. I don't use the same system message for GPT-4.1 as I do for Claude 3.7; a single static system prompt doesn't reflect how differently these models are used. If custom system instructions could be tied to each model (independent of the agent setup), we could adjust parameters dynamically during a conversation without being blocked by agent-level defaults.
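To make the request concrete, here is a purely hypothetical sketch of what a per-model configuration could look like (the keys, model IDs, and structure are all illustrative assumptions, not an existing format):

```
{
  "modelInstructions": {
    "gpt-4.1": {
      "systemPrompt": "You are a concise pair programmer. Prefer minimal diffs.",
      "temperature": 0.2
    },
    "claude-3.7": {
      "systemPrompt": "Explain your reasoning before proposing code changes.",
      "temperature": 0.7
    }
  }
}
```

With something like this, switching models mid-conversation would automatically swap in the matching system prompt and defaults, while still allowing per-request overrides.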