Hi there,

Thanks for the new "Auto" setting that automatically chooses the best language model. That's a great feature! However:

• After a request finishes, there's no indication of which model was actually used.
• I can't configure a block-list / allow-list to exclude certain models (for cost, latency, or compliance reasons).

As a result, it's hard to debug quality differences, reproduce results, or forecast cost, even though the answers are usually very good for me.

This leads to several impacts:

• It's impossible to trace surprising answers back to a specific model.
• It's harder to maintain consistency in production workflows.
• It could lead to budget/quota overruns if a high-cost model is silently chosen.
• It poses a significant compliance risk: some projects may need to avoid certain models whose...

That said, here are some suggestions:

Surface the chosen model
• Show it in the UI (e.g., a badge under the answer).
• Optionally, return it in the API response header / JSON ("model": "gpt-4o") for API users (I can't tell whether this is already the case). A rough sketch of what I mean follows below.

Provide a configurable allow / block list
• Let users exclude models globally in settings, or per-request via a parameter.
• Alternatively, let us set a preferred order.

Usage analytics
• Add a dashboard card summarizing how often each underlying model was selected by Auto mode over time.

This would clearly lead to several benefits:

• Transparent debugging and reproducibility.
• Controlled quotas and compliance.
• Greater trust in Auto mode, leading to wider adoption in professional workflows.
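
To make the "surface the chosen model" and "per-request block-list" ideas concrete, here is a minimal sketch from an API client's point of view. The endpoint URL, the "blocked_models" parameter, and the "X-Model-Used" header are all hypothetical, they only illustrate the request/response shape I have in mind, not an existing API:

```python
import requests

# Hypothetical sketch: the endpoint, "blocked_models", and "X-Model-Used"
# do not exist today -- they only show the proposed shape.
API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
API_KEY = "sk-..."  # placeholder key

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "auto",                      # keep Auto routing enabled
        "blocked_models": ["gpt-4o"],         # proposed per-request block-list
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=30,
)
data = response.json()

# Proposed: Auto mode reports which model it actually picked,
# either in the JSON body or in a response header.
print("Model used:", data.get("model") or response.headers.get("X-Model-Used"))
```

A global setting in the UI would cover most cases; the per-request parameter sketched above would just let pipelines override it for individual calls.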