Suggest New Features 💡

Let the You.com team know which features you'd like to see added to the product. Before creating a new post, search to see if a similar one already exists and upvote it instead.
[Auto Mode] Need to know which model was picked + option to exclude specific models
Hi there,

Thanks for the new "Auto" setting that automatically chooses the best language model. It's a great feature! However:

• After a request finishes, there's no indication of which model was actually used.
• I can't configure a block-list / allow-list to exclude certain models (for cost, latency, or compliance reasons).

As a result, it's hard to debug quality differences, reproduce results, or forecast cost, even though the answers are usually very good for me.

Impact:
• It's impossible to trace surprising answers back to a specific model.
• It's harder to maintain consistency in production workflows.
• It could lead to budget/quota overruns if a high-cost model is silently chosen.
• There's a real compliance risk: some projects must avoid certain models whose...

Suggestions:

Surface the chosen model
• Show it in the UI (a badge under the answer).
• Optionally return it in the API response header / JSON ("model": "gpt-4o") for API users. If this already exists, I have no way to check it. A sketch of what this could look like follows this post.

Offer a configurable allow / block list
• Let users exclude models globally in settings or per-request via a parameter.
• Alternatively, let us set a preferred order.

Usage analytics
• Add a dashboard card summarizing how often each underlying model was selected by Auto mode over time.

Benefits:
• Transparent debugging and reproducibility.
• Controlled quotas and compliance.
• Greater trust in Auto mode, leading to wider adoption in professional workflows.
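To make the suggestion concrete, here is a purely hypothetical sketch. None of these field or parameter names ("excluded_models", the "request"/"response" wrapper) are documented You.com API behaviour; only the "model": "gpt-4o" idea comes from the request above:

```json
{
  "request": {
    "prompt": "Summarize the attached report",
    "mode": "auto",
    "excluded_models": ["model-a", "model-b"]
  },
  "response": {
    "answer": "…",
    "model": "gpt-4o"
  }
}
```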
[Pro Plan] Opaque quotas – need transparent limits & real-time monitoring
Hi there,

I'm on the You.com Pro plan and sometimes hit "quota exceeded" errors without any way to know which limit I supposedly crossed. I may be wrong, but there seems to be no public documentation of:
• the exact number of requests allowed per minute / hour / day;
• how these quotas differ by model or by agent;
• what the UI does when a threshold is reached (soft warning vs. hard block).

Because of this opacity it's impossible to tell whether an error is legitimate, a bug, or accidental abuse, and it prevents us from integrating You.com reliably into production workflows.

Impact
• Production jobs can stop unexpectedly.
• We deliberately under-use the service for fear of being blocked.
• Not in my case, but a company's finance / IT department could not budget for or approve the tool without hard numbers.

Here are some of my suggestions:
• Publish a table of soft and hard limits per plan, per model, and per time window.
• Return clear 429 errors with JSON details, e.g.
```json
{
  "error": "quota_exceeded",
  "quota": "requests_per_minute",
  "limit": 120,
  "retry_after": 47
}
```
• Add a real-time usage dashboard showing "XX / YY requests" with 70 % / 90 % warnings.
• Offer e-mail or webhook alerts when usage crosses configurable thresholds (see the payload sketch after this post).
• Provide controlled over-quota "burst" credits or a pay-as-you-go fallback to avoid hard stops.
• Expose downloadable error / usage logs for auditing and debugging.
• Not in my use case, but eventually: allow sub-quotas per API key so a test project can't drain the whole company allocation.

(Obvious) benefits:
• Predictable capacity and cost planning.
• Fewer support tickets about unexplained blocks.
• Easier enterprise adoption thanks to transparent metrics.
• Stronger trust in You.com as a professional platform.
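Purely as an illustration of the webhook-alert idea, a threshold alert payload might look like the sketch below. These field names are invented for this example and are not existing You.com fields; the 90 % threshold matches the warning levels suggested above:

```json
{
  "event": "quota_threshold_reached",
  "quota": "requests_per_day",
  "used": 900,
  "limit": 1000,
  "threshold_percent": 90,
  "timestamp": "2024-05-01T14:32:00Z"
}
```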