Currently, the Pro plan offers a 64k context window, half of what even an older open-source model like llama-3.1-8b (128k) supports. Within this limit it can be quite challenging to work with larger amounts of text; sometimes it is difficult even to summarize a single academic paper in full.
From what I understand, the cost side of things may not be a major barrier either. Most providers charge the same API rates for context windows up to 128,000 or 131,072 tokens, unless the model is an especially resource-intensive, high-performance one.
With that in mind, I'd gently suggest expanding the context window to at least 128k. Honestly, I haven't come across another $20/month service that caps its context at 64k, so I believe this change would bring You.com more in line with user expectations and make the Pro plan feel even more valuable.