Hacker News

With Claude, the context window is quite small. But adding too much context often seems to make things worse: if the context isn't carefully and narrowly picked and contains too much unrelated material, the LLM often starts doing things unrelated to what you asked.

At some point it's no longer worth crafting the perfect prompt; just code it yourself. That also saves the time spent carefully reviewing the AI-generated code.



Claude's context window isn't small; isn't it larger than ChatGPT's?


I just looked it up; it seems it's actually the rate limit that's kicking in for me.


Yes, that's it! It's frustrating for me, too. You have to start a new chat with all the relevant data and a detailed summary of the progress/status so far.


(Because otherwise you hit the limit faster.)
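The handoff described above (restarting a chat with the relevant data plus a progress summary) can be scripted to save retyping. A minimal sketch; all names here are hypothetical helpers, not part of any Anthropic API:

```python
def build_handoff_prompt(relevant_files: dict[str, str], progress_summary: str) -> str:
    """Assemble the opening message for a fresh chat: a summary of where
    the previous session left off, followed by the relevant file contents."""
    parts = ["Continuing a previous session. Current status:", progress_summary, ""]
    for name, content in relevant_files.items():
        parts.append(f"--- {name} ---")  # delimit each file so the model can tell them apart
        parts.append(content)
    return "\n".join(parts)

# Example: hand off one file and a one-line status note.
prompt = build_handoff_prompt(
    {"app.py": "print('hello')"},
    "Refactored the CLI parsing; tests for the config loader still failing.",
)
```

Keeping the summary short and the file set narrow also helps with the point upthread about too much unrelated context degrading results.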



