LLM Continuation Trick: Requesting More Output Within Context

An interesting trick that does work: send a follow-up prompt asking for "more" and have the LLM pick up again where it stopped. That requires round-tripping the work it has done so far, but with a long enough context window (and a willingness to spend the money) it's quite feasible.
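
For concreteness, here is a minimal sketch of that loop, assuming the OpenAI Python SDK; the model name, token limit, and prompt text are illustrative rather than prescriptive, and the same pattern applies to any chat-style API that reports why a response stopped. The partial answer is appended back into the conversation and the model is asked to continue until it finishes on its own.

```python
# Minimal sketch of the continuation trick, assuming the OpenAI Python SDK.
# Model name, max_tokens, and prompt text below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user", "content": "Write a detailed guide to the topic."}  # hypothetical prompt
]
full_output = []

while True:
    response = client.chat.completions.create(
        model="gpt-4o",       # illustrative model choice
        messages=messages,
        max_tokens=1024,      # small limit, so truncation is likely
    )
    choice = response.choices[0]
    full_output.append(choice.message.content)

    # finish_reason == "length" means the model hit the token limit mid-answer;
    # anything else (normally "stop") means it finished on its own.
    if choice.finish_reason != "length":
        break

    # Round-trip the partial answer back into the context and ask for more.
    messages.append({"role": "assistant", "content": choice.message.content})
    messages.append({"role": "user", "content": "Please continue where you left off."})

print("".join(full_output))
```

Note that each continuation resends the whole conversation, so cost grows with the length of the accumulated output; that is the "will to spend the money" part.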