AI Dynamics

Global AI News Aggregator

LLM Continuation Trick: Requesting More Output Within Context

An interesting trick that does work: you can send a follow-up prompt requesting "more" and have the LLM pick up again where it stopped. That requires round-tripping the work it has done so far, but with a long enough context window (and a willingness to spend the money) it's quite feasible.
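The trick above can be sketched as a loop: keep the full transcript, and whenever the model stops because it hit its output limit, append a "continue" message and call again. This is a minimal sketch, not any particular vendor's API; `call_llm` is a hypothetical stand-in (here stubbed out so the loop is runnable), and real chat APIs signal truncation with something like a `finish_reason == "length"` field.

```python
# Hypothetical stand-in for a chat-completion API call. It simulates a
# model that emits a long reply in fixed-size chunks, resuming from
# wherever the assistant turns in the transcript left off.
def call_llm(messages):
    full_reply = "".join(f"paragraph {i}. " for i in range(10))
    done = "".join(m["content"] for m in messages if m["role"] == "assistant")
    chunk = full_reply[len(done):len(done) + 40]
    truncated = len(done) + len(chunk) < len(full_reply)
    return chunk, truncated

def complete_with_continuations(prompt, max_rounds=20):
    # The whole transcript is round-tripped on every call, so the
    # context window must hold the prompt plus everything generated
    # so far -- that's the cost the trick accepts.
    messages = [{"role": "user", "content": prompt}]
    parts = []
    for _ in range(max_rounds):
        chunk, truncated = call_llm(messages)
        parts.append(chunk)
        messages.append({"role": "assistant", "content": chunk})
        if not truncated:
            break
        # Ask the model to pick up where it stopped.
        messages.append({"role": "user",
                         "content": "Continue from where you left off."})
    return "".join(parts)
```

With a real client you would replace `call_llm` with an actual API call and check its truncation indicator instead of the stubbed `truncated` flag; the transcript-accumulation logic stays the same.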

→ View original post on X — @simonw
