I've been testing Gemini 1.5 Pro's reasoning capabilities this morning. With standard prompts it performs at near-GPT-4 level. BUT its performance keeps climbing as I add dozens of examples to the context, and I haven't hit an upper limit yet. Many-shot prompting is the new fine-tuning.
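The idea can be sketched as follows: instead of fine-tuning on a labeled dataset, pack dozens of labeled examples directly into the prompt. This is a minimal, model-agnostic sketch — the sentiment task, example data, and function name are hypothetical illustrations, not from the post, and the resulting string would be sent to whatever model API you use.

```python
def build_many_shot_prompt(examples, query,
                           instruction="Classify the sentiment as positive or negative."):
    """Concatenate (input, label) pairs into one long in-context prompt."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")  # blank line between shots
    # The query is formatted like the examples, with the label left blank
    # for the model to complete.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

# Illustrative data: 50 shots built by repeating two toy examples.
examples = [("I loved it", "positive"), ("Total waste of time", "negative")] * 25
prompt = build_many_shot_prompt(examples, "Surprisingly good")
print(len(examples), "shots,", len(prompt), "characters")
```

The practical limit here is the context window, not a training budget — which is why a long-context model is what makes this approach viable at dozens (or hundreds) of examples.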
Gemini 1.5 Pro: In-Context Learning Outperforms Fine-Tuning