Anyone else find themselves constantly working on 2-3 projects at the same time with Claude Code? Not sure if it's productive but I can't help squeezing in voice-transcribed prompts while waiting for outputs
@daveebbelaar
-
Building Effective AI Agents in 2025 Without Complex Frameworks
That's how you build effective AI agents in 2025. Not with complex frameworks that make everything harder than it needs to be.
-
Break Down Big Problems Into Solvable Building Blocks
Take your big problem, break it down into smaller problems, then solve each one using these building blocks chained together.
-
AI Agents as Workflows: DAGs and Code-First Architecture
AI agents are simply workflows – directed acyclic graphs (DAGs) if you're being precise, or graphs if you include loops. Most steps in these workflows should be regular code – not LLM calls.
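The workflow-as-DAG idea can be shown in a minimal sketch: a fixed pipeline of steps where every node is plain code except the one step that genuinely needs a model. The `summarize` step stands in for an LLM call and is stubbed here; the function names and state shape are illustrative assumptions, not a prescribed API.

```python
# A minimal DAG-style workflow: fetch -> clean -> summarize.
# Only the last node would call an LLM; it is stubbed for this sketch.

def fetch(state):
    # regular code: I/O step (hardcoded input for the sketch)
    state["raw"] = "  Order #123 delayed  "
    return state

def clean(state):
    # regular code: deterministic transform
    state["text"] = state["raw"].strip()
    return state

def summarize(state):
    # the single LLM step (stubbed: a real system would call a model here)
    state["summary"] = f"LLM_SUMMARY({state['text']})"
    return state

# The DAG's edges, expressed as a fixed topological order.
PIPELINE = [fetch, clean, summarize]

def run(state=None):
    state = state or {}
    for step in PIPELINE:
        state = step(state)
    return state
```

The point of the shape: two of the three nodes are testable, deterministic functions, and the LLM surface area is a single, swappable node.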
-
Seven Core Building Blocks to Solve Business Problems
You only need about seven core building blocks to solve almost any business problem with AI:

1. Intelligence (LLM)
2. Memory
3. Tools
4. Validation
5. Control
6. Recovery
7. Feedback
-
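One way to see how the seven blocks chain together is a tiny agent class that wires them up explicitly. This is a sketch under assumptions: the LLM is injected as a plain function, validation is a trivial non-empty check, and recovery is a bare retry loop.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # 1. Intelligence: the LLM call, injected as a plain function (stubbed)
    llm: callable
    # 2. Memory: state carried between calls
    memory: list = field(default_factory=list)
    # 3. Tools: plain functions the workflow could invoke (unused in this sketch)
    tools: dict = field(default_factory=dict)

    def ask(self, prompt, max_retries=2):
        self.memory.append(prompt)
        for _attempt in range(max_retries + 1):   # 6. Recovery: retry on bad output
            answer = self.llm("\n".join(self.memory))
            if self.validate(answer):              # 4. Validation: check before trusting
                self.memory.append(answer)         # 7. Feedback: result fed back into memory
                return answer
        raise ValueError("no valid answer")        # 5. Control: fail loudly, don't guess

    @staticmethod
    def validate(answer):
        # placeholder check; real validation would be schema- or rule-based
        return bool(answer and answer.strip())
```

Each numbered block maps to one line of the loop, which is the sense in which they are composable building blocks rather than framework features.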
LLM Context Engineering: Sending Right Context to Right Model
When you do make that LLM call, it's all about context engineering. To get a good answer back, you need the right context at the right time sent to the right model.
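"Right context, right time, right model" can be sketched as two small functions: one that filters documents down to what is relevant, and one that routes to a model based on context size. The word-overlap filter, character budget, and model names are all placeholder assumptions for illustration.

```python
def build_context(question, documents, max_chars=500):
    # naive relevance filter: keep only docs sharing a word with the question
    words = set(question.lower().split())
    relevant = [d for d in documents if words & set(d.lower().split())]
    # enforce a context budget so the prompt stays small
    return "\n".join(relevant)[:max_chars]

def pick_model(context):
    # route small contexts to a cheap model, large ones to a bigger one
    # (model names are placeholders, not real model identifiers)
    return "small-model" if len(context) < 200 else "large-model"
```

A real system would swap the word-overlap filter for embeddings or a retriever, but the shape is the same: curate first, then choose the model.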
-
Pre-processing Information for LLM Problem Solving
You need to pre-process all available information, prompts, and user input so the LLM can easily and reliably solve the problem. This is the most fundamental skill in working with LLMs.
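A minimal version of that pre-processing step: normalize the user input before it ever reaches the prompt. The specific cleanups (stripping stray markup, collapsing whitespace) and the prompt layout are illustrative assumptions.

```python
import re

def preprocess(user_input):
    # strip stray HTML tags and collapse whitespace so the LLM
    # sees clean, predictable text
    text = re.sub(r"<[^>]+>", "", user_input)
    text = re.sub(r"\s+", " ", text).strip()
    return text

def build_prompt(user_input, instructions):
    # assemble instructions and cleaned input into one prompt
    return f"{instructions}\n\nUser input:\n{preprocess(user_input)}"
```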
-
Breaking Down Components: When to Use LLMs in Software Engineering
The solution is simpler than most frameworks make it seem. Here's the approach that actually works:

– Break down what you're building into fundamental components
– Solve each problem with proper software engineering
– ONLY use LLMs when deterministic code fails
-
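The "only use LLMs when deterministic code fails" rule translates into a fallback pattern: try a strict parser first, and reach for the model only on a miss. The date-extraction task and the injected `llm` stub are assumptions chosen for illustration.

```python
import re

def parse_date_deterministic(text):
    # plain software engineering: try a strict ISO-date pattern first
    m = re.search(r"\b(\d{4})-(\d{2})-(\d{2})\b", text)
    return m.group(0) if m else None

def parse_date(text, llm=None):
    result = parse_date_deterministic(text)
    if result is not None:
        return result          # deterministic code succeeded; no LLM call made
    if llm is not None:
        # only now fall back to the model (injected as a function for the sketch)
        return llm(f"Extract the date from: {text}")
    return None
```

The cheap, testable path handles the common case; the LLM is reserved for the inputs the regex genuinely cannot handle.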
Strategic LLM Integration Over Full Automation Frameworks
Effective AI agents are mostly deterministic software with strategic LLM calls placed exactly where they add value. The problem is that most frameworks push the "give an LLM some tools and let it figure everything out" approach.
-
LLMs Should Reason With Context, Not Make All Decisions
But in reality, you don't want your LLM making every decision. You want it handling the one thing it's good at – reasoning with context – while your code handles everything else.
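The division of labor reads naturally as a router: the LLM only classifies intent, while deterministic code decides what actually runs, including the safe default. The intent labels, handlers, and injected `classify` stub are hypothetical names for the sketch.

```python
# The LLM reasons (classifies intent); code makes every actual decision.

HANDLERS = {
    "refund": lambda order: f"refund issued for {order}",
    "status": lambda order: f"status looked up for {order}",
}

def handle(message, order_id, classify):
    intent = classify(message)        # LLM's one job: reasoning over context (stubbed)
    handler = HANDLERS.get(intent)    # code decides what runs
    if handler is None:
        return "escalated to a human" # code owns the safe default, not the model
    return handler(order_id)
```

Adding a capability means adding a handler; the model's output never directly triggers a side effect it wasn't explicitly mapped to.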