LangChain vs Plain Python: When Frameworks Help and When They Hurt


While the AI community debates which framework is best, a growing number of senior engineers are asking a different question: do I need a framework at all? In its guidance on building effective agents, Anthropic observed that the most successful implementations tend to use simple, composable patterns rather than complex frameworks. Octomind dropped LangChain after 12 months in production. The pattern is clear: sometimes plain Python is the better choice.

This isn’t an anti-framework manifesto. LangChain has real value in specific contexts. The question is understanding when that value outweighs the costs, and when simpler Python code is the smarter investment.

The Framework Value Proposition

LangChain offers several genuine benefits:

Rapid prototyping. When exploring an idea, LangChain’s pre-built components let you assemble working systems quickly. A chain that calls an LLM, retrieves documents, and formats output can be running in minutes.

Integration library. LangChain connects to dozens of LLM providers, vector databases, and tools. If your stack includes multiple services, these integrations save significant boilerplate.

Community examples. Years of LangChain content mean almost any pattern you need has been implemented and shared. Stack Overflow answers, blog posts, and GitHub repos provide endless reference material.

Abstraction over complexity. Some AI patterns involve genuinely complex orchestration. LangChain’s abstractions can encapsulate that complexity behind cleaner interfaces.

For LangChain implementation patterns, my LangChain tutorial for building AI applications covers the essential approaches.

The Hidden Costs

The benefits come with tradeoffs that become visible in production:

Abstraction obscures understanding. LangChain’s layers hide what’s actually happening. When an agent misbehaves, you’re debugging framework internals rather than your own logic. Understanding where tokens are spent, why latency spiked, or why the output format changed requires deep framework knowledge.

Dependency complexity. LangChain’s dependency tree is substantial. Updates can break working code in subtle ways, and version conflicts with other libraries create a maintenance burden.

Performance overhead. Framework abstractions add latency and resource consumption. For high-throughput applications, these costs accumulate.

Opinionated constraints. Frameworks encode opinions about how AI applications should work. When your requirements don’t match those opinions, you fight the framework rather than build your feature.

What You’re Really Building

Here’s what catches most engineers: LLMs don’t execute anything. They output text. When you “give an agent access to tools,” you’re writing code that interprets the LLM’s text output and decides whether to execute actions.

The “agentic loop” is just a for loop:

  1. Call the LLM with context and available tools
  2. Parse the LLM’s response for tool calls
  3. Validate and execute those tools in your code
  4. Pass results back to the LLM
  5. Repeat until done

This pattern doesn’t require framework abstractions. Plain Python handles it cleanly with full visibility into each step.
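The five steps above can be sketched in a few dozen lines. Everything here is illustrative scaffolding: `call_llm` is a hypothetical stand-in for any chat-completion client, and the single `calculate` tool exists only to make the dispatch concrete.

```python
import json

def call_llm(messages):
    # Hypothetical stand-in for any chat-completion client (OpenAI, Anthropic, etc.).
    # A real implementation would send `messages` over HTTP and return the reply text.
    raise NotImplementedError("plug in your provider's client here")

# One illustrative tool; register as many as you need.
TOOLS = {
    # Demo only: never eval untrusted input in real code.
    "calculate": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_agent(task, llm=call_llm, max_steps=10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):                            # 1. call the LLM with context
        reply = llm(messages)
        try:
            action = json.loads(reply)                    # 2. parse the response for a tool call
        except json.JSONDecodeError:
            return reply                                  # plain text: the agent is done
        if action.get("tool") not in TOOLS:
            return reply
        result = TOOLS[action["tool"]](action["args"])    # 3. validate and execute in your code
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"Tool result: {result}"})  # 4. feed results back
    return "stopped: step limit reached"                  # 5. repeat until done (or bounded)
```

Because the loop is yours, testing it needs no network: pass in a scripted fake LLM and assert on the transcript.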

As I detail in my guide on building AI agents with plain Python, understanding this fundamental pattern changes how you approach AI development.

When Plain Python Wins

Choose plain Python when:

Debugging transparency matters. Production incidents require understanding exactly what happened. With plain Python, you control logging, can inspect every variable, and trace execution without framework internals.
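As a sketch of what that control looks like, here is a thin wrapper that logs latency and prompt/response sizes around every model call. The names are hypothetical; `call_llm` stands in for a real provider call.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm")

def traced(llm_fn):
    """Wrap any LLM-calling function with timing and size logging."""
    @functools.wraps(llm_fn)
    def wrapper(prompt, **kwargs):
        start = time.perf_counter()
        reply = llm_fn(prompt, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        # You decide exactly what gets recorded: no framework internals in the way.
        log.info("llm call: %.1f ms, prompt=%d chars, reply=%d chars",
                 elapsed_ms, len(prompt), len(reply))
        return reply
    return wrapper

@traced
def call_llm(prompt):
    # Hypothetical stand-in for a real provider call.
    return f"echo: {prompt}"
```

During an incident you can grep these lines, correlate them with request IDs, or swap the logger for metrics, all without reading framework source.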

Performance is critical. Removing framework overhead reduces latency and resource usage. For applications serving thousands of requests, the savings compound.

Requirements are stable. When you know what you’re building, plain Python’s initial investment pays off through maintainability. You’re not learning framework idioms; you’re building exactly what you need.

Team has strong Python skills. Engineers who understand Python deeply can build more robust systems with plain code than with frameworks they don’t fully understand.

Cost optimization is a priority. Controlling exactly when LLM calls happen, how prompts are constructed, and where caching applies is easier without framework abstractions.

For production architecture patterns, see my building AI applications with FastAPI guide.

When LangChain Wins

Choose LangChain when:

You’re exploring rapidly. Early-stage projects benefit from quick iteration. LangChain’s pre-built components let you test ideas without building infrastructure.

Integration breadth matters. If your application connects to many external services, LangChain’s connectors save significant development time.

Team knows the framework. Productivity in a familiar framework beats theoretical benefits of unfamiliar approaches. If your team ships faster with LangChain, that matters.

You need community support. When your problem matches common patterns, LangChain’s community has likely solved it. That existing knowledge has value.

Complexity is genuinely high. Some orchestration patterns involve enough complexity that framework abstractions genuinely simplify the code.

The Migration Question

Many teams start with frameworks and later question that choice. Migration paths exist:

Gradual extraction. Identify your core patterns and extract them into plain Python modules. Replace framework calls one component at a time. This approach reduces risk and lets you validate the migration incrementally.
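One way to start that extraction, sketched with hypothetical names: define your own minimal interface for the one operation the application actually uses, then migrate call sites from a framework-backed implementation to a plain one at your own pace.

```python
from typing import Protocol

class Completer(Protocol):
    """The one operation the application actually needs from any backend."""
    def complete(self, prompt: str) -> str: ...

class PlainBackend:
    # Plain-Python replacement; a real version would call the provider's SDK directly.
    def complete(self, prompt: str) -> str:
        return f"[plain] {prompt}"

class FrameworkBackend:
    # Adapter around existing framework code; retire it one call site at a time.
    def complete(self, prompt: str) -> str:
        return f"[framework] {prompt}"

def summarize(text: str, backend: Completer) -> str:
    # Call sites depend only on the interface, not on either backend.
    return backend.complete(f"Summarize: {text}")
```

Because both backends satisfy the same protocol, each migrated component is a one-line change, and a failed migration is a one-line rollback.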

Wrapper simplification. Sometimes you can keep framework usage but simplify how you use it. Replace complex chains with simpler patterns. Use fewer framework features, treating it more like a utility library than an architecture.

Complete rewrite. For applications where framework constraints have become problematic, starting fresh with plain Python can be faster than incremental migration. The second implementation benefits from understanding gained building the first.

Practical Decision Framework

Use this framework to decide:

Consideration                       | Plain Python  | LangChain
Time to first prototype             | Longer        | Shorter
Long-term maintenance               | Easier        | Harder
Debugging in production             | Easier        | Harder
Integration with many services      | More work     | Less work
Performance optimization            | Full control  | Limited
Learning curve for Python experts   | Lower         | Higher
Community examples available        | Fewer         | Many
Dependency management               | Simpler       | Complex

My recommendation: Start by understanding what you’re actually building. Write the core loop in plain Python, even as an exercise. If the complexity genuinely warrants framework abstractions, add them deliberately. If plain Python handles it cleanly, you might not need more.

The Hybrid Approach

You don’t have to choose completely:

Use frameworks for integration, plain Python for core logic. Let LangChain handle connecting to services while your own code manages the agent loop and business logic.
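The split looks like this in miniature. The stub retriever below is a hypothetical stand-in for a framework component (a LangChain or LlamaIndex retriever, say); the core logic that assembles context and calls the model stays in plain Python you can read end to end.

```python
def stub_retriever(query):
    # Stand-in for a framework-provided retriever component.
    docs = {
        "python": ["Python is dynamically typed."],
        "langchain": ["LangChain wraps many LLM providers."],
    }
    return docs.get(query.lower(), [])

def answer(question, retrieve, llm):
    # Core logic stays plain Python: you see every step and every prompt.
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return llm(prompt)
```

Swapping the stub for a real framework retriever changes one argument; the agent logic you debug in production never changes.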

Prototype with frameworks, productionize with Python. Build quickly to validate ideas, then extract working patterns into production-ready code.

Different approaches for different components. Your RAG service might use LlamaIndex, your agent might be plain Python, and your tooling might use LangChain components. Mix based on what each component needs.

Making the Decision

The framework vs plain Python debate often misses the real question: what helps your team ship quality AI features most effectively?

For complex integrations and rapid prototyping, frameworks provide genuine value. For production systems where performance, debuggability, and maintainability matter, plain Python often wins. Most real projects benefit from a thoughtful combination of both approaches.

The engineers building the most successful AI applications aren’t framework loyalists or framework skeptics. They’re pragmatists who use the right tool for each specific need. Sometimes that’s LangChain. Sometimes it’s plain Python. Often it’s both.

For deeper implementation guidance, watch my tutorials on building AI applications.

Ready to discuss framework decisions with engineers who’ve made these choices in production? Join the AI Engineering community where we share real experiences building AI systems both with and without frameworks.

Zen van Riel

Senior AI Engineer at GitHub | Ex-Microsoft

I grew from intern to Senior Engineer at GitHub, previously working at Microsoft. Now I teach 22,000+ engineers on YouTube, reaching hundreds of thousands of developers with practical AI engineering tutorials. My blog posts are generated from my own video content, focusing on real-world implementation over theory.
