Prompt Engineering is Dead - Long live ...
We’re moving from "Prompt Engineering" to "Context Engineering."
Welcome to the very first edition of AgentDevs Weekly — your personal radar for tools, frameworks, and breakthroughs driving the next wave of AI agent development.
In this kickoff, we’re diving into:
Context Engineering — Why the focus is shifting from crafting the perfect prompt to supplying the model with everything it needs
Trending GitHub gems — Including one that lets you chat with your data (SQL, CSV, even Parquet)
A must-watch RAG evaluation deep dive — Why most agents silently fail, and how to catch it
MCPJungle — An open-source, self-hosted registry and gateway that puts all your MCP servers behind a single endpoint, with fine-grained access control
Each week, we cut through the noise to deliver signal-rich insights — so you can stay focused on building, learning, and shipping smarter agents.
Let’s dive in.
🔥 Context Engineering: Beyond Prompting
Context engineering is a new term gaining traction in the AI world, shifting the focus from prompt engineering to a broader, more powerful concept. It’s about providing all the context necessary for an LLM to solve a task effectively. This includes:
Instructions/System Prompt: Initial guidelines and examples defining model behavior.
User Prompt: The immediate task or question.
State/History: The current conversation thread.
Long-Term Memory: Persistent knowledge from past interactions.
Retrieved Information (RAG): External data from documents, databases, or APIs.
Available Tools: Functions the model can call (e.g., send_email).
Structured Output: Defined response formats, like JSON.
As Armand Ruiz, VP of AI Platform at IBM, puts it, “Prompting is dead. Context is the product.”
In short, context engineering means giving the model the right information, in the right shape, at the right time. Rather than polishing a single prompt, you deliberately select, structure, and maintain everything the model sees: what gets retrieved, how it is formatted, and what carries over between turns. These pieces come together in systems that pull in external information (web search, databases), remember past conversations, call tools, or coordinate multiple agents.
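To make that concrete, here is a minimal sketch of what “assembling the context” can look like in code. The request shape is OpenAI-style chat completions, and retrieve_docs / load_memory are hypothetical stand-ins for your own RAG pipeline and memory store:

```python
# A minimal sketch of context assembly: one function that gathers every
# component listed above into a single model call. OpenAI-style payload;
# retrieve_docs and load_memory are hypothetical helpers, not a real API.
def build_request(user_query, history, retrieve_docs, load_memory):
    system_prompt = (
        "You are a support agent. Answer only from the provided sources. "
        'Respond as JSON: {"answer": "...", "sources": ["..."]}'   # structured output
    )
    docs = retrieve_docs(user_query)    # retrieved information (RAG)
    memory = load_memory()              # long-term memory

    messages = [
        {"role": "system", "content": system_prompt},              # instructions
        {"role": "system", "content": f"Known user facts:\n{memory}"},
        *history,                                                   # state / history
        {"role": "user",
         "content": f"Sources:\n{docs}\n\nQuestion: {user_query}"}, # user prompt + sources
    ]
    tools = [{  # available tools the model may call
        "type": "function",
        "function": {
            "name": "send_email",
            "description": "Send an email on the user's behalf.",
            "parameters": {
                "type": "object",
                "properties": {"to": {"type": "string"}, "body": {"type": "string"}},
                "required": ["to", "body"],
            },
        },
    }]
    return {"messages": messages, "tools": tools}
```

The point is not this exact payload; it is that every item on the list above becomes an explicit, inspectable input instead of something buried in a prompt string.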
Learn More:
📄 Dive into the details with this research paper on context engineering.
🎥 Explore the concept in depth with Neural Breakdown’s free 1.5-hour YouTube course, packed with practical examples
🧪 Framework Deep Dive: Diagnosing RAG Failures
Most Retrieval-Augmented Generation (RAG) agents fail silently — and dashboards won’t catch it. A recent talk from Quotient AI + Tavily reveals why.
Their framework focuses on three metrics that really matter:
Relevance — Are the retrieved documents actually useful?
Completeness — Does the final answer fully address the query?
Faithfulness — Is the response grounded in the provided sources?
Adopt this framework and you’ll catch issues long before your users do.
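To make those three checks concrete, here is a minimal, framework-agnostic sketch using an LLM-as-judge. The metric names follow the talk, but the prompts and the injected ask_judge callable are illustrative assumptions, not Quotient AI's actual implementation:

```python
# Sketch of the three RAG metrics as LLM-as-judge checks.
# `ask_judge` is whatever LLM call you already have; it should return
# a score in [0, 1] for the question it is given.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RagEval:
    relevance: float      # are the retrieved documents useful?
    completeness: float   # does the answer fully address the query?
    faithfulness: float   # is every claim grounded in the sources?

def evaluate_rag(
    query: str,
    retrieved_docs: List[str],
    answer: str,
    ask_judge: Callable[[str], float],
) -> RagEval:
    docs = "\n---\n".join(retrieved_docs)
    relevance = ask_judge(
        f"Query: {query}\nDocuments:\n{docs}\n"
        "Score 0-1: how useful are these documents for answering the query?"
    )
    completeness = ask_judge(
        f"Query: {query}\nAnswer: {answer}\n"
        "Score 0-1: does the answer fully address every part of the query?"
    )
    faithfulness = ask_judge(
        f"Documents:\n{docs}\nAnswer: {answer}\n"
        "Score 0-1: is every claim in the answer supported by the documents?"
    )
    return RagEval(relevance, completeness, faithfulness)
```

Run something like this on a sample of production traffic and alert when any of the three scores drifts below a threshold you trust.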
Watch the talk:
RAG Evaluation: The Missing Metrics
📦 GitHub Repos That Matter
🧠 Qwen3 — Open-Weight LLMs from Alibaba
⭐ Stars: 16,200 🍴 Forks: 1,200 📈 This week: +1,300
A scalable open-weight model family (0.6B dense up to a 235B MoE) optimized for instruction-following, reasoning, and tool use — competitive with DeepSeek and Gemini.
Ollama-compatible (ollama pull qwen3)
Strong benchmarks in math, reasoning, and coding
Fine-tuning support included
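For a quick local smoke test, here is a sketch using the Ollama Python client. It assumes Ollama is running and that you have already pulled a Qwen3 tag; swap in whatever model tag you actually pulled:

```python
# Quick local test of Qwen3 through the Ollama Python client.
import ollama

response = ollama.chat(
    model="qwen3",  # assumes this tag has been pulled locally
    messages=[{"role": "user", "content": "Summarize what an MCP server does in two sentences."}],
)
print(response["message"]["content"])
```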
📊 PandasAI — Conversational DataFrame Analysis
⭐ Stars: 21,200 🍴 Forks: 2,050 📈 This week: +165
Talk to your data in SQL, CSV, or Parquet formats. PandasAI enables cleaning, transformation, and visualization using LLMs and RAG.
Perfect for agent backends needing live data queries
Extensible with custom logic
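Here is roughly what the conversational workflow looks like. This follows the v2-style SmartDataframe interface; newer PandasAI releases may expose a different entry point, so check the repo's README for the current API:

```python
# Hedged sketch of PandasAI's conversational interface (v2-style API).
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import OpenAI

llm = OpenAI(api_token="YOUR_API_KEY")  # any supported LLM backend works
sales = SmartDataframe(pd.read_csv("sales.csv"), config={"llm": llm})

# Natural-language questions are translated into pandas code under the hood.
print(sales.chat("Which region had the highest revenue last quarter?"))
```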
💡 awesome-llm-apps — Curated LLM Tools & Apps
⭐ Stars: 53,900 🍴 Forks: 6,260 📈 This week: +1,200
A comprehensive, actively maintained list of real-world LLM projects — including agents, RAG chains, tool use, evaluation suites, and infrastructure.
Great for inspiration and practical learnings
Widely used and updated regularly
🧰 MCP Spotlight: MCPJungle
MCPJungle is an open-source, self-hosted registry and gateway for MCP (Model Context Protocol) servers. It allows developers to manage and access multiple MCP servers from a single endpoint, streamlining workflows for AI agent development. Key features include:
Unified Registry: Track all your MCP servers in one place.
Single Gateway: Agents connect to one endpoint to access all tools.
Access Control & ACLs: Fine-grained permissions for which agents can use which servers.
Designed to be lightweight and developer-friendly, MCPJungle is ideal for enterprise use cases, simplifying the management of internal AI tools and enabling modular, secure integration into agent stacks. It’s already stable, with a growing community praising its simplicity and control. Upcoming features include OAuth support, observability, metrics, and a web GUI.
What users are saying:
“This is awesome! Can’t believe this hasn’t been more widely produced yet.” — Loud-Bake-2740
“I like the control aspect of assigning agents to servers, good stuff.” — Fit-Pomegranate7781
“This is cool! Yamcp is also another minimal gateway with similar features…” — hacurity (noting security as a key area to watch).
Compared to alternatives like LiteLLM or MetaMCP, MCPJungle focuses on being lightweight and enterprise-ready, avoiding feature bloat while meeting developer needs.
→ Explore MCPJungle on GitHub
How did you like it?
If you enjoyed this format—packed with actionable resources and signal-rich content—make sure to subscribe to get weekly updates delivered straight to your inbox.
Happy building,
Johannes — AgentDevs
AgentDevs is a platform for AI agent developers to showcase their work, share resources, and connect with the global community of innovators.



If you want to highlight your project, please reach out to me.