Stanford's AI lab just released the Meta Harness paper, which covers meta harness strategies that self-optimise. Most conversations about coding agents focus on the model. People compare model quality, […]
Category: DSPy
Introducing Agentnetes: Self-Discovering AI Agent Swarms, On Demand
On 21 March, I attended Zero to Agent London, a hackathon hosted by Google DeepMind and Vercel. The challenge was simple: build something with agents. The result was Agentnetes, an […]
How Many Types of Agent Engineering Exist Right Now?
The AI industry has started producing a new engineering label almost every month. Prompt Engineering. Context Engineering. Harness Engineering. Eval Engineering. Memory Engineering. Skills Engineering. Guardrail Engineering. Inference Engineering. And […]
Superagentic AI Open-Sources SuperOptiX Agent Optimization Engine
Superagentic AI is open sourcing SuperOptiX. This is a major milestone in our journey and a practical step for teams building production agentic systems in a fast-moving ecosystem. Where SuperOptiX […]
Announcing DSPy Code: The CLI to Build and Optimize Your DSPy Code
Today, Superagentic AI is proud to announce DSPy Code, a comprehensive CLI to build and optimize your DSPy and GEPA code. DSPy Code is now live: an AI-powered CLI […]
Superagentic AI Showcased Full-Stack Agentic Optimization at ODSC AI San Francisco
Last week, Superagentic AI proudly exhibited at ODSC AI West 2025 in San Francisco, the global innovation hub of AI. We showcased our pioneering work on Full-Stack Agent Optimization, connecting […]
Superagentic AI Bringing Agent Optimization to ODSC AI SF: What to Expect from Our Talk and Booth
Next week, Superagentic AI is coming to ODSC West 2025 in San Francisco for our first major public appearance in the US since launching the company earlier this year. We're travelling from London […]
Intelligent RAG Optimization with GEPA: Revolutionizing Knowledge Retrieval
The field of prompt optimization has witnessed a breakthrough with GEPA (Genetic Pareto), a novel approach that uses natural language reflection to optimize prompts for large language models. Based on the […]
GEPA DSPy Optimizer in SuperOptiX: Revolutionizing AI Agent Optimization Through Reflective Prompt Evolution
The landscape of AI agent optimization has fundamentally shifted with the introduction of GEPA as a DSPy optimizer. Unlike traditional optimization approaches that rely on trial-and-error or reinforcement learning, GEPA […]
Optimas + SuperOptiX: Global‑Reward Optimization for DSPy, CrewAI, AutoGen, and OpenAI Agents SDK
Optimization has been central to SuperOptiX from day one, whether it's prompts, weights, parameters, or compute. It began with DSPy-style programmatic prompt engineering and teleprompting, as it was the only framework […]
