The Three Layers of a Workflow: What Every Tool Gets Wrong

Eric Pelz · 7 min read

Remember the solutions engineer from our last post who set up a workflow to send docs after customer calls? We saw his workflow evolve from a form to a Slack trigger to fully automatic—all in minutes, with no reconfiguration. But we skipped over something critical: how does software adapt like that without rebuilding everything?

He described what he wanted: "After customer calls, send them relevant docs based on what we discussed."

An AI agent (we call it the "architect") asked clarifying questions and helped him structure it into stages with outcomes. He started with a simple form, then added a Slack trigger so his team could use it. The whole thing took under five minutes.

When we added the Slack trigger, his jaw dropped.

"Wait. Where's all the code to support that? Where are the data mappers?"

The SE working with Malleable's architect. This canvas is the workflow that runs. As they collaborated, the canvas updated in real-time—when he added the Slack trigger (gray box at top), one block changed and the rest adapted automatically.

The Traditional Tool Tax

He expected to see the usual checklist:

  • Slack connector setup (and the configuration fights that come with it)
  • Field mappings (message.text → workflow_input; format transforms across connectors)
  • Conditional logic for attachments vs. plain text
  • Error handling for malformed messages
  • Hours of configuration before you can even test whether it works

You know—all the stuff you configure in Zapier or n8n.
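
For the skeptical reader, here is a rough, hypothetical sketch of the glue that checklist usually boils down to, whether you write it yourself or click it together block by block. The event shape and field names are invented for illustration:

```python
# Hypothetical glue code of the kind a traditional tool makes you encode,
# one way or another, behind its blocks. Field names are invented here.
def map_slack_event_to_workflow_input(event: dict) -> dict:
    text = event.get("text", "")
    # Conditional logic for attachments vs. plain text
    if event.get("files"):
        text += "\n" + "\n".join(f.get("title", "") for f in event["files"])
    # Error handling for malformed messages
    if not text.strip():
        raise ValueError("Empty Slack message; cannot start workflow")
    return {
        "workflow_input": text,            # message.text -> workflow_input
        "requested_by": event.get("user"),
        "channel": event.get("channel"),
    }
```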

But there was none of that. Just one block on the canvas changed. That's it.

His question cuts to the heart of why workflow automation is broken: Traditional tools force you to work at the wrong layer of abstraction. They make you think in functions, conditionals, and variable parsing—even if they use a "no-code" UI, they're still representing the same low-level concepts.

What if you could describe workflows the same way you'd explain them to a coworker? What if the documents and diagrams you'd use to onboard a team member could become the configuration that runs the software?

Why No Configuration? The Agentic Runtime

For decades, the industry has seen waves of workflow tools pop up. They've always fit the same pattern: you write code—disguised as low-code blocks—to define everything upfront (API endpoints, field mappings, conditional logic), and the workflow executes your exact instructions. Recently, tools have improved the setup experience by using AI agents to help you write that code faster. But they're all still brittle: change your data format, add a new field, or switch your trigger, and you're back to reconfiguring.

Malleable is different. You specify what information you need and what outcome you want. An AI agent figures out how to get there—at runtime, while the workflow is actually running—based on what's happening in that specific execution.

When we added the Slack trigger, the rest of the workflow didn't change. The workflow just needs meeting context; the agent doesn't care whether that came from a form, a Slack message, or a voice call.

The diagrams you see on the canvas? That's what actually runs. Click any node to see what's configured—no hidden code being generated in the background. Everything is inspectable.

This works because workflows have three distinct layers—and traditional tools only let you work at one of them.

The Three Layers

Think about how workflows actually start in the real world. You figure out the Why: what you're trying to achieve and what success looks like. Then you think through the What: what needs to happen, in what order, what information you need at each step, what to do when something goes wrong. This structure often emerges organically—through running the workflow, scaling it up, learning what works, accumulating tribal knowledge and experience.

When you onboard a new team member, you don't walk them through API endpoints and field mappings. You explain the high-level flow: here's what we're trying to accomplish, here's how the stages break down, here's what matters at each step.

With Malleable, the documents and diagrams you'd use to onboard a team member become the configuration that runs the software.

Malleable's architect agent works like an experienced coworker who asks the right questions to understand what you need—then structures it so the system can reliably execute it.

You describe what you want:

  • "After customer calls, send them relevant docs based on what we discussed."

The architect asks clarifying questions:

  • What's the goal? "Get customers the right documentation."
  • Where's the data? "Meeting recordings in Fathom, docs in Google."
  • Who sends the email? "The system sends it automatically after the call."

This conversation produces the setup needed to run the workflow, sketched below:

  • "Why" (Layer 1): goals and outcomes
  • "What" (Layer 2): structure and requirements
  • "How" (Layer 3): technical implementation—the runtime agent figures this out, so it never goes out of date

The Three Layers of a Workflow: where you spend your time makes all the difference.

The Magic

Traditional tools force you to fight with connectors, field mappings, and brittle configurations. Malleable lets you focus on the big picture—define goals and requirements—while AI handles all the technical implementation.

How Do You Control What the Agent Does?

If you're technical, you're wondering: what stops the agent from doing random things? You might assume the agent "vibe-codes" each run—generating implementation on the fly, which would be inconsistent and hard to control.

That's not what happens. In practice, very little code is generated at all. The runtime is an agent loop calling tools that the architect configured during setup. Most of the time, that configuration is just parameters and instructions; no hidden code is regenerated on each run.
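
As a minimal sketch of what such a loop can look like (assuming a hypothetical dict-shaped stage spec and a stand-in call_model function; this is not Malleable's actual runtime):

```python
# Minimal sketch of an agent loop over architect-configured tools. Nothing
# here generates code: the loop chooses among allowed tools and checks the
# stage's required conditions. All names are illustrative stand-ins.
def run_stage(stage: dict, context: dict, tools: dict, call_model) -> dict:
    # Required conditions, here modeled as predicate functions over the context.
    while not all(check(context) for check in stage["required_conditions"]):
        # The model proposes the next step, constrained to the allowed tools.
        step = call_model(
            instructions=stage["instructions"],
            allowed_tools=list(tools),
            context=context,
        )
        if step["tool"] not in tools:
            raise PermissionError(f"{step['tool']} is not allowed in this stage")
        result = tools[step["tool"]](**step.get("args", {}))
        context.setdefault("history", []).append((step, result))
    return context
```

Consistency across runs comes from those configured instructions, tools, and conditions, not from freshly generated code.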

Compare this to ChatGPT: "Send the customer docs after their meeting." No structure. ChatGPT might hallucinate docs, forget to check what was discussed, or skip the email entirely.

Malleable's "What" (Layer 2) provides a spectrum of control. The architect can configure different parts of the workflow with varying levels of flexibility. For example:

  • Required conditions at the end of each stage: Data is collected, Google Sheet is updated, email is sent. The agent can't skip ahead until these are satisfied.

  • Tool restrictions you define: Only access Salesforce after the user is done interacting with the software. The agent can read customer data but can't write to your CRM.

  • UI mocks for consistent interfaces: Work with the architect to create interface specifications. The agent uses these as references, like an engineer would use a design system, but adapts the implementation as needed.

  • Procedural steps when reliability is critical: The architect encodes logic that can't be modified by the agent—send email to this specific address, or run this code as-is.

Each mechanism places different constraints on the execution agent. You choose how much flexibility vs. control each part of your workflow needs. The execution agent has flexibility in how it gets there ("How"), but must satisfy what you specified ("What").
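
Purely as an illustration of that spectrum (again, not Malleable's real configuration format), the control knobs above might be declared per stage along these lines:

```python
# Hypothetical per-stage controls. Each entry constrains the execution agent
# in a different way; everything here is invented for illustration.
stage_controls = {
    "required_conditions": [          # the agent can't advance until these hold
        "form data is collected",
        "Google Sheet is updated",
        "confirmation email is sent",
    ],
    "tool_restrictions": {            # what the agent may touch, and how
        "salesforce": {"read": True, "write": False},
    },
    "ui_mocks": [                     # references, like a design system
        "success_screen_mock",
        "error_screen_mock",
    ],
    "fixed_steps": [                  # procedural steps the agent cannot modify
        {"action": "send_email", "to": "ops@example.com"},
    ],
}
```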

Everything is inspectable. Click any node on the canvas to see what the architect configured: parameters for integrations, instructions for the runner agent, Google Doc IDs, email or interface "UI mocks" the architect wrote. The canvas itself is the source of truth—not generated files you can't see or understand.

This demonstrates a core principle: you operate at the level of abstraction that's best for you. Most users work at Layer 1 (goals) and Layer 2 (structure). Technical users can drill down to see exactly what's configured.

Example: Insurance Form Validation

A fintech company processes 350 compliance forms per month for a single state—tedious for humans, perfect for AI.

What you see is what executes. When she refines validation criteria or success messages, she's directly editing what the runtime agent follows—no regeneration, no compilation, no sync step.

The operations lead told the architect: "Validate insurance forms for compliance before we bind policies."

The architect looked up the state-specific form and drafted validation criteria. It asked whether they had internal documentation it should reference instead, or whether it should start from scratch.

After the first run: "That was close! But remember to verify e-signature requirements."

Then, she made it customer-facing: "Sketch out a screen to show for success and one for errors." The architect asked about the success message. She refined: "Keep it simple: 'We'll get back to you soon!' For errors, show the specific field that needs correction."

She never touched "How" (Layer 3). The runtime agent adapted—what to show externally vs internally, how to format results, what tone to use. She was refining requirements based on what she learned, like coaching a team member. Layer 3 adapted automatically.

After seeing more forms, the workflow learned: what can be left blank, insurer contact formats, signature types. The intelligence lives in the workflow itself, not brittle configuration files.

The Missing Piece: Structure + Agentic Runtime

You might be thinking: "Can't I just use ChatGPT for this?"

Try it. Ask ChatGPT to validate that insurance form. It might miss digital-signature verification, or demand full contact info when a name and phone number suffice. There's no scaffolding to ensure the workflow executes reliably: no required-information checks, no validation of side effects, no structure.

Traditional workflow tools (Zapier, n8n) give you structure but no intelligence. Change your data format? Reconfigure everything. Learn a new edge case? Update your conditionals manually. Your workflow is brittle.

"Vibe code" products (Lovable, Bolt) generate code fast. But they only capture "How" (Layer 3)—the scenarios you described upfront. They don't know "Why" you're doing this or "What" actually matters. Good luck maintaining that generated code when requirements change.

Pure AI agents (ChatGPT, Claude) adapt but have no guardrails. No structure to ensure consistency across runs. Every execution improvises from scratch.

Team workflows need both.

Malleable's "What" (Layer 2) provides structure—what to check, what counts as valid. The agent handles implementation at runtime. After 350 forms, the intelligence persists: some states are stricter, insurers provide websites instead of phones, digital signatures appear in different formats.

Traditional tools force you to work at "How" (Layer 3)—technical implementation. The people who know workflows (ops, CS, account management) don't have that expertise. So workflows don't get built, or they're built and fall immediately out of date.

Malleable lets you work at "Why" (Layer 1) and "What" (Layer 2). The expertise gap disappears. Your software adapts to how you work.


Ready to try Malleable? We're working with fast-growing companies to transform how their teams work. Join our waitlist to get early access.


Previous: What Is a Workflow, Really?

Next: How does the three-layer model scale when workflows get more complex—with handoffs between teams, branching logic, or integrations across multiple systems? (coming soon)


We're Hiring Founding Engineers

Want to help build this? We're solving technically fascinating problems at the forefront of agentic software.

What if you could program workflows the way you'd explain them to a coworker? What if the documents you use to onboard a team member could become the executable specification?

We're building that abstraction. Goals, structure, and requirements run the software. Implementation adapts at runtime.

A few of the problems we're working on:

  • Creating runtime environments that operate on higher-level intent and structure (not implementation), where agents execute within guardrails while adapting to context
  • Creating feedback loops where workflows improve over time and new use cases become easier to adopt
  • Building debugging tools for systems where the implementation emerges at runtime

If you're interested in Malleable software, agentic systems, or knowledge representation—applied to the messy reality of how teams actually work—reach out: jobs@usemalleable.com