I Almost Built This in n8n

Brandon Gadoci

April 21, 2026

Tonight I sat down to build something I have been wanting for months. I wanted a way to bookmark a tweet from my phone, have something pick it up automatically, research the underlying article, write a draft in our voice, and park it on our website for me to review. Same pipeline every AI Ops consulting firm I know has been trying to wire up. Same pipeline every operator wants. Same pipeline that usually takes a weekend to duct-tape together with n8n, a webhook or two, and a Twitter API key that costs real money.

I built it in three hours without touching n8n, a webhook, or the Twitter API.

The stack I almost built

The original plan was the familiar one. An n8n workflow listening on a Slack channel I would share tweets to, hitting the Twitter API to fetch the full post, calling Claude through the API for the draft, posting back to our CMS, and notifying me in Slack. Probably six nodes, a couple of credential blocks, a Twitter developer account, and a Tuesday I was not looking forward to. I opened that project, read the pricing page for the X API, and decided to think harder.

My next thought was smaller. What if instead of calling the Twitter API directly, I shared tweets to a Slack channel and let n8n pick them up from there? Slack is free, the URL is in the message, and n8n already has a Slack trigger. That saved the API cost. It still needed all the other plumbing: fetch the tweet content somehow, call Claude, handle the CMS post, deal with retries, handle approval. Real work, but workable.

Then I stopped. I had been defaulting to n8n because that is what you reach for. I had not asked whether the glue layer needed to exist at all.

The stack I actually built

What I built instead is two Claude Code Cloud scheduled routines. The first runs at 6am daily. It reads my private Slack channel, pulls the URLs I dropped in the last 24 hours, fetches the tweet content through a public mirror instead of the paid API, runs a web search to flesh out context, loads our writing style and overview from skill files already in our repo, dedupes against articles we have already published, writes a 900-word draft, creates a Slack Canvas for mobile review, and posts a thread message explaining how to approve, reject, or edit the draft. The second routine runs hourly. It reads my reactions and replies and either publishes the article, deletes it, or regenerates it based on feedback. I can manage the entire pipeline from my phone in Slack.
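For concreteness, the entire daily routine can live in one markdown instruction file of the kind described here, checked into the repo and run on a schedule. This is a hedged sketch, not the actual file; every channel name, path, and emoji convention below is hypothetical.

```markdown
# Daily draft routine (scheduled, 6am)
<!-- Hypothetical sketch of a scheduled-routine prompt file; all names invented -->

1. Read the last 24 hours of messages in the private Slack channel #article-ideas.
2. Extract every tweet URL. For each one:
   - Fetch the tweet content via a public mirror; if that is blocked, fall back to web search.
   - Run a web search on the underlying article to gather context.
3. Load our writing style from `skills/voice.md` and the firm overview from `skills/overview.md`.
4. Skip any topic that duplicates an article we have already published.
5. Write a roughly 900-word draft in our voice.
6. Create a Slack Canvas with the draft for mobile review.
7. Post a thread message explaining how to approve, reject, or request edits.
```

The hourly review routine would be a second, shorter file of the same shape: read reactions and replies, then publish, delete, or regenerate.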

The pipeline uses zero integration code. No n8n, no Zapier, no Make, no webhook handlers. No Python scripts stitching APIs together. The model reads its own instructions from a markdown file in our git repo. It calls tools like Slack and our CMS through standardized MCP connectors. It makes its own branching decisions. It handles its own error recovery. When the tweet mirror got blocked by the cloud IP range, the model pivoted to web search on its own, found the content another way, and kept going. No error-handling node was needed. It was a sentence in the prompt, and the model did the rest.
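The "standardized MCP connectors" mentioned above are typically declared in a small JSON config rather than wired up in code. A minimal sketch, assuming Claude Code's project-level `.mcp.json` convention; the server names, the Slack server package, and the CMS endpoint are all illustrative assumptions, not details from the post.

```json
{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "env": { "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}" }
    },
    "cms": {
      "type": "http",
      "url": "https://cms.example.com/mcp"
    }
  }
}
```

With connectors declared this way, the branching, retries, and error recovery described above live in the prompt, not in integration code.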

The framework is shrinking into the model

For a long time, the shape of automation has been: a model does one thing, surrounded by a framework that orchestrates everything around it. Make an API call, pass the result to the model, take the model's output, pass it to another service, branch on some condition. The framework was the hard part. The model was a smart function in the middle. Teams learned n8n and Zapier because those platforms were the only way to get models to do useful end-to-end work.

What I saw tonight is that the framework is shrinking into the model. Not disappearing entirely, but collapsing from a full orchestration layer into a handful of MCP connectors and a well-written prompt that the model runs on a schedule. The model is not a node in the workflow anymore. The model is the workflow.

We have been telling clients for two years that AI transformation is about rebuilding workflows rather than adding tools. Tonight was a small example of the same principle applied to my own stack. The question I kept not asking was whether the integration layer even needed to exist anymore. I almost reached for n8n because that is what you reach for. The actual question I should have asked first is the one we ask clients: what does the work actually require, and what is residual from a pre-agentic way of building?

What this means for AI Ops leaders

The practical implication is uncomfortable. A lot of the integration infrastructure being built right now will look overbuilt in eighteen months. Not because the tools are wrong, but because a layer of the stack is compressing. Teams that are standardizing on Zapier, n8n, or Make as a core platform commitment should think carefully about which workflows really need that orchestration and which ones can live entirely inside a model on a schedule. It is not all of them. Heavy stateful processes, long-running jobs with strict SLAs, regulated workflows with auditability requirements, and anything that needs to run at sub-minute latency still want a real orchestration framework. But a lot of the workflows teams are writing glue code for right now are the kind the model can just hold.

What I built tonight is what we usually call a Level 3 custom application. In our framing, Level 3 work is bespoke AI engineering that previously required months of build time, custom data pipelines, and a real team. The pipeline I deployed before midnight would have qualified as a Level 3 project a year ago. It was not. It was a weeknight.

The stack I almost built was the stack I would have built a year ago. The stack I actually built could not have existed a year ago. Remote agent runs, MCP connectors, and skill-packed repos together produced an architecture where the AI is not something you integrate with. It is the integration.

Someone will use this pattern to ship something material to their organization this week. Probably several someones. The operators who move fastest from here will be the ones who keep asking, every time they feel the reflex to reach for another integration tool, whether the tool is still needed at all.

I am asking that question a lot harder now than I was at dinner.
