How an MCP Server Turned n8n from Frustrating to Fast
I was sitting in the audience of a Product Makers Romania live session, watching Adi build an automation workflow in real time, and the moment that got me was not the workflow itself. It was when he dictated a prompt into Claude Code, Claude read the existing workflow through an MCP server, and spat out a new one in seconds. No copy-paste. No downloading JSON files. No switching tabs. Just “go look at that workflow and make me a new one without the Slack part.”
The setup: 30 mentors, 300 mentees, and a lot of forms
The Product Mentoring Programme is not small. We are talking 30+ mentors, 300+ mentees, 12 tracks, and a team of volunteer organizers who need to know what is happening without drowning in admin work. The session was Adi walking two new organizers, Ioana and Diana, through the n8n workflows that keep the whole thing running.
The core flows are deceptively simple on paper. A mentor fills out a form on QuestionScout. The form submission triggers a webhook. The webhook pushes data into a Google Sheet. Twice a day, a scheduled trigger reads the sheet and sends a summary to Slack. That is it. Mentor signup, done.
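To make the scheduled half concrete: the twice-a-day summary step is essentially "turn sheet rows into a Slack message." Here is a minimal sketch of that logic as it might live in an n8n JavaScript code node; the column names (`name`, `expertise`, `submittedAt`) are assumptions, not the programme's real sheet schema:

```javascript
// Build the twice-daily Slack summary from rows read out of the sheet.
// Field names are assumed for illustration; the real columns may differ.
function buildSlackSummary(rows) {
  if (rows.length === 0) {
    return "No new mentor signups since the last check.";
  }
  const lines = rows.map(
    (r) => `• ${r.name} (${r.expertise}) - signed up ${r.submittedAt}`
  );
  return `New mentor signups (${rows.length}):\n${lines.join("\n")}`;
}

// Example usage:
console.log(
  buildSlackSummary([
    { name: "Ana Pop", expertise: "Product Strategy", submittedAt: "2025-01-10" },
  ])
);
```

The empty-rows branch matters: without it the scheduled trigger posts a headerless message twice a day even when nothing happened.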
But “that is it” hides the interesting parts.
The manual mapping problem nobody warns you about
Here is what I appreciated about this session: Adi did not hide the ugly parts. He showed the field mapping step where you have to manually connect each form field to a spreadsheet column. He tried the automatic mapping; it pulled in a ton of noise he did not want, so he switched to manual. One field at a time. Name, email, phone, LinkedIn, expertise, topics, photo, introduction, submission date.
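What that manual mapping buys you, in code terms, is an explicit allowlist. A hedged sketch in plain JavaScript, with assumed webhook field names, of keeping only the fields you want and silently dropping the noise:

```javascript
// Explicit field-to-column map: only keys listed here reach the sheet.
// Both the payload keys and the column names are assumptions; the real
// QuestionScout field IDs will differ.
const FIELD_MAP = {
  name: "Name",
  email: "Email",
  phone: "Phone",
  linkedin: "LinkedIn",
  expertise: "Expertise",
  topics: "Topics",
  photo: "Photo",
  introduction: "Introduction",
  submittedAt: "Submission date",
};

// Build a sheet row from a webhook payload, ignoring everything
// the form sends that is not in the map.
function toSheetRow(payload) {
  const row = {};
  for (const [key, column] of Object.entries(FIELD_MAP)) {
    row[column] = payload[key] ?? "";
  }
  return row;
}
```

The tedium of clicking this out one field at a time is real, but the result is the same as the allowlist above: automatic mapping is "take everything," manual mapping is "take exactly this."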
It is tedious. It is the kind of work that makes you question your life choices. And it is exactly the kind of thing nobody shows in a polished demo.
He also showed a trick he learned from Georgiana, a QA person in the programme: pin your test data. Instead of filling out the form every single time you want to test a step, you pin the webhook output and reuse it. He said he wasted hours before learning this. I believe him. I have been there with different tools, different contexts, same frustration.
The MCP moment that changed everything
Adi was transparent about his history with n8n. He started using it in August last year, and it was rough. Copy-pasting between Claude and n8n, taking screenshots of errors, manually downloading JSON workflow files to give Claude context. He described hitting a wall where he simply could not make progress.
Then he discovered the n8n MCP server. Game changer. His words, not mine, but I agree with the assessment.
With the MCP connected, Claude Code can read workflows directly, understand the node structure, create new workflows, update existing ones, and debug problems without Adi having to be the middleman shuttling information back and forth. He showed it live: asked Claude to describe an existing workflow, then asked it to create a simplified copy. It worked. The new workflow had the same webhook configuration, the same field mappings, ready to go.
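If it helps to picture the wiring: Claude Code can register MCP servers through a project-level `.mcp.json` file. The entry below is a sketch only. The server package name, the environment variable names, and the assumption that an API URL plus key is all the server needs are things to verify against the n8n MCP server's own documentation, not facts from the session:

```json
{
  "mcpServers": {
    "n8n": {
      "command": "npx",
      "args": ["-y", "n8n-mcp"],
      "env": {
        "N8N_API_URL": "https://your-n8n-instance/api/v1",
        "N8N_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

Once something like this is in place, "go look at that workflow" stops being a metaphor: the LLM has an actual read/write channel to the n8n instance.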
The friction reduction is the point. Adi said a flow that took him 8 hours with the MCP would have taken 3 days without it. Not because the MCP is magic, but because it eliminates the constant context-switching between tools. Death by a thousand paper cuts, as he put it. I use that exact phrase for half the problems I solve at work.
The badge generator: where it gets fun
The more complex workflow was the badge generator. When a mentee completes the programme, they get a personalized badge they can post on LinkedIn. The flow goes: QuestionScout form, extract data, look up the mentor mapping in a spreadsheet, build a payload, send it to RenderForm API, download the generated image, rename the file to something human-readable (instead of a random hash), split the output to both email it to the mentee and upload it to Google Drive, and log everything to an audit sheet.
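The rename step is a good example of the small glue code Claude wrote throughout this flow. A plausible sketch, with the naming scheme itself being my assumption rather than the programme's actual convention:

```javascript
// Turn a random-hash filename into something human-readable,
// e.g. "badge-ana-pop.png". The "badge-<slug>" convention is an
// assumption for illustration, not the real naming scheme.
function badgeFilename(menteeName, extension = "png") {
  const slug = menteeName
    .trim()
    .toLowerCase()
    .normalize("NFD")                 // split accented characters apart
    .replace(/[\u0300-\u036f]/g, "")  // strip the diacritic marks
    .replace(/[^a-z0-9]+/g, "-")      // collapse non-alphanumerics to hyphens
    .replace(/^-+|-+$/g, "");         // trim stray leading/trailing hyphens
  return `badge-${slug}.${extension}`;
}
```

The diacritic handling matters for a Romanian programme: names with ș, ț, or ă should still produce clean ASCII filenames.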
And none of the code was written by Adi. Claude wrote the JavaScript nodes. Claude built the formatting logic for Slack messages. Claude handled the LinkedIn URL normalization when someone forgot to include https://. Adi’s job was to describe what he wanted and debug when things broke. That is product thinking applied to automation. You define the outcome, the tool figures out the implementation.
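That URL normalization is simple but easy to get subtly wrong. A hedged sketch of what such a node might do, covering just the "forgot https://" case mentioned in the session (the real workflow's rules may be stricter):

```javascript
// Normalize LinkedIn URLs pasted without a scheme.
// Only handles the missing-scheme case; real validation is out of scope.
function normalizeLinkedInUrl(input) {
  const url = input.trim();
  if (url === "") return "";
  if (/^https?:\/\//i.test(url)) return url;     // already has a scheme
  return `https://${url.replace(/^\/\//, "")}`;  // bare or protocol-relative
}
```

The point is less this specific function than the division of labor: Adi described the failure mode ("someone forgot https://"), and the implementation detail was Claude's problem.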
LLMs as amplifiers, not replacements
The best line of the session came when someone asked what skills matter in this world. Adi’s answer was blunt: LLMs are amplifiers. If you cannot explain what you want to Ioana and Diana, you will not be able to explain it to Claude either. If you communicate well with humans, you will communicate well with AI. The reverse is also true.
He mentioned using WhisperFlow to dictate his prompts, and being embarrassed at first because his spoken thoughts were messy and repetitive. But then he would tell Claude to “play it back to me” and see how well it structured his rambling. That is a workflow I have stolen for my own use. Thinking out loud into a dictation tool, then letting the LLM organize it, is genuinely better than trying to type perfectly structured prompts.
Why n8n and not just Claude Code for everything
Someone asked this directly: why not do everything in Claude Code or Cursor? Adi’s rule is practical. If it is something only he runs, even if he runs it 15 times, he keeps it in Claude Code. If other people depend on it and it needs to run without his laptop being open, it goes to n8n. The mentoring programme workflows need to run 24/7 regardless of whether Adi is at his desk. That is the whole point.
He even showed a personal example going the other direction. He had a client time-tracking workflow in n8n that reads color-coded calendar events and generates billing summaries. It worked, but he was already living in Claude Code all day, so he rebuilt it as a Claude Code skill. One less browser tab. One less context switch. The right tool for the right context.
The real takeaway
The session was not really about n8n. It was about how much faster you can move when you reduce the friction between your intent and the tool that executes it. MCP servers are the current best answer to that friction for workflow automation. They let the LLM see what you have built, understand it, and modify it without you being the translator.
Adi ended by asking Ioana and Diana how ready they felt on a scale of 1 to 10. Ioana said 8. Diana said “10, but I feel ready to break some things.” That is the best answer. You are never ready to use a new tool until you are ready to break it.
I walked away wanting to connect more of my own tools through MCP servers. Not because it is trendy, but because I have felt that same copy-paste friction Adi described, and I know how much time it eats.
If you are running community operations manually, look into n8n with an MCP connection to your LLM of choice. The setup cost is real, but the iteration speed after that is something else entirely. And if you are presenting your automation work, do it like Adi did: live, messy, with real data and real mistakes. That is how people actually learn.