Pattern Context
You're running multiple open-source platforms in your energy operation. ERPNext manages your business processes, Nextcloud handles document collaboration, OpenProject tracks capital projects, and Matrix/Element provides secure communications. Meanwhile, your operational systems—SCADA historians, asset management databases, GIS platforms—need to exchange data with these business tools.
The naive approach is point-to-point integration. Connect ERPNext directly to your asset database. Have Nextcloud pull files from your document management system. Let OpenProject sync with your project accounting module. Within six months, you have a rat's nest of custom scripts, API calls that break when versions change, and no one person who understands the complete data flow.
We've seen utilities spend $400K on integration consultants to untangle these messes. The problem isn't the platforms—it's the architecture.
The Problem Statement
Energy operations require data to flow between three distinct operational domains:
- Business systems: ERPNext for procurement and financials, OpenProject for capital planning, HR systems for workforce management
- Collaboration platforms: Nextcloud for engineering documents, Matrix for incident communications, knowledge bases for procedures
- Operational technology: SCADA historians, asset health monitoring, outage management systems, work order platforms
Each domain has different security requirements. Your OT environment might be air-gapped or highly restricted under NERC CIP-005. Your business systems need audit trails for financial compliance. Your collaboration tools need to support mobile access for field crews.
Point-to-point integration creates N*(N-1)/2 connections. With just six platforms, that's fifteen integration points to maintain. Each integration is a potential security boundary violation, a point of failure, and a maintenance burden when platforms upgrade.
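The growth rates are easy to check in a few lines of Python:

```python
def point_to_point_links(n: int) -> int:
    """Every pair of systems needs its own integration: N*(N-1)/2."""
    return n * (n - 1) // 2

def hub_and_spoke_links(n: int) -> int:
    """Each system connects exactly once, to the hub."""
    return n
```

Six platforms means fifteen point-to-point links but only six hub connections; at ten platforms the gap is forty-five versus ten.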
More critically, point-to-point architectures make it nearly impossible to implement consistent data governance. Which system is the source of truth for asset serial numbers? Where do you enforce data validation rules? How do you audit who accessed what data across system boundaries?
Solution Architecture: The Integration Hub
The hub-and-spoke pattern designates one platform as the central integration hub. Every other system connects only to the hub, never directly to each other. In our energy sector deployments, that hub is n8n.
Why n8n specifically? Three reasons that matter in operational environments:
First, it runs entirely self-hosted with no cloud dependencies. You can deploy it inside your NERC CIP boundary, air-gapped if necessary. We've run n8n instances that have never touched the public internet, pulling all integration logic from version control on an isolated network.
Second, it provides a visual workflow canvas that operations staff can actually understand. When your protection engineer needs to trace why asset data isn't flowing from Maximo to ERPNext, they can open the n8n workflow and see the exact transformation steps. No digging through Python scripts or reverse-engineering REST API calls.
Third, n8n's 400+ pre-built integrations cover both modern platforms (ERPNext, Nextcloud, PostgreSQL) and legacy protocols common in energy operations. We've built workflows that poll Modbus registers, parse IEC 61850 messages, and write to vintage Oracle databases—all in the same visual canvas.
Hub Architecture Layers
A production n8n hub in an energy operation runs as three logical layers:
Ingestion layer: Workflows that receive data from source systems. These workflows do minimal processing—they validate basic data types, check authentication, and queue messages for processing. If your SCADA historian pushes equipment alarms, the ingestion workflow confirms the message format and writes to a PostgreSQL queue table. That's it.
Why queue instead of processing immediately? Because source systems have unpredictable data rates. During a storm event, your outage management system might push 10,000 updates in ten minutes. The ingestion layer queues these without blocking the source system, then processing workflows consume the queue at a sustainable rate.
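In n8n this contract lives in a workflow's nodes, but the behavior is simple enough to sketch in plain Python. Here SQLite stands in for the PostgreSQL queue table, and the field names (`asset_id`, `alarm_code`) and the `ingest_alarm` helper are illustrative, not part of any real SCADA payload:

```python
import json
import sqlite3
from datetime import datetime, timezone

# SQLite stands in for the PostgreSQL queue table; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE message_queue (
        id            INTEGER PRIMARY KEY,
        source_system TEXT NOT NULL,
        message_type  TEXT NOT NULL,
        payload       TEXT NOT NULL,          -- JSONB in Postgres
        status        TEXT DEFAULT 'pending',
        received_at   TEXT,
        retry_count   INTEGER DEFAULT 0
    )""")

REQUIRED_FIELDS = {"asset_id", "alarm_code", "timestamp"}

def ingest_alarm(source: str, raw: dict) -> bool:
    """Validate basic message shape only, then queue. No business logic here."""
    if not REQUIRED_FIELDS.issubset(raw):
        return False  # reject malformed messages at the boundary
    conn.execute(
        "INSERT INTO message_queue (source_system, message_type, payload, received_at)"
        " VALUES (?, ?, ?, ?)",
        (source, "equipment_alarm", json.dumps(raw),
         datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()
    return True
```

The point is what the function does not do: no lookups, no enrichment, no downstream calls. It answers fast so the source system is never blocked.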
Transformation layer: Workflows that normalize, enrich, and validate data. This is where business logic lives. A transformation workflow takes a raw work order from your CMMS, looks up the asset serial number in ERPNext, validates that the assigned technician has the required certifications in your HR system, and formats the output for delivery.
Transformation workflows are stateless. They read from the queue, perform their logic, write results to an output queue, and acknowledge the input message. If a workflow fails, the message stays in the queue for retry. This makes the system resilient to temporary outages in downstream systems.
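The consume-transform-acknowledge loop can be sketched like this (again SQLite in place of Postgres; `in_queue`, `out_queue`, and `MAX_RETRIES` are illustrative names, not n8n internals):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE in_queue (
        id INTEGER PRIMARY KEY, payload TEXT,
        status TEXT DEFAULT 'pending', retry_count INTEGER DEFAULT 0);
    CREATE TABLE out_queue (id INTEGER PRIMARY KEY, payload TEXT);
""")

MAX_RETRIES = 5

def process_next(transform) -> bool:
    """Claim one pending message, run stateless logic, ack or leave for retry."""
    row = conn.execute(
        "SELECT id, payload FROM in_queue WHERE status = 'pending' "
        "AND retry_count < ? ORDER BY id LIMIT 1", (MAX_RETRIES,)).fetchone()
    if row is None:
        return False                      # nothing to do
    msg_id, payload = row
    try:
        result = transform(payload)       # enrichment / validation goes here
        conn.execute("INSERT INTO out_queue (payload) VALUES (?)", (result,))
        conn.execute("UPDATE in_queue SET status = 'done' WHERE id = ?", (msg_id,))
    except Exception:
        # Failure: the message stays pending and the retry counter goes up.
        conn.execute("UPDATE in_queue SET retry_count = retry_count + 1 "
                     "WHERE id = ?", (msg_id,))
    conn.commit()
    return True
```

Because all state lives in the queue tables, any instance of the workflow can pick up where a failed run left off.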
Delivery layer: Workflows that push processed data to destination systems. These workflows handle the messy details of each platform's API—authentication tokens, rate limiting, retry logic, idempotency checks. If ERPNext's API returns a 429 rate limit error, the delivery workflow backs off exponentially and retries without bothering the transformation layer.
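The backoff behavior, stripped to its essentials, looks like this; `send` is any callable returning an HTTP-style status code, and the delay schedule is one plausible choice, not a prescription:

```python
import time

def deliver_with_backoff(send, payload, max_attempts=5, base_delay=1.0,
                         sleep=time.sleep):
    """Retry on HTTP 429 with exponential backoff; other statuses return as-is."""
    for attempt in range(max_attempts):
        status = send(payload)
        if status != 429:
            return status
        sleep(base_delay * (2 ** attempt))   # 1s, 2s, 4s, 8s, ...
    return 429  # retries exhausted; message stays queued for later
```

The transformation layer never sees any of this: from its perspective the message was handed off, and the delivery workflow owns the struggle with the destination API.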
Security Boundaries
In NERC CIP environments, the hub typically runs in a DMZ between your business network and OT network. Two separate n8n instances—one facing each network—communicate through a unidirectional data diode or tightly controlled firewall rules.
The OT-side hub can pull data from SCADA systems and push processed results through the boundary. The business-side hub receives that data and distributes to ERPNext, Nextcloud, and other business platforms. This maintains network segmentation while enabling necessary data flows.
We implement separate service accounts for each spoke system, with credentials stored in n8n's encrypted credential store. Each workflow runs with minimum necessary permissions. The workflow that reads from your asset database doesn't have credentials for your financial system.
Implementation Considerations
Queue Management
You need a persistent queue. We use PostgreSQL with a simple schema: message ID, source system, message type, payload (JSONB), status, timestamps, retry count. n8n workflows write to this queue using the Postgres node, which is far more reliable than trying to hold state in n8n's internal database.
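A minimal version of that schema in Postgres might look like the following (column names illustrative; adapt statuses and indexes to your consumers):

```sql
CREATE TABLE message_queue (
    id            BIGSERIAL   PRIMARY KEY,
    source_system TEXT        NOT NULL,
    message_type  TEXT        NOT NULL,
    payload       JSONB       NOT NULL,
    status        TEXT        NOT NULL DEFAULT 'pending',  -- pending | processing | done | failed
    received_at   TIMESTAMPTZ NOT NULL DEFAULT now(),
    processed_at  TIMESTAMPTZ,
    retry_count   INTEGER     NOT NULL DEFAULT 0
);

CREATE INDEX ON message_queue (status, received_at);
```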
For high-volume scenarios—think real-time sensor data—consider Redis for hot queues with PostgreSQL as the persistent backing store. Messages stay in Redis for fast access, but get written to Postgres for durability and audit.
Monitoring and Alerting
Every workflow needs instrumentation. We add a logging step to each major workflow stage that writes to a dedicated monitoring database: workflow name, execution ID, stage name, duration, success/failure, record count processed.
This monitoring data feeds Grafana dashboards that the operations team watches. When the "Work Orders: CMMS to ERPNext" workflow shows increasing execution time, we investigate before users notice delays. When retry counts spike on the delivery layer, we know a downstream system is struggling.
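One way to implement that logging step is a small wrapper around each stage; here is a sketch with SQLite standing in for the monitoring database (`workflow_log` and the column set mirror the fields listed above):

```python
import sqlite3
import time
from contextlib import contextmanager

# SQLite stands in for the dedicated monitoring database.
mon = sqlite3.connect(":memory:")
mon.execute("""CREATE TABLE workflow_log (
    workflow TEXT, execution_id TEXT, stage TEXT,
    duration_ms REAL, success INTEGER, record_count INTEGER)""")

@contextmanager
def instrumented(workflow: str, execution_id: str, stage: str,
                 record_count: int = 0):
    """Time a workflow stage and record the outcome for Grafana to query."""
    start = time.monotonic()
    success = 1
    try:
        yield
    except Exception:
        success = 0
        raise                      # failures still propagate to the caller
    finally:
        mon.execute(
            "INSERT INTO workflow_log VALUES (?, ?, ?, ?, ?, ?)",
            (workflow, execution_id, stage,
             (time.monotonic() - start) * 1000, success, record_count))
        mon.commit()
```

Inside n8n the same idea is a Postgres node after each stage; the essential discipline is that every stage writes a row, success or failure.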
n8n can also post to Matrix channels for critical failures. Our standard pattern: workflows post to a #integration-alerts channel with severity tags. Webhook workflows in n8n watch these messages and escalate to pager systems if certain patterns appear.
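Under the hood this is the standard Matrix client-server API (`PUT /_matrix/client/v3/rooms/{roomId}/send/m.room.message/{txnId}`). A sketch that builds such a request — the homeserver URL, room ID, and severity prefix are illustrative, and authentication via the access-token header is left to the HTTP client:

```python
import urllib.parse

def build_alert_request(homeserver: str, room_id: str, txn_id: str,
                        severity: str, text: str):
    """Build a Matrix client-server API request for an alert message.

    Returns (method, url, json_body). Sending it, with an Authorization
    bearer token, is the HTTP client's job.
    """
    path = (f"/_matrix/client/v3/rooms/{urllib.parse.quote(room_id)}"
            f"/send/m.room.message/{txn_id}")
    body = {"msgtype": "m.text", "body": f"[{severity.upper()}] {text}"}
    return "PUT", homeserver.rstrip("/") + path, body
```

The severity tag in the message body is what downstream watcher workflows pattern-match on.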
Version Control and Deployment
n8n workflows export as JSON. We store these in GitLab with standard branching and review processes. When you modify the "Asset Data Sync" workflow, you create a branch, make changes in a dev n8n instance, export the JSON, commit to Git, and open a merge request.
Production deployments use n8n's CLI to import workflows from the approved Git commits. This isn't as polished as modern GitOps tooling, but it's sufficient for audit requirements and rollback scenarios.
One gotcha: n8n's credentials don't export for security reasons. You need a separate process to manage production credentials, typically using environment variables and a secrets management tool like Vault.
Performance Characteristics
A single n8n instance on modest hardware (4 cores, 16GB RAM) handles hundreds of workflows executing thousands of times per day. We've run production hubs processing 2-3 million messages monthly without performance issues.
The bottleneck is usually downstream systems, not n8n itself. ERPNext's REST API, for instance, slows significantly above 10 requests per second. The delivery layer needs rate limiting and queuing to avoid overwhelming spoke systems.
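A simple client-side limiter for the delivery layer is a token bucket; this sketch (the class and its parameters are ours, not an n8n feature) allows a burst up to `capacity` and then throttles to `rate` requests per second:

```python
import time

class TokenBucket:
    """Client-side rate limiter: bursts up to `capacity`, sustained `rate`/sec."""

    def __init__(self, rate: float, capacity: float, now=time.monotonic):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.now = now              # injectable clock, handy for testing
        self.last = now()

    def allow(self) -> bool:
        """Consume one token if available; otherwise the caller should wait."""
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

When `allow()` returns False, the delivery workflow leaves the message in the queue rather than hammering the spoke system.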
For truly high-throughput scenarios—real-time telemetry processing, for example—you might stream through Kafka and use n8n only for business-logic workflows that operate on aggregated data. Don't try to push individual sensor readings through n8n; aggregate to 1-minute summaries first.
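The pre-aggregation step is trivial to express; a sketch that collapses raw (timestamp, value) samples into per-minute min/mean/max summaries before they ever reach a workflow:

```python
from collections import defaultdict
from statistics import mean

def minute_summaries(readings):
    """Collapse (epoch_seconds, value) samples into per-minute summaries.

    Returns {minute_start_epoch: (min, mean, max, sample_count)}.
    """
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[int(ts) // 60 * 60].append(value)
    return {minute: (min(v), mean(v), max(v), len(v))
            for minute, v in sorted(buckets.items())}
```

A sensor reporting once a second becomes 60 rows per hour instead of 3,600, which is a volume n8n handles comfortably.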
Real-World Trade-Offs
Centralization Risk
The hub becomes a single point of failure. If n8n goes down, data stops flowing between all systems. We mitigate this with active-passive clustering: two n8n instances sharing a PostgreSQL backend, with keepalived managing a floating IP. The passive instance takes over in under 30 seconds if the active instance fails.
For truly critical data flows, implement dual-hub architectures where urgent operational data (like SCADA alarms) flows through a dedicated hub separate from routine business integrations. This costs more infrastructure but prevents a workflow error in your invoice processing from impacting grid operations.
Learning Curve
n8n's visual interface is approachable, but building robust production workflows requires understanding error handling, state management, and data transformation patterns. Expect 2-3 months for your team to become proficient.
The alternative—writing custom integration code—has a higher ceiling but takes longer to deliver initial value. With n8n, junior developers can build working integrations in days, even if optimizing for production takes weeks.
Vendor Lock-In Concerns
n8n is source-available rather than strictly open source (the core moved from Apache 2.0 to the fair-code Sustainable Use License; enterprise features are proprietary), and you are coupling your architecture to its workflow paradigm. Migrating away would require rewriting every integration.
That said, this is less risky than coupling to a closed SaaS platform. You have the source code, can run any version indefinitely, and can modify it for internal use if absolutely necessary. The n8n project is actively developed with a healthy community.
When Point-to-Point Makes Sense
Some integrations are simple enough that routing through a hub adds complexity without benefit. If you just need ERPNext to back up its database to Nextcloud nightly, a cron job with rsync is fine.
The hub pattern pays off when you have:
- More than 4-5 integrated systems
- Complex transformation logic between systems
- Audit requirements for data flows
- Multiple consumers of the same source data
- Operational staff who need to troubleshoot integrations
If you're just connecting two platforms with straightforward data mapping, skip the hub and use direct integration.
The Verdict
After running hub-and-spoke architectures in production for three years across multiple utilities, we deploy this pattern by default for any energy operation with more than three integrated platforms. The upfront investment—typically 3-4 weeks to establish the hub infrastructure, monitoring, and first few workflows—pays back within six months through reduced integration maintenance burden.
The key insight is that integration complexity grows quadratically with point-to-point connections (N*(N-1)/2 links) but only linearly with hub-and-spoke. Your sixth platform integration is easier than your second because the hub infrastructure already exists.
Start with n8n as your hub. Deploy it in your architecture's appropriate security zone, implement the three-layer workflow pattern (ingest, transform, deliver), instrument everything for monitoring, and version control your workflows. Begin with one critical data flow—typically asset data synchronization between your CMMS and ERPNext—and prove the pattern before expanding.
The hub-and-spoke pattern isn't architecturally elegant in the abstract sense, but it's maintainable by operations teams in real-world conditions. That matters more than elegance when your grid operations depend on reliable data flows.