In This Article
- The Automation Trap: Automating Bad Processes
- Best Practice 1: Redesign Before You Automate
- Best Practice 2: Test Automation Like Software
- Best Practice 3: Build the Automation CoE
- Best Practice 4: Governance That Scales
- Best Practice 5: Monitor Automated Processes Continuously
- Best Practice 6: Change Management for Process Users
- Scaling From 10 to 100 Automated Processes
- Go Deeper
The Automation Trap: Automating Bad Processes
A shared services team automates their accounts payable process. The manual process has 7 steps, 3 approval loops, and 2 rework cycles. The automation team faithfully automates all 12 touchpoints — including the 3 approval loops that trace back to a 2009 policy nobody can explain, and the 2 rework cycles that exist because the intake form doesn't validate input correctly. The automated process is faster than the manual one (10 minutes instead of 45), but it's still fundamentally inefficient: 5 of the 12 touchpoints shouldn't exist. Automating a bad process produces a fast bad process. Hence the first best practice: redesign the process before automating it.
Best Practice 1: Redesign Before You Automate
Before building the automation, analyze the current process and ask three questions:
- Which steps are unnecessary? Approval loops for amounts that don't warrant approval, data re-entry between systems that could share data, and manual checks that validate things the system already validated.
- Which steps cause rework? Missing information on the intake form triggers rejection, re-submission, and re-processing.
- Which steps exist because of constraints that no longer apply? The 3-level approval existed because the paper form couldn't track who approved; the digital workflow can.
Eliminate unnecessary steps, fix the root causes of rework, and simplify approval chains. Then automate the streamlined process. The redesigned process typically has 40-60% fewer steps than the original, meaning the automation is simpler, cheaper, and more reliable.
Best Practice 2: Test Automation Like Software
Automated processes are software. They need software testing practices:
Unit testing: Each automation component (document extraction, decision rule, API call, data transformation) tested independently with known inputs and expected outputs. Does the invoice extraction correctly handle the 15 invoice formats you receive? Does the approval routing rule correctly route $15K invoices to VP and $5K invoices to manager?
Integration testing: End-to-end process tested with realistic data through all systems. Submit a test invoice → verify extraction → verify PO match → verify GL coding → verify approval routing → verify ERP posting → verify payment scheduling. Every step validated against expected behavior.
Regression testing: After any change to the automation (new rule, updated extraction model, modified workflow), re-run the full test suite to verify nothing broke. Automated regression suites run in CI/CD — no deployment without passing tests.
Exception testing: Deliberately test exception paths — what happens when the invoice has no PO? When the approval times out? When the ERP is unavailable? When the document is unreadable? Exception paths handle 15-25% of real-world volume — they must be tested, not assumed.
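To make the unit-testing and exception-testing practices concrete, here is a minimal sketch in Python. The routing threshold, function names, and the no-PO exception path are illustrative assumptions, not any platform's actual API:

```python
# Hypothetical sketch: a unit-testable approval routing rule plus a
# deliberately exercised exception path. Names and the $10K threshold
# are illustrative assumptions.

def route_approval(amount: float) -> str:
    """Route an invoice to an approver tier based on amount."""
    if amount < 0:
        raise ValueError("invoice amount cannot be negative")
    if amount <= 10_000:
        return "manager"
    return "vp"

def extract_po_number(invoice: dict) -> str:
    """Exception path: invoices without a PO go to the manual queue."""
    po = invoice.get("po_number")
    if not po:
        raise LookupError("no PO on invoice - route to exception queue")
    return po

# Unit tests: known inputs, expected outputs
assert route_approval(5_000) == "manager"   # $5K invoice -> manager
assert route_approval(15_000) == "vp"       # $15K invoice -> VP

# Exception test: verify the no-PO path is handled, not assumed
handled = False
try:
    extract_po_number({"vendor": "Acme"})
except LookupError:
    handled = True
assert handled
```

In a real suite these assertions would live in a test framework and run in CI/CD, so no change to the routing rule or extraction logic deploys without passing them.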
Best Practice 3: Build the Automation CoE
The Center of Excellence (CoE) is the organizational function that scales automation from "a few bots built by IT" to "enterprise capability serving every department." The CoE provides:
Platform management: Maintain the automation platform (Power Automate, UiPath, ServiceNow). Manage licenses, environments, security, and updates. Provide the infrastructure that process automation teams build on — they shouldn't manage infrastructure, just build automations.
Standards and patterns: Define how automations are built — naming conventions, error handling patterns, logging requirements, testing standards, and documentation templates. Standards ensure every automation is maintainable by anyone on the team — not just the person who built it.
Reusable components: Build once, reuse across automations — document extraction templates, approval workflow patterns, ERP integration connectors, email notification templates. Reusable components reduce development time by 40-60% for new automations.
Training and enablement: Train citizen developers (business users building simple automations) and professional developers (building complex enterprise automations). Certification programs ensure quality. Office hours provide guidance. The CoE multiplies automation capacity by enabling business teams to build their own automations — not by centralizing all development.
Best Practice 4: Governance That Scales
Automation governance prevents: ungoverned bots accessing production systems without security review, broken automations running undetected (processing incorrect data or failing silently), and automation sprawl (200 automations nobody tracks, some redundant, some abandoned). Governance framework:
Automation registry: Every automation registered with: what it does, which systems it accesses, who owns it, when it was last updated, and its operational status. The registry is the single source of truth for "what automations do we have?" Without it, automation sprawl is invisible.
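A registry entry can be as simple as a structured record per automation. The sketch below shows one possible shape; the field names and example values are assumptions, not a specific platform's schema:

```python
# Illustrative sketch of a minimal automation registry entry.
# Field names and example values are assumptions for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class AutomationRecord:
    name: str
    description: str          # what it does
    systems_accessed: list    # which systems it touches
    owner: str                # who owns it
    last_updated: date        # when it was last changed
    status: str               # e.g. "active", "paused", "retired"

registry: dict = {}

def register(record: AutomationRecord) -> None:
    registry[record.name] = record

register(AutomationRecord(
    name="ap-invoice-intake",
    description="Extracts and posts AP invoices to the ERP",
    systems_accessed=["ERP", "DocumentStore"],
    owner="shared-services@example.com",
    last_updated=date(2024, 1, 15),
    status="active",
))

# The registry answers "what automations do we have?"
assert "ap-invoice-intake" in registry
```

The point is not the data structure but the discipline: every automation gets a record at deployment, and the record is updated on every change, so sprawl stays visible.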
Security review: Every automation that accesses production systems undergoes security review before deployment — what data does it access? What credentials does it use? What are the failure modes? Can it modify or delete data? Security review prevents the "we gave the bot admin access to everything because it was easier" pattern that creates production risk.
Change management: Automation changes follow the same CI/CD process as software changes — version control, automated testing, staged deployment (dev → staging → production), and rollback capability. No "just update the bot in production" changes.
Best Practice 5: Monitor Automated Processes Continuously
Automated processes fail silently — unlike humans who report problems, bots that encounter unexpected situations may: process data incorrectly (wrong GL code, wrong approval route), fail and retry indefinitely (consuming resources without progress), or succeed technically while producing wrong business outcomes (posting a duplicate invoice). Monitoring catches these failures before business impact:
Execution monitoring: Did the automation run? Did it complete? How many items processed? How long did it take? Deviations from normal patterns trigger alerts.
Business outcome monitoring: Did the automation produce correct results? Sampling (review 2-5% of automated decisions for correctness) catches systematic errors that execution monitoring misses.
AI model monitoring (for ML-powered automation): Are model predictions still accurate? Is input data drifting from training data? Are confidence scores declining?
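The execution and outcome checks above can be sketched in a few lines. The z-score threshold, sampling rate, and example volumes below are illustrative assumptions:

```python
# Hedged sketch: execution monitoring that flags deviations from a
# historical baseline, plus outcome sampling for human review.
# Thresholds and rates are illustrative assumptions.
import random
from statistics import mean, stdev

def volume_alert(history: list, today: int, z: float = 3.0) -> bool:
    """Alert when today's processed count deviates > z sigma from history."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(today - mu) > z * sigma

def sample_for_review(decisions: list, rate: float = 0.03, seed: int = 0) -> list:
    """Pull a ~3% sample of automated decisions for correctness review."""
    rng = random.Random(seed)
    k = max(1, int(len(decisions) * rate))
    return rng.sample(decisions, k)

history = [200, 195, 210, 205, 198]    # normal daily volumes
assert not volume_alert(history, 202)  # normal day: no alert
assert volume_alert(history, 40)       # bot silently stalled: alert
```

Execution monitoring catches the stalled bot; the sampled review catches the bot that ran fine but coded every invoice to the wrong GL account.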
Best Practice 6: Change Management for Process Users
When the AP clerk's daily work changes from "process 200 invoices manually" to "review 30 exceptions from the automated process," the role fundamentally shifts from data entry to exception management. Change management must address:
Role redefinition: The clerk isn't replaced — they're elevated. Instead of typing data into the ERP, they investigate exceptions that automation can't handle: invoices without POs, amount discrepancies, new vendors not in the system. This is higher-value work that uses their domain expertise. Communicate the role change as an upgrade, not a threat.
Training on exception handling: The new role requires: understanding the automation's decision logic (why did it flag this invoice?), using the exception management interface (the queue, the context panel, the resolution actions), and knowing when to override the automation's recommendation vs. when to escalate.
Gradual transition: Don't automate 100% on day one. Start at 50% automation (easiest cases only), with the team processing the other 50% manually. Increase automation percentage over 4-6 weeks as the team gains confidence. Gradual transition reduces: anxiety (the team sees automation helping, not replacing), errors (the team catches automation mistakes during the hybrid period), and resistance (the team experiences improvement before full automation).
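The gradual transition can be implemented as a simple rollout gate: only cases below the current automation share are automated, and the share grows week by week. The difficulty scoring and the schedule below are assumptions for illustration:

```python
# Illustrative sketch of a gradual-rollout gate. The weekly schedule
# and the 0-1 difficulty percentile are assumptions, not a prescribed
# implementation.

ROLLOUT_SCHEDULE = {1: 0.5, 2: 0.6, 3: 0.7, 4: 0.85, 5: 0.95, 6: 1.0}

def should_automate(case_difficulty: float, week: int) -> bool:
    """Automate a case when its difficulty percentile (0 = easiest)
    falls within the current rollout share; the rest stays manual."""
    share = ROLLOUT_SCHEDULE.get(min(week, 6), 1.0)
    return case_difficulty <= share

# Week 1: only the easiest 50% of cases are automated
assert should_automate(0.3, week=1)
assert not should_automate(0.8, week=1)
# Week 6: full automation
assert should_automate(0.8, week=6)
```

During the hybrid weeks, the team processes the harder half manually and reviews the automation's output on the easy half, which is exactly where they catch its mistakes.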
Scaling From 10 to 100 Automated Processes
| Scale | Challenge | Solution |
|---|---|---|
| 1-10 processes | Proving value, building the team | Focus on high-ROI processes, build CoE foundation |
| 10-30 processes | Maintaining quality at speed | Reusable components, standards, citizen developer enablement |
| 30-60 processes | Governance overhead growing | Automated monitoring, registry, self-service governance |
| 60-100+ processes | Organization-wide adoption | Federated CoE model, business unit autonomy + central standards |
The key to scaling: reusable components (each new automation reuses 40-60% of existing components), citizen developer enablement (business teams build simple automations independently while the CoE handles complex ones), and automated governance (monitoring, registry, and compliance checks that scale without proportional headcount). Organizations that scale successfully invest 20% of their automation budget in CoE infrastructure (platform, components, training); organizations that skip this investment stall at 20-30 automations because every new one is built from scratch.
Citizen Developer Enablement: Scaling Without Centralizing
The CoE can't build every automation: there aren't enough professional developers. Citizen developer enablement trains business users to build simple automations independently: approval workflows, notification flows, data collection forms, and simple integrations between M365 apps. The guardrails: citizen developers build within defined boundaries (approved connectors, approved data sources, no production system write access without review). Complex automations (multi-system orchestration, AI integration, production data modification) remain with professional developers. In practice, 60-70% of automation requests are simple enough for citizen developers, and the remaining 30-40% require professional development, which multiplies automation capacity 3-4x without a proportional headcount increase.
Automation and Compliance: Regulated Industry Considerations
Regulated industries (financial services, healthcare, insurance) add compliance requirements to every automated process:
Audit trail: Every automated decision logged with input data, rules applied, decision made, and action taken. Required for SOX, HIPAA, and state insurance regulations.
Explainability: For AI-powered decisions, the model's reasoning must be documentable — "the claim was denied because the procedure code is excluded under the policy terms, confidence 94%."
Human override: Automated decisions must be reversible by authorized humans. The system can't prevent human intervention.
Validation: Automated processes undergo the same validation as manual processes before regulatory approval: testing documentation, control mapping, and operational procedures.
Building compliance into the automation design (not retrofitting after deployment) adds 10-15% to development cost but eliminates the 200-300% remediation cost of non-compliant automation discovered during audit.
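An audit-trail entry of the kind the compliance requirements above describe can be sketched as a structured log line. The field names and example claim data are illustrative assumptions, not a regulatory schema:

```python
# Hedged sketch of an audit-trail entry: every automated decision is
# logged with input data, rules applied, decision, action, and a
# confidence score to support explainability. Field names and example
# values are assumptions for illustration.
import json
from datetime import datetime, timezone

def audit_entry(input_data: dict, rules_applied: list,
                decision: str, action: str, confidence: float) -> str:
    """Serialize one automated decision as an append-only audit log line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": input_data,
        "rules_applied": rules_applied,
        "decision": decision,
        "action": action,
        "confidence": confidence,
    }
    return json.dumps(entry, sort_keys=True)

line = audit_entry(
    input_data={"claim_id": "C-1001", "procedure_code": "X99"},
    rules_applied=["policy-exclusion-check"],
    decision="deny",
    action="denial_letter_queued",
    confidence=0.94,
)
assert json.loads(line)["decision"] == "deny"
```

Writing these entries at decision time, rather than reconstructing them for an auditor later, is what makes the 10-15% design-time cost so much cheaper than the remediation alternative.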
The Xylity Approach
We implement BPA with the 6 best practices — redesign before automating, test like software, build the CoE, governance that scales, continuous monitoring, and change management for process users. Our Power Automate developers and automation specialists build the CoE foundation alongside your team — reusable components, standards, and the governance framework that scales from 10 to 100+ automated processes.
Go Deeper
Continue building your understanding with these related resources from our consulting practice.
Scale Automation From 10 to 100+ Processes
Six best practices — process redesign, software-grade testing, CoE, governance, monitoring, change management. BPA implementation that scales.
Start Your Automation Program →