Most Dynamics 365 implementations don't fail because of the software. They fail in the months before anyone opens a configuration screen and again in the months after go-live when everyone stops paying attention.

75%
of ERP projects fail to meet scope, timeline, or budget goals
$12.9M
average annual cost of poor data quality per organisation
7x
more likely to succeed with excellent change management in place


I've worked through enough D365 implementations to know that the failure stories almost always start the same way. Not with a bad product. Not even with a bad partner. They start with a project team so focused on getting to go-live that nobody stopped asking whether the organization was actually ready for what go-live would demand of them.

The numbers reflect this. Between 55% and 75% of ERP implementations miss their original targets on scope, budget, timeline, or adoption. For a platform as capable as Dynamics 365, that failure rate isn't a software problem; it's a preparation and execution problem.

What follows are seven of the most common places things go wrong. I'm not writing this as a checklist of best practices you've seen before. I'm writing because these patterns keep showing up in the same forms, and most of them would have been avoidable if someone had asked the right questions six months earlier.

PITFALL 1

Going Live Without Knowing What's Actually in Your Data

Data migration is underestimated in almost every ERP project I've seen. It's not glamorous work; it doesn't feature in demos, and there's always pressure to treat it as something you sort out closer to go-live. Giving in to that pressure is one of the most expensive decisions a project team can make.

The real problem isn't that dirty data gets migrated. It's when the damage shows up. Rarely during the migration itself. Typically, three or four weeks after go-live when reports produce figures nobody recognizes, automated processes fail on fields that should have been populated, and the finance team is reconciling by hand what the new system was supposed to handle automatically. At that point you're not fixing a data problem; you're managing a confidence crisis.

$12.9M
Gartner's estimate of what poor data quality costs the average organization annually — before a migration concentrates and amplifies that damage into a single window of time.


WHAT THIS LOOKS LIKE AT SCALE

Target Canada shut all 133 of its stores in 2015, two years after launch, and the root cause traced back to its ERP migration. Thousands of product records were wrong: incorrect dimensions, incorrect costs, broken SKU references. Inventory management fell apart, and when the post-mortems dug in, they found the software was not the issue; the data fed into it was.

A real data readiness assessment isn't a cleanup project. It's a stress test of your current records against what D365 actually requires, done before configuration locks any of that in. If your project plan treats migration as a task rather than a dedicated workstream, you're already carrying risk that will find you at the worst possible moment.
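To make that concrete, here is a minimal sketch of what one slice of an automated readiness check might look like, run against a legacy extract with pandas. The file name, required fields, and key column are illustrative assumptions, not D365's actual schema; a real assessment maps every entity against the target data model.

```python
import pandas as pd

# Fields the target system treats as mandatory -- illustrative only;
# a real check is driven by the D365 entity schema, not a hand-kept list.
REQUIRED_FIELDS = ["customer_id", "name", "payment_terms", "currency"]

def assess_extract(path: str, key: str = "customer_id") -> None:
    df = pd.read_csv(path, dtype=str)

    # 1. Missing values in fields that must be populated after migration.
    for col in REQUIRED_FIELDS:
        if col not in df.columns:
            print(f"missing column entirely: {col}")
            continue
        pct_empty = df[col].isna().mean() * 100
        print(f"{col}: {pct_empty:.1f}% of rows empty")

    # 2. Duplicate keys -- each one becomes a duplicate master record.
    if key in df.columns:
        dupes = df[df.duplicated(subset=[key], keep=False)]
        print(f"{len(dupes)} rows share a {key} with another row")

assess_extract("customers_legacy_extract.csv")
```

Even a crude pass like this, run months before go-live, turns "the data is probably fine" into a number someone can be accountable for.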

DCG's data migration practice is built around front-loading this assessment. The Surveillance stage of the SPEAR framework exists to surface these issues before they get baked into configuration decisions.

PITFALL 2

Configuring the New System Around Your Broken Old Processes

Every ERP implementation starts with some version of this: "We need the new system to work like the old one." I understand the impulse: the old system, whatever its problems, is a known quantity. But that instinct, taken too far, is how organizations spend millions automating processes that were already broken.

D365 has strong native functionality across most business workflows. When companies override or work around that functionality to replicate legacy behaviors, they're paying more for configuration, getting a less stable result, and setting themselves up for problems with Microsoft's twice-yearly Release Waves, which break heavily customized setups at a rate technical teams would rather not discuss openly.

Map the process first, because what gets configured from institutional memory is usually a rough approximation of how things work, not how they should work. The technical design workshops DCG runs during implementation are structured around exactly this distinction: build the future-state process first, then configure to it rather than to old habits.

PITFALL 3

Mistaking Training for Change Management

Most implementation plans include training. Usually, it's scheduled in the final few weeks before go-live, covers how the system works, and runs for a few days. Boxes get checked. Then the system goes live, and adoption numbers are disappointing; workarounds start appearing by week two, and leadership is asking what happened.

What happened is that training addresses knowledge. Change management addresses belief. People resist systems they don't trust and changes done to them rather than with them. A system walkthrough two weeks before go-live doesn't fix that.

Projects with excellent change management are seven times more likely to meet their objectives than those where change management is absent or weak. When it's skipped entirely, Prosci's research puts the odds of finishing on time at 16%.

Change Management Effectiveness vs. Share of Projects Meeting Objectives
Excellent change management: 88%
Good change management: 73%
Fair change management: 39%
Poor change management: 13%


The organizations that get this right involve end-users in process design before configuration is finished, identify change champions early, and track actual adoption behaviors post-go-live, not just training completion.   

Up to 75% of ERP ROI depends on how well adoption is managed. Every dollar of value the system is supposed to generate runs through whether your people actually use it, and use it correctly. That's not a soft metric; it's the number that shows up in the CFO's post-implementation review.
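As one sketch of what behavioral tracking could look like, the snippet below computes weekly active users by role from an exported usage log. The file and column names are hypothetical; the point is that the metric being watched is actual system usage, not training completion.

```python
import pandas as pd

# Hypothetical export of usage events: one row per user action, with a
# timestamp, the user's id, and their functional role.
events = pd.read_csv("usage_log.csv", parse_dates=["timestamp"])

# Weekly active users per role: are people working in the system, or
# have they drifted back to spreadsheets and workarounds?
weekly_active = (
    events.groupby([pd.Grouper(key="timestamp", freq="W"), "role"])["user_id"]
    .nunique()
    .unstack(fill_value=0)
)
print(weekly_active.tail(4))  # the last four weeks, broken out by role
```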

PITFALL 4

No Performance Baseline Before Go-Live

If you don't measure before the system goes live, you can't prove anything after. This gets missed constantly, not because teams don't know it matters, but because the pressure to hit the go-live date makes baselining feel like a distraction when you're already behind schedule.

A year post-go-live, the CFO asks whether the ERP investment is showing up in the numbers. And nobody has a clean answer, because nobody recorded what the numbers looked like before. Days Sales Outstanding, inventory accuracy, order-to-cash cycle time, manual hours per function. If these weren't documented before migration, improvement is now measured against memory.

83%
Of organizations that ran an ROI analysis before implementation and had been live for more than a year, 83% said the project met or exceeded their expectations. Without that baseline, the picture looks very different.


WHAT PROSCI FOUND ON DEFINING SUCCESS UP FRONT

Organizations with well-defined success criteria and performance measures going in are up to 5x more likely to meet or exceed their objectives. Define what good looks like. Measure what it looks like today. Then you have something real to run toward and something credible to show the board twelve months later.

The KPIs worth capturing before go-live aren't complicated; they're the ones the business already tracks: DSO, DPO, inventory turns, order-to-cash cycle, manual touchpoints per transaction type. What matters is documenting them in a date-stamped, agreed format before migration begins, so post-go-live comparison is clean, credible, and board-ready.
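One way to do that, as a minimal sketch: capture the agreed figures in a date-stamped snapshot before migration. The KPI values below are placeholders; the real numbers come from the reports the business already runs, signed off by each metric's owner.

```python
import json
from datetime import date

# Pre-go-live KPI baseline. Values are placeholders for illustration;
# substitute the figures finance has signed off on.
baseline = {
    "captured_on": date.today().isoformat(),
    "approved_by": ["CFO", "Controller"],
    "kpis": {
        "dso_days": 47.0,
        "dpo_days": 38.0,
        "inventory_turns": 6.2,
        "order_to_cash_days": 12.5,
        "manual_touchpoints_per_order": 4,
    },
}

with open("kpi_baseline_pre_golive.json", "w") as f:
    json.dump(baseline, f, indent=2)
```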

PITFALL 5

Treating Go-Live as the Finish Line

Go-live is not the finish line. It's where the real test begins.

The weeks after a D365 implementation goes live are when you find out how well the configuration maps to real operating conditions, not UAT conditions, not the scenario the consultant had in mind when building the test scripts, but Monday morning volume, live exception handling, edge cases that never appeared in testing. This is when the gaps surface. The question is whether the organization has a structure in place to catch and close them, or whether they get absorbed as workarounds that harden into permanent habits.

60%
of ERP initiatives start losing momentum within their first year post-go-live. That window right after launch is when momentum either builds or drains, and it requires just as much intentional management as the implementation itself.


What separates good post-go-live management from bad is mostly structure: a defined hypercare period with real support, monthly reviews against the baselines captured before migration, and an active enhancement backlog that gets worked, not just maintained. Panorama Consulting's research is consistent here: the organizations that generate the highest ROI treat post-implementation optimization as a continuing discipline, not a cooldown period. The go-live date is a milestone. The work is ongoing.
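Those monthly reviews don't need heavy tooling either. A sketch, assuming the baseline snapshot format from Pitfall 4 and a per-KPI direction of improvement that the business would need to agree on:

```python
import json

# Direction of improvement per KPI -- an assumption to confirm with the
# business (lower DSO is good; lower inventory turns is not).
LOWER_IS_BETTER = {"dso_days", "order_to_cash_days",
                   "manual_touchpoints_per_order"}

with open("kpi_baseline_pre_golive.json") as f:
    baseline = json.load(f)["kpis"]

# Current figures would come from live reports; placeholders here.
current = {"dso_days": 44.0, "dpo_days": 39.5, "inventory_turns": 6.0,
           "order_to_cash_days": 11.8, "manual_touchpoints_per_order": 3}

for kpi, before in baseline.items():
    now = current[kpi]
    delta = now - before
    improved = (delta < 0) if kpi in LOWER_IS_BETTER else (delta > 0)
    flag = "improved" if improved else ("flat" if delta == 0 else "REGRESSED")
    print(f"{kpi}: {before} -> {now} ({flag})")
```

The output is deliberately blunt: a regressed KPI in a monthly review is a conversation, not a surprise in the CFO's year-one retrospective.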

PITFALL 6

Choosing a Partner That Prescribes Before It Diagnoses

Partner selection largely determines the outcome of a D365 implementation before a single configuration decision is made. The most important thing to evaluate isn't Microsoft certification level or case study count; it's whether the partner diagnoses before it prescribes. One that arrives with a fixed methodology and slots your business into it is a fundamentally different engagement from one that runs a real assessment first.

A Prescribing Partner vs. a Diagnostic-First Partner

Prescribing: Brings a templated approach and shapes your project around it.
Diagnostic-first: Runs a structured assessment before recommending a delivery model.

Prescribing: Defines success as hitting the go-live date.
Diagnostic-first: Defines success criteria with the client before configuration opens.

Prescribing: Hands off at go-live and moves to the next engagement.
Diagnostic-first: Plans the post-go-live optimization phase from day one.

Prescribing: Treats change management as a service line you can purchase separately.
Diagnostic-first: Builds adoption planning into the delivery architecture from the start.

Prescribing: Senior consultants sell the work; junior resources do it.
Diagnostic-first: Keeps senior people engaged through delivery, not just visible at kickoff.


THREE QUESTIONS WORTH ASKING BEFORE YOU SIGN

What does your data readiness assessment look like before configuration begins?
Who specifically owns our account post-go-live?
What does post-go-live support actually look like in practice?

DCG's D365 implementation practice is built around the view that discovery and design is where projects are won or lost. The time we invest with every level of the organization, from C-suite down to daily users, in those first weeks shapes whether what gets configured matches what the business needs to run on.

PITFALL 7

Having No Escalation Path When Things Are Already Off-Track

ERP rescue is more common than the industry admits. A stalled D365 implementation, a go-live that went badly, a system that's technically live but practically abandoned; these aren't rare edge cases. They happen often enough that purpose-built recovery methodologies exist for them. The problem isn't that projects get into trouble. It's that organizations often don't recognize the warning signs in time to act, or they recognize them but have no path for escalation.

25%
of ERP implementations are abandoned entirely during the process. 72% of failed ERP projects trace back to poor stakeholder management — a fixable organizational problem, not a technical one.


DCG Recovery
Healthcare Technology Provider
Two previous partners had tried and failed to modernize a Dynamics 365 Finance and Supply Chain platform for a U.S.-based healthcare technology company serving over 10,000 medical practices. By the time DCG was brought in, the project was in serious trouble.

A rapid SPEAR assessment identified the actual problems: governance gaps, scope that had drifted without anyone formally acknowledging it, and a technical architecture that had never been validated against the business's real operating model.

The platform was rebuilt through structured sprints and delivered. D365 didn't fail. The structure around the project did, and a structured recovery is what fixed it.
Full recovery approach here

Warning signs worth taking seriously: the timeline has slipped more than 20% with no formally revised baseline; executive sponsors have gone quiet; users are building workarounds rather than using what was built; or the partner's responses to escalation questions have become vague. Any one of these is worth a conversation. All of them together means something needs to change, and soon.

DCG's ERP Recovery Roadmap Guide walks through how to diagnose what actually went wrong and build a recovery path. And if you're mid-implementation and unsure whether what you're experiencing is normal friction or something more serious, these three diagnostic signs are worth reading first.

ONE FINAL THOUGHT  

"These patterns keep appearing because they're easy to rationalize away when the pressure is on. The project timeline is tight. The budget is committed. Everyone just wants to get to go-live."

That's when the biggest mistakes are made. Not from incompetence, but from momentum. The team is moving fast, stakeholders are watching, and stopping to do a proper data assessment or a current-state process workshop feels like losing ground when you're already behind.

The organizations that get D365 right have learned to slow down at the right moments. Diagnostic work before configuration. People brought into the process early enough that their input actually shapes something. Go-live treated as the beginning of the measurement period, not the end of anyone's responsibility.

If any of this sounds like where you are right now, whether you're still evaluating, mid-rollout, or six months past go-live and wondering why the numbers aren't moving, pushing harder on the same approach isn't the answer. Get a clear picture of where things actually stand. Then build from there.

That's the work we do.

SPEAR Framework
Don’t Wait for Go-Live to Find the Problems
Whether you're evaluating a D365 implementation, sensing misalignment mid-rollout, or dealing with post-go-live underperformance, DCG’s SPEAR framework gives you an honest diagnosis and a structured path forward.

Will Donovan

Will Donovan is an Operations Strategy and ERP Consultant with DCG. He is a former Supply Chain/Logistics Product Director for global logistics and operations systems deployed by the US Marine Corps, US Navy, US Air Force, and Shell Oil. Will has deep experience in analytics, ERP, WMS, TMS, and digital strategy for the US military, oil & gas, retail distribution, and manufacturing across Asia, Europe, the Middle East, and the Americas.