Well, hello everyone, I'm really delighted to welcome you all to TNC's latest 'Down The Wire' podcast. And today's topic is an absolute cracker: "How Network Transformation Goes Wrong, And How To Prevent It". I'm John Waterhouse, CEO of TNC. I'll be your host for the next 20 minutes. As I'm sure everyone joining knows, TNC is the UK's largest independent network and telecoms strategy and sourcing consultancy. We support over 280 major UK multinational organisations, and help them get the best possible commercial, technical, operational, and contractual results from their network and telecom solutions. Joining us today to share his expertise is one of our most experienced Principal Consultants - our lead deployment expert Jonathan Copley. Jon, would you like to say hello to our viewers and listeners?
Hello to our viewers and listeners.
Great to have you. Now, I've had a sneak peek of our forthcoming White Paper on this topic, and I know it is extremely interesting. In fact, I'm going to open our podcast today by quoting from the opening paragraph of that paper. "Picture the scene. You've just completed the procurement of your organisation's new network solution. The business case you presented to the CFO showed a pretty rosy picture, and the final results you've achieved more than align with what you've promised: a next generation technical solution that delivers more bandwidth, more resilience, and more functionality, a step up in SLAs, with an improved operating model, all delivered with significant cost savings. You're a hero, right? High fives all round. But let's roll the tape forwards a year. Did all those benefits really happen? Did the network really get deployed on time? Does it really perform as expected? Most importantly, the CFO is having trouble identifying the cost savings, and she's asked for a report from you to prove they landed. This might sound like a nightmare... - we really should have done this as our Halloween podcast, shouldn't we? - ...it might sound like a nightmare. But unfortunately, it is the reality for far too many network transformations". The good news is, though, we have Jon here, grizzled veteran of dozens of such transformations, and he's going to talk us through exactly where such transformations can go wrong. And most importantly, what you can do to avoid that terrible fate. So Mr. Copley, let's start with the ugly truth. In your White Paper, you quote three pretty shocking stats about how transformations go wrong. Can you start off by upsetting our viewers and listeners and reminding us of those stats?
I surely can. Let's get the bad news out of the way to start with. Yeah, so firstly, on average we see deployments taking, you know, six months longer than the original plan, which is a hefty amount of time in anybody's book...
...it's pretty painful to start with...
It is, and it doesn't get any better either. During the time to procure and then deploy, 40% of organisations find that the functionality doesn't meet some of the requirements that drove them to deploy the new technology in the first place.
Because the requirements have changed, and it's taken so long to buy it and deploy it, that by the time you deploy it, it doesn't even do the thing that you set off to address?
Fantastic. Okay, good. You've got more bad news?
My personal favourite, and I think the most shocking of all, is that 50% of forecast savings are lost by the end of the deployment.
So from the time you promised your CFO savings in the business case, to the time you've actually finished the deployment, half of those savings have disappeared?
Please don't stop listening now. We are going to talk to you about how to avoid this - but that's pretty shocking. So let's just recap that: 1.) the average deployment takes six months longer than planned; 2.) 40% of organisations find they've got a functionality shortfall at the end of the deployment; 3.) and 50% of cost savings can be lost between the business case and the end of the deployment.
Yeah, those are the statistics.
That is pretty grim. That is pretty grim. So, okay, what's driving that? I mean, that doesn't just happen, right? Now, I happen to know that we've been doing quite a bit of research into this space, and delayed deployment is obviously a huge part of this. Could you take us through what is causing all that delayed deployment?
I can, yeah. So I think where we would start is, there's often a delay with the data, right? So when you start your procurement and work towards your contract, it all starts with that data set at the beginning, and typically what we see is that procurement data is quite high level: so for site lists, you know, it's literally site name and address, pretty much, and that's it. When you come to deploy, you need much more detailed site information, and that gap in knowledge can be significant. And on a large estate, it can take a lot of time to gather that missing information, because a lot of organisations don't track the information that the vendors will need to deploy.
Okay, so it's not the lack of data that's the problem, it's the time it takes to go and get that data. Yeah. And we're talking here about getting right down to the really detailed level: site contacts, names, phone numbers, email addresses...
...yeah, location of the Comms Room. That type of information, you know, power, space, all that kind of good stuff...
...all the super detailed stuff you're going to need as part of a deployment.
Yeah, and if you haven't got that - any delay on that, you know - the vendors are off the hook, because that's a prerequisite you'll have signed up to. So it's really important that you start that data capture early.
And so presumably, the key issue here is organisations don't start gathering that data until after they sign the contract. So at that point, the clock's ticking?
Yeah, because everyone's focused on getting a great deal, getting a great price, getting a great solution. They're not in the weeds of "where are we going to plug it in?"
Okay. Interesting. And as you say, when you're talking about big site lists, presumably that can take, what? Weeks? Months?
Months, yeah. Even on a site list of 100, it can take weeks to get the contact information - the phone number and email of the LCon - and you need two sets of contacts on site. Gathering all that data, you see, organisations don't understand that it's not sitting there ready to give you; you have to go out and find that information.
Interesting. Interesting. What's next? Okay, so, getting the data, I can see the problem there. What's next?
So the next one is the solution development, right? So you've signed up to your products - SDWAN, let's say - and you're working on your contract, you've negotiated it all, you've got your price. But all you're working from is a very high level description of the solution. You need to get down into the weeds of the low level design: how it's going to work, how it's going to function, what your policies are going to be like, what's your routing, what's this, what's that. And again, you don't get down into that detail until you've got a contract, and you've really got to allow time for it - it's a massively time-consuming piece of work.
There's now a slight spoiler alert, because I've read the White Paper, but one of the things you say there is that as we move away from an old world of MPLS to a new world of, quite possibly, SDWAN, no more time has been baked into project plans to reflect that extra complexity. Back in the days of MPLS, everyone knew a process step was going to take, say, two months, and that's what went in the project plan. But now it's a world of SDWAN: the technology's new, skills and knowledge are less developed, and more testing and piloting is required. So what took two months is now taking four, and no extra time has been put into the project plan for that. Is that what's causing this?
Yeah, it's exactly that. Sticking with SDWAN as an example - though the same could be said for any of the new bleeding-edge technologies - the solution is so complex, there are lots of policies that need to be created, and they're all from scratch, all new, and all bespoke to each customer. And there's a lot of thinking that has to go in, and information that has to be captured from the customer side, which the customers aren't always ready for. You have to get into, well, "what do we want our policy to be on watching cat videos on Facebook at lunchtime?" Yeah, whatever it is...
...strongly encourage, strongly....
...Yeah, whatever it is, you've got to go out and find that information, which takes time. But layered on top of that is the point that because it's such a new technology, the vendors - they've got the capability, they've got the knowledge - but whereas in an MPLS world, for example, they'd have 50 people on the bench who could deploy MPLS, and they'd been doing it for years and could do it with their eyes shut to a certain extent, with SDWAN they're still learning. They've got the knowledge, but instead of 50 people they've got five. And then when the delivery teams deploying it hit an issue they've not come across before, it takes them time, because they go "Oh, I've not seen that before". So they have to go away internally, and the customer is only one of multiple SDWAN deliveries happening at the same time, so there's a bottleneck into the people that...
...I was gonna say you got bottlenecks, and traffic jams all the way up the chain...
...all the way up, which again, all these things get fixed, but it's just taking the time to get there.
And when we come on to cost overruns - spoiler alert, that's what's coming next - we're going to talk about how this impacts costs. But let's just finish off with delays first, because with things like SDWAN, the next point here is hardware shortages and so on. Again, presumably, that's exacerbated by the newness of the technology?
Yeah, and I think it's demand, because everyone's now deploying SDWAN - it's definitely what we're seeing people buy. Then you overlay that high demand with the global chip shortage, and we are seeing impacts on deliveries, certainly on certain lines of hardware, and it's changing all the time. So it's really hard to plan for what that delay is going to be.
Yeah, I can see how this is all adding up to what I presume we're desperately trying to avoid calling a "Perfect Storm". But it is a perfect storm, right? There's just a whole bunch of things happening all at the same time. And this last point here, around project management failures - again, presumably this is compounded by the same factors: it's new, and there's a shortage of people who know what they're doing?
Exactly, yeah. And it's like any project, you've got to have that strong governance structure set up right from the start, with the clear roles and responsibilities. So everyone knows what they're doing.
So okay, I think it feels like we've done the topic of delayed deployments to death, but there are a lot of things there that a lot of organisations are going to experience, all contributing to delays. So look, we're gonna come on now to the second part of this, and that's the cost impact of these things. Now, one could ask a pretty dumb question, and I'm going to: "Well, wait a second, I accept that things have taken longer, but why are they costing more?" Take us through that. What is it about delay that then hits us all in the wallet?
So a.) the big one is increased dual running costs. The longer you sit colouring in your profiles and finding out your information - all the things we talked about previously - the longer you're not deploying your new network. And chances are you're deploying your new network because the new network is cheaper than your old network, so the longer it takes, the more it costs, and you're not getting the benefit. You know, at the start of the year you set your budgets based on the fact you're going to be deploying, and you're instantly up against it from a budget perspective, because you've not budgeted for that extended dual running. Another element is b.) your existing, legacy supplier. The chances are you bought a new network because your old network contract is coming to an end. Well, that's got a definitive end date, and there may be clauses in there - which are quite common in older network contracts - that if there's a rollover, it'll roll over for 12 months, potentially.
Right, yeah. Okay. So it's not only that it's taking longer to get to the new network, and that you're dual running some elements of the new network with some elements of the old network, which you presumably haven't budgeted for; quite possibly you could also hit a kind of auto-renewal, where you've got to negotiate an extension of that contract, presumably on not particularly great terms - because one of the reasons you're leaving that supplier and that contract is you're perhaps not wild about them. And they're perhaps not wild about you, if you've just served notice on them. So that's all pretty commercially tricky territory.
It is, yeah - and then it's not just the commercial elements: indirect costs will potentially go up too, because you've got an uninterested supplier now who's firmly sat in the exit departure lounge, and they might take their eye off the service, so you need to put more service management resource on it. There's an indirect cost to this as well, not just the straightforward lost benefit.
So this is pretty horrible stuff. What else? I know one of the things you talk about in the White Paper is unexpected change control costs. What are you getting at there?
So go back to the sort of previous horror points we were talking about around the lack of data. Well, that can come back and bite you in the backside further down the line via change control, because new sites are going to come in - and maybe you didn't have the data to even know about those new sites. The problem with that is you've got to add them in by change control, and once that contract is signed, you're just dealing with one supplier. So a lack of information drives big issues in change control.
Yep. And part of that presumably, as you go through that discovery process, is you're potentially also discovering requirements you didn't know you had, etc., etc.
Yeah. And all that takes time to test and again, impacts your delivery, and you're back on the nightmare roundabout...
Cor, this is a spiral of horror, isn't it? And I'm just conscious, as we work through this greatest hits of what goes wrong, of your final point - and I absolutely love this one, because it just sounds bonkers - you mention organisations not turning off legacy services? You can't be serious? Are you genuinely saying it's not just that it's taken longer to deploy the new service, but you're not turning off the old one?
You would not believe how common it is. It is unbelievable. And you'd think that'd be the first thing on any project manager's "To Do" list, right? Turn off the old service.
Listen, and clearly I'm no project manager, because it seems pretty obvious to me. Well, why not? I mean, come on, this seems crazy...
It does seem crazy, right? So, you know, when you're in the hustle and bustle of a large network deployment, there's a lot going on - it's complex, it's fast moving, and it takes a long time, right? So the chances are your project manager was never involved in the business case, wasn't involved in the procurement, knows none of the history. All he knows is he's got to install a new box, a new circuit, get sites up and running without any impact, and the budget envelope has been set. Right?
So he's focused on the new?
He's focusing on the new, and 12 months later the old network is a distant memory. And if the governance we talked about before isn't set up - that clear governance, roles and responsibilities - if no one's got it on their list of jobs to do, you'd be amazed: it just doesn't get done.
So what you're saying is, no one's coming in behind and sweeping up. Everyone's running off after getting the new solution in, and it falls through the cracks. It's someone else's job to turn...
We've seen this on WAN contracts, on Voice contracts, all kinds of contracts. It's unbelievably common.
Now, having already pointed out that I'm no project manager - presumably, though, it makes your business case an awful lot harder to achieve if you've gone from paying for one network to paying for two networks?
And when you do get pulled up and you have to write that report for your CFO: "Ah - we just spotted that we should have turned it off 18 months ago; we've been paying for two..."
A little bit of sympathy for the project teams here. Presumably there's an element of this where getting off the old network is pretty challenging - making sure all the services have really been transferred over, no one wanting to accidentally create a bit of downtime between the old and new, and so on - so there's a lot of pressure to be ultra careful about turning off the old solution. I can see how that can easily segue from caution to "well, we never quite did it"...
Yeah, because in a delivery phase you always leave a period of dual running (which is what we were talking about as one of the costs earlier). You've got to set that dual running window appropriate to the risks the client wants to sign up to - at the end of the day, service is king: you don't want to take a retail estate down, you don't want to take a manufacturing site offline, or whatever it might be. So you leave that dual running period there in case you need to fail back - if there's a problem with the new circuit, a problem with whatever.
But presumably the more caution, the longer it dual runs, the greater the risk that it kinda gets forgotten...?
...and the longer it dual runs, yes, exactly. You know, if a customer says, right, we want six weeks' dual running - that's not uncommon - then six weeks seems a long way off, because they're now focusing on, you know, Site A, B, C that doesn't work anymore, and an issue with this policy that they need to deal with. And unless it's clearly somebody's role and responsibility to turn the old service off, it just doesn't get done.
Wow, okay. Unbelievable, but extremely interesting, all at the same time. Okay, so you've scared us all half to death - thank you. We did promise we wouldn't just leave it there, though: we'd talk about what can be done. Now, clearly job one is to find someone to turn off all the old stuff, but anyway, let's raise the level a little bit. In the White Paper you talk about the top three things - I know you say there's 10, 12, 15 other things you could do - but talk us through the top three things you would advise any organisation facing into a transformation to do.
The good thing is this stuff isn't rocket science, right? It's not massively over-complex; you just need some good rigour and process around it. So the first place to start is "de-risking", through better definition of data and requirements. It's not complicated, and it's definitely not glamorous, but you've got to get all that information: start building your site lists, start building your contacts lists, and bake stages into your procurement process to refresh your data - once you've done your RFP, refresh it again at BAFO, and then refresh it again BEFORE contract signature. Because once you put pen to paper, you've lost all that competitive tension you had through the rigour you put into the procurement phase. Once you're talking with one supplier, if you suddenly need to add in a change of scope for a higher spec box, you've lost that competitive tension. But if they know, right up to signature, that you could still go to Supplier X because they'll give you a better price, you've still got that window.
Yeah, and presumably once you put pen to paper, the business case is fixed. You've by then told the CFO or the CEO this is how much it's going to cost, and this is how much money we're going to save. You can't then rock up later on and say "Oh, I was wrong!" So again, get that done pre-signature. Okay, cool. I like that. It ain't glamorous...
...it definitely ain't glamorous. So then, once you've got that much more detailed set of requirements, from a technical perspective, build some proof of concept and proof of value into it - ideally before you sign the contract, or, if not, build into the contract a clause to get out of it if it doesn't work.
I was going to say, the next point I think we're going to talk about is this kind of passing the risk back to the supplier - presumably things like POCs, POVs, pilots, etc. They're great opportunities to say to your supplier: if it doesn't work, then that's on you?
Yeah, totally. I mean, it's vital to pass on as much risk as possible - and when we say risk, what we're really talking about is cost, right? - to pass as much of that on to the supplier in the contract, for things like delays due to failure of POC testing, or low level designs taking longer than planned. All these things we've talked about earlier in the podcast: get that contract protection in there to protect yourself.
So presumably, these aren't things you can think about a week before you sign the contract; you need to be planning from the day you start your procurement. You need to have the deployment in your mind and start these parallel processes?
Yeah, you're absolutely bang on there. The days of just signing your contract and then going into delivery - that's just a recipe for taking another six months. I mean, your six months will go before you know it: you've got a three month lead time on big circuits anyway - yeah, you can do a lot of work in that time, but it takes longer than that to build, design, and test. It's definitely taking longer to do those things. And the information gathering - again, you can't place your order with a lot of vendors now until you've got all this information - and again, they've got a get-out-of-jail-free card on any delivery if you haven't given them those prerequisites. They'll just sit back and go: if you haven't got your site list, that's fine, we can't place an order; can you tell us who we can contact at the site in Azerbaijan? You need all that information, so definitely start that delivery phase while you're in the procurement phase.
Yeah, that's absolutely fascinating. I'm really disappointed to say, we are running out of time on the podcast, and we could be at this for another half hour - I dare say, if we were at the pub, we could be at this for another couple of hours...
I'm sure we would - intentionally...
Intentionally, absolutely! No, this is absolutely fantastic. I mean, some of these things sound so counterintuitive, right? That you're going to get cost overruns because you're not going to turn the old network off - and people will go "Well, of course we're going to turn the old network off!" But the practical reality is this: you're gonna have cost overruns, because the solution probably won't do what it says it will do the first one or two times it goes through testing. And hey, if you're working on anything next gen - UCaaS, SDWAN, whatever it is - it probably isn't going to work first time out of the box. You know, unless you're baking these things into your process, you're going to end up in a world of pain. It's no wonder you've got those horror stats we started the podcast with?
Absolutely. Yeah, the final thing to do, I guess, is be realistic - build some contingency into the plan. Don't take the happy-path plan the vendor's selling when they say, "Oh, we'll get a new network in in nine months, we'll get your new Voice system in in six months..." Be realistic, especially if it's a new technology: build in some contingency to allow for tests, to allow for changes, to allow for retesting...
Presumably your future self, in a year's time, making that long trip up to the CFO's office, will thank your old self hugely for having stuck a bit of contingency into the plan?
Absolutely. And if it does work first time, (I've never seen it work first time, but if it does work first time), then you're going back with good news rather than bad news. You're actually going up those stairs saying we delivered early, rather than going up those stairs saying, we're six months late.
Yeah - "and sorry about that...!"
Jon, fantastic. Excellent - just really super interesting, as always. Yeah, I'm really sorry we're gonna have to draw things to a close, but thank you so much for providing us with your insights. That's been really, really interesting, and I'm sure everyone listening and watching has really enjoyed it. Please, as always, do let us know any questions you may have about this or any other network and telecoms topic; you can get in touch through our website, www.networkcollective.co.uk, or any of our usual social channels. We look forward to talking with you again soon.