
On (digital) transformation




Digital transformation and agile are all the rage these days. However, for companies that have existing businesses and services, every change is hard and in some cases (mine) really hard. I wrote this article primarily for self-reflection and to remind myself what challenges we have to face with change.
Eight years back, I was involved in a major transformation initiative at a large bank (institutional securities trading), meant to resolve the fragmented business and system landscape the company had accumulated through acquisitions of broker/dealers over the years.
The idea was simple: create new systems with a uniform messaging and business logic layer so that the data could be normalized, with the goal of retiring the legacy processing platforms. I took away several key themes from that undertaking:

People

Your initiative is D.O.A. if you're not able to bring the people on board. I've seen this play out in many scenarios, and all ended badly when the project lacked the support of the folks who work in the environments you're trying to retire. This is not about draining their brains for knowledge and understanding of the systems; you have to be able to provide a vision of the future and a path for how they can participate in it. In some cases, that means you have to pay for re-training or satisfy other needs. The worst kind of situation arises when trust falls apart and the project essentially stalls because cooperation is no longer possible. Always keep in mind that you're dealing with people and their emotions.

Management & Organization

There is nothing more disruptive than management or organizational changes during a program that runs for several years. This is not new, but when it comes to transformation it leaves lasting scars. It can go so far that half of what the transformation program was meant to do is implemented while the other half is not. Usually this happens after budget cuts followed by a large reorganization. The fallout is only felt several years later, often after (or because) the management that put the change in place has left. It is therefore imperative to plan the program with that in mind and to pick manageable pieces that will not make a challenging situation worse.

Business logic and software complexity

We grossly underestimate the complexities of business logic and software systems.
One of my personal anecdotes on this theme: a year before I started at Citi, I was a consultant in Banc of America's prime brokerage unit and had the dicey job of taking apart a particular process that revolved around street-side trades (or market executions). The process couldn't scale because the code couldn't keep up. I went through some 20,000 lines of code (in C, no less) to figure out exactly what the code did to the memory structure after it loaded the street-side trades from a file, and why it couldn't cope with the volume. The code was in terrible shape: lots of redundancy and literal copy-and-paste of whole fragments. But it was only at the end of the engagement that I realized I wasn't the first one to try to make sense of it. When I deciphered the last subroutine, I discovered it simply called a stored procedure (a piece of code that runs directly in the database) after the trades were inserted into the database. The stored procedure undid a lot of what the C code was trying to do. I still don't know why this path was chosen, but I can only assume that people gave up on the C code because the complexity became too much to manage, and instead tried to implement new business logic on top of the old through a new mechanism.

This is the world of legacy software systems - or what I heartwarmingly call organic software growth - where you carry layers upon layers of complexity that were put in place for some reason or another but that nobody understands anymore. Sadly, the reason for this complexity usually has to do with management not understanding the mess it is asking the people who work on these things to create. What I suggested about management and planning your transformation applies equally here: you have to look for bite-sized chunks if you undergo business logic transformation. If you try to do it all at once, you end up with spaghetti. The lesson for management is that your ROI needs to take into account the refactoring of these systems. In fact, there should be a budget line just below compliance and above maintenance (or BAU) that says refactoring!

(Legacy-) Technology & Moore’s law

When we set out to rebuild the company's infrastructure, we did of course look forward in terms of how we would use technology. However, complexity in software architecture and system design increased tremendously. In many ways this is a direct result of Moore's law, which led to the belief that all the legacy constraints (CPU cycles, bandwidth, memory, storage, etc.) had disappeared. Unfortunately, that's not the case - everything is just elevated. For example, XML seemed like the perfect way to standardize the message format until we realized how inefficient it is and how difficult it is to store in traditional relational databases. But more to the point: in transformation, you simply don't get away from legacy technology, and that means the complexity of your new architecture plus business logic will only increase. In no company that I have seen will you ever be able to replace all systems at once. You will have to carry legacy technology and business logic into your new architecture, which creates complexity you didn't plan for, which ultimately means you only move as fast as your legacy allows.

Platform & Roadmap

There is a multitude of streams that impact your roadmap - from regulatory requirements to urgent business needs. When it comes to transformation, all these things make life very difficult. The roadmap automatically becomes a concept that allows you to share the vision - but you need luck, buy-in and no resource constraints to implement it, and you have to defend it against incursions from the business and others who have their own changes planned. In short, you never control everything, and trying to maneuver a particular change into the roadmap so that it blocks or prevents others from continuing their efforts is futile and will backfire. Look for the biggest bang for the buck (or the lowest-hanging fruit) even if it's not perfectly aligned with your plans. In the end, it's better to move forward on one front than not to move at all.

Digital transformation

It was not until I left the institutional side and became a product manager for mobile payments in the consumer space that I realized how important transformation is and what it means for the future of the company. In my old job, the main driver for transformation was cost, because of too much bifurcation and exception processing. The theory was that if you could eliminate some of the bifurcation and retire the old mainframes, you could save a lot on cost.

On the consumer side the story was very different - here it was all about user expectations and experience. We were investigating new mobile payment methods to see what could stick, help our consumers, and differentiate us from the sameness that is consumer banking. However, the underlying problems were the same ones I had already seen on the institutional side - except that now time to market was a lot more important than saving costs. The development and deployment cycles of consumer applications were at times very frustrating. The good news was that our competitors had exactly the same problems; the bad news was that the competition was shifting to startups and smaller companies.

Now we live in the world of digital transformation, and businesses run the risk of being disrupted by players that not only carry no legacy but can also take full advantage of new paradigms that lower the barrier to entry (cloud computing). For financial services institutions this means a complete rethink and rewrite of their playbooks. The challenge is how to tackle it. I have made my own observations along the way, which I'm sharing here:

Choose your team structure (and/or location) wisely

Offshore, near-shore, on-shore and mid-shore are buzzwords with a single focus: cutting costs. What is not a buzzword is productivity. All of these models can work, but there is always give and take - short-term wins and losses, and long-term implications.

Agile teams work best when they are in the same room for the duration of the sprint. Trying to do agile across multiple timezones and/or across different disciplines is hard. It's not impossible, but it is hard, and the outcome will not be optimal. However, it is also clear that, especially for large decentralized organizations, it's not viable to fly in whole teams for sprints. That undermines the primary reason these teams were decentralized in the first place: to cut cost. The company therefore has a decision to make: keep certain teams decentralized and fly in key resources when needed, relocate strategically (all resources to the same offshore center), or abandon the decentralized model in favor of higher productivity at higher cost. All these options work, but they have to be executed in a way that limits the disruption to the organization. More pragmatic is a combination of these models that can be implemented and then streamlined as the best strategy emerges. The challenge is to keep the approach consistent and insulated from senior management reshuffles.

Start lean product development in your organization

The basic idea behind lean product development is to create an MVP (minimum viable product) that can be launched to gauge and measure the interest of early adopters (test & learn). The goal is to fail or succeed quickly, before a significant resource commitment is made. The MVP can be a non-functional mock-up of an idea or a limited functional prototype - anything that provides sufficient understanding of what the actual product will do or look like, and an opportunity to measure that interest.

Another core principle of 'lean' is to iterate over concepts until they are proven to be either failures or successes. To measure this, the product manager might use tools such as A/B testing or co-creation. In the case of failure, the MVP can be abandoned, or a pivot needs to take place to steer the concept in a different direction. Obviously, the challenge is to recognize when the end of the line is reached and when further pivots become counterproductive.
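To make the A/B-testing idea concrete, here is a minimal sketch of how an experiment might assign users to buckets and compare conversion. The experiment name, user ids and data are entirely hypothetical; real tests would add statistical significance checks:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "mvp-signup") -> str:
    """Deterministically assign a user to bucket A or B by hashing their id.

    Hashing keeps the assignment stable across sessions without storing state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def conversion_rates(events):
    """events: iterable of (user_id, converted) pairs; returns rate per bucket."""
    totals = {"A": [0, 0], "B": [0, 0]}  # bucket -> [conversions, users]
    for user_id, converted in events:
        bucket = totals[assign_variant(user_id)]
        bucket[0] += int(converted)
        bucket[1] += 1
    return {k: (conv / n if n else 0.0) for k, (conv, n) in totals.items()}

# Toy event log: which users converted after seeing their variant.
rates = conversion_rates([("u1", True), ("u2", False), ("u3", True), ("u4", False)])
print(rates)
```

The deterministic hash is the key design choice: the same user always sees the same variant, so the measurement isn't polluted by users flipping between experiences.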

The obvious challenge of this approach is the need to fail. This isn't a popular notion in large and complex organizations where someone's upward aspirations depend on successes. There are ways to mitigate this - for example, by outsourcing the process to a third party - but this only works if there is buy-in at the top. Moreover, it is not conducive to the existing processes that a large organization has worked hard to harmonize across all teams and regions. It is therefore something that should be tried outside of those processes.

Abstraction is important (or how I changed my platform with all the legacy in place)

As my anecdotes about system and business logic complexity suggest, abstracting functionality through services and APIs is fast becoming the remedy in complex enterprise environments: it reduces the impact on the platform and the underlying systems and thereby allows a quicker go-to-market. The idea is to create APIs (application programming interfaces) that standardize the underlying service in such a way that the consumer (or contributor) of data has to use the service the way it was provided. If there is a need to change the service (for example, to add an additional data element), this is done as part of a new API release through versioning. The existing service remains and continues to work. The result is: a) little or no impact to existing infrastructure - or at a minimum a standardized stress-test, b) enhancements to existing services that every consumer can leverage [when ready to do so], and c) the opportunity to externalize or internalize these layers of abstraction so that other parties and systems can benefit from the service. This all sounds complex, and to a certain degree it is - the hard part is to set the organization up in such a way that IT groups follow the principle of service generation consistently, be it for new systems or legacy. If they do, they will create platforms that can be tremendous sources of revenue and agility. The best example of this is Amazon where, according to Steve Yegge, Jeff Bezos mandated that internal systems communicate only through services. In his opinionated rant, Steve calls this the single most important thing Amazon has ever done.
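The versioning pattern described above can be sketched in a few lines. This is a purely illustrative example - the trade-status service, field names and routes are hypothetical - showing how a v2 release adds a data element while the v1 contract keeps working unchanged:

```python
# Hypothetical backing store for a trade-status service.
TRADES = {"T100": {"status": "settled", "venue": "NYSE"}}

def get_trade_v1(trade_id: str) -> dict:
    """v1 contract: exposes only the status field."""
    trade = TRADES[trade_id]
    return {"id": trade_id, "status": trade["status"]}

def get_trade_v2(trade_id: str) -> dict:
    """v2 contract: adds the venue element; v1 consumers are unaffected."""
    return {**get_trade_v1(trade_id), "venue": TRADES[trade_id]["venue"]}

# Both versions stay routable side by side; consumers migrate when ready.
ROUTES = {
    "/v1/trades": get_trade_v1,
    "/v2/trades": get_trade_v2,
}

print(ROUTES["/v1/trades"]("T100"))
print(ROUTES["/v2/trades"]("T100"))
```

The point of the sketch is the shape, not the code: consumers bound to `/v1/trades` never see the new field, so the enhancement ships without a coordinated migration across every downstream system.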

Transformation is change

The important takeaway (for me - and hopefully others) is that transformation is change, and that change impacts the organization, the people and the processes. Further, the digital transformation (or disruption) we see happening in some industries requires that existing businesses go all out and try things. Fear of failure is not something you can afford if you're no longer competitive because of your costs, your inability to get to market fast, or both.
