Updates from February, 2019

  • Mark Sinnathamby 9:58 PM on 16/02/2019 Permalink | Reply  

    Migrating Legacy Monoliths to Microservices 

    A short discourse on moving legacy monolithic systems towards a modern microservice-based architecture, inspired by Vaughn Vernon’s talk “Rethinking Legacy and Monolithic Systems” (source: InfoQ), which is shared below.


    Legacy monolithic systems are a bit like the Imperial Star Destroyers in Star Wars: large and complex, slow to maneuver, unable to land and function on planets, and highly expensive to replace if destroyed. Modern microservices are more akin to the X-wing fighters or the multi-troop transport ships: very good at doing a single job well, available in large numbers so they can scale as needed, and relatively cheap to replace or change.


    Imperial star destroyer – a monolith of functionality and resources (source)


    Multi-troop transport – Best at deploying B1 battle droids only (source)

    The analogy above is perhaps not the best, but monoliths are mostly legacy systems that are still in operation because they still bring great value to the business, and may even be the reason the business exists and survives. These systems are mostly found in the larger enterprises – the Fortune 500s and the multinational corporations – but have also existed in small to medium businesses over the years. They sit and churn out value, and become so hard to change or replace that enterprises sometimes bring COBOL developers out of retirement and pay them huge sums to maintain the systems and keep them in step with business changes.

    Microservices are a relatively new approach to designing complex systems, and they also reflect the units of deployment of a system in production. Although the advantages of microservices are well discussed in many books, talks, and real-life case studies, what is not very apparent up-front is why and when you should migrate a legacy monolith to a microservices architecture, and the best approach to do so given the context of the business domain.

    In my past work experience, I have encountered two types of monoliths:

    1. One was a huge system that had to be redeployed in its entirety whenever any line of code changed, but it had well-defined interfaces through which components and sub-modules talked to each other. Its software components were structured around domain-driven design bounded contexts and communicated via a message bus, which made it very easy to maintain, but it was still a monolith.
    2. The other was essentially a big ball of mud. In fact, the opposite had happened: the system was historically a collection, or product suite, of different applications, but all of them had been merged into one code base down the line. Though it had a semblance of a layered architecture, most of the code never followed that architecture, and there were no business domain verticals defined or reflected in the code to aid maintainability.

    Sam Newman (author of Building Microservices) describes a bounded context as a good fit for the size of functionality that can be wrapped into a microservice and treated as a unit of deployment. This is important for two main reasons: first, a bounded context scopes business functionality that can be grouped together under a common context, and second, that common context is decided by the ubiquitous language. Because the ubiquitous language is essentially the language of the business domain, the design of bounded contexts, and ultimately of microservices, is linguistically driven in the language of the business.
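
    To make this concrete, below is a minimal sketch (in Java) of what a bounded context’s public face might look like when it is expressed in the ubiquitous language of the business; the ‘Invoicing’ context, the Invoice record, and the InvoiceService operations are hypothetical names invented for illustration rather than anything from the book or the talk. The whole package would then be the unit that is deployed as a single microservice.

        // Hypothetical 'Invoicing' bounded context, named in the ubiquitous language
        // of the business. Everything in this package ships together as one
        // deployable microservice.
        package com.example.invoicing;

        import java.math.BigDecimal;
        import java.time.LocalDate;
        import java.util.Optional;

        /** A domain concept named exactly as the business names it. */
        record Invoice(String invoiceNumber, String customerId,
                       BigDecimal amountDue, LocalDate dueDate) {}

        /** The context's public operations, phrased in business terms rather than
            technical ones (no rows, DTOs, or managers leaking across the boundary). */
        interface InvoiceService {
            Invoice raiseInvoice(String customerId, BigDecimal amountDue, LocalDate dueDate);
            void settleInvoice(String invoiceNumber);
            Optional<Invoice> findOverdueInvoice(String invoiceNumber);
        }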

    Of the two types of monoliths described above, the big ball of mud, which followed no proper architecture, would have to undergo a lot of refactoring internally, as a monolith, before any consideration could be given to breaking it apart. This type of system is very hard to migrate, and is generally overhauled down the line with a complete rewrite. The other type, with well-defined bounded contexts and communication interfaces, was more interesting: it already begins to resemble a microservices architecture but is not quite there yet. In either case, one very economical approach for migrating the monolith to a microservices architecture is the strangler approach.

    When using the strangler pattern, what we are essentially trying to do is write any new functionality outside of the monolith, in a separate application. This could be new features to the system, existing features from the monolith, or even a complete bounded context located within the monolith. The new application is connected to the monolith via an anti-corruption layer, so called because it protects the domain model of the new application or service from the stale or obsolete domain model of the monolith. The layer also keeps the new application up to date with the monolith’s state by streaming events, and serves as a reverse-migration channel back to the monolith, since data coming in via the new service may need to be ingested into the monolith to serve its other external consumers. The image below shows the principles in action; the ‘glue code’ bridge shown there is essentially the anti-corruption layer in this case.


    Strangler pattern in action (source)
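
    To illustrate the translation job of the anti-corruption layer, here is a minimal, hypothetical sketch in Java (the legacy event shape, the Order model, and the status codes are all invented for this example): the layer is the only place that understands both the monolith’s stale vocabulary and the new service’s domain model, so the legacy shape never leaks across the boundary.

        // A minimal anti-corruption layer sketch: it consumes a legacy-shaped event
        // from the monolith and translates it into the new service's own domain model.
        // All names and codes here are hypothetical.
        import java.math.BigDecimal;
        import java.time.Instant;

        /** The shape of the data as the monolith emits it (flat, cryptic legacy codes). */
        record LegacyOrderEvent(String custOrdRec, String statCd, String amtCents, long epochSecs) {}

        /** The new service's domain model, free of legacy vocabulary. */
        record Order(String orderId, OrderStatus status, BigDecimal amount, Instant placedAt) {}
        enum OrderStatus { PLACED, SHIPPED, CANCELLED }

        /** The anti-corruption layer: the only place that understands both models. */
        class OrderTranslator {
            Order toDomain(LegacyOrderEvent e) {
                OrderStatus status = switch (e.statCd()) {
                    case "01" -> OrderStatus.PLACED;
                    case "02" -> OrderStatus.SHIPPED;
                    default   -> OrderStatus.CANCELLED;   // any other legacy code treated as cancelled in this sketch
                };
                return new Order(
                    e.custOrdRec(),
                    status,
                    new BigDecimal(e.amtCents()).movePointLeft(2),  // cents -> currency units
                    Instant.ofEpochSecond(e.epochSecs()));
            }

            public static void main(String[] args) {
                LegacyOrderEvent legacy = new LegacyOrderEvent("ORD-1042", "02", "1999", 1_550_000_000L);
                // Prints the translated Order with status SHIPPED and amount 19.99.
                System.out.println(new OrderTranslator().toDomain(legacy));
            }
        }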


    This is, in a nutshell, how a ‘strangler application’ works to remove (or strangle out) old functionality from the monolith into a new service. Extracting a piece of functionality from the monolith and creating new external services sounds trivial at first, but there is complexity in building the anti-corruption layer and keeping the new service(s) and the monolith in sync. Further strategies such as Event Interception, Asset Capture, and Content-Based Routing aid in setting up a strangler migration (a minimal routing sketch follows the list below). The trade-off for this initial complexity is that you:

    • spread the development effort of migrating the monolith into microservices over time
    • do not disrupt existing business operations
    • can economically show incremental business value, compared to a full rewrite
    • and, in general, get a faster migration path than completely rewriting the application
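
    The routing sketch below is a rough, hypothetical illustration of the Content-Based Routing idea only (the path prefixes and backend URLs are invented, and a real setup would more likely live in an API gateway or reverse proxy): a thin facade in front of the monolith inspects each request and sends it either to the monolith or to the new strangler service, and as more functionality is extracted, more routes flip over until the old path can be retired.

        // A minimal content-based routing sketch for a strangler migration.
        // Requests whose path belongs to an already-extracted capability go to the
        // new service; everything else still goes to the monolith. All names are hypothetical.
        import java.util.Set;

        class StranglerRouter {
            private static final Set<String> EXTRACTED_PREFIXES = Set.of("/invoicing", "/quotes");

            private final String monolithBaseUrl = "http://legacy-monolith.internal";
            private final String newServiceBaseUrl = "http://invoicing-service.internal";

            /** Decide which backend should handle the incoming request path. */
            String routeFor(String requestPath) {
                boolean extracted = EXTRACTED_PREFIXES.stream().anyMatch(requestPath::startsWith);
                return (extracted ? newServiceBaseUrl : monolithBaseUrl) + requestPath;
            }

            public static void main(String[] args) {
                StranglerRouter router = new StranglerRouter();
                System.out.println(router.routeFor("/invoicing/raise"));   // routed to the new service
                System.out.println(router.routeFor("/payroll/run"));       // still routed to the monolith
            }
        }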

    Why, when, and how would you decide to go about an endeavor such as this? First, there is a famous old adage: “when you find yourself in a hole, stop digging”. The deeper you dig, the harder it is to climb back out. The same applies to monoliths – the more features and functionality you add to a monolith, the harder it will be to transform and migrate it later. Instead, treat any new set of requirements that comes in as the initial strangler application to kick-start a controlled migration into a new service. This may need a lot of management buy-in (especially with in-house enterprise IT departments, which I can attest to), but the argument is for a slice of functionality (which is more economically viable), not a complete rewrite.

    If there is no major new feature, an existing bounded context within the monolith can be extracted into a new service as the strangler. In this case, the bounded contexts should be ordered by their size and business value, and the smallest one with the lowest business value should be selected and migrated into the new strangler app first. Done right, this gradual migration results in the microservices architecture the business requires, providing the maintainability and business agility needed to support an enterprise transformation that would not have been possible with the monolith in place. Any type of transformation and change in IT should provide value to the business, and that includes migrating legacy systems to modern microservices. If there is inherent value, but it is not transparent enough to observe because of the approach used in migrating the monolith, then the migration is a failure even before it has started.
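
    As a toy illustration of that selection rule (the context names and the size/value scores below are entirely made up), this sketch orders candidate bounded contexts so that the smallest, lowest-value one is strangled out first:

        // Ordering candidate bounded contexts for extraction: smallest size and
        // lowest business value first. Contexts and scores are hypothetical.
        import java.util.Comparator;
        import java.util.List;

        record Candidate(String boundedContext, int relativeSize, int businessValue) {}

        class ExtractionOrder {
            public static void main(String[] args) {
                List<Candidate> candidates = List.of(
                    new Candidate("Invoicing",     3, 4),
                    new Candidate("Notifications", 1, 1),
                    new Candidate("Pricing",       4, 5));

                candidates.stream()
                    .sorted(Comparator.comparingInt(Candidate::relativeSize)
                                      .thenComparingInt(Candidate::businessValue))
                    .forEach(c -> System.out.println("Extract next: " + c.boundedContext()));
                // Prints Notifications first: the smallest, lowest-value slice to strangle out.
            }
        }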

    The video of Vaughn Vernon’s talk, referenced at the start of this post, is what reminded me of similar work I had done in past tenures and inspired me to put my thoughts into this blog post.

  • Mark Sinnathamby 2:04 AM on 19/04/2017 Permalink | Reply  

    The Design Thinking Approach 


    “Design thinking is a human-centered approach to innovation that draws from the designer’s toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success.” – Tim Brown | President and CEO, IDEO

    This evening, I attended a very inspiring talk titled ‘Trends in Innovation and Knowledge’, delivered by Dr. Raomal Perera, on innovation, creativity, and particularly the design thinking process. It was a short and succinct talk, touching upon how the economic center of gravity has traversed the globe from past to present, the mechanics of an IDEO team that innovated on and improved the design of a shopping cart, and the design thinking process itself. This post is mainly about the key takeaway from the session (design thinking), interwoven with a little of my own insights, learnings, and musings.

    Design thinking, both the philosophy and the process, has a rich history, as I found out from this article on Medium. It is a human-centric problem-solving and innovation approach that is creative, effective, and applicable to many varied fields and industries. During the talk, Dr. Raomal conveyed the idea that ‘design’ is not a product, an artifact, aesthetics, or a document/diagram, but a process. This brought to my mind (coming from a software engineering background) a quote from a recent book I read by James Coplien, ‘Lean Architecture for Agile Software Development’. That particular quote goes as follows (emphasis added by me):

    “You might ask: Where does (software) architecture start and end? To answer that, we need to answer: What is architecture? Architecture isn’t a sub-process, but a product of a process – a process called design. Design is the act of solving a problem.” (Lean Architecture for Agile Software Development, James Coplien – 2010)

    To me, many of the ideas in the design thinking process resonate with the ideology and outlook that James Coplien has towards building really great software systems. His many talks on capturing the human mental model in code, and his formulation of the DCI (Data, Context, and Interaction) architecture, are testament to that. The design thinking process and approach, used correctly, would be a game changer in any industry, domain, field, or academic setting. More so in the software services domain, whose core objective could basically be summed up as eradicating end users’ pain points and enhancing their experience through value addition.

    Of course, as Dr. Raomal mentioned at the end of his talk, in order for any creativity, innovation, or seeds of design thinking to take root in a company or enterprise, there should be a high level of support and fostering in terms of culture, habitability, and resources (and in this context resources do not always equate to R&D budgets). But done right, the design thinking approach will produce amazing products, services, and systems that go beyond end users’ expectations and solve their problems in some of the most innovative and creative ways possible.

    The video at this link contains a short but informative high-level overview of what design thinking is all about. For a short explanation of the design thinking process and its components, the video here does a great job in my opinion. Finally, for those who want to delve deeper into the design thinking process and related activities, take a look at this Stanford webinar.

  • Mark Sinnathamby 2:53 PM on 22/03/2017 Permalink | Reply  

    Engineer or Programmer? The (non existent) Existential Dilemma… 


    This article describes my opinion of the term and title ‘software engineer’, the implications behind it, and how it ties (or should tie in) to the viewpoints of the engineering discipline at large. (Blog header illustration created using Canva.)

    I very rarely pay attention to philosophical debates relating to software development, but last week I read an article I came across at ‘The Atlantic’, which prompted a train of thought, including this blog post. I suggest you read it before continuing, as it has a very interesting premise on why programmers should not call themselves (software) engineers. I neither agree nor disagree with the author or the article, but it contained some ideas that motivated me to document my thoughts and opinions in this post.

    First, a bit about the term ‘software engineering’ from my own perspective and understanding. Throughout my career, I have worked with people who were (and are) referred to as software engineers, themselves graduates of computer science, information systems, or software engineering, the title matching the name of the undergraduate degree each followed in his or her academic journey. I myself hold a degree in ‘software engineering’ issued by a well-known university situated around Regent Street, north-west of central London in the United Kingdom. Therefore, the term or title ‘software engineer’ was something I was (and am) quite comfortable with, whether in how I introduce myself or how I refer to my colleagues.

    On the origins of the term ‘software engineering’, the article in question quotes a fact commonly taught in the industry and field of software development today: that the term ‘software engineering’ was first deliberately used at the 1968 Garmisch-Partenkirchen conference organized by NATO. It is said that the term was coined provocatively, in a bid to challenge software development at the time to align with the rigid processes and methods followed by the established branches of engineering. But it was interesting for me to come across an article by Bertrand Meyer that provides some evidence that the term was used at least two years prior to the NATO conference, and in a positive light, indicating (at the time) that software development should be considered an engineering discipline in its own right.

    The established engineering disciplines are termed ‘established’ based on the rigid processes, regulatory bodies, certifications, licenses, continuous learning, and ethical codes they follow. I am quite firm in my understanding that this is a good thing. But some of these aspects came about mainly because of the problems and engineering disasters that were prevalent in the 19th and 20th centuries, which exposed the need to bring in standards, ethics, regulations, certifications, and the rest. There is always a scenario which prompts the need for better processes, regulations, and policies. This was widely characterized in the software development world over the last two decades, when a myriad of development philosophies and approaches were invented. Even Robert C. Martin discusses the situation, and tries to convey the message that if software developers (engineers?) don’t start getting their act together, we will be imprisoned by regulations and policies dictating how we should develop software.

    If the practice and business of developing software is expected to evolve into an (established) engineering discipline, its progress will always be compared to that of the other mature engineering disciplines. This outlook and comparison was not favoured by some, which resulted in a large number of software development circles stating that software is more art/craft/science than engineering, and that there are better ways to build software than using rigid engineering processes. This viewpoint is widely respected and quite popular in the software development world today. On this side of the wall, software development should be lean, agile, and dynamic, without heavyweight engineering processes and regulations retarding these ideas, which is practical for all intents and purposes. Software is an artifact and material unlike anything that traditional engineers are used to working with, and early lessons taught us that building software to specification with rigid processes, blueprints, and schematics was not the best way to go about things.

    But the problem was (and still is) that the computing infrastructure (used by the public, consumers, clients, business users, etc.), which is the end product of software development, still requires the same rigidity, reliability, safety, and conformance that a bridge or building built by a civil engineer should have. This is expected by a society which functions daily on the trust and understanding placed in the infrastructure around them, built by engineers. And therein lies the disdain towards the software development world (mostly from the engineering community, I suppose), where systems go wrong, bugs seem to evolve and reproduce, and software development is an activity that seems random and haphazard. The engineering world has its fair share of problems and disasters, just as there are software project failures and disasters. But a crucial difference is that the engineering disciplines will emphasize and prioritize regulations, standards, public health and safety, and ethics, even when failing to deliver.

    Just as engineering has foundations in the natural sciences, software ‘engineering’, too, has its origins in philosophy, mathematical logic, and various other branches of science that have no direct correlation to what we do today as developers of software. The early proponents and pioneers who set the stage for software development to become a widespread phenomenon were computer scientists, mathematicians, particle physicists, and information theorists. Not engineers per se. I was searching online and came across the Professional Engineers Ontario website, where the definition of professional engineering is formally stated. I specifically chose Canada as it is one of the countries that are extremely strict about the title ‘engineer’ and about who is licensed to carry it. I would like to directly quote the definition of the practice of engineering, taken from their website:

    Professional engineering is:

    1. any act of planning, designing, composing, evaluating, advising, reporting, directing or supervising (or the managing of any such act);

    2. that requires the application of engineering principles; and

    3. concerns the safeguarding of life, health, property, economic interests, the public welfare or the environment, or the managing of any such act.

    Ring a bell? At least for me, it did. This seems to sum up what I have been doing for the past few years in my tenure as a software engineer. In a sense, what is missing are the regulatory bodies, the policies, licenses, code of ethics, and the rest of the bells and whistles.

    Let me summarize (thank you, if you are still reading) and propose an approach towards building better software, gaining the respect of traditional ‘mature’ branches of engineering, and proving ourselves worthy of the title ‘software engineer’.

    • First, I lied about software development not having a code of ethics: software engineering does have a code of ethics, known as the IEEE CS/ACM Code of Ethics and Professional Practice, but no regulatory body to enforce it. It would be a great start to read it and see how applicable it is to our everyday work. We may not have regulatory bodies, policies, or practice licences, but that is not an excuse for not committing to the best standards and practices, or for not being ethical. In the popular book Content Inc., one of the main (controversial) ideas conveyed by the author is that you should build an audience before you figure out what it is you want to sell them. Ours is a somewhat similar situation, where we need to build our audience (a society expecting traditional engineering recipes) and figure out what to sell them (trust and reliance in software development using tailored, non-traditional engineering processes, backed by ethics and a sense of responsibility).
    • Second, the use of sound engineering principles: the design and architecture of the software-intensive systems we build, the practices and guidelines followed when coding, the project management and costing aspects, and the control and quality checks all rely on the application of sound engineering principles in the construction of software. True, there is no universal standard or guideline dictating how we go about doing this (in fact there is a great variety of approaches and methodologies), but whatever we trade off or cut down on, it is advisable to always reflect and check whether what we are doing can still be comfortably called ‘applying sound engineering principles’, without any hint of doubt in our minds.
    • Third, continuous learning and certifications: The software development world is plagued with people who are too busy learning and have no time to work, or are too busy working and have no time to learn. Jokes aside, continuous learning is something we developers are familiar with, due to the evolving trends and technology waves that tow us in their wake. There is no one person or body to standardize it and tell us the how, why, what, and when to learn, but we have ample certification programs to showcase to employers how good we are. Maybe we should have that same pride and attitude, in showing that we are capable of building great software, and that our certifications and achievements are a testament to our responsibility and obligations towards our stakeholders as well.

    That was not a short summary, and I apologize for that. Looking back, this post is not a negative response to the original article at ‘The Atlantic’; rather, I was interested in critically analyzing and identifying the weaknesses in software engineering, which is constantly being targeted by the traditional engineering disciplines. That being said, in closing, a big salute to all the software engineers, developers, and programmers who constantly and daily contribute to advancing the field of software engineering towards being a mature and well-respected discipline and industry.
