Sunday, July 23, 2017

Innovation Speed — Should you avoid building a UI if you can?

A lot of times, as the technical team in a company, you get pressured to build a self-service tool so that the business/product teams can perform their configuration and content changes without involving the tech team.
Should you just build what the business/product teams requested, or should you justify why this should not be done, in the interest of both the business and the tech team and ultimately in the interest of your end customers?
To question the requirements from your product/business teams, you need very good justifications and counter-proposals. In a good work environment, the requirements should not just come from the business and be executed by the tech team; it should be a two-way conversation. The perception is that the tech team just wants to deal with technical details, but the reality is that the tech team wants to be involved at the idea phase, before the requirements get baked and turned over for design and implementation. Early in my career I was happy just focusing on technical details and enjoying the process of solving problems through coding. Later I realized that my motivation comes from knowing what the business team wants to achieve, because it means I can translate that into much better working software.
Let’s use Price & Product Configuration and its UI tool as an example. This subject is very dear to me, and I have a certain vision that I would like to share in order to get to the essence of this article.
Let’s assume you built a set of microservices and database structures behind each service to support the following capabilities:
  • Creating offers (no UI but configurable) and specifying the price
  • Creating benefits/features (no UI but configurable)
  • APIs that allow you to fetch and query offers in order to decide what offer(s) to display to customers
  • An API that allows you to fulfill offers and ultimately create customers’ subscription records
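To make the "no UI but configurable" idea concrete, here is a minimal sketch of what one offer definition might look like as plain configuration that the tech team edits and deploys; every field name below is hypothetical:

# Hypothetical offer definition; all field names are illustrative only.
STREAMING_PLUS_OFFER = {
    "offer_id": "streaming-plus-2017",
    "price": {"amount": 9.99, "currency": "USD", "billing_period": "monthly"},
    "benefits": ["hd-streaming", "two-screens"],  # references to benefit/feature records
    "eligibility": {"new_customers_only": True},
    "active": True,
}

A price change is then a reviewed, deployed configuration change rather than a UI operation.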
Let’s say these features are very stable in production, but you still need to keep iterating: building new capabilities and enhancing existing ones for the business requirements that are geared towards end customers.
At the same time, you may be getting business requirements about self-servicing these features for internal customers (your product and business teams). These requirements are all about the above-mentioned microservice capabilities and introducing some UI tool that allows the business team to perform price and product configuration (creating offers, creating benefits, configuring and changing prices for offers, etc.).
The goal, looking purely from the tech team’s angle, is to have agility and to innovate. When you introduce UI tools on top of your evolving microservices, it really slows down your ability to innovate and adapt quickly. Why does building a UI tool slow you down? Think about the possible impacts if you change the underlying functionality and structure of your microservices:
  • You will also need to change the UI to accommodate these changes
  • You will possibly need to train the users of the UI if the changes are drastic enough
  • And you will need to roll all of this out without causing too much disruption
All of this requires a lot of effort and coordination. This effort and coordination is acceptable after you have a mature product in production, but it is too much overhead if you are still rapidly evolving.
If you have a choice, you really need to consider delaying any self-service UI tools for your business users until the data structures and the microservices around them are mature enough; otherwise, you would just be slowing down the delivery of new features for end customers. The tech team can deliver all the changes that the UI tool would make for business users by simply making application config changes, and that is a very good alternative during this crucial innovation phase.
Therefore, as the tech team you need to be able to convey this message to your business and propose investing in the end-customer capabilities of these microservices before building a self-service UI tool. At the same time, you need to give the business some visibility into the configuration of these microservices so they can better visualize what capabilities exist so far. That could be just a BETA version of a simplistic UI tool that exposes purely READ-ONLY information. If it is a “read-only” version of the tool, then the effort is much lower and there is less to consider from the security point of view. Overall, I think this is a good compromise for some amount of time to keep the agility at a high level and to keep the innovation going.
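As a rough sketch of that read-only BETA tool's backend, assuming a Flask service and an in-memory stand-in for the real configuration store (both are illustrative, not an actual implementation):

from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative stand-in for the real offer configuration source.
OFFERS = {"streaming-plus-2017": {"price": 9.99, "active": True}}

@app.route("/admin/offers", methods=["GET"])
def list_offers():
    # Read-only: the BETA UI can render this, but nothing can be changed here.
    return jsonify(OFFERS)

if __name__ == "__main__":
    app.run(port=5000)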
Let’s go back to the days when Google released the beta version of GMail. They kept it in beta for quite some time, and they informed users that they had no obligation to support customer requests. Why did they do this? They did not want to slow down the innovation of GMail in its early days by introducing the formal UI; that’s why they kept it as a beta version. I am not saying that you should do this for as long as Google did, but the overall conclusion is the same.
In conclusion, consider delaying the official self-service UI tools for your product/business users in order to keep innovation and agility at a high pace during this crucial phase in the evolution of your microservices and data structures/configurations.
Thank you for reading this article. Please follow me here on Medium.com or check out my personal blog: http://www.almirsCorner.com
Almir Mustafic




Code to get around the process OR Code it right and improve your deployment methodology and the process

Code is just code. Process is just process. However, when you combine the two, they impact each other in ways that they should not.
Let’s assume you built a platform for your team. Then the process came later; it slowed things down but kept them as stable as possible. Then software engineers realized that they could follow the process yet structure their development so that changes move through that process more quickly. I am talking about writing code that is so over-configured that it lets you deploy changes to production faster simply because what you are deploying is “just” a configuration change. When your platform is over-configured, a configuration change carries so much weight that it can introduce functional changes, to the degree that makes you wonder whether you are adding serious risk to the business by taking a shorter path through the process workflow. The risk is there because software engineers don’t feel comfortable making these heavy-weight configuration changes, and nobody likes touching the engine code that drives this heavy configuration.
This really smells like coding to get around the process. Is this right?
I am all about automating the process, but over-configuring your code just for the purpose of getting around the process, or moving more quickly through it, is totally wrong.
As a software engineer or a technical lead, every time you find yourself thinking about how to make the code extremely configurable in order to get around the process, you should ask yourself the following questions:
  • How can I help automate or implement continuous integration in such a way that I can do the right thing in my code? The right thing could be making an actual code change that requires a compile, build and deploy, instead of making the code configurable, complicating your codebase and increasing maintenance costs.
  • How can I work with the Change Control team to improve the SDLC/PDLC process so that it jives with the platform/framework you have implemented in your company?
It is OK to have a simple code change instead of overcomplicating the code to make it configurable just so that your change ends up being a configuration setting.
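To make the contrast concrete, here is a contrived sketch: the first version hides the business rule in configuration so it can ship as a "config change", while the second is the simple, reviewable code change (the discount rule itself is made up):

# Over-configured: the business rule hides inside configuration.
DISCOUNT_RULES = [{"field": "total", "op": ">", "value": 100, "discount": 0.1}]

def apply_discount_configured(order):
    for rule in DISCOUNT_RULES:
        if rule["op"] == ">" and order[rule["field"]] > rule["value"]:
            return order["total"] * (1 - rule["discount"])
    return order["total"]

# Straightforward: a one-line change when the rule changes, compiled, tested and code-reviewed.
def apply_discount(order):
    if order["total"] > 100:
        return order["total"] * 0.9
    return order["total"]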
In conclusion, it is important that you design your process at the same time you are building your new platform; that’s how they complement each other best. As for software engineers: please don’t code to get around the process. Code it right, and identify incremental improvements to your software deployment approach.
Thank you for reading. Please follow me here on Medium.com or check out my personal blog: http://www.almirsCorner.com
Almir Mustafic





Sunday, July 9, 2017

Figure out a solution before you open a book of design patterns — Quick Advice

A lot of times software developers get caught in the situation where they are reaching for the book of design patterns to see how they could solve a specific problem.
Programming languages are rich enough to help you solve different problems, and many programming languages and their frameworks can be used to easily implement different design patterns. However, when something like this is readily available to you, you need to have enough discipline NOT to reach for it right away.
What do I mean by this?
It is important to first truly understand what problem you are trying to solve. Work with your product owner and make sure that the requirements are true business requirements that do not imply any particular implementation. Once the requirements are in good shape, then you know what problem you are trying to solve.
Then you need to work out different solutions for that problem. This is where you use your overall software engineering experience. You should NOT open a book of design patterns and look for what could solve your problem; rather, work out a solution, and if it happens to match one of the known patterns, the implementation will be that much easier.
Please think about this any time you are presented with a problem to solve.
Thank you for reading. 
Almir Mustafic





Friday, July 7, 2017

Plotting graphs with Python — Simple example

As plotting graphs is something that my oldest kid started doing in school, I decided to finally spend some time doing it in Python. This article is about the basics, but I will later also get into the calculations of median, mean, mode and other more involved concepts of plotting graphs using mathematical formulas.
Here is an example that shows you the following:
  • takes one or more lists and plots them on the graph
  • plotting points on the graph, with or without connecting them into lines
  • applying labels to the x-axis and y-axis
  • applying an overall title to the graph
  • adjusting your scale based on what type of data you have for each axis
Install the following libraries before running the code below. By the way, the example is written for Python 2.7.x:
pip install matplotlib
pip install matplotlib-venn
Here is the example that you can play with as it has a command-line menu that allows you to choose which type of graph you want:
__author__ = 'Almir Mustafic'


from pylab import plot, show, title, xlabel, ylabel
from pylab import legend
from pylab import axis


def main():
    print("main program")
    while True:
        print_menu()

        choice = raw_input('Pick from the menu? ')

        if choice == '1':
            plotting_example01()
        elif choice == '2':
            plotting_example02()
        elif choice == '3':
            plotting_example03()
        elif choice == '4':
            plotting_example04()
        elif choice == '5':
            plotting_example05()
        elif choice == '6':
            plotting_example06()
        elif choice == '7':
            plotting_example07()
        elif choice == 'q':
            break

        print("")

def print_menu():
    print("=========================================================================")
    print('1. Plot a few points WITHOUT dots for the given point and connect as line')
    print('2. Plot a few points WITH dots for the given point and connect as line')
    print('3. Plot a few points WITH dots and WITHOUT the LINE')
    print('4. Plot Ladera Ranch town temperature with DOTS and LINE and NO X-Axis specific range')
    print('5. Plot Ladera Ranch town temperature with DOTS and LINE and X-AXIS range specified')
    print('6. Plot Ladera Ranch averages in 1996, 2006 and 2016 for 12 months. 3 graphs')
    print('7. Plot Ladera Ranch averages - Adjust AXIS based on min and max')
    print("q. QUIT")
    print("=========================================================================")


def plotting_example01():
    print("Plotting example 1............")

    x_numbers = [1, 2, 3]
    y_numbers = [2, 4, 6]

    # Plot the line WITHOUT dots for the given points
    plot(x_numbers, y_numbers)
    show()


def plotting_example02():
    print("Plotting example 2............")

    x_numbers = [1, 2, 3]
    y_numbers = [2, 4, 6]

    # Plot the line WITH dots for the given points
    # Markers can be 'o' or '*' or 'x' or '+'
    plot(x_numbers, y_numbers, marker='o')
    show()


def plotting_example03():
    print("Plotting example 3............")

    x_numbers = [1, 2, 3]
    y_numbers = [2, 4, 6]

    # Plot the line WITH dots and WITHOUT LINE
    plot(x_numbers, y_numbers, 'o')
    show()


def plotting_example04():
    print("Plotting example 4............")

    ladera_temp = [53.9, 56.3, 56.4, 53.4, 54.5, 55.8, 56.8, 55.0, 55.3, 54.0, 56.7, 56.4, 57.3]

    # Plot the line WITH dots and the LINE
    plot(ladera_temp, marker='o')
    show()


def plotting_example05():
    print("Plotting example 5............")

    ladera_temp = [53.9, 56.3, 56.4, 53.4, 54.5, 55.8, 56.8, 55.0, 55.3, 54.0, 56.7, 56.4, 57.3]
    # years = range(2000, 2013)
    years = range(2000, len(ladera_temp) + 2000)

    # Plot the line WITH dots and the LINE
    plot(years, ladera_temp, marker='o')
    show()


def plotting_example06():
    print("Plotting example 6............")

    ladera_temp_1996 = [53.9, 56.3, 56.4, 53.4, 54.5, 55.8, 56.8, 55.0, 55.3, 54.0, 56.7, 56.4]
    ladera_temp_2006 = [43.9, 66.3, 46.4, 63.4, 34.5, 75.8, 46.8, 65.0, 75.3, 64.0, 56.7, 46.4]
    ladera_temp_2016 = [23.9, 26.3, 36.4, 33.4, 44.5, 55.8, 66.8, 75.0, 65.3, 54.0, 46.7, 36.4]

    months = range(1, 13)

    # Plot the line WITH dots and the LINE
    plot(months, ladera_temp_1996, months, ladera_temp_2006, months, ladera_temp_2016, marker='+')

    # OR you can plot separately like this and the final show() method will know that it needs to plot everything that is queued up.
    # plot(months, ladera_temp_1996, marker='o')
    # plot(months, ladera_temp_2006, marker='o')
    # plot(months, ladera_temp_2016, marker='o')

    # Apply the legend to tell the graphs apart
    legend(['1996', '2006', '2016'])

    # Apply the TITLE, X-axis and Y-axis label
    title('Average monthly temperature in Ladera Ranch town')
    xlabel('Month')
    ylabel('Temperature in F')

    show()


def plotting_example07():
    print("Plotting example 7............")

    ladera_temp_1996 = [53.9, 56.3, 56.4, 53.4, 54.5, 55.8, 56.8, 55.0, 55.3, 54.0, 56.7, 56.4]
    ladera_temp_2006 = [43.9, 66.3, 46.4, 63.4, 34.5, 75.8, 46.8, 65.0, 75.3, 64.0, 56.7, 46.4]
    ladera_temp_2016 = [23.9, 26.3, 36.4, 33.4, 44.5, 55.8, 66.8, 75.0, 65.3, 54.0, 46.7, 36.4]

    months = range(1, 13)

    # Plot the line WITH dots and the LINE
    plot(months, ladera_temp_1996, months, ladera_temp_2006, months, ladera_temp_2016, marker='+')

    # OR you can plot separately like this and the final show() method will know that it needs to plot everything that is queued up.
    # plot(months, ladera_temp_1996, marker='o')
    # plot(months, ladera_temp_2006, marker='o')
    # plot(months, ladera_temp_2016, marker='o')

    # Figure out AXIS X and Y min and max and set it for the graphs.
    print(axis())
    xy_tuple = axis()   # returns tuple (x-min, x-max, y-min, y-max)
    my_ymin = xy_tuple[2]
    axis(ymin=my_ymin-20)
    axis(xmax=xy_tuple[1]+6)

    # Another way to set X-min, X-max, y-min, y-max
    # axis([xy_tuple[0], xy_tuple[1], xy_tuple[2], xy_tuple[3]])

    # Apply the legend to tell the graphs apart
    legend(['1996', '2006', '2016'])

    # Apply the TITLE, X-axis and Y-axis label
    title('Average monthly temperature in Ladera Ranch town')
    xlabel('Month')
    ylabel('Temperature in F')

    show()


################################################

if __name__ == "__main__": main()
Here is the command-line menu that you should see when you run this Python code:

For example, if you select option 6 (the graph that uses a lot of the features summarized above), then you will see the following graph pop up:



If you select option 7, then you will see how the scaling on the graph is adjusted explicitly through code:






Almir Mustafic





Sunday, July 2, 2017

Fractions in Python

Doing fractions or math using Python is not something new, but I was eager to try it out because my kids have been learning fractions in school. So I wanted to be that super dad who can solve fractions very quickly, or at least use it to verify their homework.
Here are samples that you can copy and paste into your text editor and try out without needing to make any modifications. These examples were tested with Python 2.7.13 on Mac OS.
Here are some basic examples setting fractions and doing some basic operations. There is also exception handling for zero division:
__author__ = 'Almir Mustafic'


from fractions import Fraction


"""
Playing around with fractions
"""


def main():
    print("main program")

    set_fractions()
    fraction_operations()
    input_and_calculate_fractions()


def set_fractions():
    print("set_fractions method ..............")

    f1 = Fraction(3, 4)
    print(f1)

    f2 = Fraction(8, 4)
    print(f2)


def fraction_operations():
    print("fraction_operations method ............. ")

    fsum = Fraction(3, 4) + Fraction(6, 4)
    print(fsum)

    my_float = float(fsum)
    print(my_float)


def input_and_calculate_fractions():
    print("input_and_calculate_fractions method ..............")

    # Enter for example: 2/3
    # Try entering 2/0 to see how it tells you that it is invalid
    try:
        a = Fraction(raw_input("enter a fraction: "))
        print(a)
        print(float(a))
    except ZeroDivisionError:
        print("Invalid fraction")

################################################

if __name__ == "__main__": main()
Here is a more complete example that allows you to interact with it. You can perform add, subtract, multiply and divide.
__author__ = 'Almir Mustafic'


from fractions import Fraction


def main():
    print("main program")
    perform_fraction_operations()


def perform_fraction_operations():
    while True:

        try:
            print("========================================================")
            fraction01 = Fraction(raw_input('Enter fraction: '))
            fraction02 = Fraction(raw_input('Enter another fraction: '))

            my_operation = raw_input('Perform one of the following operations: Add (A), Subtract (S), Divide (D), Multiply (M) : ')

            print("________________________________________________________")

            # Use .upper() so that 'add', 'Add' and 'ADD' all match;
            # .capitalize() would produce 'Add' and never equal 'ADD'.
            if my_operation.upper() in ('ADD', 'A'):
                add(fraction01, fraction02)
            elif my_operation.upper() in ('SUBTRACT', 'S'):
                subtract(fraction01, fraction02)
            elif my_operation.upper() in ('DIVIDE', 'D'):
                divide(fraction01, fraction02)
            elif my_operation.upper() in ('MULTIPLY', 'M'):
                multiply(fraction01, fraction02)
        except ValueError:
            print('Invalid fraction entered')
        except ZeroDivisionError:
            print("Zero division fraction. Do NOT do this :)")

        print("========================================================")

        answer = raw_input('Do you want to exit? Enter yes (or y) to exit, or just press Enter to continue: ')
        if answer == 'yes' or answer == 'y':
            break


def add(f1, f2):
    print('Result of adding {0} and {1} is {2} '.format(f1, f2, f1+f2))


def subtract(f1, f2):
    print('Result of subtracting {1} from {0} is {2}'.format(f1, f2, f1-f2))


def divide(f1, f2):
    print('Result of dividing {0} by {1} is {2}'.format(f1, f2, f1/f2))


def multiply(f1, f2):
    print('Result of multiplying {0} and {1} is {2}'.format(f1, f2, f1*f2))


################################################

if __name__ == "__main__": main()
Thank you for reading. 
Almir Mustafic



Saturday, July 1, 2017

Car Technology — Who is betting on what technology? Transmissions? Engines?

The technology in cars is very interesting to follow. A lot of innovation is happening around the entertainment systems in cars and the integration with our smart phones. However, there is another part of technology that is kind of happening in the background with car enthusiasts being positively or negatively impacted by the decisions that manufacturers have been making.
I am talking about the strategic approach that car companies have been taking with regards to:
  • Transmission technology
  • Engine technology
(1) Transmission Technology:
Here are the current transmission technology choices:
  • Regular manual
  • Dual-clutch automatics with shifting capability
  • Traditional automatics with or without shifting capability
  • CVT with or without emulated shifting capability
It is interesting to see how different car companies are betting on different technologies and it is not clear yet if that is just a tactical approach or long-term strategic approach that they are taking.
We know that regular manual transmissions are slowly going away and there are fewer and fewer choices from year to year. That is not a surprise, but it is definitely not a move that purist car enthusiasts (petrolheads) are fond of.
However, the interesting thing is following the decisions that car companies have been making with regards to Dual-clutch automatics, traditional automatics and CVTs.
For example, Honda is betting on CVTs for all their low-horsepower cars (Fit, Civic, Accord, CR-V, HR-V). However, Acura (their sister brand) is betting on the traditional automatics and dual-clutch transmissions (8-speed and 9-speed) that they put, for example, into the Acura ILX, Acura MDX and other vehicles. As a car enthusiast, it makes me wonder if the decision to use CVTs in Hondas is just a tactical/interim one, and whether they are ultimately putting their bets on the dual-clutch transmissions they have been using in their Acura vehicles. It could be that dual-clutch transmissions are costly and they are just waiting for the right time to introduce them in Hondas.
On the other hand, Nissan seems to be putting all their bets on CVTs for the majority of their lineup, and for their high-horsepower sports cars and SUVs they are sticking with traditional automatics and dual-clutch automatics.
As for Toyota, it is hard to tell what their direction is. Their hybrids and Toyota Corollas use CVTs, but the 4-cylinder and 6-cylinder Camry uses traditional automatics, and the newest one even has a rev-matching feature on downshifts. Maybe they are trying the 8-speed traditional automatic on the Toyota Camry to see how it works out. Maybe that transmission will also be introduced in the new Corolla when it comes out.
On the other hand, Jeeps and Chevy Camaro are betting on their new 10-speed traditional automatic that is supposed to improve fuel consumption and at the same time keep the revs in the power band for spirited driving.
BMW has always been investing into their traditional automatics for their regular line-up and they are just increasing the number of gears and optimizing the shifting. Other luxury car companies are also betting on traditional automatics and dual-clutch automatics.
It is interesting to watch this unfold from the side-lines, and I hope that traditional automatics and dual-clutch automatics prevail even though the CVTs I have experienced in Hondas are really good.
(2) Engine Technology:
Here are the technology choices for engines:
  • Turbocharged 3-cylinder engines to replace 4-cylinder naturally aspirated engines
  • Turbocharged 4-cylinder engines to replace V6 engines
  • Turbocharged 6-cylinder engines to replace V8 engines
  • Sticking with naturally aspirated 4-cylinder engines and improving efficiency and power with high compression ratios
  • Sticking with naturally aspirated 6-cylinder engines and improving efficiency and power with high compression ratios
  • Hybrids intended to be the pure economy choice
  • Supplementing regular engines with electric power, with performance as the main goal and fuel consumption as a side benefit
  • Sticking with naturally aspirated V8 and V10 engines
  • Pure electric motors
Governments in many countries are requiring car companies to improve fuel consumption and emissions from year to year. Therefore, car companies are under pressure to deliver, and many of them have no choice but to drop the number of cylinders and introduce turbochargers on their engines in order to meet the emission and fuel consumption regulations.
For example, Mercedes and BMW are going with turbocharged engines. They are replacing their naturally aspirated 6-cylinder engines with turbocharged 4-cylinder engines and in this process, they are keeping the horsepower the same while increasing the torque numbers and improving fuel consumption and reducing the emissions.
On the other hand, Honda has always been known for free-revving naturally aspirated engines with low torque, but they finally gave in and started releasing cars with turbocharged engines. Their recent introduction in the Civic and CR-V is a 1.5L turbocharged engine that does not have VTEC any more. This is the first time you can say that Civics have torque, but the purists out there are disappointed that the high-revving VTEC is gone. I have owned cars with both free-revving naturally aspirated engines and turbocharged engines. There is definitely something about high-revving engines that makes car enthusiasts smile, but on the other hand you have the torque of turbocharged engines, which is very practical in daily driving.
As for Toyota and Mazda, they seem to be betting on naturally aspirated engines with higher compression ratios; higher compression ratio helps you with fuel consumption and power gains as well. Mazda MX-5 (Miata), Mazda 3 and Mazda 6 all have naturally aspirated high-compression engines. Toyota is also betting on high-compression-ratio engines with their 4-cylinder and 6-cylinder engines that seem to be promising based on professional reviews.
It is interesting to watch these different tactical and strategic moves unfold in the automotive industry. I am sure that some of these moves will open up the market for some collectibles. For example, Acura Integra Type-R becomes even more valuable with the release of the new Civic Type-R with a 2.0L turbocharged engine. BMW M3 with the naturally aspirated 6-cylinder and 8-cylinder engine will be that much more valuable with the release of new M3 and M4 in turbocharged form.
In conclusion: as car enthusiasts, we are living in fun times when you have very good choices without spending too much money; it reminds me of the early and mid ’90s when the car scene was blooming.
Thank you for reading. 
Almir Mustafic





Monday, June 26, 2017

Top 25 things to achieve Enterprise-Level Microservices — What your team should accomplish

Developing applications / microservices is definitely a skill that your team needs to gain, but what does it mean when you say “I am done”? What is the definition of “DONE”?
A lot of times you hear statements like the following from us software engineers:
  • “I am done coding that part but I still need to wrap up my unit-testing”.
  • “My user story is done, but we just need to finish writing the automation tests.”
  • “I am done, but I just need to add better logging in the code and add some better exception handling.”
  • “I am done, but I have not deployed to QA environment yet; it works on my machine.”
We all have been in these shoes. We need to step back and look at this from the software engineering 10,000-foot level and truly understand what it means to deliver something and call it done and be enterprise worthy at the same time.
Here is my recommended list of things that your TEAM needs to accomplish in order to truly have production and enterprise worthy applications/microservices:
(1) Architecture diagram representing interactions of your family of microservices:
After going through the design and contract definitions of your APIs/microservices, your team needs to put together a single diagram that explains the interaction among microservices (owned by your team) and how these microservices interact with other teams’ microservices. This is basically the blueprint for your team’s work and this picture needs to be constantly updated as you are evolving your services.
(2) Service names, boundaries (domain lines) and API definitions/standards:
Names mean things. So you first need to properly name your services; those are the names you will use when talking to your teammates and to the clients of your services/APIs.
I have a separate article on how you go about defining what a microservice is: https://medium.com/@almirx101/micro-in-microservices-677e183baa5a
Essentially, you need to define the purpose and boundaries of your service.
Then you get into API routes and properly defining them for each service. The goal is to keep the routes RESTful, and if you run into a situation where they are not, that should trigger you to revisit the purpose and boundaries of that given microservice.
(3) Application CONFIG file defining your service:
The assumption here is that your platform is set up in such a way that you, as the owner of a microservice, properly set up the application CONFIG file which defines your microservice and all the resources it needs. This config file defines anything from what database you need, database details, what cache you need, what queues you need, how much memory and CPU you want your application to use, what physical server you want your application to run on, etc.
Basically this config file is the contract between you (as the owner of a microservice) and the DevOps scripts/tools that take this config file and use it as the definition to deploy your service into a given environment, whether in the cloud or on-prem.
You own this config file, but with ownership comes responsibility. It is important that this config file is code-reviewed and managed properly to keep the stability of your application.
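As a sketch only (shown as a Python structure for readability; the real file might be JSON or YAML, and every key below is an assumption), such a config file could look like this:

# Hypothetical service definition consumed by DevOps tooling; all keys are illustrative.
SERVICE_CONFIG = {
    "service_name": "offers-api",
    "runtime": {"memory_mb": 512, "cpu_units": 256, "instances": 3},
    "database": {"type": "dynamodb", "table": "offers"},
    "cache": {"type": "redis", "size_mb": 128},
    "queues": ["offer-events"],
    "dependencies": ["benefits-api", "customers-api"],  # see item (4) below
}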
(4) API Dependencies (Direct or via Pub/Sub approach):
As you are defining your microservices and API contracts, it is important that you explicitly think about API dependencies. Think about the type of dependency you want. You can directly call another microservice via an HTTP call (sync or async), or you can use the publish/subscribe methodology. It depends on your use case, and there is no right or wrong here. The important thing is that you give it explicit thought as you design your family of microservices, the interaction among them, and the interaction with outside services. Ideally you could have this API dependency definition within the config file mentioned above.
(5) API Dependency Analysis Tool:
As you have all these API dependencies, they need to be managed somehow. You need a holistic view of all your services. Therefore, you need to develop a tool that performs static analysis through your config files and maps a graph of all microservices and how they interact with each other. To supplement this static analysis tool, you would also need a tool that in real-time maps all the microservices interactions that are actually taking place in production.
(6) Logging / correlationId / Log-Levels:
First agree on the format of your log messages. Then implement a logging framework in a library that all your microservices can utilize.
Make sure that you introduce the concept of a correlationId that can be used to map the flow, especially between microservices. For example, your microservice could be logging a requestId or some form of a GUID unique to that service, but in order to connect things across multiple components/services, you need the concept of a correlationId.
Now that you have the foundation and the principles down, your team needs to have the right mindset; it looks simple on paper, but there is much more to it. How many times have you been troubleshooting a production issue and said to yourself: “I wish I had a log line for this condition”? When it comes to logging, here are some tips (a small sketch combining them with the correlationId idea follows this list):
  • Log positive as well as negative cases
  • Any time you add an IF condition in your code, you need to think about what you will log in the IF, ELSE IF and ELSE.
  • Log your exceptions if you catch them, and enrich the log with the context you have at that level, not at the level above in the stack trace.
  • Introduce the concept of log-levels and make sure that the production environment is not logging data that should not be logged (personal info) and that the excessive logging does not negatively impact performance.
An unwritten rule of logging is that you do NOT put a log statement inside a loop unless you truly understand that the number of iterations will not hurt your performance when multiple requests are coming into your service.
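Here is a minimal sketch of these ideas using Python's standard logging module; the header name and log format are assumptions, not a standard:

import logging
import uuid

logging.basicConfig(format="%(asctime)s %(levelname)s correlation_id=%(correlation_id)s %(message)s")
logger = logging.getLogger("offers-api")
logger.setLevel(logging.INFO)

def handle_request(headers):
    # Reuse the caller's correlation id so log lines join up across services;
    # mint a new one if this service is the entry point. Header name is illustrative.
    correlation_id = headers.get("X-Correlation-Id") or str(uuid.uuid4())
    extra = {"correlation_id": correlation_id}
    offers = []  # placeholder for the real lookup
    if offers:
        logger.info("fetch offers: %d found", len(offers), extra=extra)
    else:
        logger.info("fetch offers: none matched", extra=extra)  # log the negative case too
    return offers

handle_request({})  # example invocation with no inbound header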
(7) Log Aggregator Tool:
Your team, or your company overall, needs to pick a log aggregator tool that lets you view all your flows through a website and perform fancy queries on your logs. For example, you can pick Splunk. Your service would use a library to stream all its logs into Splunk, and Splunk would be your eyes into the system.
Now that you have a tool like Splunk, you need to take it to the next level. Each team should be responsible for building dashboards in Splunk that track the health of their microservices. These dashboards can be used by any technical person troubleshooting things in production.
The next thing is for your Network Operations Center to set up alerts in Splunk so that they get informed when certain limits are reached and know when to open a severity 1 or 2 ticket with the right microservices team.
(8) Exception Handling in your code:
In your choice of general-purpose language, you have the concept of exception handling (try-catch or try-except blocks). Even though your services are microservices, that does not mean a whole microservice internally contains only one layer. Most likely you will have your API level, or controller layer, or resource layer: the code where your API routes are defined and where you have the entry points for each route. That code typically should do just some validation and then proxy the call down into your business layer; your business layer makes calls into the database/repository layer and/or into another API via your adapter layer. As you write code in these layers, you need to think about whether you need a try/catch block at all. For example, there is no reason to catch an exception at a lower level unless you need to log something that is available ONLY at that layer and not at the layer of the invoker.
I don’t want to get into too much detail on this topic as it deserves a separate article. I just wanted to point out that exception handling needs to be taken seriously if you want to achieve that enterprise grade.
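Still, a small sketch of the guideline above may help: the repository layer catches only because it has context that exists nowhere else, then raises a richer error; the business layer has nothing useful to add, so it has no try/except. All names are illustrative:

class OfferLookupError(Exception):
    pass

def get_offer_record(offer_id):  # repository layer
    try:
        raise KeyError(offer_id)  # stand-in for a real database call failing
    except KeyError:
        # Caught here only because the query details exist only at this layer.
        raise OfferLookupError("offer {0} not found in offers table".format(offer_id))

def get_offer(offer_id):  # business layer: nothing useful to add, so no try/except
    return get_offer_record(offer_id)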
(9) Error Handling:
Don’t mistake error handling for exception handling; error handling is different. An error could be the result of a specific IF condition in your code, or the result of an exception that got caught. Basically, your team needs to decide how to translate all the errors within the microservice into HTTP codes, and you can also enrich the JSON response with some extra details. Don’t forget that you can, for example, use a specific range of custom 5xx HTTP codes to provide a more meaningful response to your client. Your team needs to analyze all the paths in the code and decide what HTTP codes will be sent to the client. Don’t just turn every error into HTTP 500; that helps neither your clients nor you when troubleshooting your service.
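One hedged way to express that decision is a single mapping from internal error kinds to HTTP codes, so that "HTTP 500 for everything" can only happen as a last resort; the codes and names below are examples, not a standard:

ERROR_TO_HTTP = {
    "validation_failed": 400,
    "offer_not_found": 404,
    "downstream_timeout": 504,
    "payment_declined": 502,  # example of a 5xx reserved for a dependency failure
}

def to_http_response(error_code, detail):
    status = ERROR_TO_HTTP.get(error_code, 500)  # 500 only as the last resort
    return status, {"error": error_code, "detail": detail}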
(10) Timeouts:
Timeouts in your API calls are typically taken very lightly. We engineers just code it up, and then when things stop working we wonder what our defaults are and whether those defaults make sense. Understand what the defaults are in the library you use to perform HTTP/API calls; then assess what is reasonable from the client’s point of view.
Dynamically variable timeouts are even better, if your team has time to implement them. Let’s assume you have a soft timeout of 15 seconds for outbound API calls from your microservice. Now your service calls another microservice and starts timing out at 15 seconds. You could build smarts into your system so the timeout dynamically changes from the 15-second soft limit to 20 seconds. Your application then knows what percentage improvement the increase to 20 seconds bought, and based on that data it can decide whether to stay at 20 seconds, increase to 25 seconds, or go back down to 15 seconds. This mechanism allows your application to keep functioning and serving customers; obviously you still need a hard limit, because you have to draw a line at the point where a timeout is truly unacceptable for end customers. It also lets you be a bit more aggressive with lower soft limits, which improves the overall performance of your microservice because you don’t hold as many connections open waiting on long timeouts.
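Here is a toy sketch of the soft/hard timeout idea using the requests library; the escalation policy is deliberately simplistic and the numbers are just the examples from above:

import requests

SOFT_TIMEOUT_SECONDS = 15
HARD_TIMEOUT_SECONDS = 30

def call_with_adaptive_timeout(url):
    timeout = SOFT_TIMEOUT_SECONDS
    while timeout <= HARD_TIMEOUT_SECONDS:
        try:
            return requests.get(url, timeout=timeout)
        except requests.exceptions.Timeout:
            timeout += 5  # creep toward the hard limit instead of failing immediately
    raise requests.exceptions.Timeout("exceeded hard limit of {0}s".format(HARD_TIMEOUT_SECONDS))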
(11) API Contracts & Stubbed or Mock services:
This item is more about the development methodology that I would strongly recommend. I have a short article that talks about developing the layers of cake instead of slice of cake: https://medium.com/@almirx101/developing-layers-of-cake-instead-of-slices-of-cake-in-your-software-engineering-projects-71483aff2df1
What I strongly recommend is that you force yourself to fully define the API contracts for all your microservices. As you go through this exercise, you will realize that you need to reach out to other teams to agree on how you will invoke their APIs; at the same time, you should reach out to your clients and agree on what the best contract for your API is.
After defining the contracts, my strong recommendation is to develop stubbed or mock versions of your APIs honoring those contracts. So if I called any of your microservices, I would get a proper JSON response. Internally, that JSON would be hard-coded or sitting in some temporary JSON file until you eventually implement the data/repository layer and your choice of database itself.
If everybody did this for each of their microservices, then in theory you are building a thin layer of the cake, a thin layer of your microservice platform. This enables your Front End team developing a website or a mobile application to implement the full journey A to Z by calling these mock services. The sooner this full UI journey is given to the product management team, the sooner they get a chance to feel how good their requirements are, and the sooner they can adjust things and steer the ship in the right direction so that your end customers ultimately win.
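A stub honoring an agreed contract can be as small as this sketch (Flask and the response shape are illustrative assumptions, not the real contract):

from flask import Flask, jsonify

app = Flask(__name__)

# Hard-coded response honoring the agreed contract; the real data layer comes later.
STUB_OFFER = {"offer_id": "streaming-plus-2017", "price": 9.99, "currency": "USD"}

@app.route("/offers/<offer_id>", methods=["GET"])
def get_offer(offer_id):
    return jsonify(STUB_OFFER)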
(12) API Documentation and Swagger docs:
Don’t underestimate the power of documentation. I am not a big fan of LONG documents, but I am a big proponent of single-page documents. If it is longer than a page, the chance that people will read it is low. However, a single-page document will be read, and when supplemented with good examples it is even more powerful. Please follow this recommendation for your API documentation. In theory, Swagger documentation can replace the majority of your API documentation.
(13) Postman app in Chrome:
Postman app for Chrome is what I have been using to try out different APIs. If you use this app or similar type of app, I strongly recommend that you keep the collection of API calls saved and committed into your GitHub repo so that other members of the team don’t have to figure it out from scratch. Examples are the best way to share the documentation as long as you maintain these examples.
(14) Service/API Versioning and Regular clean-up:
In the API world, you can introduce API versions and require clients to pass you the version of the API they want to call. Typically you can follow Major.Minor.Patch versioning. For example, you start with version 1.0.0 of your API. As you make small changes and fix defects, you keep incrementing the patch version. If you introduce some new features, you increment the minor version. However, if you introduce a big set of new features and also refactor your API in a way that breaks contracts, then you increment the major version.
My recommendation is that you as a team don’t maintain/support too many versions at any given time. As you deprecate the old API versions, please make sure that you properly clean up all the resources used by each version in order to save costs. That could be anything from dedicated servers for that API version to dedicated cache, dedicated databases, configuration files, etc.
(15) Cache:
Cache can be used as a mechanism to store configuration values and speed up access to them. Cache can also be used for transactional information. For example, you could have a Customers microservice that caches transactional information for 15 minutes because the Front End (HTML/JavaScript) code of your website accesses your Customers API on many pages. That’s where you can improve performance by caching this information; obviously, any time a customer’s information changes, you need to update the cache. My recommendation is that you analyze your use cases and decide how, and whether, you need to use cache. If you do decide to use cache, make sure that the cache size can be set on a service-by-service basis: some services need more cache and some less, and this approach keeps your costs down.
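As a sketch of the 15-minute example above, here is a minimal in-process TTL cache; in a real deployment this would more likely be Redis or Memcached, and the interval is just the example number:

import time

CACHE_TTL_SECONDS = 15 * 60
_cache = {}

def get_customer(customer_id, load_from_db):
    entry = _cache.get(customer_id)
    if entry and time.time() - entry["at"] < CACHE_TTL_SECONDS:
        return entry["value"]  # served from cache
    value = load_from_db(customer_id)  # fall through to the database
    _cache[customer_id] = {"value": value, "at": time.time()}
    return value

def on_customer_changed(customer_id):
    _cache.pop(customer_id, None)  # invalidate whenever a customer's information changes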
(16) Backwards Compatibility:
One way to restore backwards compatibility in APIs is to introduce a new API version, but if you did that for every little thing, you would end up in API version management hell very quickly. Introduce guidelines in your teams for when to increment the API version. But let’s focus a bit on keeping backwards compatibility within an existing API version. When it comes to model classes in your Java/C#/Python/NodeJS code and the data structure/schema in your choice of database, it is important that changes are always backwards compatible. For example, you can introduce new attributes in these model classes, but if you make those new attributes mandatory, then you need to think about how your code will read the pre-existing records in the database; you need to make sure your code handles that gracefully. As for changing the existing attributes in your model classes, that should never happen; that is an interface change, and you should never make interface changes.
Therefore, my general advice is to always think about backwards compatibility and make it a part of your programming culture.
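A small sketch of an additive, backwards-compatible model change: the attribute added later gets a default, so records written before it existed still load (all names are made up):

class Customer(object):
    def __init__(self, record):
        self.customer_id = record["customer_id"]
        self.email = record["email"]
        # Added later: must be optional so pre-existing records still deserialize.
        self.loyalty_tier = record.get("loyalty_tier", "standard")

old_record = {"customer_id": "c-1", "email": "a@example.com"}  # written before the change
print(Customer(old_record).loyalty_tier)  # -> "standard"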
(17) Security behind your API routes and header information in POST/PUT calls:
Assuming you have a mechanism to configure your API routes as internal to the platform or external to outside clients, you need a process around tracking the security of your API routes. This is especially important if your external routes can take on different roles depending on what token you get authenticated for.
Getting a bit deeper into the HTTP call itself, it is important that you have standards around what is acceptable in the header of POST/PUT HTTP calls; this will save you a lot of headaches. Once you know what is acceptable in the header, you need mechanisms for deciding whether a given header attribute may be injected by your external client. And even if it is injected, it will not matter if, further down the HTTP call, you strip that information out and replace it with correct values; if you have this type of capability, you are that much better off.
(18) Security — Encryption of data at the attribute/column level:
To be at the enterprise level and pass PCI regulations, you need to have a library for your microservices that fully abstracts the encryption of data and the management of keys needs to be done on dedicated appliances.
(19) API Automation approach:
First, you need to implement an automation framework that your teams can use.
Then your team needs to focus on automating the most likely real-life scenarios with APIs instead of trying to exhaust the permutations on each individual API.
Initial automation tests should exercise how your family of APIs work together. Then you can expand your horizon and focus on more holistic view joining forces with automation engineers from other teams.
(20) Batch process within your platform:
The microservice architecture has the event-driven methodology built in, but there are still 3rd parties that interact with your platform and require a batch-oriented design. Therefore, your team needs to make sure that you have a reference architecture for batch-oriented implementations. Depending on whether you use a NoSQL or relational type of database, this batch-oriented solution will look different, but you need to have it.
(21) Load and Performance testing:
Load and performance testing needs to be done for each microservice before that microservice can be accepted into the production environment and opened to real customers. Your team needs to be able to determine with confidence, for every microservice:
  • At what memory & CPU usage is the service still functioning fine but close to crashing?
  • At what memory & CPU usage does the service start crashing in the load test?
  • How many requests can your service handle per minute or per second?
Your team needs to document all of this in order to be accepted into production.
(22) Circuit-breaker pattern especially when calling external (3rd party) APIs:
In order to prevent a back-pressure effect on your microservices, you need to introduce the circuit-breaker pattern for situations where your microservice calls into another service/API. This is especially crucial when your microservice calls an API external to your company; you don’t want the stability of 3rd parties to have a direct impact on your application and cause so much back-pressure that the other parts of your service are affected.
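Here is a bare-bones sketch of the pattern; production systems would more likely use a maintained library, and the thresholds below are arbitrary:

import time

class CircuitBreaker(object):
    def __init__(self, failure_threshold=5, reset_after_seconds=30):
        self.failure_threshold = failure_threshold
        self.reset_after_seconds = reset_after_seconds
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after_seconds:
                raise RuntimeError("circuit open: failing fast to avoid back-pressure")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = func(*args, **kwargs)
            self.failures = 0  # success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.time()
            raise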
(23) Database design:
First you need to decide whether you will embrace the NoSQL world or stick to the more traditional SQL/relational database world.
Let’s assume you pick NoSQL; then you need to keep in mind that there is NO SUCH THING as “no schema”. You always have some data structure; your database system may simply not be enforcing it, and with a code-first methodology, your application code ends up enforcing all of it. Even if you pick NoSQL, your data still needs to eventually be streamed or replicated into some form of relational database if you want your reporting/analytics team to perform analysis on it.
Therefore, regardless of NoSQL or SQL choice, your team needs to implement the streaming/replication of data into a reporting database.
Don’t forget about database backups. I am talking about cold backups or snapshots. For example, if you realize that your data has been unreliable for the last few hours, or the system has fully gone down, you need the option of pulling the backups and restoring from earlier that day.
(24) Database capacity provisioning:
Your team needs to provision your database for your scale of traffic and amount of data. For example, in the AWS DynamoDB NoSQL world, you provision reads and writes per second, and you pay for what you provisioned. At the same time, you need to be able to handle quick bursts of traffic as well as longer, sustained increases. Similar concepts apply in the relational database world. For example, with Amazon Aurora you don’t necessarily provision tables ahead of time, but you do need to worry about the overall amount of data you can store in your Aurora instances.
(25) Data / Data structure Governance & Reporting/Analytics:
What do I mean by “data governance”? Your team needs procedures whose goal is preventing breaking changes to data structures and protecting the integrity of data. For example, in the NoSQL world, code comes first and there are no stored procedures to manage data structure changes; you really need procedures for software engineers to inform the data team about any model class changes in your Java/C#/Python/Go/NodeJS code. You need to stay backwards compatible, and since this NoSQL data eventually ends up in some form of relational database, you need to keep informing the data team about attributes added to the data structure. Ideally you would have an automated process that does static code analysis and detects changes in these model classes even when programmers forget to inform each other and the database teammates.
On the other hand, if you are in the relational database world, you control things through stored procedures that abstract the data structure from software engineers, and the database team is in more control.
Reporting and analytics has always been one of those afterthoughts. In theory, you should start with it: before a line of code is written, you could specify what defines “success” for you and break that down into multiple measures. I am sure the requirements and designs will change throughout the implementation phase, but at the principle level it stays the same. Therefore, introduce the concepts of reporting and analytics into your work environment from the early phases of design and development.
In conclusion, writing code is relatively simple; when your software engineers say they are done, you should clearly define what “done” means from an enterprise point of view.
Thank you for reading this long article. I hope you found it useful as you go down the path of implementing and releasing enterprise-level applications/microservices.
Almir Mustafic