Thursday, June 15, 2017

AWS — When to use Amazon Aurora instead of DynamoDB

Amazon DynamoDB as a managed database will work for you if you prefer a code-first methodology. You will be able to scale it easily if your application inserts and reads data by its hash key or primary key (hash + sort key). It is also good if your application runs queries on the data, as long as the result set of those queries is less than 1 MB. Basically, if you stick to the functionality that websites typically require in real time, then DynamoDB will perform for you. You will obviously need to provision the reads and writes properly and implement some auto-scaling on DynamoDB WCUs and RCUs, but after you do all of that homework, things will run smoothly without much to manage.
However, there are cases when you will need to go back to relational databases in order to accomplish your business requirements and technical requirements.
For example, let’s assume that your website calls one of your microservices which in turn inserts data into its table. Then let’s assume that you need to search the data in this table and perform big extracts which then have to be sent to a 3rd party that deals with your data in a batch-oriented way. If you need to, for example, query and extract 1 million records from your DynamoDB table, it can take up to 4.7 hours based on my prototypes using the standard AWS DynamoDB library from a Python or C# application. The way you read this amount of data is by using LastEvaluatedKey within DynamoDB: you query/scan and get 1 MB (due to the cutoff), and then, if LastEvaluatedKey shows you have not reached the end of the result set, you loop and continue fetching more results until you exhaust the list. This is feasible but not fast and not scalable.
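Here is a minimal sketch of that pagination loop using boto3 from Python; the table name is made up for illustration and error handling is left out:

import boto3

# Minimal sketch: page through a large DynamoDB result set using LastEvaluatedKey.
# The table name "Package" is hypothetical; adjust for your schema.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Package")

items = []
response = table.scan()  # first page, capped at 1 MB of data
items.extend(response["Items"])

# Keep fetching while DynamoDB indicates there is more data.
while "LastEvaluatedKey" in response:
    response = table.scan(ExclusiveStartKey=response["LastEvaluatedKey"])
    items.extend(response["Items"])

print(f"Extracted {len(items)} records")

Each iteration is a separate round trip of at most 1 MB, which is exactly why a 1-million-record extract adds up to hours.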
My test client was outside the VPC, and if you run it within the VPC you will almost double your performance, but when it comes to bigger extracts, it still takes long. If you are dealing with fewer than 100,000 records, it is manageable within DynamoDB, but when you exceed 1 million records, it gets unreasonable.
So what do you do in this case? I am sure that you can improve the performance of the extract by using Data Pipeline and similar approaches that are more optimized, but you are still limited.
Basically, your solution would be to switch to a relational database where you can manage your querying much faster and where you have the concept of a transaction that helps with any concurrency issues you might have been challenged with. If you want to stay within the Amazon managed world, then Amazon Aurora looks very attractive. It has limits on the amount of data it can store, but most likely those limits are high enough for your business. As for the big extract performance challenge, your extracts will go from hours (within DynamoDB) to minutes with Aurora.
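As a rough sketch of the Aurora side (assuming the MySQL-compatible flavor), a server-side streaming cursor lets you pull a large result set without buffering it all in memory; the endpoint, credentials, and table below are placeholders:

import pymysql
import pymysql.cursors

# Placeholders: endpoint, credentials, database, and table are hypothetical.
connection = pymysql.connect(
    host="my-aurora-cluster-endpoint",
    user="app_user",
    password="***",
    database="packages_db",
    cursorclass=pymysql.cursors.SSCursor,  # server-side (streaming) cursor
)

try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT package_id, package_name FROM package")
        for row in cursor:  # rows are streamed instead of loaded all at once
            print(row)      # stand-in for writing to the extract file
finally:
    connection.close()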
Please consider this in your designs. Performing big extracts is the opposite of event-driven architecture, but these types of requirements still exist due to the need to support legacy systems that you have to interact with, or systems that have not adjusted their architecture to your methodologies.
Thank you for reading.
Almir Mustafic.




Wednesday, June 14, 2017

Success of good tech leads is really attributed to the SUM of many LITTLE good moves

How many times have you been asked as a tech lead what you do to make things successful?
If the answer to that question were simple, then the experience of tech leads would be totally undervalued.
We all know that there is no silver bullet for the success of projects that certain tech leads have been achieving. The success of good tech leads is really attributed to the sum of many little good moves.
It starts from day 1 when requirements are being delivered to the team. For example, analyzing the business requirements, detecting what the true requirements are, and removing all requirements that contain implementation details is crucial in the overall equation. Sometimes you, as the tech lead, will need to put on a project management hat and a product management hat in order to steer the ship in the right direction, and at the same time you need to be very engaged in the low-level technical details with your fellow software engineers. This is just the beginning, and the moves made at this phase are the foundation for the rest of the challenges that follow. In the agile methodology world, this requirements phase happens very often, and that puts even more pressure on you as the tech lead to keep the foundation solid.
Then you get to the point where you are taking the ironed-out business requirements (epics) and turning them into a collection of user stories that different squads will receive into their backlogs. Understanding how to do this breakdown and estimation is crucial in laying the foundation for all the squads and the backlogs they take on. You, as the tech lead, play an instrumental role in this process because you work very closely with solutions architects in determining the high-level design and architecture. The decisions that you and the solutions architects make at this step impact multiple squads, their timelines (pre-planned sprints), and the overall calculated production launch date for the given milestone/release.
Then you get to the point where you take the stories from your squad’s backlog and you work with fellow engineers on your squad/team. Every interaction and every little move or piece of advice that you give to developers is important; every recommendation that is given to you needs to be properly assessed. It starts before even a line of code is written. It starts with the coding methodology and mindset. It is anything from introducing guidelines on how to define the boundaries of microservices/APIs, to unit-testing culture, to database design in NoSQL vs. SQL, to concepts of backwards compatibility, to the introduction of feature toggles for the product team and launch risk mitigation. There are so many little things that create the sum that defines the success.
If you are a developer on this team, learn to listen and hear what the good tech leads are preaching, and also provide your feedback, as great tech leads know how to accept your input and build on it. If you are a tech lead, show the values you care about through examples and let the team experience it, and then sense when you need to step back a bit and let the team move forward and ride on the momentum and the vibe you have established on the floor.
Keep in mind that while you are going through this exercise, you should NOT think that you are the smartest person in the room, because if you think you are, then it really means that you are in the wrong room. If nobody applies any humility, then it means everybody is in the wrong room and the concept of the team disappears. Understand your skills, understand the skills your teammates bring to the table, and try to create a culture where complementing each other happens naturally. This is the culture where you respect each other so much at the professional level that you happily sacrifice your time to save each other’s work.
Geek out and enjoy the process because it requires a lot of patience and perseverance.
Thank you for reading.
Almir Mustafic



Sunday, June 11, 2017

Knowing when to put code into your API/microservice vs. outside API (wrapper microservice)

As you are defining the purpose of each one of your microservices, and as you are developing these microservices, it is very important to know whether you need to make changes to an existing microservice or build a wrapper microservice.
What I am talking about here are not rules; I am just giving you some examples to trigger you to think about this while you are going through a similar exercise. I just want to make developers aware of the differences.
So the main question is:
  • Put code into an existing microservice/API
  • OR develop a wrapper microservice API?
Let’s assume you have a “Package” microservice/API and this API gives you the list of packages and their details:
  • /api/packages (this gives you all the packages)
  • /api/packages/{id} (this gives you a specific package based on that ID)
  • /api/packages?type=type101 (this gives you all the packages of type=type101)
Let’s assume that behind this microservice you are using a NoSQL table “Package” and that table has the packageId as the hash key of the table and then you have a bunch of other attributes that you are saving as part of the JSON object.
Let’s say that this API has been used in your production system for many months. Then you get a requirement from your product team: they want to introduce the concept of “Package Bundles”. A package bundle would be just a collection of different packages. For example, you can create Bundle 1, which contains:
  • package 1
  • package 2
  • package 3
Whereas Bundle 2 could contain:
  • package 1
  • package 7
  • package 8
  • package 9
Ultimately, instead of only presenting individual packages to customers and having them pick one by one, the product team also wants to present some pre-configured bundles of packages to simplify the customer experience.
Now the big question is: How do you as a software engineer and solutions architect design this, or in other words how do you incorporate this requirement into your existing design of your microservices?
Before I get into the design, one unwritten rule in software development is that you need to keep things backwards compatible regardless of what type of design you come up with.
Let’s get into the design a bit. If you have been working in monolithic applications for many years (as many of us have), you would naturally be inclined to open up the existing “Package” microservice, and you could possibly introduce the list of bundle IDs into the existing data of the package. Or you could add another table within the Package microservice to introduce the concept of bundles, and that database structure would be leaning towards a relational DB design. I talked about NoSQL and the avoidance of relational DB design in my YouTube video: https://www.youtube.com/watch?v=49YvbcqjEiA
As you get deeper into this approach, you realize that you are polluting the purpose of the Package microservice and slowly turning it into a macro-service. I have a full post about “micro” in micro-services: https://medium.com/@almirx101/micro-in-microservices-677e183baa5a
You may also realize that the API routes cannot stay RESTful. How do you actually search for the bundles using the /api/packages API route?
That’s when you conclude that you need to change your design.
The solution that I recommend for this specific requirement or use case is to introduce a new “PackageBundle” microservice. I really classify this as a wrapper microservice. This PackageBundle microservice would have the following type of API routes:
  • /api/packagebundles (this gives you all the package bundles)
  • /api/packagebundles/{id} (this gives a specific package bundle by id)
  • /api/packagebundles?package=packageA (this gives you the list of bundles that contain packageA)
This microservice would have a NoSQL table with the following data structure:
- packageBundleId   (hash key for the table)
- packages [ ]   (list of packages)
   - packageId
   - packageName
   - other package attributes
- Other attributes needed to represent the bundle
In JSON, this would look like this:
{
  "packageBundleId": "",
  "packages": [
    {
      "packageId": "",
      "packageName": "",
      "packageDescription": ""
    },
    {
      "packageId": "",
      "packageName": "",
      "packageDescription": ""
    }
  ],
  "otherPackageBundleInfo": ""
}
Now let’s visualize how package bundles would be created. Imagine a business-enablement tool that allows you to create a bundle, and let’s assume that you drag packages into a bundle. Behind the scenes, that tool would call the Package API to retrieve the details of the package that is dragged into a bundle, and then, as the final step, the tool would call the PackageBundle API to save all those details into the record for that given bundle. That’s what this business-enablement tool would do. On the other hand, the main website that customers are using would call the /api/packagebundles API to get a list of bundles or a specific bundle to display on the screen for customers.
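Here is a minimal sketch of what that bundle-creation step in the business-enablement tool could look like; the base URL, field names, and endpoints are assumptions for illustration:

import requests

BASE_URL = "https://api.example.com"  # hypothetical base URL for these APIs

def create_bundle(bundle_id, package_ids):
    # Look up the details of each package that was dragged into the bundle.
    packages = []
    for package_id in package_ids:
        response = requests.get(f"{BASE_URL}/api/packages/{package_id}")
        response.raise_for_status()
        package = response.json()
        packages.append({
            "packageId": package["packageId"],
            "packageName": package["packageName"],
            "packageDescription": package.get("packageDescription", ""),
        })

    # Save the denormalized bundle record through the PackageBundle API.
    bundle = {"packageBundleId": bundle_id, "packages": packages}
    response = requests.post(f"{BASE_URL}/api/packagebundles", json=bundle)
    response.raise_for_status()
    return response.json()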
With this approach you kept your existing microservice micro, and you introduced a new microservice whose sole purpose is to manage collections of packages. If I want to create some new packages, I am not dependent on the PackageBundle API; I can directly call the Package API to perform those actions. If the product team decides tomorrow that the concept of bundles is not needed anymore, then I can just stop using the PackageBundle API and get rid of it in the long term without any impact on the Package API.
I hope this example can help you save some time if you have a similar type of requirement and design.
Thank you for reading.
Almir Mustafic.



Sunday, June 4, 2017

Defense is the BEST Offense — how do you apply this concept in Software Engineering?

Defense is the BEST offense. If you play sports, this is very familiar to you. I played high-school basketball and I learned that this is very true.
As for applying this concept in the software engineering field, let me start by saying that I am NOT talking about developing software in such a way as to protect your job; I am totally against that. I am talking about something else here.
I am talking about a mindset in software engineering that allows you to be offensive; the buzzword that describes this is “being disruptive”. You can develop software very fast, but can you consistently repeat this without creating chaos?
One real-life example is the following: you developed a piece of software or a list of features, and your QA and Stage testing passed perfectly. Then you are about to go to production, and on the day of the production launch, the decision gets made not to launch those features for technical or business reasons. Now the question becomes: can you easily disable these features and still launch everything to production with these features turned off? If the answer to this question is a CONFIDENT YES, then you are implementing things the right way and you have development methodologies in place that give you a lot of capabilities and options. It is easy to say this, but this methodology starts before you even write a line of code. On the other hand, one might say “why do I need to develop the feature toggle if I can just be fast and go for it?”. I would say that going fast and taking chances is fine if you have the foundation, but without the foundation and methodology to support you, it is just reckless.
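As a minimal sketch of the idea, a feature toggle can be as simple as a configuration lookup that the code consults before exposing a feature; the flag name and the helper functions below are hypothetical:

import json

# Flags could come from a config file, a config service, or a database;
# a small JSON document is assumed here for the sketch.
FEATURE_FLAGS = json.loads('{"package_bundles": false}')

def is_enabled(feature_name):
    return FEATURE_FLAGS.get(feature_name, False)

def load_packages():
    return ["package 1", "package 2"]  # placeholder data

def load_package_bundles():
    return ["bundle 1"]  # placeholder data

def get_catalog():
    catalog = {"packages": load_packages()}
    if is_enabled("package_bundles"):
        # The new feature ships dark and is only exposed once the flag is flipped.
        catalog["bundles"] = load_package_bundles()
    return catalog

Flipping the flag on launch day (or leaving it off) becomes a configuration decision rather than a code rollback.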
The confident YES in the above scenario gives your product management and your senior management enough confidence to try different things without worrying about failing, because the failures or sudden decisions will not cause chaos. I provided just one example that accomplishes this goal.
In conclusion, you are building a defensive capability called a “feature toggle”, but in turn you are actually providing offensive capabilities for you and your leadership team to innovate and disrupt. It sounds contradictory, but it is not.
Thank you for reading.
Almir Mustafic.


Make components reusable and NOT necessarily the orchestration layers in your software platform

Making things reusable in software engineering is very important. That’s why all the general-purpose programming languages have methods/functions and object-oriented capabilities. Using these capabilities you can implement different patterns. Yes, different languages implement these patterns differently, but my point is that they are all achievable.
Generally speaking, junior programmers write code that is simple, but it may not be as reusable as we would like it to be. On the other hand, when programmers gain some experience and learn these different programming patterns, they may get over-excited and start applying them without first figuring out what problem they are trying to solve. They may look at the list of patterns and say: “Oh, this one would be cool”. It is totally fine to use a known programming pattern, but how you arrive at using one decides whether you are doing it for the right reasons. Here is what I mean.
First, understand what problem you are trying to solve.
Then figure out the solution for this problem without being disrupted by available patterns. Focus strictly on the solution and work it out on paper or a piece of napkin.
Now that you have a solution to your problem, you can look at the list of known programming patterns at your disposal, and most likely one of those patterns will be the one that exactly fits the solution. However, there is a chance that no known pattern fits, or that you need to use a combination of two patterns with slight modifications.
You may be wondering where I am going with this. I am basically trying to get to my next point which is the main subject of this article. I want to talk about making your individual components in your platform reusable and NOT necessarily the actual orchestration layers that you may have.
Let’s assume that you have the following components:
  • Component A
  • Component B
  • Component C
  • Component D
  • Component E
You need to develop each one of these components in such a way that they are reusable, so that the orchestration layer can invoke those components without needing to learn about their internals. For the sake of this example, I will assume that those components are just methods/functions. Then let’s assume that the orchestration layer looks as follows:
Orchestration Layer:
MethodA();
Async call to MethodB();
MethodC();
Wait for MethodB() response
MethodD();
MethodE();
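As a rough Python sketch of that orchestration (the component bodies are placeholders), MethodB runs concurrently while MethodC proceeds, and the layer waits for its response before MethodD and MethodE:

import asyncio

# Placeholder implementations of the reusable components.
def method_a():
    print("A")

async def method_b():
    await asyncio.sleep(1)  # simulates a slow downstream call
    return "B result"

def method_c():
    print("C")

def method_d():
    print("D")

def method_e():
    print("E")

async def orchestrate():
    method_a()
    task_b = asyncio.create_task(method_b())  # async call to MethodB
    method_c()                                # continues while MethodB runs
    b_result = await task_b                   # wait for MethodB response
    method_d()
    method_e()
    return b_result

asyncio.run(orchestrate())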
Then what could happen is that you try to make this orchestration layer reusable by creating a template for it. When you do that, you could create a workflow or some type of state-map that ties all these calls together, with some fancy configuration that allows you to connect state to state and a mechanism to incorporate a new step in the state-map without writing any code.
I understand this conceptually, but my question to you is: why? What do you gain by doing this? What problem are you trying to solve? Are you solving a problem at the root?
A lot of times the response is: We need to do it because we want to introduce new steps in this step-factory (state-map) without needing to push any code.
Yes, there are valid reasons to introduce the step-factory pattern (state-map) in your design, but most of the time you don’t have a good reason and I will explain why.
You may be introducing this step-factory design for the wrong reasons. It could be that in your organization it is very hard to deploy a code/binary change to production due to a lot of process. So you end up changing your design (even for a simple case like the one above) and introducing the step-factory to get around the process, so that you can deploy a config change quickly to production. First, you are complicating your design and implementation by introducing this without good reasons. Second, you need skilled developers to make code changes within it, and you are actually adding more risk.
Let me use the following analogy with the car engine and the other car parts under the hood of your car. Under the hood in the engine bay, you have the following:
  • engine
  • battery
  • alternator
  • water pump
  • brake cylinders
  • air intake and air filter
  • radiator
  • fan
  • …etc
You can picture those car parts as the individual components that I described above in your software platform. The engine bay itself would be the orchestration layer. Everything in this engine bay (orchestration layer) is connected in a specific way and it works great. If something goes wrong, you can go inside the individual parts (components) and you can replace the full part or the insides of that part. For example, the air filter could be dirty and you just open the air filter box (component) and put in a new air filter. So this setup with this engine bay (orchestration layer) and these parts (components) is great if all you are doing is changing the insides of a part/component or replacing the whole part/component with another one that fits in (using the same interface). By changing these parts, you are pretty much changing just the configuration within your step-factory. If that is all you are doing, that is perfectly fine and this engine bay / orchestration layer / workflow is perfectly good for you.
However, the world of software engineering (especially the microservices world) is not as static as the engine bay of your car. You typically have a need to put things together using different permutations, or you have a need to orchestrate in a different way. That’s where you need to step back and ask yourself whether you really need to turn your orchestration layer into some template/step-factory/state-map when you are actually constantly changing what that orchestration looks like. This is where you realize that the complexity of your state-map/workflow via the step-factory pattern is actually complicating things, slowing you down, and adding unnecessary risk. There is nothing wrong with making a code change in your orchestration layer in order to meet a requirement. Tell yourself: it is ok. It is not 100% configurable, but making it 100% configurable would slow me down because of the added complexity.
As for my example above about using configurability to get around your process, you really need to think about solving that problem at the root. That company probably introduced the process to slow developers down because they were manually deploying things to production and breaking things. So the typical reaction from your Change Control team is to tighten the SDLC process. What you are missing is continuous integration and automated deployments, which increase confidence and success and reduce risk. That is your solution for that problem, and your Change Control team will welcome it because it is repeatable and predictable.
In conclusion, please think about the complexity of your code. Make your individual components modular/reusable and use them in your code. After you use them enough in your orchestration layer, you can determine if you want to keep your orchestration layer simple, or you need to build a template (i.e. step factory or workflow) for it. Think about the maintenance here and your use cases. If you are constantly changing the flow of your orchestration layer, then the complexity you are introducing with the workflows/step-factory is very unnecessary. Then stick with the basics, and it is ok to change code in order to adjust the orchestration layer. It is ok, but streamline your deployments throughout the pre-prod environments and into production to make this experience very smooth.
Thank you for reading.
Almir Mustafic



Friday, May 26, 2017

Factory pattern in microservice architecture — Are you doing it right?

In the software engineering field we all have been using the factory pattern for many years. However, something changed with the introduction of the microservice architecture. How did this affect your factory pattern implementations and have you realized that you need to change your implementation to align it with the microservice architecture?
In the factory pattern, you basically have the following:
  • Interface that contains your method signatures
  • Base class
  • Class A that inherits from Base and implements the interface
  • Class B that inherits from Base and implements the interface
  • Then you have the Factory class
  • Then you have a client or some form of Manager class that creates an instance of the factory and invokes the methods defined in the interface
Let’s now use an example. Let’s say you have a website that accepts the customer’s address and then you need to verify that the address is valid. Let’s assume that behind the scenes you have two different external (3rd party) providers that you can use to do the address verification, and you want to abstract that from your clients (front-end code). Then you would end up coding a service as follows:
  • IVerifyAddress interface with verifyAddress(…) method signature
  • BaseAddressVerification class
  • ProviderA class that inherits from BaseAddressVerification and implements IVerifyAddress interface
  • ProviderB class that inherits from BaseAddressVerification and implements IVerifyAddress interface
  • AddressVerificationProviderFactory class
  • AddressVerificationManager that creates an instance of the factory and invokes the verifyAddress(…) method
In the n-tier architecture or a monolithic application, the verifyAddress(…) method would be fully implemented within the ProviderA and ProviderB classes within that existing application/service.
However, if you take this approach in your microservice architecture, your service really becomes a macro-service instead of a micro-service. You just need to go back to the basics of the microservice architecture and go through the exercise of cleanly defining what micro means in a microservice (https://medium.com/@almirx101/micro-in-microservices-677e183baa5a). Once you go through this exercise, you will realize that you need to do some refactoring in the above code.
Basically, the full implementation of the verifyAddress(…) method is NOT going to be within the ProviderA and ProviderB classes of the existing service. The verifyAddress(…) method in those two classes would just be a shell or wrapper method, and within that shell you would be making a call into TWO different microservices that you need to implement. One microservice would be the AddressProviderA microservice and the other one would be the AddressProviderB microservice. Those two microservices would contain the code that integrates behind the scenes with the EXTERNAL (3rd party) ProviderA and ProviderB.
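Here is a rough Python sketch of what those shell classes and the factory could look like after the refactoring; the class names follow the example above, while the internal microservice URLs and route are assumptions:

import requests

class BaseAddressVerification:
    # Minimal base class for this sketch.
    pass

class ProviderA(BaseAddressVerification):
    # Thin shell: the real logic lives in the AddressProviderA microservice.
    PROVIDER_URL = "https://addressprovider-a.internal.example.com"  # hypothetical

    def verify_address(self, address):
        response = requests.post(f"{self.PROVIDER_URL}/api/addressverifications", json=address)
        response.raise_for_status()
        return response.json()

class ProviderB(BaseAddressVerification):
    # Thin shell: the real logic lives in the AddressProviderB microservice.
    PROVIDER_URL = "https://addressprovider-b.internal.example.com"  # hypothetical

    def verify_address(self, address):
        response = requests.post(f"{self.PROVIDER_URL}/api/addressverifications", json=address)
        response.raise_for_status()
        return response.json()

def address_verification_provider_factory(provider_name):
    # The factory picks the shell; the Manager class would call this and
    # then invoke verify_address(...) on whatever it gets back.
    providers = {"A": ProviderA, "B": ProviderB}
    return providers[provider_name]()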
With this approach, we truly decoupled things and each microservice has its single purpose:
  • My Address Factory microservice has the sole purpose to orchestrate things using factory pattern.
  • AddressProviderA microservice has my code on how to integrate with the external ProviderA, and the verifyAddress() method is fully implemented here.
  • AddressProviderB microservice has my code on how to integrate with the external ProviderB, and the verifyAddress() method is fully implemented here.
  • and then you have the EXTERNAL (3rd party) APIs that you talk to and they are the actual external ProviderA APIs and ProviderB APIs.
The following exercise is what I like to go through to explain whether you implemented the factory pattern properly within the microservice architecture. Let’s say you implemented your service as the macro-service described in the beginning. Then I tell you that I want to implement a batch process and verify a bunch of addresses through provider A, leveraging what you have built so far. Would you be able to give me an API route that I can call to specifically verify addresses via provider A? I am sure you could expose an API route within your existing macro-service, but that API route would most likely NOT be RESTful. Now, going back to the cleaned-up version of your microservices, I could just read the API documentation of your AddressProviderA microservice and invoke it directly (without talking to your Factory microservice), because in my batch process I want to run certain addresses only through provider A.
In conclusion, the factory pattern as the shell is your orchestration microservice and the actual full implementations of the interface are in separate microservices that you need to implement.
Thank you for reading this article.
Almir Mustafic


Thursday, May 25, 2017

Metrics in Software Development Life Cycle — Do they make teams and projects successful?

I like math and science and that’s why I studied Computer Engineering in school and ended up getting into the software engineering field. Numbers were the major part of my education. However, when it comes to making teams more successful, metrics and numbers are not everything.
They are ok as long as they are NOT in the way of the team’s work. The numbers and metrics themselves are good, but the problem is in the methods by which those numbers/metrics are attained. As soon as you get into the territory of putting more emphasis on tools and metrics than on working software and people, you are putting a dent into the team’s culture and their productivity. We are leaders in our companies and we don’t get paid for applying metrics that might have worked for somebody else. We are better than that. If it worked for somebody else, it does not mean that it will work for me. You have to do a temperature read in your company and you need to understand the culture on the floor, and most importantly you have to be on the floor and live it for yourself as a leader. That’s when you will feel what works and what does not.
I talk a lot about the “feel” and “passion” in software engineering. If I could give you the formula for the “feel”, then I would also be able to give you a scientific explanation for what made successful leaders successful. If that were the case, we would not have leaders hosting seminars and we would not have speakers giving motivational speeches; everything could have been calculated through mathematical formulas. We know that’s not the case.
All you can do is try to get closer to understanding what that “feel” is and have others attain it by show-casing it through examples. The “feel” is something that comes with experience and we know it is tough to convey your experience.
I have worked on many high-profile projects that were carefully monitored by the senior leadership teams. Besides having a strong technical team on the floor, what makes these types of projects successful is the good relationship that the project manager and the tech lead of the project build. They complement each other, and the tech lead is the one who provides you that overall “feel” about the project status and what needs to be done to keep correcting the direction of the ship in the open waters of the oceans. So the metrics can be used by managers to some degree, but ultimately you hire great tech leads to steer the ship at the technical level, and these tech leads are closest to the engineers on the floor, which is why they have the power to build a great culture in the company. With power comes responsibility.
In conclusion, please step back and think about what makes projects successful. Yes, you can use metrics, but be very careful on how you attain these metrics. Ultimately we know that the success of those projects is directly dependent on the authenticity of certain individuals who make things happen. It is about all the intangibles that make it happen and you can’t put a number to intangibles.
Thank you for reading this article. 
Almir Mustafic.


Sunday, May 21, 2017

Micro in Microservices

What does “micro” in the microservices mean?
How do you define what a microservice is? How do you define the boundaries?
This is not an exact science. However, when you get close to the boundaries of violating what the word “micro” means, you will know, and you will feel bothered making changes in that service because it will not feel right. It is still good to have some guidelines on defining the boundaries of your microservice. Here is the approach that I like to use.
Force yourself to write down on paper what your service does. Write it in first person to give it a little more emphasis.
I am the microservice A and I exist so that I can perform X.
This should be the first description in your API documentation so it is clear for your clients and it is also a good reminder for your team.
Let’s see an example. Let’s say you want to develop a “Subscription” microservice, and after some initial discussions you write the definition for that service:
I am the Subscription microservice and I exist so that I can create, read, update, delete subscriptions and I also exist so that I can activate, deactivate benefits and terms for the subscription. I also exist so that I can correct the failed subscriptions into a good state.
This definition is overwhelming. If the Subscription service ends up doing all of that work, then it is really not a micro-service. It is a macro-service. If the definition of your service has a lot of “ands” and “ifs” in the sentence(s), then you should worry about the size of this service.
Here is how this Subscription service could be turned into a micro-service.
I am the Subscription microservice and I exist so that I can perform the CRUD operations on subscriptions.
Now that your Subscription service is a micro-service, you need to introduce other services that will perform the remaining actions. For example, you can add the following services to perform the remaining actions:
I am the Fulfillment microservice and I exist so that I can orchestrate the creation of a subscription and the fulfillment of that subscription which is really the activation and deactivation of benefits with a given subscription.
I am the Fulfillment-Recovery microservice and I exist so that I can recover the failed subscription into a good state.
This is just one example of how you can define the boundaries for the “micro” in microservices. One may argue that you can break the above services into even smaller services, and in my opinion that’s where you end up at the other end of the spectrum, where your microservices turn into nano-services. That is ok as long as you don’t get over-excited and negatively impact performance. If you go too far in this direction, you start to feel the overhead of too many RESTful calls chained together, and each HTTP call over the wire adds up. That’s where you need to be very realistic and decide how much you are gaining by doing this given your use cases.
Another way to more closely define the “micro” in your microservice is to define the API routes for all the actions that you mentioned in the definition of your microservice. If you go through this exercise, you quickly discover that the API routes for the original long definition of the Subscription microservice are not going to stay RESTful. That is your signal to start breaking up that service into smaller services.
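For illustration (these routes are hypothetical), the original long definition forces action-style routes onto the subscription resource, while the split keeps each service’s routes RESTful:
  • Routes implied by the long Subscription definition (not RESTful): /api/subscriptions/{id}/activate-benefits, /api/subscriptions/{id}/deactivate-benefits, /api/subscriptions/{id}/recover
  • Subscription microservice after the split: /api/subscriptions and /api/subscriptions/{id}
  • Fulfillment microservice: /api/fulfillments and /api/fulfillments/{id}
  • Fulfillment-Recovery microservice: /api/fulfillmentrecoveries and /api/fulfillmentrecoveries/{id}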
In conclusion, introduce some guidelines within your company on how to define what a microservice is, and keep each other in check when adding new services and when adding functionality to your existing microservices.
Thank you for reading.
Almir Mustafic.