In this blog article, Igor Nabebin, senior software engineer, explains how we sped up feature delivery at sennder with the power of micro frontends. He describes the problem we faced, the different solution approaches we considered, some difficulties we encountered while implementing them, and of course the results we achieved.
About the author: Igor has been with sennder since January 2020 and works in the execution team of Octopus, one of our internal products and part of the sennder operating system called sennOS.
Together with colleagues from multiple cross-functional teams working on Octopus, Igor is responsible for the frontend code of the application.
It’s a big enterprise application with plenty of features being developed and maintained continuously. This leads to various challenges while working with Octopus, some of which require global changes to the project’s architecture.
You can learn more about how our tech organization is structured through our product development playbook here.
The challenge
Before we dive into the challenges, let’s start with a few stats that describe Octopus as a product:
Most pull requests contain changes in a single component or just a few of them, so CI pipelines that build, test, and deploy the whole application are largely redundant, although having some integration tests is still useful in this case.
Also, the CI pipeline that triggers the deployment process is executed on every code change, which can create a race condition that makes developers spend time updating and rebasing their branches. A similar concept is described in this “race condition in auto-merging” issue.
From the developers’ perspective it looks like this:
1. Developers A and B from two different teams want to deploy their code to the production environment independently of each other. They update their feature branches, triggering the CI pipelines that are executed automatically on every push to a feature branch. At sennder, we don’t execute them for master/development branches.
2. As soon as all tests pass, developer A triggers the deployment.
3. Developer B then tries to trigger the deployment but receives an error: the code in their feature branch doesn’t contain the changes added by developer A in the previous step, so the code can’t be deployed. Developer B has to rebase the feature branch one more time and wait for the CI pipeline to pass again.
As we see, the process for developer B takes almost twice as long as it should. If we add another developer (C) to the scheme above, they’ll probably have to re-run the tests three times, and a fourth developer (D) four times.
Apart from the obvious delay in delivering features to users and the machine time wasted on redundant test runs, this situation is likely to frustrate developers and push them towards organizing an improvised deployment queue with their colleagues.
This manual queue contradicts the CI/CD concept, as it requires the constant attention of the developers involved and adds idle time spent waiting for other queue members. You can also read about other ways to solve this problem at GitLab.
The last limitation, which is still quite serious though not as critical as the previous two, is the extreme difficulty of introducing global code updates, like migrating the JS framework to the next major version or making any other change that touches the whole codebase. Upgrades like these require the coordinated work of all teams working on the product. They also require a temporary feature freeze for the duration of the migration, which means that no features will be delivered for some amount of time, anything from a couple of hours to a couple of days.
Internally, we’ve discussed these limitations multiple times and come to the consensus that the application code must be separated. But the separation must happen only on the code level: users must keep the same user experience, so it should remain a single application from their point of view. Micro frontends are the most popular concept for achieving this code separation.
Micro Frontends
The microservice architecture approach is no longer something new in the backend world; it has found its niche, and it helps companies achieve efficiency and scalability that are not attainable with a monolithic architecture.
Despite this, most client-side code, including the code being developed both inside and outside our company today, is still monolithic. This can be justified, though, because the micro frontend approach, while quite similar to what microservices offer, has its own application area.
The graph below is based on our own experience, but many micro frontend experts, like Michael Geers and Cam Jackson, also agree that the micro frontend architecture is designed for an organizational structure where independent teams own the same product.
The product usually grows with the company. As the number of users increases, so does the number of new features being developed. At some point, the team responsible for the product becomes too big to manage, and it splits into multiple smaller teams, each owning a part of the product. The growth of the product’s complexity and of the number of teams working on it can lead to a significant decrease in feature development and delivery speed.
Possible solutions
The situation described above is not new. There are several different approaches to solving the problems mentioned, so I’ll briefly cover the most popular ones. You can also find a more detailed description with some code examples in this article.
The different approaches I will walk us through are:
- Server-side routing
- Server-side composition
- Iframes
- Build-time dependencies
- Runtime dependencies
Server-side routing
This solution divides the initial application into multiple smaller independent applications united by a server-side router, e.g. Nginx. This approach is relatively simple to implement and works perfectly if it’s possible to divide the initial application into independent parts by URL, so that team A works on page A, team B works on page B, and so on.
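To illustrate the idea, here is a minimal sketch of URL-based routing written in plain Node.js/TypeScript; in a real setup this would more likely be an Nginx configuration, and the route prefixes and ports below are hypothetical:

```ts
import http from 'http';

// Each team deploys its page as a standalone application on its own port.
const routes: Record<string, number> = {
  '/orders': 3001, // team A's application
  '/billing': 3002, // team B's application
};

http
  .createServer((req, res) => {
    const prefix = Object.keys(routes).find((p) => req.url?.startsWith(p));
    if (!prefix) {
      res.statusCode = 404;
      res.end('Not found');
      return;
    }
    // Forward the request to the owning application and stream the response back.
    const proxied = http.request(
      { host: 'localhost', port: routes[prefix], path: req.url, method: req.method, headers: req.headers },
      (upstream) => {
        res.writeHead(upstream.statusCode ?? 502, upstream.headers);
        upstream.pipe(res);
      }
    );
    req.pipe(proxied);
  })
  .listen(8080);
```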
Unfortunately, this approach doesn’t work for our product because most of the features we develop include components on different pages, including ones owned by other teams.
Server-side composition
The main difference between this approach and the previous one is that techniques like Nginx SSI allow us to compose the HTML page from different fragments by injecting components’ HTML code into the root HTML.
This kind of composition works really well with search engines, and the first-load performance is also better than with client-side approaches. But since our product has an application-like UI with a lot of interactivity, the server-side approach won’t work that well in our case.
Iframes
An iframe is a time-tested technology supported by all browsers, including the older ones, and it provides the best isolation of scripts and styling between different modules.
However, iframe technology has some serious drawbacks:
- Harmful for accessibility.
- Bad for search engines.
- Possible performance issues: Every new iframe generates a new browser context, which is almost the same as opening a new browser tab in terms of CPU and memory usage.
- Layout constraints: an iframe cannot render content outside of its borders.
The last point was a red flag for us because the first module we wanted to extract from the application contained components like a modal window, which is quite difficult to implement with iframes.
Build-time dependencies
This solution includes extracting the code of individual modules into separate repositories, or into different projects within the same monorepo, each with independent CI pipelines, and integrating these modules into the root application via NPM packages.
Even though this approach doesn’t allow developers to deploy individual modules independently, we decided to use it as an initial step. It doesn’t fully solve the problem, but it doesn’t require any infrastructure updates or the use of frameworks/external utilities in the root application. The other reason for going with the build-time approach first is that we had already implemented it for our component library.
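For clarity, here’s a hedged sketch of what consuming an extracted module as a build-time dependency looks like; the package and component names are hypothetical:

```ts
// The module lives in its own repository and is published to a private NPM
// registry; the root application depends on it like on any other package:
//
//   "dependencies": { "@sennder/order-module": "^1.2.0" }
//
import { OrderList } from '@sennder/order-module';

// The component is compiled into the root application bundle at build time,
// so shipping a new module version still requires a root application release.
export { OrderList };
```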
Runtime dependencies
This is what people usually mean when they talk about micro frontends.
It looks pretty similar to the previous solution, but with independent deployments. To make this possible, we need to extract the module bundles from the root application bundle and make the root application download them from an external source, e.g. from a CDN.
The runtime dependencies will be our final step on the way to splitting the application.
Implementation details
The double migration from a monolithic approach to build-time dependencies, followed by runtime dependencies, made us add some extra steps, but the roadmap still looked fairly simple. We planned to achieve independent deployments in just a couple of steps:
1. Extracting the code.
2. Releasing the module to NPM.
3. Tests extraction.
4. Runtime module loading.
5. Releasing the module to CDN.
Extracting the code
The first step is pretty obvious. We need to move the code related to the feature to a separate repository.
Releasing the module to NPM
This step is necessary only if you decide to go with build-time dependencies first, or if you want to keep using build-time NPM dependencies for local development.
Developers will need an NPM registry to host their NPM package. There are multiple NPM registry solutions available on the market: private or public, third-party or self-hosted, free or paid. We were looking for a private one, so we chose the GitLab NPM registry, as we were already using other GitLab services. At this stage, we also need to create a CI pipeline to release the NPM package automatically. Manual releases also work just fine, especially if you don’t plan to update the module code too often.
After this stage, we have a working solution: we get versioning support, and we can update the code of the module without touching the root application. The solution is not perfect, though, because independent deployments are not yet possible. We still have to trigger a root application deployment to make these changes appear on the user side.
Tests extraction
The next thing we did was move the feature-related tests to the repo with the feature code and add testing jobs to the pipeline.
After doing that, the build-time dependencies approach should be up and running, and you might even stop here if your main problem was the speed of the testing pipeline.
Runtime module loading
We were looking for a lightweight and easy way of loading the module by URL instead of from the NPM registry, one that wouldn’t require changing the existing codebase, so JavaScript import statements should behave the same way as before. We also didn’t want to use a feature-heavy micro frontend framework for the first version of our setup.
It turns out that the technology we were looking for, which allows us to redirect an import statement’s destination from the local node_modules folder to any external URL, already exists: import maps. The bad news is that its browser support was too poor for us to use it as-is. The good news is that there’s a nice open-source tool polyfilling this functionality called SystemJS. Implementing it was pretty easy, but then we realized that we also needed to find a way to store the module bundles.
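Here’s a minimal sketch of how this looks with SystemJS; the module name, CDN URL, and the module’s mount function are hypothetical:

```ts
// An import map (usually inlined in index.html) tells SystemJS where each
// bare module specifier lives:
//
//   <script type="systemjs-importmap">
//     { "imports": { "@sennder/order-module": "https://cdn.example.com/order-module/1.2.0/bundle.js" } }
//   </script>

// Minimal typing for the SystemJS global loader.
declare const System: { import<T = unknown>(specifier: string): Promise<T> };

// At runtime, the bare specifier is resolved through the import map, and the
// bundle is downloaded from the CDN instead of being read from node_modules:
const orderModule = await System.import<{ mount(el: HTMLElement): void }>('@sennder/order-module');
orderModule.mount(document.getElementById('order-root')!);
```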
Releasing the module to CDN
There are free CDN services for open-source NPM packages, but unfortunately none of them worked with private NPM packages. So we decided to create our own internal CDN, which allows us to host the module bundles consumed by SystemJS.
After the CDN was created, using AWS S3 buckets under the hood, the whole runtime dependencies setup started working.
For more information about how we created this CDN, check out this article: AWS multi-account CI/CD with Gitlab runners.
Challenges
1. Code duplication
Some parts of the code could be used in both the root application and the extracted module. There are two ways of storing this code:
- By copying this code to both repositories
- Or by extracting it into a separate runtime or build-time dependency.
Even though copying code sounds bad, in reality, for small chunks of code, it’s a much easier solution than maintaining a library of shared code.
2. Type annotations
The developer experience should not get worse after migrating to micro frontends. The application code is written in TypeScript, and we want to have type annotations for the micro frontend modules we use.
That means we have to keep using NPM dependencies for local development.
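A hedged sketch of what this can look like: the NPM package provides the types at development time, while the actual code is loaded through SystemJS at runtime (the module, its types, and fetchOrders are hypothetical):

```ts
// A type-only import is erased at compile time, so it adds no build-time code
// dependency to the production bundle; it only brings in the .d.ts files.
import type { Order, OrderModule } from '@sennder/order-module';

declare const System: { import<T>(specifier: string): Promise<T> };

// The runtime import stays fully typed thanks to the NPM package's types:
const orderModule = await System.import<OrderModule>('@sennder/order-module');
const orders: Order[] = await orderModule.fetchOrders();
```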
3. Vendor code: runtime vs build-time
Sometimes it’s difficult to decide whether we should make a shared library, like lodash, a runtime dependency, so that the root application and all micro frontends reference the same instance of it, or have each module use its own instance.
Each approach has its own pros and cons:
- Enforcing the same library version for each module vs. allowing to upgrade/downgrade it independently.
- Downloading the whole library once and caching it vs. using tree-shaking but not being able to cache it.
There is no strict rule on which way to choose. All shared dependencies are build-time ones by default, and you might want to make one a runtime dependency in the following cases (see the sketch after this list):
- The dependency does not support tree-shaking or you can’t benefit from it a lot.
- The dependency is internal and is being maintained by your company, so you’re sure some random dependency update won’t break your application.
- It’s important to have all micro frontends use the same version of the dependency. E.g. if this dependency is a component library and you care about the UI consistency across your application modules.
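As a rough sketch of what making a dependency a runtime one can mean in practice with webpack and SystemJS (the package names are illustrative, and this is not our exact configuration):

```ts
// webpack.config.ts of a micro frontend
import type { Configuration } from 'webpack';

const config: Configuration = {
  output: {
    libraryTarget: 'system', // emit a SystemJS-compatible bundle
  },
  // Runtime dependencies: excluded from the bundle and resolved through the
  // import map at load time, so every module shares the same instance.
  externals: ['vue', '@sennder/design-system'],
};

export default config;

// Everything not listed in `externals` stays a build-time dependency: it is
// simply bundled in, so each module carries and versions its own copy.
```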
4. Backward compatibility
The application we’re working on will be the first one using micro frontends, and some of its runtime dependencies could also be used by other applications that have not adopted the micro frontend technology yet.
To make these modules compatible with both monolithic and micro frontend setups, we need to deploy them to both the NPM registry and the CDN. That’s why we have two parallel deployment jobs in our CI pipeline: one for the NPM registry and one for the CDN.
5. QA
There are different approaches to the QA environment setup for micro frontends. The first and simplest approach is to disable micro frontends for the QA environment. In this case, micro frontends will be available only in the staging and production environments.
This solution has multiple drawbacks, the main one being that we might miss some micro frontend setup-related issues during QA. E.g. if one of the micro frontend deployment CI jobs fails, we might end up with an outdated version or even a 404 error in production.
The better approach is to enable micro frontends for the QA environment, but in this case you’ll need to deploy pre-release versions of your micro frontend to the CDN and also figure out how to make your QA cluster reference the pre-release version of the micro frontend you want to test. Also, please see this source about semantic versioning.
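One possible way to wire this up, sketched below with hypothetical module names, versions, and URLs, is to build a different import map per environment, so that QA resolves a semver pre-release bundle:

```ts
// QA points at a pre-release bundle, production at the latest stable release.
const importMapsByEnv: Record<string, { imports: Record<string, string> }> = {
  qa: {
    imports: { '@sennder/order-module': 'https://cdn.internal.example/order-module/1.4.0-rc.2/bundle.js' },
  },
  production: {
    imports: { '@sennder/order-module': 'https://cdn.internal.example/order-module/1.3.2/bundle.js' },
  },
};

// The selected map is serialized into the page as a
// <script type="systemjs-importmap"> tag at deploy time.
export const importMap = importMapsByEnv[process.env.APP_ENV ?? 'production'];
```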
6. Using more than one framework
The main tradeoff of not using any micro frontend-oriented framework is having to stick to a single JS framework instance, which means we’re not able to migrate to a new framework, or to a new version of the same framework, module by module.
7. Data-flow
The data flow and communication between a micro frontend and a root application, or between two micro frontends, is quite simple. Having a single runtime allows us to use the preferred way of communication between components in the JS framework you use, like props for React or props/emit for Vue. There’s no need to create separate event buses or to have some micro frontend-specific global state. Just keep using the same data flow you used before migrating to the micro frontend approach.
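With Vue, for example, this can look like the sketch below, where the micro frontend exposes an ordinary component (OrderList and its props/events are hypothetical):

```ts
import Vue from 'vue';
// Resolved from node_modules at build time or from the CDN at runtime;
// either way, it's just a regular Vue component.
import { OrderList } from '@sennder/order-module';

export default Vue.extend({
  components: { OrderList },
  data: () => ({ orders: [] as Array<{ id: string }> }),
  methods: {
    onSelect(id: string) {
      console.log('order selected:', id); // plain props down, events up
    },
  },
  template: '<OrderList :orders="orders" @select="onSelect" />',
});
```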
8. CSS
You’ll need to take care not only of the JS files of your micro frontend but also of its CSS files. It’s up to you whether you inject your CSS code into the JS bundle or provide it as a separate CSS file.
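Both options can be expressed in the module’s webpack configuration, roughly like this (a sketch, not our exact setup):

```ts
import MiniCssExtractPlugin from 'mini-css-extract-plugin';

// Option 1: inject CSS into the JS bundle, so loading the bundle is enough.
const injectIntoJs = { test: /\.css$/, use: ['style-loader', 'css-loader'] };

// Option 2: emit a separate .css file that the root application has to load
// alongside the JS bundle.
const extractToFile = { test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader'] };

export { injectIntoJs, extractToFile };
```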
Results
We introduced the first micro frontend in April 2021, another two were released in July, and we are planning to release a few more in Q3 2021. Even though we are at the very beginning of moving to the micro frontend architecture, we already have some results:
Simplicity
Micro frontends can be really small. At sennder, each of them contains no more than 10 components. This makes them easily manageable and allows us to onboard people quickly and efficiently.
The code of the client application that consumes the micro frontend has not undergone significant changes: the SystemJS solution required about 100 additional lines of code, which mostly just switch between build-time and runtime dependencies depending on the current environment.
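In spirit, that switching code looks something like this sketch (the helper, module name, and types are hypothetical, and the real setup is wired through the bundler configuration):

```ts
import type { OrderModule } from '@sennder/order-module';

declare const System: { import<T>(specifier: string): Promise<T> };

// During local development the module comes from node_modules (a build-time
// dependency); in other environments it is downloaded from the CDN through
// SystemJS (a runtime dependency).
export function loadOrderModule(): Promise<OrderModule> {
  return process.env.NODE_ENV === 'development'
    ? import('@sennder/order-module') // bundled by webpack
    : System.import<OrderModule>('@sennder/order-module');
}
```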
Speed
Our average pull request lifetime in the monolith repository is 99 hours, while for micro frontends it is only 35 hours, even though both include the very same steps of a CI pipeline check, code review, and manual QA.
Talking about the pipeline time, it takes around 12 minutes to both test and release a micro frontend. This is still a lot, but it’s now mostly caused by the CI/CD tool provider, so there is room to improve the speed.
For the monolith, the pipeline takes longer: 14 minutes for testing and 8 minutes for releasing, which is 22 minutes in total, without even considering the race condition mentioned earlier.
Of course, the micro frontend approach also has multiple drawbacks, the most important of which is increased technical complexity. You would probably be much happier with the monolithic approach if you’re developing a company website or a small application. Even if the application is big but just one team works on it, micro frontends won’t make your work easier; they will most likely bring extra confusion caused by the new repositories and tools needed to make the whole setup work.
But if you suffer from the same problems as we did, mostly caused by the expensive communication between teams working on the same product, then the micro frontend approach could be a great solution.