Tomi Engdahl says:
The Right Way to Plan a Migration (Forget Cloud-Washing)
When you migrate without a plan, you risk cloud-washing.
https://arstechnica.com/sponsored/the-right-way-to-plan-a-migration-forget-cloud-washing/
Some companies just don’t know what they need. They move to the cloud because everyone else is doing it. They take physical infrastructure and dump it into a virtual environment without asking why. Getting on the cloud will solve all our problems, right? Well, not exactly. When you migrate without a plan, you risk cloud-washing. Yes, you’re in the cloud but, no, you’re not set up to take advantage of what the cloud offers.
What’s more, over half of cloud migrations go over budget and beyond the migration window, leading to unexpected problems for businesses, according to research from Gartner, Forrester, and others. The only way to avoid that fate is to decide which cloud benefits matter most, then plan a migration accordingly.
Tomi Engdahl says:
About When Not to Do Microservices
https://developers.redhat.com/blog/2017/10/19/about-when-not-to-do-microservices/?sc_cid=7016000000127ECAAY
A statement about microservices and not doing them:
“Microservices architecture is not appropriate all the time”.
Doing microservices, or monoliths, or SOA, or microliths, or whatever fancy term gets bandied about at present is not the point. Businesses ideally will be looking for [new] ways to deliver customer value, and technology can be a differentiator. The key problem we face as we journey down this path of “delivering value” is actually quite simple: uncertainty. We literally do not know what will deliver value. Customers are also poor at articulating it.
Part of what they discovered is that 66% of the “good ideas” people have actually have zero impact (or even a negative one) on the metric they were intended to affect. The folks who are able to run cheap experiments, run lots of them, and learn what brings value to customers faster than their competitors are going to win.
Microservices is about optimizing for speed.
Pioneers go off and experiment with wild, divergent approaches, running many experiments in the hope of reducing uncertainty about what may bring value to a company three or more years out. This “pioneering” effort is intended to turn up a few decent options that we can build upon and take to the next level. The “settlers” end up doing that: they figure out how to scale the product engineering, scale marketing, sales, and so on, and build the pieces of the organization needed to make the product a successful differentiator. Ultimately, over the years, as a result of competitive diffusion and the like, our new product is no longer uniquely differentiated but still delivers massive value. It will be around for a long time, and there are things we can do to make it run more efficiently.
So WTF? How does this tie in? Well…where do you think you fit in your organization?
If you’re the Pioneers, stick with monoliths.
As pioneers, you have to move quickly. You have no idea whether a “thing” will bring value. You want to run cheap experiments as quickly as possible and learn.
Running lots of these small experiments doesn’t require building out a complete product, and it absolutely reduces the uncertainty in your idea. You may, at some point, reach the point where you build a Minimum Viable Product. But again, the point of the MVP is to test a hypothesis and elicit learning.
Doing microservices at this point is complete overkill and will distract you from your objective: figure out something that delivers value.
If you’re the Settlers, you may need microservices
Once you stumble upon something that delivers value, you will probably want to scale it. This involves creating a product team: product managers, testers, marketing, sales, etc. On the product side, you’ll want to be adding features and moving quickly, again, to run smaller tests about certain features.
Again, our goal is to make changes quickly to test them.
Microservices involve a lot of complexity. Matt Klein recently said, “don’t take on complexity when you don’t need to”. He’s absolutely correct.
If you’re the Town Planners, you may need microservices
We’re currently experiencing a lot of “microservices envy” in our industry. It’s easy to lose track of our job as technologists: to help find and cultivate customer value using technology. Don’t over-optimize and complicate things when you don’t need to. Solve the problems you have, not someone else’s.
Tomi Engdahl says:
What’s the hardest part about microservices? Your data
https://opensource.com/article/17/5/hardest-part-about-microservices-your-data?sc_cid=7016000000127ECAAY
We explore the challenge of dealing with data when creating and developing microservices.
Using Spring Boot/Dropwizard/Docker doesn’t mean you’re doing microservices. Taking a hard look at your domain and your data will help you get to microservices.
Of the reasons we attempt a microservices architecture, chief among them is allowing our teams to work on different parts of the system at different speeds with minimal impact across teams. We want teams to be autonomous, capable of making decisions about how to best implement and operate their services, and free to make changes as quickly as the business may require. If we have our teams organized to do this, then our systems architecture will begin to evolve into something that looks like microservices.
To gain this autonomy, we need to shed our dependencies, but that’s a lot easier said than done.
Tomi Engdahl says:
Whisking Functions with Promises
https://developers.redhat.com/blog/2018/02/26/whisking-functions-with-promises/?sc_cid=7016000000127ECAAY
In this blog I will demonstrate how to build a simple Node.js function that does reverse geocoding using the Google Maps API, and how to deploy the function to Apache OpenWhisk.
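As a minimal sketch of the shape such an action can take (the parameter names, the use of the global fetch API, and the response handling are assumptions for illustration, not taken from the post), an OpenWhisk Node.js action exports a main function that receives its parameters as an object and returns a Promise:

// OpenWhisk Node.js actions export a `main` function; the resolved value
// of the returned Promise becomes the activation result.
// `lat`, `lng`, and `apiKey` are illustrative parameter names.
export async function main(params: { lat: number; lng: number; apiKey: string }) {
  const url =
    "https://maps.googleapis.com/maps/api/geocode/json" +
    `?latlng=${params.lat},${params.lng}&key=${params.apiKey}`;

  const response = await fetch(url);
  const data = await response.json();

  if (data.status !== "OK") {
    // Rejecting the Promise marks the activation as failed.
    return Promise.reject({ error: data.status });
  }

  // Return the first human-readable address found for the coordinates.
  return { address: data.results[0].formatted_address };
}

The action could then be deployed with the OpenWhisk CLI, for example with wsk action create reverse-geocode reverse-geocode.js (action and file names are illustrative), and invoked with the coordinates passed as parameters.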
Tomi Engdahl says:
What are microservices?
https://opensource.com/resources/what-are-microservices?sc_cid=7016000000127ECAAY
The central idea behind microservices is that some types of applications become easier to build and maintain when they are broken down into smaller, composable pieces which work together. Each component is continuously developed and separately maintained, and the application is then simply the sum of its constituent components. This is in contrast to a traditional, “monolithic” application, which is developed all in one piece.
Tomi Engdahl says:
About When Not to Do Microservices
https://developers.redhat.com/blog/2017/10/19/about-when-not-to-do-microservices/?sc_cid=7016000000127ECAAY
“Microservices architecture is not appropriate all the time”.
Let me expand a little bit.
Doing microservices, or monoliths, or SOA, or microliths, or whatever fancy term gets bandied about at present is not the point. Businesses ideally will be looking for [new] ways to deliver customer value, and technology can be a differentiator. The key problem we face as we journey down this path of “delivering value” is actually quite simple: uncertainty. We literally do not know what will deliver value. Customers are also poor at articulating it. We have lots of ideas, good ideas sometimes, but we don’t actually know what will deliver customer value until we experiment and try.
66% of the “good ideas” people have actually have zero impact (or even worse)
The folks who are able to run cheap experiments, run lots of them, and learn what brings value to customers faster than their competitors are going to win.
If you’re the Pioneers, stick with monoliths.
As pioneers, you have to move quickly. You have no idea whether a “thing” will bring value. You want to run cheap experiments as quickly as possible and learn. You may not even be writing any code!
The most inefficient way to test a hypothesis is to build it out completely. In his story, he talks about reducing uncertainty by coming up with a hypothesis like “people who take pictures of wine might want to buy that wine” and devising cheap experiments to test it.
Running lots of these small experiments doesn’t require building out a complete product, and it absolutely reduces the uncertainty in your idea. You may, at some point, reach the point where you build a Minimum Viable Product. But again, the point of the MVP is to test a hypothesis and elicit learning. An MVP is not product engineering. You’re not building this for scale. In fact, you’re doing the opposite. You’re probably going to be running MANY MVP tests and throwing them away. A monolith is a perfect way to attack this. A monolith will actually allow you to go faster, because changes can be made quickly, all in a single place.
Tomi Engdahl says:
Bringing Coolstore Microservices to the Service Mesh: Part 1 – Exploring Auto-injection
https://developers.redhat.com/blog/2018/04/05/coolstore-microservices-service-mesh-part-1-exploring-auto-injection/?sc_cid=7016000000127ECAAY
Tomi Engdahl says:
5 guiding principles you should know before you design a microservice
https://opensource.com/article/18/4/guide-design-microservices?sc_cid=7016000000127ECAAY
Top CTOs offer advice for a well-designed microservice based on five simple principles.
Tomi Engdahl says:
How to solve the challenges of creating automated tests for microservices
http://www.electronics-know-how.com/article/2614/2614
As an architecture for building complex systems, microservices is gaining significant traction within the development community. Applications with challenges around dependencies and scaling, in particular, can benefit greatly from it. Microservices adoption is on the rise, but so are the struggles associated with understanding how to test microservices.
Toby Clemson from ThoughtWorks has done a great job of enumerating testing strategies that you might want to employ in a microservices architecture
Tomi Engdahl says:
5 microservice testing strategies for startups
https://opensource.com/article/18/6/five-microservice-testing-strategies-startups?sc_cid=7016000000127ECAAY
Testing microservices isn’t easy, but the benefits make it worthwhile. Here are five strategies to consider.
Tomi Engdahl says:
How Kubernetes became the solution for migrating legacy applications
https://opensource.com/article/18/2/how-kubernetes-became-solution-migrating-legacy-applications?sc_cid=7016000000127ECAAY
You don’t have to tear down your monolith to modernize it. You can evolve it into a beautiful microservice using cloud-native technologies.
Tomi Engdahl says:
What are microservices?
https://opensource.com/resources/what-are-microservices?sc_cid=7016000000127ECAAY
Applications built as a set of modular components are easier to understand, easier to test, and, most importantly, easier to maintain over the life of the application. This enables organizations to achieve much higher agility.
This approach has proven to be superior, especially for large enterprise applications which are developed by teams of geographically and culturally diverse developers.
There are other benefits:
Developer independence: Small teams work in parallel and can iterate faster than large teams.
Isolation and resilience: If a component dies, you spin up another while the rest of the application continues to function.
Scalability: Smaller components take up fewer resources and can be scaled to meet increasing demand for that component only.
Lifecycle automation: Individual components are easier to fit into continuous delivery pipelines and complex deployment scenarios not possible with monoliths.
Relationship to the business: Microservice architectures are split along business domain boundaries, increasing independence and understanding across the organization.
Tomi Engdahl says:
Connecting and managing microservices with Istio 1.0 on Kubernetes
https://www.redhat.com/en/blog/connecting-and-managing-microservices-istio-10-kubernetes?sc_cid=7016000000127ECAAY
Coming into this year, CoreOS’s Alex Polvi predicted that Istio, an open source tool to connect and manage microservices, would soon become a category-leading service mesh (essentially a configurable infrastructure layer for microservices) for Kubernetes. Today we celebrate a milestone that brings us closer to that prediction: the general availability of Istio 1.0.
Istio provides a method of integrating services like load balancing, mutual service-to-service authentication, transport layer encryption, and application telemetry, while requiring minimal (and in many cases no) changes to the code of individual services. This is in contrast to other solutions, like the various Java libraries from Netflix OSS.
Tomi Engdahl says:
What is Istio? The latest open source project out of Google
https://www.computerworlduk.com/open-source/what-is-istio-latest-open-source-project-out-of-google-3681599/
The tech giant is extending the Kubernetes container service with Istio, its latest open source software release
In short, Istio is an “open platform to connect, manage, and secure microservices”.
Otherwise known as a ‘service mesh’, Istio aims to unify traffic flow management, access policy enforcement, and telemetry data aggregation across microservices into a shared management console, regardless of environment.
Originally launched in May 2017, version 1.0 became generally available on 1 August 2018 and was announced on stage during Google Next.
Tomi Engdahl says:
Distributed tracing in a microservices world
https://opensource.com/article/18/9/distributed-tracing-microservices-world?sc_cid=7016000000127ECAAY
What is distributed tracing and why is it so important in a microservices environment?
Microservices have become the default choice for greenfield applications. After all, according to practitioners, microservices provide the type of decoupling required for a full digital transformation, allowing individual teams to innovate at a far greater speed than ever before.
Microservices are nothing more than regular distributed systems, only at a larger scale. Therefore, they exacerbate the well-known problems that any distributed system faces, like lack of visibility into a business transaction across process boundaries.
It’s extremely common to have multiple versions of a single service running in production at the same time.
What we have is chaos. It’s almost impossible to map the interdependencies and understand the path of a business transaction across services and their versions.
This chaos ends up being a good thing, as long as we can observe what’s going on and diagnose the problems that will eventually occur.
A system is said to be observable when we can understand its state based on the metrics, logs, and traces it emits.
Metrics solutions like Prometheus are very popular in tackling this aspect of the observability problem. Similarly, we need logs to be stored in a central location.
Logstash is usually applied here, in combination with a backing storage like Elasticsearch.
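On the metrics side, a minimal sketch of what exposing Prometheus metrics from a Node.js service can look like, using the prom-client library (the metric name, labels, and port are assumptions for illustration):

import http from "http";
import client from "prom-client";

const register = new client.Registry();
client.collectDefaultMetrics({ register });

// Illustrative counter; the name and labels are placeholders.
const httpRequests = new client.Counter({
  name: "http_requests_total",
  help: "Total HTTP requests handled by this service",
  labelNames: ["route"],
  registers: [register],
});

http
  .createServer(async (req, res) => {
    if (req.url === "/metrics") {
      // Prometheus scrapes this endpoint on its own schedule.
      res.setHeader("Content-Type", register.contentType);
      res.end(await register.metrics());
      return;
    }
    httpRequests.inc({ route: req.url ?? "unknown" });
    res.end("ok");
  })
  .listen(8080);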
In monolithic web applications, logging frameworks provide enough capabilities to do a basic root-cause analysis when something fails. A developer just needs to place log statements in the code.
In microservices architectures, logging alone fails to deliver the complete picture.
A common strategy is to create an identifier at the very first building block of our transaction and propagate it across all calls, for example by sending it as an HTTP header whenever a remote call is made.
In a central log collector, we could then see entries from all the services involved, correlated by that identifier.
This technique is one of the concepts at the core of any modern distributed tracing solution.
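A minimal sketch of that identifier-propagation idea in a Node.js service (the x-request-id header name, the downstream URL, and the log fields are assumptions for illustration):

import http from "http";
import { randomUUID } from "crypto";

// Hypothetical downstream service; adjust for your environment.
const INVENTORY_URL = "http://inventory:8080/stock";

http
  .createServer(async (req, res) => {
    // Reuse the caller's correlation ID, or mint a new one at the edge.
    const requestId = (req.headers["x-request-id"] as string) ?? randomUUID();

    // Every log line carries the ID so a central collector can group them.
    console.log(JSON.stringify({ requestId, msg: "order received" }));

    // Propagate the same ID on every outgoing call.
    await fetch(INVENTORY_URL, { headers: { "x-request-id": requestId } });

    res.setHeader("x-request-id", requestId);
    res.end("accepted");
  })
  .listen(3000);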
Such a trace can be displayed in Jaeger, an open source distributed tracing solution hosted by the Cloud Native Computing Foundation (CNCF).
As with logging, we need to annotate or instrument our code with the data we want to record.
We can use an API such as OpenTracing, leaving the decision about the concrete implementation as a packaging or runtime concern.
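A rough sketch of what instrumenting a single operation against the OpenTracing API can look like (the operation name and tags are illustrative; the concrete tracer, e.g. a Jaeger client, would be registered elsewhere):

import { globalTracer, Tags } from "opentracing";

// Code only against the vendor-neutral API; the concrete tracer
// implementation is selected at packaging or runtime.
const tracer = globalTracer();

async function chargeOrder(orderId: string): Promise<void> {
  const span = tracer.startSpan("charge-order");
  span.setTag(Tags.SPAN_KIND, Tags.SPAN_KIND_RPC_CLIENT);
  span.setTag("order.id", orderId);
  try {
    // ... call the payment service here ...
  } catch (err) {
    span.setTag(Tags.ERROR, true);
    span.log({ event: "error", message: String(err) });
    throw err;
  } finally {
    // Finishing the span reports it to the configured tracer backend.
    span.finish();
  }
}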
Alternatively, we could turn on the distributed tracing integration for Istio, a service mesh solution.
It’s easy to feel helpless while conducting a root cause analysis when something eventually fails and the right tools aren’t available. The good news is that tools like Prometheus, Logstash, OpenTracing, and Jaeger provide the pieces needed to bring observability to your application.
Tomi Engdahl says:
On Microservice Architecture (Mikropalveluarkkitehtuurista)
https://www.cinia.fi/toihin-cinialle/mikropalveluarkkitehtuurista.html
Microservice architecture is increasingly becoming the standard choice when considering how future software should be implemented.
Tomi Engdahl says:
Revisiting the Unix philosophy in 2018
https://opensource.com/article/18/11/revisiting-unix-philosophy-2018?sc_cid=7016000000127ECAAY
The old strategy of building small, focused applications is new again in the modern microservices environment.
In a nutshell that philosophy is: Build small, focused programs—in whatever language—that do only one thing but do this thing well, communicate via stdin/stdout, and are connected through pipes.
Sound familiar?
Yeah, I thought so. That’s pretty much the definition of microservices offered by James Lewis and Martin Fowler.
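As a toy illustration of that philosophy (the file name and pattern handling are my own, not from the article): a small filter that reads lines from stdin, keeps the ones matching a pattern, and writes them to stdout, so it can be chained with other tools through pipes:

// Usage sketch: node filter.js ERROR < app.log | sort | uniq -c
import * as readline from "readline";

const pattern = process.argv[2] ?? "";
const rl = readline.createInterface({ input: process.stdin });

rl.on("line", (line) => {
  // Do one thing: pass through only the lines containing the pattern.
  if (line.includes(pattern)) {
    process.stdout.write(line + "\n");
  }
});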
Tomi Engdahl says:
Migrating to Microservice Databases: From Relational Monolith to Distributed Data
https://developers.redhat.com/books/migrating-microservice-databases-relational-monolith-distributed-data?sc_cid=7016000000127ECAAY
Code is easy, State is hard. Learning how to deal with your monolithic relational databases in a microservices structure is key to keeping pace in a quickly changing workplace.
Tomi Engdahl says:
A fully buzzword-compliant solution:
Put terrible microservices in containers. Manage the containers with Kubernetes. Put everything on the AWS cloud. This time it WILL work, I promise!
Reality:
There are infinitely many ways to create poor implementations, only a few ways to write good ones, and even fewer ways to make something efficient, long-lived, and easily maintainable. As with many things, one can choose any two of good, fast, and cheap, but never all three.
If you can’t even do a monolith right, you shouldn’t even be thinking of microservices yet.
Well, the whole idea of microservices is to have less to conquer. The cost is obviously communication, which is always a slowdown in any system of entities. If you don’t intend to distribute the work among diverse and distant teams, it is my opinion that microservices are an overhead and a pain you don’t need.
Source: https://www.facebook.com/126000117413375/posts/3214565311890158
Tomi Engdahl says:
Your work is still gonna suck with microservices, just as it did with the monolith.