Magic of containers
Before the rise of containers, developers struggled with managing dependencies across multiple environments, which led to the famous quote "It works on my machine." Once containers were introduced, developers no longer needed to manage dependencies in every environment. They could just ship the containers.
If you were developing your application on a Linux machine while someone else worked on the same application from Windows or macOS, one of the three systems would often be missing some dependency, or a dependency would turn out to be OS-specific.
This is where the magic of containers comes into play. You could containerize your application on a Linux-based system, and the same container would run on a macOS or Windows machine without any missing dependencies.
Since containers are isolated from your host OS, each container ships with its core OS dependencies, your application's dependencies, and the application itself.
Too many configurations
We mentioned that one of the benefits of containers is scaling. Scaling is done simply by increasing the number of instances of your container. We use container orchestration tools like Docker Compose or Kubernetes for this purpose.
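As an illustration of scaling by instance count, a compose file can declare a replica count directly (the service name and image below are hypothetical):

```yaml
# Hypothetical excerpt from a docker-compose.yaml:
# ask Compose to run three identical instances of the same container.
services:
  web:
    image: nginx:1.25
    deploy:
      replicas: 3
```

The same effect can be achieved at the command line with `docker compose up --scale web=3`.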
However, organizations might be using both environments. Docker Compose can have issues with scaling applications, so it's not used in production, but it can still be used for testing purposes. Docker Compose uses a configuration file, usually called docker-compose.yaml, where you define which parts of your application are containerized, the scope of containerization, and how multiple containers talk to each other.
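A minimal sketch of such a file might look like this (service names, images, and the environment variable are hypothetical):

```yaml
# docker-compose.yaml: two containers that talk to each other
# over the default network Compose creates for them.
services:
  app:
    build: .                          # containerize the application in this directory
    ports:
      - "8080:8080"                   # expose the app to the host
    environment:
      REDIS_URL: redis://cache:6379   # reach the other container by its service name
    depends_on:
      - cache
  cache:
    image: redis:7
```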
In the case of Kubernetes, we would do the same using a package manager such as Helm. We would create our own custom chart and use it to deploy our application to Kubernetes.
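For a custom chart, the equivalent knobs typically live in a values.yaml file that the chart's templates consume. The keys below are hypothetical; they depend entirely on how the chart is written:

```yaml
# values.yaml for a hypothetical custom chart,
# deployed with a command like: helm install my-app ./my-chart
image:
  repository: registry.example.com/my-app
  tag: "1.0.0"
replicaCount: 3
service:
  type: ClusterIP
  port: 8080
```

Note that this file overlaps conceptually with the compose file: image, replica count, and ports now have to be kept in sync in two places.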
Now we are using both environments, Docker Compose and Kubernetes, to do something with our application: Kubernetes for deploying a production-ready version, and Docker Compose to locally test newer versions of our application.
Now you might be able to see the problem already. We have to configure two different environments which work differently but ultimately do the same task at different scales. Not only would you need to figure out how to use both tools, but you would also need to keep them in sync. This leads to many development bottlenecks, as we are using two different tools, each with its own unique configuration settings.
Score - Simplify orchestration
An easier way to manage multiple environments would be to write a workload's specification only once and have it automatically generate a docker-compose file and a values file for a Helm chart. That way, it becomes easier to manage multiple environments and ensure that both use the same configuration.
This is exactly what Score does. With Score, you write a single specification file, and your workload can run on both Docker Compose and Kubernetes via Helm charts. Score is an open-source, platform-agnostic, container-based workload specification.
Score aims to reduce the toil and cognitive load of developers by letting them define a single YAML file that works across multiple platforms. It does not intend to be a fully featured YAML replacement; instead, it defines workloads that can be combined with more advanced YAML configurations that an infrastructure team would provide to developers in an organization.
Score provides a single, easy-to-understand specification for each workload that describes its runtime requirements in a declarative manner. Developers only need to create a score.yaml file, from which configurations are generated in a standardized, automated, and one-directional way.
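A sketch of what such a file can look like, based on the Score project's `score.dev/v1b1` schema (the workload name, image, and variable are illustrative):

```yaml
# score.yaml: a single, declarative workload specification.
apiVersion: score.dev/v1b1
metadata:
  name: my-app
containers:
  my-app:
    image: registry.example.com/my-app:1.0.0
    variables:
      GREETING: "Hello from Score"   # environment variable for the container
```

The Score project provides per-platform CLI implementations, such as score-compose for Docker Compose and score-helm for Helm, that translate this one file into the platform's native configuration.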
Reducing the risk of wrongly specified or inconsistent configurations between environments can foster focus and joy for developers in their day-to-day work.
If you try to manage multiple container orchestration environments manually, you would need to write multiple configuration files. This can often lead to some sort of misconfiguration: either you define the wrong environment variable, or you configure something differently between environments.
Score helps you mitigate these multi-environment misconfigurations: you only need to write a single score.yaml specification file, and it will be used to automatically generate a docker-compose or Helm configuration.
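To make the generation step concrete: a Score workload declaring a single container would translate into a compose file along these lines. This is an illustrative sketch of the output shape, not verbatim tool output, and the names mirror hypothetical values from a score.yaml file:

```yaml
# docker-compose.yaml as generated by the score-compose CLI
# (illustrative; the exact output format depends on the tool version).
services:
  my-app:
    image: registry.example.com/my-app:1.0.0
    environment:
      GREETING: "Hello from Score"
```

Because this file is generated rather than hand-written, the compose and Helm configurations can no longer drift apart: both are derived from the same source.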