In this post, we’ll describe how we reduced the memory footprint of linkerd, our JVM-based RPC proxy for microservices, by almost 80%—from 500MB to 105MB—by tuning the JVM’s runtime parameters. We’ll describe why we went through this painful exercise, and the various things that did—and didn’t—help us get there.
Version 0.6.0 of linkerd and namerd was released today! We took the opportunity in this release to bring more consistency to our config file format. Unfortunately, this meant making backwards-incompatible changes. In this post, we describe how to update your config files to work with 0.6.0.
Distributed tracing is a critical tool for debugging and understanding microservices. But setting up tracing libraries across all services can be costly—especially in systems composed of services written in disparate languages and frameworks. In this post, we’ll show you how you can easily add distributed tracing to your polyglot system by combining linkerd, our open source RPC proxy, with Zipkin, a popular open source distributed tracing framework.
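Zipkin stitches spans together across services by propagating B3 headers from hop to hop, and the heart of what a transparent proxy must do is carry those headers from each inbound request onto the corresponding outbound one. Here is a minimal Python sketch of that idea (simplified for illustration—it is not linkerd’s implementation, and a real tracer would also mint a fresh span ID per hop with the inbound span as its parent):

```python
# The standard B3 propagation headers used by Zipkin-instrumented systems.
B3_HEADERS = ("X-B3-TraceId", "X-B3-SpanId", "X-B3-ParentSpanId", "X-B3-Sampled")

def propagate_trace(inbound_headers, outbound_headers):
    """Copy B3 trace context from an inbound request to an outbound one.

    Headers are modeled as plain dicts. Any header not related to tracing
    is left alone; only the trace identifiers are forwarded, so downstream
    spans join the same trace.
    """
    for name in B3_HEADERS:
        if name in inbound_headers:
            outbound_headers[name] = inbound_headers[name]
    return outbound_headers
```

Because linkerd sits in the request path of every service, it can do this propagation (and report the resulting spans to Zipkin) on behalf of the application, regardless of what language the service is written in.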
Microservices allow engineering teams to move quickly to grow a product… assuming they don’t get bogged down by the complexity of operating a distributed system. In this post, I’ll show you how some of the hardest operational problems in microservices—staging and canarying of deep services—can be solved by introducing the notion of routing to the RPC layer.
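Routing in linkerd is built on Finagle-style delegation tables; the sketch below is not that implementation, just an illustration of the underlying idea (all paths and names here are hypothetical). A most-specific-prefix lookup is enough to send one service’s traffic to a staging instance while every other call continues to resolve to production:

```python
def resolve(routes, name):
    """Return the destination for `name` using its longest matching prefix."""
    best = None
    for prefix in routes:
        if name == prefix or name.startswith(prefix + "/"):
            if best is None or len(prefix) > len(best):
                best = prefix
    if best is None:
        raise KeyError(name)
    return routes[best]

# Baseline: every logical service name resolves to production.
routes = {"/svc": "/env/prod"}
assert resolve(routes, "/svc/users") == "/env/prod"

# Staging override: only the `users` service is re-routed;
# all other calls still go to production.
routes["/svc/users"] = "/env/staging"
assert resolve(routes, "/svc/users") == "/env/staging"
assert resolve(routes, "/svc/orders") == "/env/prod"
```

Because the override is just another routing entry, it can be applied per-request—which is what makes it possible to stage or canary a service deep in the call graph without redeploying anything around it.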
In this post, we describe how linkerd, our industrial-strength RPC proxy for microservices, can be used to transparently “wrap” RPC calls in TLS, adding a layer of security to microservices without requiring modification of application code.
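The “wrapping” idea can be seen in miniature with Python’s standard `ssl` module: the application reads and writes on what looks like an ordinary socket, while the transport encrypts everything on the wire. (This is a conceptual sketch only—linkerd does the equivalent inside the proxy, so the application itself needs no changes at all.)

```python
import socket
import ssl

def open_tls_connection(host, port):
    """Dial host:port over TCP, then wrap the socket in TLS.

    The caller uses the returned socket exactly as it would a plain one;
    encryption and certificate verification happen underneath.
    """
    context = ssl.create_default_context()  # verifies the server's cert chain
    raw = socket.create_connection((host, port))
    # server_hostname enables SNI and hostname verification.
    return context.wrap_socket(raw, server_hostname=host)
```

A proxy that terminates plaintext from the application and applies this wrapping on the outbound side gives you encrypted service-to-service traffic without touching application code.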
This post was co-written with Ruben Oanta.
Load balancing is a critical component of any large-scale software deployment. But there are many ways to do load balancing. Which way is best? And how can we evaluate the different options?
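One widely used strategy in this space is “power of two choices” (P2C): instead of scanning every backend for the least-loaded one, pick two at random and send the request to the less loaded of the pair. A quick self-contained simulation (illustrative only, not linkerd’s implementation) shows how well this keeps load even:

```python
import random

def pick_backend(loads, rng=random):
    """Sample two backends at random and choose the less-loaded one."""
    a, b = rng.sample(list(loads), 2)
    return a if loads[a] <= loads[b] else b

loads = {"srv-1": 0, "srv-2": 0, "srv-3": 0}  # backend -> outstanding requests
for _ in range(10_000):
    # In a real balancer the count is decremented when the response returns.
    loads[pick_backend(loads)] += 1

# Only two load lookups per request, yet the backends stay nearly even.
assert max(loads.values()) - min(loads.values()) < 100
```

The appeal of P2C is that it gets most of the benefit of full least-loaded balancing at a fraction of the cost, which matters when the balancer sits in the path of every request.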
How do you operate a microservice application at scale? What problems arise in practice, and how are they addressed? What is actually required to run a large microservice application under high-volume and unpredictable workloads, without introducing friction to feature releases or product changes?