
Microservices local development tools just got way easier

The observability ecosystem is well developed these days, and there are many options for handling it at large scale. At the micro scale, however (usually development environments), we may be missing some tools. Many developers struggle to debug multiple microservices running locally: the log viewer that feels natural in production (ELK, Grafana or DataDog) is simply missing on their machines. They fall back on browsing logs in the terminal, which proves too limited for even the simplest microservices setups.

Can you recall a situation where you had multiple terminals open, each with its own log output, and no way to merge and correlate those lines across requests that span all of these services?

Correlation ID, Request ID, Trace ID - Correlators

Developers use these identifiers to trace requests between services. Usually, the first service that receives a request without this identifier is supposed to generate it and forward it to any other downstream service.

Usually these IDs are placed in headers (HTTP, Kafka) or metadata (gRPC), depending on the protocol used. A correlation ID could also be part of the schema of the messages passed between services, for example an additional field called metadata in every model. This approach can be cumbersome, however: since you will usually try to keep your models backward compatible, changing the metadata field could be problematic. For that reason, we recommend keeping correlators separate from the schema.
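
As a rough sketch of that generate-and-forward behaviour over HTTP headers (assuming an Express-based Node.js service and a header named X-Correlation-ID, neither of which this post prescribes), a middleware could look like this:

```typescript
import express from "express";
import { randomUUID } from "crypto";

// Header name is an assumption; any header agreed upon across services works.
const CORRELATION_HEADER = "x-correlation-id";

const app = express();

// Reuse the incoming correlation ID or generate a fresh one.
app.use((req, res, next) => {
  const correlationId = req.header(CORRELATION_HEADER) ?? randomUUID();
  // Keep it around so handlers can forward it downstream and log with it.
  res.locals.correlationId = correlationId;
  res.setHeader(CORRELATION_HEADER, correlationId);
  next();
});

// When calling a downstream service (URL is illustrative), pass the same ID along.
app.get("/orders", async (req, res) => {
  const response = await fetch("http://inventory:8080/stock", {
    headers: { [CORRELATION_HEADER]: res.locals.correlationId },
  });
  res.json(await response.json());
});

app.listen(3000);
```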

Passing these identifiers between services is just half of the job. The other half is effectively using them for logging purposes. That's why you should prepare all of the components of your system to print log messages with this data. It's not trivial and could require a bigger refactoring within your application; however, that investment will pay off very quickly and is worth the effort.
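
One way to make that happen in Node.js, sketched below, is to keep the correlation ID in an AsyncLocalStorage and have a thin logger wrapper attach it to every message (the logger shape and field names here are illustrative and mirror the JSON example in the next section):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Holds the correlation ID for the current request chain.
const requestContext = new AsyncLocalStorage<{ correlationId: string }>();

// Wrap request handling so everything logged inside sees the same ID.
export function runWithCorrelationId<T>(correlationId: string, fn: () => T): T {
  return requestContext.run({ correlationId }, fn);
}

// Minimal structured logger that always includes the correlation ID.
export function log(level: string, logger: string, message: string): void {
  const correlationId = requestContext.getStore()?.correlationId ?? "unknown";
  console.log(
    JSON.stringify({
      timestamp: new Date().toISOString(),
      level,
      logger,
      correlationId,
      message,
    })
  );
}

// Usage:
// runWithCorrelationId("abc123def456", () =>
//   log("INFO", "com.example.myapp.OrderService", "Order created successfully"));
```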

Logging with context

In previous posts we already mentioned that logging context is important, as it allows us to investigate specific log messages within a greater context. Correlators and contextual logs complement each other: in microservices or larger monolithic systems, where each request can produce multiple log messages, it's important to be able to quickly filter the messages for a single request and investigate them.

```json
{
  "timestamp": "2023-05-15T14:32:21.673Z",
  "level": "INFO",
  "logger": "com.example.myapp.OrderService",
  "correlationId": "abc123def456",
  "message": "Order created successfully"
}
```

An example JSON log message with a correlationId field.

Introducing Logdy: Simplifying Local Development Logging

The above considerations are a great reminder for the production setup. However, developers often forget to set these things up during development and only realize it shortly before going to production. This is understandable: standard tooling for logging and metrics covers either specific programming languages or the moving parts of a production architecture (think of producing logs to STDOUT that are picked up by a log aggregator and sent to a central observability component).

What's missing is something that satisfies the needs of a production log viewer without all of the hassle. Meet Logdy.

Logdy is a versatile DevOps tool designed to enhance productivity in the terminal. Operating under the UNIX philosophy, Logdy is a single-binary tool that requires no installations, deployments, or compilations. It works locally, ensuring security, and can be seamlessly integrated into the PATH alongside other familiar commands like grep, awk, sed, and jq. It is particularly beneficial for professionals such as software engineers, game developers, site reliability engineers, sys admins, and data scientists who frequently work with terminal logs or outputs.

Logdy captures the output of processes, whether from standard output or a file, and directs it to a web UI. The web UI, served by Logdy on a specific port, provides a reactive, low-latency application for browsing and searching through logs. It supports various use cases, such as tailing log files, integrating with applications (e.g. Node.js, Python scripts, Go programs, or anything else that produces standard output), and tools like kubectl, docker logs, etc.

One notable feature is its hackability with TypeScript, allowing users to filter, parse, and transform log messages by writing TypeScript code directly within the browser. This hackability provides flexibility to express custom logic without delving into the intricacies of other command-line tools. Overall, Logdy offers a convenient and efficient solution for managing and analyzing terminal logs.

How can Logdy help with observing multiple microservices locally?

It's simple: you can forward logs from all of the instances to the Logdy process and browse them within a single UI, with no installations or configuration. Whether you're developing a new feature that spans multiple services or debugging an issue, you can always stream logs to Logdy with minimal setup.

```bash
# use with any shell command
$ tail -f file.log | logdy
```

Logdy, microservices and correlation IDs

Logdy helps you catch observability issues early on. If your services already produce correlation IDs, parsing them will be a breeze thanks to the built-in TypeScript support, which you can use to express the logic for selecting that ID from a log message, whether it's JSON or raw text.

Next, create a column from that field and that's it: you can browse, filter and search on it within the Logdy UI.
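
The exact handler signature expected by Logdy's in-browser editor is best taken from its documentation; the sketch below only illustrates the kind of extraction logic you would express there, covering both JSON and raw-text lines (the raw-text format is an assumption):

```typescript
// Illustrative extraction logic; adapt it to the handler shape
// expected by Logdy's in-browser TypeScript editor.
function extractCorrelationId(line: string): string | null {
  // JSON log lines: read the correlationId field directly.
  try {
    const parsed = JSON.parse(line);
    if (typeof parsed.correlationId === "string") {
      return parsed.correlationId;
    }
  } catch {
    // Not JSON, fall through to raw-text handling.
  }
  // Raw-text lines: look for a "correlationId=<value>" token.
  const match = line.match(/correlationId=([\w-]+)/);
  return match ? match[1] : null;
}
```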

Let's say you're running your microservices using docker-compose. With Logdy, it will be as simple as

```bash
$ docker logs my-container --follow --tail=10 | logdy
```

to stream all of the container's logs to a web UI. Read more in the post Docker logs web browser UI.