Command modes
This page describes the modes in which you can run the Logdy binary. In all of these modes, Logdy will start a webserver that serves the web UI. You can check more options in the CLI chapter.
stdin (default)
$ tail -f file.log | logdy
Accepts input on stdin, or from a specified command to run.
$ logdy stdin [command]
In this mode, Logdy will start the command specified in the [command] argument and intercept its STDOUT/STDERR. This can be any shell command that produces output on those streams. Example:
$ logdy stdin 'npm run dev'
TIP
This command mode is particularly useful if you have a process that strictly produces logs.
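As an illustration (assuming systemd's journalctl is available on your machine), the default stdin mode works with any command that writes to standard output:
$ journalctl -f | logdy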
socket
$ logdy socket <port1> [<port2> ... <portN>]
Sets up one or more ports that listen for incoming line messages.
$ logdy socket 8233
You can set up multiple ports, separated by spaces.
$ logdy socket 8233 8234 8235
In another terminal session
$ tail -f file.log | nc localhost 8233 # all output will be sent to port 8233
Each message line will be marked with its origin port for easier identification of where each line was produced.
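As a quick sketch (assuming netcat is installed and Logdy is already listening via logdy socket 8233), you can also push a single ad-hoc line to a port:
$ echo '{"level":"info","msg":"hello"}' | nc localhost 8233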
forward
$ logdy forward <port>
Forwards lines from stdin to a port (the port should be one of the ones specified in the logdy socket command). Example usage:
$ tail -f file.log | logdy forward 8123
TIP
Use this command together with the logdy socket command running in a separate terminal. See the announcement for examples. This is basically a substitute for the netcat command.
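A minimal end-to-end sketch using two terminal sessions and a hypothetical app.log file:
# terminal 1: start Logdy listening on a port
$ logdy socket 8123
# terminal 2: forward new lines from the file to that port
$ tail -f app.log | logdy forward 8123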
follow
$ logdy follow <file1> [<file2> ... <fileN>]
Watches the file(s) for new lines.
$ logdy follow file.log
You can provide multiple files as well as relative or absolute paths.
$ logdy follow file.log /var/log/file.log ../file1.log
By default, follow will ONLY push new lines to the buffer. If you would like to load the whole content of each file, use the --full-read option.
$ logdy follow --full-read file.log file2.log
In the above example, the contents of both files will be read and pushed to the buffer.
Each line sent to the UI will be marked with an origin file field for easier identification of which file it was produced in.
utils
Commands prefixed with the utils keyword are a set of utilities that operate on files and produce processed output. These commands filter the contents of the input file based on defined criteria and produce filtered output. You can read a blog post that highlights the use cases for utils mode.
utils - cut-by-string
This utility cuts a file by a start and end string into a new file or standard output. It's useful when you have a very large log file but would like to feed only a subset of it to Logdy.
$ logdy utils cut-by-string <file> <start> <end> {case-insensitive = true} {out-file = ''}
Arguments
file (string) - a path to the file that will be read
start (string) - once this string is found, the command will start producing lines
end (string) - once this string is found, the command will stop producing lines
case-insensitive (boolean, default: true) - whether the start and end string searches should be case-insensitive
out-file (string, default: empty) - filtered lines will be saved to this file and a progress bar will be presented
Example
$ logdy utils cut-by-string /var/log/sys.log "process #3 initialized" "process #3 terminated"
The above command will scan the file located at /var/log/sys.log and produce only the lines between the lines that contain the process #3 initialized and process #3 terminated strings.
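When out-file is empty, the filtered lines go to standard output, so one way to inspect just that slice (a sketch, relying on the default stdin mode) is to pipe the result straight into Logdy:
$ logdy utils cut-by-string /var/log/sys.log "process #3 initialized" "process #3 terminated" | logdy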
utils - cut-by-date
This utility cuts a file by a date (and time) into a new file or standard output. The command parses dates within each line and filters them based on the provided criteria. The format of the timestamps is provided in the date-format argument and uses Golang's idiomatic way of formatting (read more in the documentation or this guide on dates in Go). In addition to the date format, you also have to provide the location of the date string by defining an offset as a number. This tells the tool how many characters to skip at the beginning of each line before slicing out and parsing the timestamp.
TIP
The utility assumes that the lines are ordered by time, which means it will stop scanning the file once the end date is encountered.
The utility is optimistic: it will not fail if date parsing fails at a particular offset; it will simply skip the check for that line and let it pass the filter.
$ logdy utils cut-by-date <file> <start> <end> <date-format> <search-offset> {out-file = ''}
Arguments
file (string) - a path to the file that will be read
start (string) - a timestamp in date-format; the command will start producing lines once it encounters a line with a timestamp at or after this date
end (string) - a timestamp in date-format; the command will stop producing lines once it encounters a line with a timestamp after this date
date-format (string) - the format of the timestamps, expressed as a Golang date layout
search-offset (number) - how many characters to skip at the beginning of each line before parsing the timestamp
out-file (string, default: empty) - filtered lines will be saved to this file and a progress bar will be presented
Example
$ logdy utils cut-by-date /var/log/large.log "15/09/01 18:17:21" "15/09/01 18:17:28" "02/01/06 15:04:05" 0
The above command will scan the file located at /var/log/large.log and produce only the lines with a timestamp between "15/09/01 18:17:21" and "15/09/01 18:17:28", parsed at offset 0.
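To illustrate a non-zero search-offset: if (hypothetically) each line carried a 7-character prefix such as [INFO] followed by a space before the timestamp, e.g. [INFO] 15/09/01 18:17:21 job started, you would pass 7 so the tool skips the prefix before parsing the date:
$ logdy utils cut-by-date /var/log/large.log "15/09/01 18:17:21" "15/09/01 18:17:28" "02/01/06 15:04:05" 7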
utils - cut-by-line-number
This utility cuts a file by a line count and an offset into a new file or standard output.
$ logdy utils cut-by-line-number <file> <count> <offset> {out-file = ''}
Arguments
file (string) - a path to the file that will be read
count (number) - the number of lines to be read
offset (number) - how many lines should be skipped
out-file (string, default: empty) - filtered lines will be saved to this file and a progress bar will be presented
Example
Consider a file /var/log/large.log with a few million lines. You can get 100 lines after skipping 100,000 lines.
$ logdy utils cut-by-line-number /var/log/large.log 100 100000
TIP
You can achieve the same effect using the tail and head commands, e.g. head -n 100100 /var/log/large.log | tail -n 100. Note that we asked head for 100100 lines because we then cut only the last 100 lines with tail.
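Like the other utils commands, cut-by-line-number writes to standard output when out-file is empty, so (as a sketch) you can redirect the slice to a file with the shell or pipe it straight into Logdy:
$ logdy utils cut-by-line-number /var/log/large.log 100 100000 > subset.log
$ logdy utils cut-by-line-number /var/log/large.log 100 100000 | logdy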
demo
$ logdy demo [number]
Starts demo mode: random logs will be produced, and [number] defines the number of messages produced per second.
TIP
This is a great mode if you would like to try out Logdy and play with it locally, without connecting it to any log source.
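For example, to produce five random messages per second:
$ logdy demo 5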