In this post, I'll share some of the command-line tools I love to use when digging through log files. I'll only briefly cover the basics; the more advanced stuff can be found in the man pages.
Hope you'll find it useful. Feel free to add your own in the comments section.
Clicking on any of the examples will lead you to the awesome tool explainshell, which explains what Linux commands actually do.
0. cat - Show me what you've got
Well, the use of cat is usually so simple that I considered not putting it here at all, but I decided to add it for the record. We'll use cat to print a file's content to the screen. Yes, it has more options, but that's the most common one, so I think that'll do for now :)
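For the sake of completeness, printing a log file, such as the mylog.log file used in the examples below, is as simple as:

~ $ cat mylog.log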
1. tail - Wait for it...
The first one (aside from cat, which doesn't count) is the famous tail command that most people use. I actually use tail very rarely; I would much rather use less, as you'll see below. The tail command, in its naive use, prints the last X lines of a file to the standard output. For example:
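~ $ tail mylog.log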
This will print out the last 10 lines of the file mylog.log.
But what most people use it for is to stream log files (or other streams of messages) to the standard output, using something like:
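~ $ tail -f mylog.log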
That's usually very helpful, but I would usually rather use less (with Shift+F) instead, as you'll see next. When I do use tail, it's usually because I want to pipe it to grep and see only specific lines:
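~ $ tail -f mylog.log | grep "some_string"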
This will stream only the log lines that contain some_string, which is not as convenient to do with less.
2. less - Less is more
less lets you navigate through a large input file. Unlike text editors (vi, for example), less does not read the entire file before starting up, which makes it faster to load. There are loads of features in less, but I'll give here a few basic ones which will get you up and running quickly:
- Shift+F - Will throw you to the end of the file and let you watch the stream of lines as more lines are added to the file/stream. So if you have a log file that is being filled with log lines, you'll see those log lines written, just like with "tail -f". The big advantage over tail, in my opinion, is the option to use Ctrl+C to stop the stream and perform searches on it. tail just prints stuff to the terminal, so you can't use search commands like you can with less, and that's why I would more commonly use less with Shift+F and not tail.
- / - Typing / allows you to then type a search term. less will then allow you to browse through all of the places in which this search term appears, either by typing n to find the next appearance of the term or Shift+n to find the previous appearance.
- ? - Typing ? does the exact opposite of /. It will search for a term backwards (this also means that n will search for the term in the lines above the line you're at, and Shift+n will search in the lines after the line you're at).
- Shift+G - Will take you to the last line of the file. It goes well in combination with ? - if you want to find the latest place a phrase appeared, just jump to the end of the file with Shift+G and then type ? and the phrase.
- Line number + g - Typing a line number and then g will bring you to that specific line. I usually use it to go to the first line (by typing 1g) and then use / to find the first time a phrase appears.
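To tie these together, a typical session would be to open the file with something like the command below, then hit Shift+F to stream it, and Ctrl+C followed by / or ? whenever you want to stop and search:

~ $ less mylog.log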
There are, of course, a lot more options to less but that pretty much covers the basics.
3. grep - Is it here?
grep is a very useful command for finding only relevant lines. Basically, the easiest and probably most common pattern would be this one:
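~ $ grep "phrase" mylog.log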
This pattern finds all the lines that contain the word 'phrase' in the file mylog.log. Of course, you can use a file pattern such as mylog.log* to search in all the mylog.log files. Again, grep has tons of options you can use, but here are the ones I find most useful to start with (a combined example follows the list):
- -v - Adding -v finds all the lines that don't contain the phrase, instead of the ones that do.
- -c - Count the number of matching (or non-matching if you use -v) lines.
- -e - Allows you to provide a regexp instead of a plain phrase.
- -i - Ignore case.
- -r - Allows recursive search inside the directory tree.
- -n - Prints the line number in the file in which the phrase was found.
- -A 3 - Prints the 3 lines right after the match (of course, you can use other numbers and not only 3 :) ).
- -B 3 - Prints the 3 lines right before the match.
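For example, combining a few of these switches, a case-insensitive search that also shows three lines of context before and after each match would look something like this:

~ $ grep -i -A 3 -B 3 "phrase" mylog.log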
4. awk - Make it look better
awk could, by itself, fill a full blog-post. But this is a post for only the basics, so I would usually use awk by piping something like grep or tail into it. awk basically allows you to manipulate the input. The basic thing I usually use awk for is to pretty-print the relevant data I grep-ed from the log lines, and I believe an example would be easier to understand here. Let's say we have a file mylog.log with the following content:
User Avi has id of 49240924
Some unneeded data
Some unneeded data
Some unneeded data
User George has id of 895042
Some unneeded data
Some unneeded data
User Elaine has id of 90348235
Some unneeded data
Some unneeded data
User Jerry has id of 9235239
Some unneeded data
Some unneeded data
User Kramer has id of 94023920
Some unneeded data
Some unneeded data
Some unneeded data
I want to extract only those lines that contain a user name and the user's id. Using 'grep' would make it easy to extract the lines:
~ $ grep "has id of" mylog.log
User Avi has id of 49240924
User George has id of 895042
User Elaine has id of 90348235
User Jerry has id of 9235239
User Kramer has id of 94023920
But I still won't have a clear user-to-id mapping. I won't be able, for example, to copy the result to a CSV file easily enough, nor will I be able to further manipulate it with a few commands we'll learn below. But luckily, I have awk to the rescue:
~ $ grep "has id of" mylog.log | awk {'print $2": "$6'}
Avi: 49240924
George: 895042
Elaine: 90348235
Jerry: 9235239
Kramer: 94023920
In this case, I used awk to re-print the results from the grep using the print command. The $2 and $6 mark the 2nd and 6th tokens, respectively, of each line resulting from the grep, assuming tokens are separated by spaces.
In order to change the delimiter from space to something else, you can use the -F option and provide some other separator.
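For example (just to illustrate -F, using the output we already have), we could split the "Name: id" lines on the colon-space separator instead of on spaces and keep only the ids:

~ $ grep "has id of" mylog.log | awk {'print $2": "$6'} | awk -F ": " {'print $2'}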
As I mentioned, there are tons of uses for awk besides what I showed here, such as printing lower/upper-case, substrings, split and more, but those are beyond the scope of this post. It is important, however, to know that those options exist, in case you ever find yourself in need of them.
5. uniq - The only one
uniq allows you to play with repeating lines. So, for example, if we take the example file from the awk section, we can print it while avoiding the repetitions of the line "Some unneeded data":
~ $ cat mylog.log | uniq
User Avi has id of 49240924
Some unneeded data
User George has id of 895042
Some unneeded data
User Elaine has id of 90348235
Some unneeded data
User Jerry has id of 9235239
Some unneeded data
User Kramer has id of 94023920
Some unneeded data
As you can see, the uniq-ness is reset whenever a non-identical line is found. Here are the main extra features of uniq:
- -c - Adding the -c switch will print the number of repetitions before each line (see the example right after this list).
- -d - Will print only the lines that repeat more than once.
- -u - Will print only the lines that are not repeated.
- -i - Ignore case
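For example, -c gives a quick feel for how noisy the file is; something like this would print each repeated block only once, prefixed by the number of consecutive repetitions:

~ $ cat mylog.log | uniq -c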
6. sort - From A to Z
Well, I guess it's easy to figure out what this command does. sort will sort its input and print it sorted to the standard output. Of course, it goes well with piping from the previous commands; for example, it's a great tool after using awk to extract a specific detail from a log line. If we again take the example file from the awk section and add sort to the result, we'll get the user ids sorted by name:
~ $ grep "has id of" mylog.log | awk {'print $2": "$6'} | sort
Avi: 49240924
Elaine: 90348235
George: 895042
Jerry: 9235239
Kramer: 94023920
Another useful way of using sort is to combine it with "uniq -c" if you want to sort by the number of line repetitions.
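For example, something like this (using the -r and -n switches described below) would print the most repeated lines of our file first:

~ $ cat mylog.log | sort | uniq -c | sort -rn

The first sort puts identical lines next to each other, uniq -c counts them, and the final sort -rn orders the counts from highest to lowest.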
Most useful options of sort:
- -f - Ignore case (because -i is such a cliche already).
- -r - Reverse order.
- -n - Sort according to numeric value.
Again, there are more options to sort which you can find in the man page.
7. xargs - Now, let's do something else
And what if all of the data you extract is just needed as input for another command? xargs is meant for building and executing commands from the standard input. How is it done? Well, usually by piping. Again, an example will be much easier here. Let's take a different example file - mylog2.log:
User Avi has id of 49240924
User George has id of 895042
User Elaine has id of 90348235
User Jerry has id of 9235239
User Kramer has id of 94023920
User 49240924 presses the green button
User 49240924 presses the yellow button
User 895042 presses the red button
User 9235239 presses the green button
User 49240924 presses the green button
User 94023920 presses the red button
Now, we want to find every action that Avi took. The problem is that we know the name of the user, Avi, but the action lines are written with the user's id, not the name. Let's combine everything we have learned so far to get the desired output.
We first want to get the user's id:
~ $ grep "User Avi" mylog2.log
User Avi has id of 49240924
We already know how to print only the id without the prefix:
~ $ grep "User Avi" mylog2.log | awk {'print $6'}
49240924
Now, we want to grep all the log lines that contain this id, but we don't want to type the id in manually, so instead we'll use xargs as follows:
~ $ grep "User Avi" mylog2.log | awk {'print $6'} | xargs -I user-id grep user-id mylog2.log
User Avi has id of 49240924
User 49240924 presses the green button
User 49240924 presses the yellow button
User 49240924 presses the green button
The -I switch defines a placeholder named user-id which can later be used in the grep command; it is assigned the value piped in from the awk command, in this case the user id. If you want to take it one step further and get rid of the line "User Avi has id of 49240924", you can use "grep -v" as follows:
~ $ grep "User Avi" mylog2.log | awk {'print $6'} | xargs -I user-id grep user-id mylog2.log | grep -v "has id"
User 49240924 presses the green button
User 49240924 presses the yellow button
User 49240924 presses the green button
And now we get only the actions that the user Avi performed.
That's it!
So, that's what I usually use when digging through log files. There are tons more options and possibilities for these tools, and there are other tools I left out because I wanted to focus on what I consider the absolute must-haves. I hope you enjoyed this post and learned that the world has more to offer than just tail and grep.
Feel like I left something important out? Want to add something or correct me? Please feel free to leave a comment!
Find me on Twitter: @AviEtzioni