13 Helpful Linux Commands to Level Up Your Skills

Linux is an incredibly powerful and popular operating system, used by developers, system administrators, and tech enthusiasts around the world. While most Linux users are familiar with basic commands like ls, cd, and cp, there are many other helpful commands that can take your skills to the next level.

In this guide, we'll introduce you to 13 lesser-known Linux commands that are extremely useful for boosting productivity, managing systems, and troubleshooting issues. These commands will make you feel like a true Linux power user in no time! Let's dive in.

1. chroot

The chroot command allows you to run commands with a different root directory. This is useful for testing or recovering systems in an isolated environment without affecting the rest of your system.

For example, to run a shell with /home/test as the root directory:

sudo chroot /home/test /bin/bash

Now any commands you run will be relative to /home/test. Note that the new root must contain its own /bin/bash and the libraries it depends on, or the shell will fail to start. This is a great way to safely experiment or perform system maintenance.

2. crontab

Crontab allows you to automate tasks by scheduling them to run at specific times. You can schedule jobs to run by minute, hour, day of the month, month, and/or day of the week.

To edit your crontab file, run:

crontab -e

Then add your scheduled tasks using this format:

*    *    *    *    *    command_to_execute
┬    ┬    ┬    ┬    ┬
│    │    │    │    └──── day of week (0-7; Sunday is 0 or 7, or use names)
│    │    │    └───────── month (1-12)
│    │    └────────────── day of month (1-31)
│    └─────────────────── hour (0-23)
└───────────────────────── minute (0-59)

For example, to run example.sh at 6:30am every weekday:

30 6 * * 1-5 /home/user/example.sh

Crontab is essential for automating system maintenance, backups, updates, and more.
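As a further illustration, here are a few example entries covering common patterns; the script paths are hypothetical placeholders:

```
# min  hour dom month dow  command
0      2    *   *     *    /home/user/backup.sh       # daily backup at 2:00am
*/15   *    *   *     *    /home/user/check_disk.sh   # every 15 minutes
0      9    1   *     *    /home/user/report.sh       # 9:00am on the 1st of each month
```

The `*/15` syntax means "every 15 units" and works in any field.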

3. df

The df command reports disk space usage for filesystems. Use it to quickly see how much space you have available.

To check disk space in human-readable format, run:

df -h

This will display output like:

Filesystem      Size  Used Avail Use% Mounted on
udev            7.8G     0  7.8G   0% /dev
tmpfs           1.6G  2.1M  1.6G   1% /run
/dev/sda2        20G   12G  6.9G  64% /
tmpfs           7.9G   28M  7.9G   1% /dev/shm

You can also specify a particular file or directory to see which filesystem it resides on:

df /home/user/file.txt
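Building on this, df output can feed simple monitoring scripts. A minimal sketch, assuming GNU df (the --output flag is a GNU extension) and a hypothetical 90% threshold:

```shell
# Warn when the root filesystem passes a usage threshold.
THRESHOLD=90

# --output=pcent prints only the Use% column; tail skips the header,
# tr strips the '%' sign so the value can be compared numerically.
usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')

if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARNING: / is ${usage}% full"
else
    echo "OK: / is ${usage}% full"
fi
```

Dropped into a crontab, a script like this makes a bare-bones disk alert.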

4. dmesg

The dmesg command prints the kernel message buffer, which contains messages from the kernel about hardware and driver statuses during bootup. It's useful for troubleshooting system and device issues.

To view all kernel messages, simply run:

dmesg

You can pipe this output to grep to search for specific keywords:

dmesg | grep -i error

This will display only lines containing "error" (case-insensitive). It's a good way to spot any issues reported by the kernel. Note that on some systems, reading the kernel buffer requires root (when kernel.dmesg_restrict is enabled).

5. grep

grep is a powerful command for searching plain-text files for lines matching a regular expression. It's incredibly useful for finding specific information in large log files or codebases.

Basic syntax:

grep "search pattern" file

For example, to search for "error" in app.log:

grep "error" app.log

Some useful grep options:

  • -i: case-insensitive search
  • -r: recursively search all files in a directory
  • -n: display line numbers for matches
  • -v: invert match (display lines that don't contain the search pattern)

Using grep will make you feel like a detective with superhuman text-analysis powers.
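To see a few of these options together, here is a small self-contained session; the sample log file and its contents are made up for illustration:

```shell
# Create a small sample log to search.
printf 'INFO start\nERROR disk full\ninfo retry\nERROR timeout\n' > /tmp/app.log

# -i: case-insensitive match; -n: show line numbers.
grep -in "error" /tmp/app.log
# → 2:ERROR disk full
# → 4:ERROR timeout

# -v: print only lines that do NOT contain "ERROR".
grep -v "ERROR" /tmp/app.log
# → INFO start
# → info retry
```

Note that -i matched both "ERROR" and "info" lines' casing independently of the pattern's case, while -v filtered on the exact string given.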

6. head/tail

The head command displays the first part of files, while tail displays the last part. By default they show 10 lines, but you can specify a different number.

To show the first 20 lines of file.txt:

head -n 20 file.txt

To show the last 50 lines:

tail -n 50 file.txt

You can also use tail to display a real-time view of new lines being appended to a file, which is handy for monitoring log files:

tail -f /var/log/nginx/access.log

This will continuously display new lines as they are added to the nginx access log.
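head and tail also combine nicely to extract a specific range of lines from a file. A quick sketch using a generated sample:

```shell
# Generate a 100-line sample file.
seq 1 100 > /tmp/numbers.txt

# Print lines 40 through 45: take the first 45 lines,
# then keep only the last 6 of those.
head -n 45 /tmp/numbers.txt | tail -n 6
# → 40 41 42 43 44 45 (one number per line)
```

The same pattern works on any file where you know the line range you want.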

7. ps

ps displays information about currently running processes, including their process ID (PID), the user running them, memory and CPU usage, and more.

To display all processes in a tree-like format:

ps auxf

To display only processes matching a certain name:

ps aux | grep nginx

This will show all processes containing "nginx" in the name or command. Use ps to monitor system resource usage and manage runaway processes.
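One quirk worth knowing: `ps aux | grep nginx` usually also matches the grep process itself, since "nginx" appears in grep's own command line. Common workarounds are sketched below (nginx is just an example name; the --sort flag assumes GNU procps ps):

```shell
# The bracket trick: the regex [n]ginx still matches "nginx", but the
# grep command line contains "[n]ginx", so grep won't match itself.
ps aux | grep '[n]ginx' || echo "no nginx processes found"

# pgrep searches process names directly, with no self-match problem.
pgrep -a nginx || echo "no nginx processes found"

# Bonus: list the five most memory-hungry processes.
ps aux --sort=-%mem | head -n 6
```

pgrep is generally the cleaner choice when you only need PIDs, e.g. `kill $(pgrep nginx)`.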

8. rsync

rsync is a fast and efficient tool for syncing files between two locations, either on the same system or between remote hosts. It's similar to scp, but it transfers only the differences between source and destination (delta transfer) and supports compression, resumable transfers, and much more.

Basic syntax:

rsync [options] source destination

For example, to sync the local directory /home/user/files to a remote server:

rsync -a /home/user/files user@remote:/backup/

Useful rsync options:

  • -a: archive mode (preserves timestamps, permissions, symlinks, etc.)
  • -z: compress files during transfer
  • -v: verbose output
  • --delete: delete files at the destination that no longer exist at the source

rsync is a must-have for anyone managing backups or file synchronization between servers.

9. pv

pv ("pipe viewer") allows you to monitor the progress of data through a pipeline. It's useful for seeing the status of large data transfers or processing tasks.

For example, to compress a large file while monitoring progress:

pv bigfile.tar | gzip > bigfile.tar.gz

This will display the transfer rate, total data transferred, time elapsed, and estimated time remaining.

You can also use pv to limit the transfer rate to avoid saturating a network link:

pv -L 1m file.iso | ssh user@host "cat > file.iso"

This limits the transfer to 1MB/s. Use pv anytime you're transferring or processing large amounts of data and want visual feedback on the progress.

10. mtr

mtr combines the functionality of ping and traceroute into a single, handy network diagnostic tool. It continually pings each hop between your machine and a specified host, and displays packet loss and latency at each hop.

To trace the route to example.com:

mtr example.com

This opens an interactive view of the route, continuously updating the latency and packet loss for each hop, which makes it easy to identify problem spots along the path.

Use mtr to troubleshoot network issues and track down sources of latency or packet loss across your networks.

11. jq

jq is like sed for JSON data – it allows you to slice, filter, map and transform structured data with ease. It's very handy for parsing and manipulating JSON data returned from APIs.

For example, say you have a JSON file users.json containing an array of user objects. To extract just the email of the first 3 users:

jq '.[0:3] | .[].email' users.json

Or to calculate the average age of all users:

jq '[.[].age] | add / length' users.json

jq has a rich filtering language – you can select particular fields, filter on conditions, perform comparisons, do arithmetic, and much more. It's a hugely powerful tool for anyone working with JSON data.
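To try jq without the hypothetical users.json, here is a self-contained session piping JSON in on stdin (assuming jq is installed; the names and ages are made up):

```shell
json='[{"name":"amy","age":30},{"name":"bo","age":40},{"name":"cy","age":50}]'

# Extract every name (-r prints raw strings without quotes).
echo "$json" | jq -r '.[].name'
# → amy
# → bo
# → cy

# Filter on a condition: names of users older than 35.
echo "$json" | jq -r '.[] | select(.age > 35) | .name'

# Average age, matching the pattern shown above.
echo "$json" | jq '[.[].age] | add / length'
# → 40
```

select() is one of jq's most-used filters; combined with the pipe operator it reads much like a shell pipeline over JSON values.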

12. tac

tac (cat spelled backwards) prints a file's lines in reverse order. It's handy for reading logs newest-first, since new entries are usually appended to the end.

To print a file in reverse:

tac file.txt

To print only the last 10 lines of a file in reverse order:

tac file.txt | head

You can also use tac creatively for tasks like reversing the order of a list:

tac list.txt > reversed_list.txt

Anytime you find yourself wishing you could see a file in reverse order, reach for tac.
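A handy combination: since tac emits the newest lines first, piping it into grep -m1 (stop after the first match) finds the most recent matching line in a log. The sample file here is made up for illustration:

```shell
# Sample "log" with several status lines.
printf 'status: start\nstatus: warming\nstatus: ready\n' > /tmp/app.status

# After tac, the first match grep sees is the LAST
# matching line of the original file.
tac /tmp/app.status | grep -m1 'status:'
# → status: ready
```

Because grep exits after the first match, this can be much faster than scanning a huge log from the top.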

13. perf

perf is a powerful Linux profiling tool that can help you analyze your system's performance and identify bottlenecks or inefficiencies in your code.

To count total cache misses while running a command:

perf stat -e cache-misses command

To record detailed profiling information about a command:

perf record command

Then to analyze the recorded data:

perf report

This will show you which functions consumed the most CPU time, based on sampled call stacks, and where in the code the hot spots are. This allows you to pinpoint performance bottlenecks in your applications.

Perf can profile all kinds of events like page faults, context switches, CPU clock cycles, and more. It's an invaluable tool for any serious Linux developer or system administrator.

Conclusion

We've explored 13 powerful yet often overlooked Linux commands that can greatly improve your productivity and skills as a Linux user. From file syncing with rsync, to JSON processing with jq, to CPU profiling with perf, these commands cover a wide range of useful functionality.

Take some time to practice with each of these and make them part of your regular command-line toolkit. The more you use them, the more natural they'll feel and the more opportunities you'll find to apply them to your daily tasks.

Remember, the real power of Linux lies not just in the commands themselves, but in your ability to combine them creatively to solve problems and automate your work. Invest in leveling up your Linux skills – it will pay dividends throughout your career.

Happy commanding!
