hdfs-sink: how to get rid of the timestamp added to every event by Flume in the HDFS files - flume-ng

I have a few files, each of which contains JSON on every line:
[root@ip-172-29-1-12 vp_flume]# more vp_170801.txt.finished | awk '{printf("%s\n", substr($0,0,20))}'
{"status":"OK","resp
{"status":"OK","resp
{"status":"OK","resp
{"status":"OK","resp
{"status":"OK","resp
{"status":"OK","resp
{"status":"OK","resp
{"status":"OK","resp
{"status":"OK","resp
{"status":"OK","resp
My Flume config is:
[root@ip-172-29-1-12 flume]# cat flume_test.conf
agent.sources = seqGenSrc
agent.channels = memoryChannel
agent.sinks = loggerSink
agent.sources.seqGenSrc.type = spooldir
agent.sources.seqGenSrc.spoolDir = /moveitdata/dong/vp_flume
agent.sources.seqGenSrc.deserializer.maxLineLength = 10000000
agent.sources.seqGenSrc.fileSuffix = .finished
agent.sources.seqGenSrc.deletePolicy = never
agent.sources.seqGenSrc.channels = memoryChannel
agent.sinks.loggerSink.channel = memoryChannel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.sinks.loggerSink.type = hdfs
agent.sinks.loggerSink.hdfs.path = /home/dong/vp_flume
agent.sinks.loggerSink.hdfs.writeFormat = Text
agent.sinks.loggerSink.hdfs.rollInterval = 0
agent.sinks.loggerSink.hdfs.rollSize = 1000000000
agent.sinks.loggerSink.hdfs.rollCount = 0
The resulting files in HDFS are:
[root@ip-172-29-1-12 flume]# hadoop fs -text /home/dong/vp_flume/* | awk '{printf("%s\n", substr($0,0,20))}' | more
1505276698665 {"stat
1505276698665 {"stat
1505276698666 {"stat
1505276698666 {"stat
1505276698666 {"stat
1505276698667 {"stat
1505276698667 {"stat
1505276698667 {"stat
1505276698668 {"stat
1505276698668 {"stat
1505276698668 {"stat
1505276698668 {"stat
1505276698669 {"stat
1505276698669 {"stat
1505276698669 {"stat
1505276698669 {"stat
1505276698670 {"stat
1505276698670 {"stat
1505276698670 {"stat
1505276698670 {"stat
Question: I don't want the timestamp that Flume adds to each event. How can I get rid of it by configuring Flume properly?

You have not explicitly set an hdfs.fileType property in your agent config file, so Flume uses the default, SequenceFile. SequenceFile supports two write formats: Text and Writable. You have set hdfs.writeFormat = Text, which means Flume uses HDFSTextSerializer to serialize your events. If you take a look at its source (line 53), you will see that it adds a timestamp as the default key.
Using hdfs.writeFormat = Writable won't help either, because it does the same; you can check its source here (line 52).
A key is always required for a SequenceFile, so unless you have a good reason to use SequenceFile, I'd suggest setting hdfs.fileType = DataStream in your agent config.
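For example, a minimal sketch of the relevant sink settings, reusing the agent and sink names from your config above (with DataStream the event body is written as-is; hdfs.writeFormat only affects SequenceFiles, so it can be dropped):
agent.sinks.loggerSink.type = hdfs
agent.sinks.loggerSink.hdfs.path = /home/dong/vp_flume
agent.sinks.loggerSink.hdfs.fileType = DataStream
agent.sinks.loggerSink.hdfs.rollInterval = 0
agent.sinks.loggerSink.hdfs.rollSize = 1000000000
agent.sinks.loggerSink.hdfs.rollCount = 0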

Related

Add key to array of objects and persist the change using jq

I am looping over an array of S3 buckets in a JSON file.
[
  [
    {
      "response": {
        "Buckets": [
          {"Name": "foo"},
          {"Name": "bar"}
        ]
      }
    }
  ]
]
I want to loop over each bucket, call the AWS S3 API to get the region for each bucket, append {"region": "region_name"} to each object inside the Buckets array, and persist the changes to the file.
I am struggling to write the modified data to the file in a way that doesn't lose all the other data.
The script below writes to a temp.json file, but it keeps overwriting the data on each run; in the end I only see the last element of the Buckets array written to temp.json.
I want only the region key added to each element, keeping the rest of the file's contents the same.
jq -r '.[][0].response.Buckets[].Name' $REPORT/s3.json |
while IFS= read -r bucket
do
region=$(aws s3api get-bucket-location --bucket $bucket | jq -r '.LocationConstraint')
jq -r --arg bucket "$bucket" --arg region "$region" '.[][0].response.Buckets[] | select(.Name==$bucket) | .region=$region' $REPORT/s3.json | sponge $REPORT/temp.json
done
Keep the context by adding parentheses to the LHS, as in (.[][0].response.Buckets[] | select(.Name==$bucket)).region=$region:
jq -r '.[][0].response.Buckets[].Name' $REPORT/s3.json |
while IFS= read -r bucket
do
region=$(aws s3api get-bucket-location --bucket $bucket | jq -r '.LocationConstraint')
jq -r --arg bucket "$bucket" --arg region "$region" '(.[][0].response.Buckets[] | select(.Name==$bucket)).region=$region' $REPORT/s3.json | sponge $REPORT/temp.json
done
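A quick way to see the difference on a minimal input (the inline JSON and region value here are just toy examples):
echo '[[{"response":{"Buckets":[{"Name":"foo"}]}}]]' | jq '(.[][0].response.Buckets[] | select(.Name=="foo")).region="us-east-1"'
With the parentheses, this outputs the whole document with the region added; without them, only the selected Buckets object would be output.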
With a little more refactoring, you could also provide the bucket name directly to the first inner jq call, which can then already create the final objects; these can be fed as an array to an outer jq call using --slurpfile, for instance. This moves the second inner jq call outside the loop, reducing the number of jq calls by one per bucket.
jq --slurpfile buckets <(
jq -r '.[][0].response.Buckets[].Name' $REPORT/s3.json |
while IFS= read -r bucket; do
aws s3api get-bucket-location --bucket $bucket | jq --arg bucket "$bucket" '{Name: $bucket, region: .LocationConstraint}'
done
) '.[][0].response.Buckets = $buckets' $REPORT/s3.json | sponge $REPORT/temp.json

While loop skips the first line in output

I'm using the command below in Terminal on a Mac to read a file of email addresses and convert them to MD5 hashes.
tr -d " " < em.txt | tr '[:upper:]' '[:lower:]' | while read line; do
(echo -n $line | md5); done | awk '{print $1}' > hashes1.txt
This produces a file of hashes that is one row shorter than the original input file, but I can't figure out why.
This code does a few things:
Converts each email address to all lower case
Converts the email address to an MD5 hash
Outputs the list of hashed addresses to a hashes1.txt file
Thanks in advance!
Your tr command is wrong; it should be:
tr -d " " < em.txt |
tr '[[:upper:]]' '[[:lower:]]' |
while IFS= read -r line; do
echo -n "$line" | md5 | awk '{print $1}' >> hashes1.txt
done
or
while IFS= read -r line; do
echo -n "$line" | md5 | awk '{print $1}' >> hashes1.txt
done < <(tr -d " " < em.txt | tr '[[:upper:]]' '[[:lower:]]')
I changed where the file is fed in, too.
Also ensure your file doesn't have strange characters:
od -c file
If it does, install dos2unix, then:
dos2unix file
or, using perl:
perl -i -pe 's/\r//g' file

Append output of a command to file without newline

I have the following line in a Unix script:
head -1 $line | cut -c22-29 >> $file
I want to append this output without a newline, separating entries with commas instead. Is there any way to feed the output of this command to printf? I have tried:
head -1 $line | cut -c22-29 | printf "%s, " >> $file
I have also tried:
printf "%s, " head -1 $line | cut -c22-29 >> $file
Neither of those has worked. Anyone have any ideas?
You just want tr in your case:
tr '\n' ','
will replace all the newlines ('\n') with commas:
head -1 $line | cut -c22-29 | tr '\n' ',' >> $file
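Note that the file will then end with a trailing comma rather than a newline. If that matters, a portable sketch to strip it once you're done appending (rewriting via a temp file, since in-place sed flags differ between GNU and BSD):
sed 's/,$//' "$file" > "$file.tmp" && mv "$file.tmp" "$file"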
A very old topic, but even now I needed to do this (with limited command resources), and the command in the reply above didn't work for me due to its length.
Appending to a file can also be done using file descriptors:
touch file.txt (create a new blank file)
exec 100<> file.txt (open a new fd with id 100)
echo -n test >&100 (echo test to the new fd)
exec 100>&- (close the new fd)
Appending starting from a specific character can be done by first reading the file up to a certain point, e.g.:
exec 100<> file.txt (new descriptor)
read -n 4 <&100 (read 4 characters, advancing the fd's offset)
echo -n test >&100 (write test to the file, starting right after the fourth character)
exec 100>&- (close the new fd)
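Put together, a minimal runnable sketch (file.txt and the 4-character offset are just example values):
#!/bin/bash
printf 'abcdefgh' > file.txt  # sample content
exec 100<> file.txt           # open read/write fd 100 on the file
read -r -n 4 junk <&100       # consume 4 characters to move the offset
echo -n TEST >&100            # overwrite from the 5th character onward
exec 100>&-                   # close the fd
cat file.txt                  # prints: abcdTEST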

Unix script to extract values from an XML file

I have an XML file as below:
<xml>Workinstance name="suvi" permission="read" id="6543"</xml>
<xml>Projectinstance name="ram" permission="write" id="3534"</xml>
I want to display the workinstance id field from that XML file.
One approach: grep the Workinstance lines, extract the id attribute, and drop the 4-character id=" prefix:
grep '<xml>Workinstance' file.xml | grep -o 'id="[^"]*' | cut -c5-
Or with awk, where \042 is the octal escape for a double quote (the gsub strips everything up to and including id=" and everything from the closing quote on):
$ awk '/Workinstance/{ gsub(/.*id=\042|\042.*/,""); print } ' file
6543

How do I list all cron jobs for all users?

Is there a command or an existing script that will let me view all of a *NIX system's scheduled cron jobs at once? I'd like it to include all of the user crontabs, as well as /etc/crontab, and whatever's in /etc/cron.d. It would also be nice to see the specific commands run by run-parts in /etc/crontab.
Ideally, I'd like the output in a nice column form and ordered in some meaningful way.
I could then merge these listings from multiple servers to view the overall "schedule of events."
I was about to write such a script myself, but if someone's already gone to the trouble...
You would have to run this as root, but:
for user in $(cut -f1 -d: /etc/passwd); do crontab -u $user -l; done
will loop over each user name listing out their crontab. The crontabs are owned by the respective users so you won't be able to see another user's crontab w/o being them or root.
Edit
If you want to know which user a crontab belongs to, use echo $user:
for user in $(cut -f1 -d: /etc/passwd); do echo $user; crontab -u $user -l; done
I ended up writing a script (I'm trying to teach myself the finer points of bash scripting, so that's why you don't see something like Perl here). It's not exactly a simple affair, but it does most of what I need. It uses Kyle's suggestion for looking up individual users' crontabs, but also deals with /etc/crontab (including the scripts launched by run-parts in /etc/cron.hourly, /etc/cron.daily, etc.) and the jobs in the /etc/cron.d directory. It takes all of those and merges them into a display something like the following:
mi h d m w user command
09,39 * * * * root [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -r -0 rm
47 */8 * * * root rsync -axE --delete --ignore-errors / /mirror/ >/dev/null
17 1 * * * root /etc/cron.daily/apt
17 1 * * * root /etc/cron.daily/aptitude
17 1 * * * root /etc/cron.daily/find
17 1 * * * root /etc/cron.daily/logrotate
17 1 * * * root /etc/cron.daily/man-db
17 1 * * * root /etc/cron.daily/ntp
17 1 * * * root /etc/cron.daily/standard
17 1 * * * root /etc/cron.daily/sysklogd
27 2 * * 7 root /etc/cron.weekly/man-db
27 2 * * 7 root /etc/cron.weekly/sysklogd
13 3 * * * archiver /usr/local/bin/offsite-backup 2>&1
32 3 1 * * root /etc/cron.monthly/standard
36 4 * * * yukon /home/yukon/bin/do-daily-stuff
5 5 * * * archiver /usr/local/bin/update-logs >/dev/null
Note that it shows the user, and more-or-less sorts by hour and minute so that I can see the daily schedule.
So far, I've tested it on Ubuntu, Debian, and Red Hat AS.
#!/bin/bash
# System-wide crontab file and cron job directory. Change these for your system.
CRONTAB='/etc/crontab'
CRONDIR='/etc/cron.d'
# Single tab character. Annoyingly necessary.
tab=$(echo -en "\t")
# Given a stream of crontab lines, exclude non-cron job lines, replace
# whitespace characters with a single space, and remove any spaces from the
# beginning of each line.
function clean_cron_lines() {
while read line ; do
echo "${line}" |
egrep --invert-match '^($|\s*#|\s*[[:alnum:]_]+=)' |
sed --regexp-extended "s/\s+/ /g" |
sed --regexp-extended "s/^ //"
done;
}
# Given a stream of cleaned crontab lines, echo any that don't include the
# run-parts command, and for those that do, show each job file in the run-parts
# directory as if it were scheduled explicitly.
function lookup_run_parts() {
while read line ; do
match=$(echo "${line}" | egrep -o 'run-parts (-{1,2}\S+ )*\S+')
if [[ -z "${match}" ]] ; then
echo "${line}"
else
cron_fields=$(echo "${line}" | cut -f1-6 -d' ')
cron_job_dir=$(echo "${match}" | awk '{print $NF}')
if [[ -d "${cron_job_dir}" ]] ; then
for cron_job_file in "${cron_job_dir}"/* ; do # */ <not a comment>
[[ -f "${cron_job_file}" ]] && echo "${cron_fields} ${cron_job_file}"
done
fi
fi
done;
}
# Temporary file for crontab lines.
temp=$(mktemp) || exit 1
# Add all of the jobs from the system-wide crontab file.
cat "${CRONTAB}" | clean_cron_lines | lookup_run_parts >"${temp}"
# Add all of the jobs from the system-wide cron directory.
cat "${CRONDIR}"/* | clean_cron_lines >>"${temp}" # */ <not a comment>
# Add each user's crontab (if it exists). Insert the user's name between the
# five time fields and the command.
while read user ; do
crontab -l -u "${user}" 2>/dev/null |
clean_cron_lines |
sed --regexp-extended "s/^((\S+ +){5})(.+)$/\1${user} \3/" >>"${temp}"
done < <(cut --fields=1 --delimiter=: /etc/passwd)
# Output the collected crontab lines. Replace the single spaces between the
# fields with tab characters, sort the lines by hour and minute, insert the
# header line, and format the results as a table.
cat "${temp}" |
sed --regexp-extended "s/^(\S+) +(\S+) +(\S+) +(\S+) +(\S+) +(\S+) +(.*)$/\1\t\2\t\3\t\4\t\5\t\6\t\7/" |
sort --numeric-sort --field-separator="${tab}" --key=2,1 |
sed "1i\mi\th\td\tm\tw\tuser\tcommand" |
column -s"${tab}" -t
rm --force "${temp}"
Under Ubuntu or Debian, you can view crontabs in /var/spool/cron/crontabs/; there is one file per user in there. That's only for user-specific crontabs, of course.
For Red Hat 6/7 and CentOS, the crontabs are under /var/spool/cron/.
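For example, as root (using the Debian-style path; the user name alice is just a placeholder):
ls /var/spool/cron/crontabs/
cat /var/spool/cron/crontabs/alice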
This will show all crontab entries from all users.
sed 's/^\([^:]*\):.*$/crontab -u \1 -l 2>\&1/' /etc/passwd | sh | grep -v "no crontab for"
Depends on your Linux version, but I use:
tail -n 1000 /var/spool/cron/*
as root. Very simple and very short.
Gives me output like:
==> /var/spool/cron/root <==
15 2 * * * /bla
==> /var/spool/cron/my_user <==
*/10 1 * * * /path/to/script
A small refinement of Kyle Burton's answer with improved output formatting:
#!/bin/bash
for user in $(cut -f1 -d: /etc/passwd)
do echo $user && crontab -u $user -l
echo " "
done
getent passwd | cut -d: -f1 | perl -e'while(<>){chomp;$l = `crontab -u $_ -l 2>/dev/null`;print "$_\n$l\n" if $l}'
This avoids messing with passwd directly, skips users that have no cron entries, and for those who have them, prints out the username as well as their crontab.
Mostly dropping this here so I can find it later, in case I ever need to search for it again.
To get the list as the root user:
for user in $(cut -f1 -d: /etc/passwd); do echo $user; sudo crontab -u $user -l; done
If you check a cluster using NIS, the only way to see whether a user has a crontab entry is, per Matt's answer, to look in /var/spool/cron/tabs:
grep -v "#" -R /var/spool/cron/tabs
I like the simple one-liner answer above:
for user in $(cut -f1 -d: /etc/passwd); do crontab -u $user -l; done
But Solaris does not have the -u flag and does not print the user it's checking, so you can modify it like so:
for user in $(cut -f1 -d: /etc/passwd); do echo User:$user; crontab -l $user 2>&1 | grep -v crontab; done
You will get a list of users without the errors crontab throws when an account is not allowed to use cron, etc. Be aware that in Solaris, roles can be in /etc/passwd too (see /etc/user_attr).
This script worked for me in CentOS to list all crons in the environment:
sudo cat /etc/passwd | sed 's/^\([^:]*\):.*$/sudo crontab -u \1 -l 2>\&1/' | grep -v "no crontab for" | sh
While many of the answers produce useful results, I think the hassle of maintaining a complex script for this task is not worth it, mainly because most distros use different cron daemons.
Watch and learn, kids & elders.
$ \cat ~jaroslav/bin/ls-crons
#!/bin/bash
getent passwd | awk -F: '{ print $1 }' | xargs -I% sh -c 'crontab -l -u % | sed "/^$/d; /^#/d; s/^/% /"' 2>/dev/null
echo
cat /etc/crontab /etc/anacrontab 2>/dev/null | sed '/^$/d; /^#/d;'
echo
run-parts --list /etc/cron.hourly;
run-parts --list /etc/cron.daily;
run-parts --list /etc/cron.weekly;
run-parts --list /etc/cron.monthly;
Run like this
$ sudo ls-crons
Sample output (Gentoo)
$ sudo ~jaroslav/bin/ls-crons
jaroslav */5 * * * * mv ~/java_error_in_PHPSTORM* ~/tmp 2>/dev/null
jaroslav 5 */24 * * * ~/bin/Find-home-files
jaroslav * 7 * * * cp /T/fortrabbit/ssh-config/fapps.tsv /home/jaroslav/reference/fortrabbit/fapps
jaroslav */8 1 * * * make -C /T/fortrabbit/ssh-config discover-apps # >/dev/null
jaroslav */7 * * * * getmail -r jazzoslav -r fortrabbit 2>/dev/null
jaroslav */1 * * * * /home/jaroslav/bin/checkmail
jaroslav * 9-18 * * * getmail -r fortrabbit 2>/dev/null
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
RANDOM_DELAY=45
START_HOURS_RANGE=3-22
1 5 cron.daily nice run-parts /etc/cron.daily
7 25 cron.weekly nice run-parts /etc/cron.weekly
#monthly 45 cron.monthly nice run-parts /etc/cron.monthly
/etc/cron.hourly/0anacron
/etc/cron.daily/logrotate
/etc/cron.daily/man-db
/etc/cron.daily/mlocate
/etc/cron.weekly/mdadm
/etc/cron.weekly/pfl
Sample output (Ubuntu)
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
17 * * * * root cd / && run-parts --report /etc/cron.hourly
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
/etc/cron.hourly/btrfs-quota-cleanup
/etc/cron.hourly/ntpdate-debian
/etc/cron.daily/apport
/etc/cron.daily/apt-compat
/etc/cron.daily/apt-show-versions
/etc/cron.daily/aptitude
/etc/cron.daily/bsdmainutils
/etc/cron.daily/dpkg
/etc/cron.daily/logrotate
/etc/cron.daily/man-db
/etc/cron.daily/mlocate
/etc/cron.daily/passwd
/etc/cron.daily/popularity-contest
/etc/cron.daily/ubuntu-advantage-tools
/etc/cron.daily/update-notifier-common
/etc/cron.daily/upstart
/etc/cron.weekly/apt-xapian-index
/etc/cron.weekly/man-db
/etc/cron.weekly/update-notifier-common
for user in $(cut -f1 -d: /etc/passwd);
do
echo $user; crontab -u $user -l;
done
The following strips away comments, empty lines, and errors from users with no crontab. All you're left with is a clear list of users and their jobs.
Note the use of sudo in the 2nd line. If you're already root, remove that.
for USER in $(cut -f1 -d: /etc/passwd); do \
USERTAB="$(sudo crontab -u "$USER" -l 2>&1)"; \
FILTERED="$(echo "$USERTAB"| grep -vE '^#|^$|no crontab for|cannot use this program')"; \
if ! test -z "$FILTERED"; then \
echo "# ------ $(tput bold)$USER$(tput sgr0) ------"; \
echo "$FILTERED"; \
echo ""; \
fi; \
done
Example output:
# ------ root ------
0 */6 * * * /usr/local/bin/disk-space-notify.sh
45 3 * * * /opt/mysql-backups/mysql-backups.sh
5 7 * * * /usr/local/bin/certbot-auto renew --quiet --no-self-upgrade
# ------ sammy ------
55 * * * * wget -O - -q -t 1 https://www.example.com/cron.php > /dev/null
I use this on Ubuntu (12 thru 16) and Red Hat (5 thru 7).
On Solaris, for a particular known user name:
crontab -l username
To get all user's jobs at once on Solaris, much like other posts above:
for user in $(cut -f1 -d: /etc/passwd); do crontab -l $user 2>/dev/null; done
Update:
Please stop suggesting edits that are wrong on Solaris.
Depends on your version of cron. Using Vixie cron on FreeBSD, I can do something like this:
(cd /var/cron/tabs && grep -vH ^# *)
If I want it more tab-delimited, I might do something like this:
(cd /var/cron/tabs && grep -vH ^# * | sed "s/:/ /")
Where that's a literal tab in the sed replacement portion.
It may be more system-independent to loop through the users in /etc/passwd and do crontab -l -u $user for each of them, as sketched below.
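A minimal sketch of that loop (run as root, along the same lines as the answers above):
for user in $(cut -f1 -d: /etc/passwd); do
echo "$user"
crontab -u "$user" -l 2>/dev/null
done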
You can list the crontab for a given user with:
sudo crontab -u userName -l
You can also go to
cd /etc/cron.daily/
ls -l
cat filename
The files there list the schedules. Similarly:
cd /etc/cron.d/
ls -l
cat filename
I made the one-liner below, and it worked for me to list all cron jobs for all users:
cat /etc/passwd | awk -F ':' '{print $1}' | while read a; do crontab -l -u ${a}; done
Thanks for this very useful script. I had some tiny problems running it on old systems (Red Hat Enterprise 3, which handles egrep and tabs in strings differently) and on other systems with nothing in /etc/cron.d/ (the script then ended with an error). So here is a patch to make it work in such cases:
2a3,4
> #See: http://stackoverflow.com/questions/134906/how-do-i-list-all-cron-jobs-for-all-users
>
27c29,30
< match=$(echo "${line}" | egrep -o 'run-parts (-{1,2}\S+ )*\S+')
---
> #match=$(echo "${line}" | egrep -o 'run-parts (-{1,2}\S+ )*\S+')
> match=$(echo "${line}" | egrep -o 'run-parts.*')
51c54,57
< cat "${CRONDIR}"/* | clean_cron_lines >>"${temp}" # */ <not a comment>
---
> sys_cron_num=$(ls /etc/cron.d | wc -l | awk '{print $1}')
> if [ "$sys_cron_num" != 0 ]; then
> cat "${CRONDIR}"/* | clean_cron_lines >>"${temp}" # */ <not a comment>
> fi
67c73
< sed "1i\mi\th\td\tm\tw\tuser\tcommand" |
---
> sed "1i\mi${tab}h${tab}d${tab}m${tab}w${tab}user${tab}command" |
I'm not really sure the changes to the first egrep are a good idea, but this script has been tested on RHEL 3, 4, and 5 and on Debian 5 without any problem. Hope this helps!
Building on top of Kyle's answer:
for user in $(tail -n +11 /etc/passwd | cut -f1 -d:); do echo $user; crontab -u $user -l; done
This skips the first ten lines, to avoid the comments usually at the top of /etc/passwd.
And on macOS:
for user in $(dscl . -list /users | cut -f1 -d:); do echo $user; crontab -u $user -l; done
I think a better one-liner is below. For example, if you have users in NIS or LDAP, they wouldn't be in /etc/passwd. This will give you the crontabs of every user that has logged in.
for I in `lastlog | grep -v Never | cut -f1 -d' '`; do echo $I ; crontab -l -u $I ; done
With apologies and thanks to yukondude.
I've tried to summarise the timing settings for easy reading, though it's not a perfect job, and I don't touch 'every Friday' or 'only on Mondays' stuff.
This is version 10 - it now:
runs much much faster
has optional progress characters so you could improve the speed further.
uses a divider line to separate header and output.
outputs in a compact format when all timing intervals encountered can be summarised.
Accepts Jan...Dec descriptors for months-of-the-year
Accepts Mon...Sun descriptors for days-of-the-week
tries to handle debian-style dummying-up of anacron when it is missing
tries to deal with crontab lines which run a file after pre-testing executability using "[ -x ... ]"
tries to deal with crontab lines which run a file after pre-testing executability using "command -v"
allows the use of interval spans and lists.
supports run-parts usage in user-specific /var/spool crontab files.
I am now publishing the script in full here.
https://gist.github.com/myshkin-uk/d667116d3e2d689f23f18f6cd3c71107
Since this is a matter of looping through a file (/etc/passwd) and performing an action, the answers are missing the proper approach from How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?:
while IFS=":" read -r user _
do
echo "crontab for user ${user}:"
crontab -u "$user" -l
done < /etc/passwd
This reads /etc/passwd line by line using : as field delimiter. By saying read -r user _, we make $user hold the first field and _ the rest (it is just a junk variable to ignore fields).
This way, we can then call crontab -u using the variable $user, which we quote for safety (what if it contains spaces? It is unlikely in such file, but you can never know).
I tend to use the following small commands to list all jobs for a single user and for all users on Unix-based operating systems with a modern bash console:
1. Single user
echo "Jobs owned by $USER" && crontab -l -u $USER
2. All users
for wellknownUser in $(cut -f1 -d: /etc/passwd);
do
echo "Jobs owned by $wellknownUser";
crontab -l -u $wellknownUser;
echo -e "\n";
sleep 2; # (optional sleep 2 seconds) while drinking a coffee
done
This script outputs the crontab to a file and also lists all users, noting those that have no crontab entry:
for user in $(cut -f1 -d: /etc/passwd); do
echo $user >> crontab.bak
echo "" >> crontab.bak
crontab -u $user -l >> crontab.bak 2>&1
done
