Send first 1000 lines of syslog to a text file based on time computer was last booted

I would like to be able to create a text file containing the first 1000 lines of syslog, starting from the time of the last boot.
I want the first line of the text file to be the first entry in syslog since the last time the computer was booted.
This is what I have so far.
#!/bin/bash
#
# who -b system boot 2022-11-19 02:20
head -1000 /var/log/syslog > /home/andy/Downloads/syslog.txt
geany /home/andy/Downloads/syslog.txt
Here is an example of an early syslog entry:
Nov 12 21:19:05 7 systemd[1]: Finished Remount Root and Kernel File Systems.
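One possible approach (a sketch, not a tested answer): on a systemd system, journalctl -b already limits output to the current boot, so piping it through head may be enough. Otherwise the boot time reported by who -b can be converted to the timestamp format syslog uses and matched with awk. The GNU date -d call is an assumption, and matching on a minute-resolution timestamp is approximate; the paths are carried over from the script above.
#!/bin/bash
# Option 1 (systemd systems): journalctl -b prints only entries from the current boot.
journalctl -b | head -1000 > /home/andy/Downloads/syslog.txt
# Option 2: locate the boot timestamp from `who -b` in /var/log/syslog,
# then keep 1000 lines starting from the first matching entry.
boot=$(who -b | awk '{print $(NF-1), $NF}')   # e.g. "2022-11-19 02:20"
stamp=$(date -d "$boot" '+%b %e %H:%M')       # e.g. "Nov 19 02:20"
awk -v ts="$stamp" 'index($0, ts) {found=1} found' /var/log/syslog \
    | head -1000 > /home/andy/Downloads/syslog.txt
geany /home/andy/Downloads/syslog.txt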

Systemd Journal log from command line to specific namespace

How to log from command line to a specific systemd log namespace?
When using Systemd to start a service, one can give
LogNamespace=myNamespace in combination with
StandardOutput=journal as part of the configuration in the unit file.
To see the output of this namespace, it is sufficient to call journalctl --namespace=myNamespace.
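For reference, a minimal sketch of such a unit file (LogNamespace= requires a reasonably recent systemd; the ExecStart path is a hypothetical placeholder):
[Service]
# Route this service's stdout into the journal of the "myNamespace" namespace
LogNamespace=myNamespace
StandardOutput=journal
ExecStart=/usr/local/bin/my-app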
With systemd-cat it is possible to print directly from the command line into the journal:
echo "Hello Journal!" | systemd-cat
The print "Hello Journal!" does show up in the default (anonymous?) namespace, which is visible with journalctl. It is not within any namespace and not visible when using journalctl --namespace=myNamespace.
To be more specific about the initial question: when looking at journalctl -f --namespace=myNamespace, how can I make the output of a process started from the command line (without systemd, just a plain binary) visible in this view?
Something like echo "Hello Journal!" | systemd-cat --log-namespace=myNamespace
I use LogNamespaces to separate different application logs. If this is not the intended use, an answer that explains how to do this in another (better) way is also accepted.
First, find the PID of the journald instance running your namespace:
ps aux|grep jou
root 243 0.0 0.6 56176 26212 ? Ss 10:06 0:01 /lib/systemd/systemd-journald
root 10214 0.0 0.3 42308 14740 ? Ss 11:34 0:00 /lib/systemd/systemd-journald DEBUG
Our instance is DEBUG (PID 10214).
Next, find out which socket that instance uses:
lsof -p 10214 |grep dev-log
systemd-j 10214 root 5u unix 0x0000000003787334 0t0 61560 /run/systemd/journal.DEBUG/dev-log type=DGRAM
Now we are ready to log to the desired namespace:
logger -u /run/systemd/journal.DEBUG/dev-log HAHAHAHA
journalctl --namespace=DEBUG
Jan 22 11:35:46 myportal-deb root[10339]: HAHAHAHA
Jan 22 11:38:50 myportal-deb root[10559]: HAHAHAHA
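Wrapping this up, a small hedged helper (the socket path pattern /run/systemd/journal.<NAMESPACE>/dev-log is taken from the lsof output above; the function name is made up):
#!/bin/bash
# log_to_ns NAMESPACE MESSAGE...
# Send a message to a specific journald namespace via its datagram socket.
log_to_ns() {
    local ns="$1"; shift
    logger -u "/run/systemd/journal.${ns}/dev-log" "$@"
}
log_to_ns DEBUG "Hello Journal!"
journalctl --namespace=DEBUG -n 1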

Airflow task Intermittently Fails due to Failed to fetch log file and Could not read logs

I'm running a DAG that runs once per day. It starts with 9 concurrently running tasks that all do the same thing: each polls S3 to see whether that task's single designated file exists. Each task uses the same code in Airflow and is wired into the DAG structure in the same way. One of these tasks, on random days, fails to "begin": it never enters the running state and just sits as queued. When it does this, here's what its log says:
*** Log file isn't local.
*** Fetching here: http://:8793/log/my.dag.name./my_airflow_task/2020-03-14T07:00:00
*** Failed to fetch log file from worker.
*** Reading remote logs...
Could not read logs from s3://mybucket/airflow/logs/my.dag.name./my_airflow_task/2020-03-14T07:00:00
Why does this only happen on random days? All similar questions I've seen describe this error happening consistently and, once overcome, never again. To "trick" this task into running, I manually touch whatever the log file is supposed to be named, and then it changes to running.
So the issue appears to have been the system's ownership rules for the folder that this particular task's logs were written to. I used a CI tool to ship the new task_3 when I deployed my updated Airflow Python code to the production environment, so the task's log directory was created that way. When I checked the ownership of the log directories, I noticed this for the tasks:
# inside/airflow/log/dir:
drwxrwxr-x 2 root root 4096 Mar 25 14:53 task_3 # is the offending task
drwxrwxr-x 2 airflow airflow 20480 Mar 25 00:00 task_2
drwxrwxr-x 2 airflow airflow 20480 Mar 25 15:54 task_1
So I think what was going on was that, randomly, Airflow couldn't get permission to write the log file, and thus wouldn't start the rest of the task. I applied the appropriate chown command, something like sudo chown -R airflow:airflow task_3, and ever since I changed this, the issue has disappeared.
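A hedged sketch of how to spot and fix such directories in one go (the log path is a placeholder for your Airflow log directory; only chown the directories that are actually wrong):
# List task log directories not owned by the airflow user
find /path/to/airflow/logs -maxdepth 1 -type d ! -user airflow
# Hand the offending directory back to airflow
sudo chown -R airflow:airflow /path/to/airflow/logs/task_3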

snakemake always reports "MissingOutputException in line 44, Missing files after 5 seconds"

I always get the same error report from Snakemake in my RNA-seq pipeline:
MissingOutputException in line 44 of /root/s/r/snakemake/my_rnaseq_data/Snakefile:
Missing files after 5 seconds:
03_align/wt2.bam
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Here is my Snakefile:
SBT=["wt1","wt2","epcr1","epcr2"]
rule all:
input:
expand("02_clean/{nico}_1.paired.fq", nico=SBT),
expand("02_clean/{nico}_2.paired.fq", nico=SBT),
expand("03_align/{nico}.bam", nico=SBT)
rule trim:
input:
"01_raw/{nico}_1.fastq",
"01_raw/{nico}_2.fastq"
output:
"02_clean/{nico}_1.paired.fq.gz",
"02_clean/{nico}_1.unpaired.fq.gz",
"02_clean/{nico}_2.paired.fq.gz",
"02_clean/{nico}_2.unpaired.fq.gz",
shell:
"java -jar /software/Trimmomatic-0.36/trimmomatic-0.36.jar PE -threads 16 {input[0]} {input[1]} {output[0]} {output[1]} {output[2]} {output[3]} ILLUMINACLIP:/software/Trimmomatic-0.36/adapters/TruSeq3-PE-2.fa:2:30:10 LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36 &"
rule gzip:
input:
"02_clean/{nico}_1.paired.fq.gz",
"02_clean/{nico}_2.paired.fq.gz"
output:
"02_clean/{nico}_1.paired.fq",
"02_clean/{nico}_2.paired.fq"
run:
shell("gzip -d {input[0]} > {output[0]}")
shell("gzip -d {input[1]} > {output[1]}")
rule map:
input:
"02_clean/{nico}_1.paired.fq",
"02_clean/{nico}_2.paired.fq"
output:
"03_align/{nico}.sam"
log:
"logs/map/{nico}.log"
threads: 40
shell:
"hisat2 -p 20 --dta -x /root/s/r/p/A_th/WT-Al_VS_WT-CK/index/tair10 -1 {input[0]} -2 {input[1]} -S {output} >{log} 2>&1 &"
rule sort2bam:
input:
"03_align/{nico}.sam"
output:
"03_align/{nico}.bam"
threads:30
shell:
"samtools sort -# 20 -m 20G -o {output} {input} &"
Everything is fine until I add the "rule sort2bam" part.
A dry run goes fine, but when I actually execute the workflow it reports the error described above. Surprisingly, the job it reports as missing actually keeps running in the background, but each attempt only ever runs that one job, like this:
rule sort2bam:
input: 03_align/epcr1.sam
output: 03_align/epcr1.bam
jobid: 11
wildcards: nico=epcr1
Waiting at most 5 seconds for missing files.
MissingOutputException in line 45 of /root/s/r/snakemake/my_rnaseq_data/Snakefile:
Missing files after 5 seconds:
03_align/epcr1.bam
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
[Sat Apr 27 06:10:22 2019]
rule sort2bam:
input: 03_align/wt1.sam
output: 03_align/wt1.bam
jobid: 9
wildcards: nico=wt1
Waiting at most 5 seconds for missing files.
MissingOutputException in line 45 of /root/s/r/snakemake/my_rnaseq_data/Snakefile:
Missing files after 5 seconds:
03_align/wt1.bam
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
[Sat Apr 27 06:23:13 2019]
rule sort2bam:
input: 03_align/wt2.sam
output: 03_align/wt2.bam
jobid: 6
wildcards: nico=wt2
Waiting at most 5 seconds for missing files.
MissingOutputException in line 44 of /root/s/r/snakemake/my_rnaseq_data/Snakefile:
Missing files after 5 seconds:
03_align/wt2.bam
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
I don't know what's wrong with my code. Any ideas? Thanks in advance!
As you figured out, & is the problem. The control operator & makes your command run in the background in a subshell, and this leads Snakemake to think the job is complete when in fact it is not. In your case, its usage doesn't appear to be required.
From man bash on the usage of & (stolen from this answer):
If a command is terminated by the control operator &, the shell executes the command in the background in a subshell. The shell does not wait for the command to finish, and the return status is 0.
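For completeness, here is a sketch of the fixed rule, identical to the one in the question except that the trailing & is gone, so Snakemake waits for samtools to finish before checking for the output:
rule sort2bam:
    input:
        "03_align/{nico}.sam"
    output:
        "03_align/{nico}.bam"
    threads: 30
    shell:
        "samtools sort -@ 20 -m 20G -o {output} {input}"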
I know how to solve it, but I don't know why it works!
Just delete the '&' in
samtools sort -@ 20 -m 20G -o {output} {input} &

A basic movescu example for retrieving dicom images

I'm trying to use dcm4che for downloading images from the free http://www.dicomserver.co.uk/. I've cloned and checked out the 5.13.2 version and built it using mvn install. Now when I go into the dcm4che-assembly/target/dcm4che-5.13.2-bin/dcm4che-5.13.2/bin directory and try to download a StudyInstanceUID:
./movescu -c DCMQRSCP#www.dicomserver.co.uk:104 -m StudyInstanceUID=1.2.826.0.1.3680043.11.105 --dest STORESCP
I get the error:
...
(0000,0902) LO [Unknown Move Destination: STORESCP] ErrorComment
...
The error indicates that it can't connect to the receiver. I've tried to run:
./storescp -b STORESCP:11112
without much success. I've also tried running dcmqrscp, with similar outcomes.
My humble request: please provide a working example of movescu.
Details
I can get the findscu to work without issues, e.g.:
./findscu -c DCMQRSCP#www.dicomserver.co.uk:104 -m StudyInstanceUID=1.2.826.0.1.3680043.11.105 -r PatientID
gives:
(0008,0005) CS [] SpecificCharacterSet
(0008,0052) CS [STUDY] QueryRetrieveLevel
(0008,0054) AE [DCMQRSCP] RetrieveAETitle
(0010,0020) LO [PAT004] PatientID
(0020,000D) UI [1.2.826.0.1.3680043.11.105] StudyInstanceUID
Similarly the getscu command seems to work:
./getscu -c DCMQRSCP#www.dicomserver.co.uk:104 -m StudyInstanceUID=1.2.826.0.1.3680043.11.105
This creates the following DICOM files:
ls 1* -lh
-rw-rw-r-- 1 max max 12M jul 7 12:16 1.2.276.0.7230010.3.1.4.39332053.7432.1527748041.31
-rw-rw-r-- 1 max max 150K jul 7 12:17 1.2.276.0.7230010.3.1.4.8323329.11391.1527939718.955155
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.100
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.104
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.108
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.112
-rw-rw-r-- 1 max max 6,0M jul 7 12:16 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.80
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.84
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.88
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.92
-rw-rw-r-- 1 max max 6,0M jul 7 12:17 1.2.826.0.1.3680043.9.6384.2.2087.20180322152557.400.96
Lastly, I'm sorry if this question falls into the duplicate category. After spending days without finding a working movescu example on either StackOverflow or the dcm4che-forum, I've given up searching. The goal is to have an example to use so that I can modify the underlying Java code for my own purposes. Also let me know if you're interested in the entire movescu dump.
Update
After Tarmo's helpful tip I tried to (1) use the correct AE title & port and (2) switch to Orthanc. Unfortunately I still can't retrieve an image from dicomserver.co.uk, but the Orthanc solution worked.
Below is the summary of the outcomes:
Alt. 1: AE-title & port compliance
As it turns out, part of my issue was RTFM-related:
Use any calling and called AE titles you like (making them specific to you will assist if logs need to be examined), but if you wish to use C-MOVE, ensure that the calling and destination AETs are the same, and that you listen on port 104.
My first attempt was to align the two AE-titles:
./movescu -c STORESCP#www.dicomserver.co.uk:104 -m StudyInstanceUID=1.2.826.0.1.3680043.11.105 --dest STORESCP
This does not work, and it turns out that the destination port is random. At both ends (server log + local) one can see what the port was:
14:23:47,539 INFO - MOVESCU->APA(1): close Socket
[addr=www.dicomserver.co.uk/88.202.185.144,port=104,localport=57985]
The localport changes between attempts. Things that I've tried so far:
Variants of --dest: (1) STORESCP:104, (2) STORESCP$localhost:104, (3) other AE-titles
Starting up an SCP through sudo ./dcmqrscp -b STORESCP:104 --dicomdir /home/max/tmp/dcm (the sudo is due to the low port number) and calling with the AE-title only as dest
Same as above but with the -b option: ./movescu -c STORESCP#www.dicomserver.co.uk:104 -b STORESCP#localhost:104 -m StudyInstanceUID=1.2.826.0.1.3680043.11.105 --dest STORESCP
Same as above without the SCP, using my local IP/external IP (no firewall changes made)
I've also tried USB tethering through my phone to circumvent the router, but the phone operated on IPv6, not IPv4
It would still be nice to know how to set this up, as it could be quite useful. My guess is that since C-MOVE hands the dicomserver a raw IP address, port 104 needs to be forwarded to the current machine. Being new to the DICOM protocol, I find many of these features somewhat cryptic...
Alt 2: Local Orthanc server (WORKS!)
Here's the full set-up for anyone that wants to get a test system up and running (using Ubuntu 18.04):
sudo apt install orthanc and check that the service has started: systemctl status orthanc.service
In /etc/orthanc/orthanc.json, uncomment the line with "sample" : [ "STORESCP", "localhost", 2000 ] and restart the server: systemctl restart orthanc.service
Go to http://localhost:8042 (unless you've changed the web port in /etc/orthanc/orthanc.json)
Navigate to the upload page and pick a dcm file to upload (you can download dcm files from https://www.dicomlibrary.com/, or use the getscu output from above)
Drag and drop the dcm file into http://localhost:8042/app/explorer.html#upload and press "Start the upload"
Go to patients and get the new StudyInstanceUID of the uploaded image
Start an SCP service with the STORESCP AE title and the port 2000 that you allowed in /etc/orthanc/orthanc.json, e.g. ./dcmqrscp -b STORESCP:2000 --dicomdir /home/max/tmp/dcm
Call movescu with -b pointing at the above SCP and the new StudyInstanceUID (shortened below for readability), e.g.:
./movescu
-c ORTHANC#localhost:4242
-m StudyInstanceUID=1.2.826.0.1.3680043.8.....
-b STORESCP#localhost:2000
--dest STORESCP
And that's it!
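For convenience, the same local test flow condensed into one shell sketch (ports and paths follow the steps above; the StudyInstanceUID is a placeholder you take from the Orthanc patients page):
# Install Orthanc and check the service
sudo apt install orthanc
systemctl status orthanc.service
# After uncommenting the "sample"/STORESCP modality in /etc/orthanc/orthanc.json:
systemctl restart orthanc.service
# Local store SCP on the port declared in orthanc.json
./dcmqrscp -b STORESCP:2000 --dicomdir /home/max/tmp/dcm &
# Ask Orthanc (C-MOVE SCP on its default DICOM port 4242) to send the study
./movescu -c ORTHANC#localhost:4242 \
    -m StudyInstanceUID=<uid from the Orthanc patients page> \
    -b STORESCP#localhost:2000 \
    --dest STORESCP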
Please read the C-MOVE information on the http://www.dicomserver.co.uk/ homepage again to figure out how to set up your query. Your syntax for the command is correct, but some details are wrong.
Basically you need two things:
Your calling AE title must be the same as the destination AE title. You currently have them set to different values.
Your storescp must be accessible from the public internet on the same port that you used to connect to dicomserver.co.uk; in your example that is 104. Their server needs to make a new TCP connection back to your computer for this to work.
I think it would be easier to install a lightweight PACS on your local machine to test your applications with (e.g. Orthanc). Getting DICOM C-MOVE to work over public internet is asking for trouble in my opinion.

exit code for rsync if there is no modification done to destination folder

I'm using rsync on Solaris and couldn't find an exit code that indicates that no file or folder was modified, added, or deleted on the destination folder. How can I get that status if rsync doesn't provide one?
0 Success
1 Syntax or usage error
2 Protocol incompatibility
3 Errors selecting input/output files, dirs
4 Requested action not supported: an attempt was made to manipulate 64-bit
files on a platform that cannot support them; or an option was specified
that is supported by the client and not by the server.
5 Error starting client-server protocol
6 Daemon unable to append to log-file
10 Error in socket I/O
11 Error in file I/O
12 Error in rsync protocol data stream
13 Errors with program diagnostics
14 Error in IPC code
20 Received SIGUSR1 or SIGINT
21 Some error returned by waitpid()
22 Error allocating core memory buffers
23 Partial transfer due to error
24 Partial transfer due to vanished source files
25 The --max-delete limit stopped deletions
30 Timeout in data send/receive
35 Timeout waiting for daemon connection
Thank you
There is a workaround:
rsync --log-format=%f ...
Note that rsync outputs a file any time any attribute changes, not only when the content of the file is updated.
There is also a -i option (or --log-format=%i) that itemizes all of the changes. See the rsync man page for details of the output format.
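A hedged sketch of turning that workaround into a yes/no status (flags as above; the src/ and dest/ paths are placeholders):
#!/bin/bash
# Count the per-file lines rsync prints; zero lines means nothing changed.
changed=$(rsync -a --log-format=%f src/ dest/ | wc -l)
if [ "$changed" -eq 0 ]; then
    echo "destination unchanged"
else
    echo "$changed item(s) created, modified, or deleted"
fi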
