I want to create a file with the system date in another directory and copy some data into it (Unix)

I want to create a file named with the system date in another directory and copy the difference of two files into it.
NOW=$(date +"%H_%D")
file="log_$NOW.txt"
diff tmp1.txt tmp2.txt > $temp/log_$NOW.txt
I am using the code above, but the file is not generated. However, if I create a file with a simple name, i.e. without using $NOW, the file is generated. Please help me.

The format string passed to date produces something like 16_12/03/13. This contains directory separators, so the filename becomes invalid. Use dots to separate the date fields instead:
NOW=$(date +"%H_%m.%d.%y")
which produces strings like 16_12.03.13.
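Putting it together, a minimal sketch of the fixed script might look like this. The $temp directory and the tmp1.txt/tmp2.txt contents are assumptions for illustration; adjust them to your setup.

```shell
# Hypothetical target directory carried over from the question
temp=${temp:-/tmp/logdir}
mkdir -p "$temp"                   # make sure the directory exists

NOW=$(date +"%H_%m.%d.%y")         # e.g. 16_12.03.13 -- no slashes
printf 'one\n' > tmp1.txt
printf 'two\n' > tmp2.txt

# diff exits with status 1 when the files differ, so don't let
# that abort a script running under set -e
diff tmp1.txt tmp2.txt > "$temp/log_$NOW.txt" || true
ls "$temp"
```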

Related

File Name generated by passed variable getting special character appended

I'm creating an archive filename by combining an output folder and an output file name, both of which are passed in as variables.
Example: tar -czf $output_folder/$output_file.tar.gz.
However, the generated file has a Unicode character U+25AA appended, so it looks like output_fileU+25AA.tar.gz. What could be causing this? (I'm testing it on a Windows machine.)
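The question itself doesn't show where the variables come from, but a common culprit when scripts are edited on Windows is a trailing carriage return (from CRLF line endings) left in the variable, which some terminals render as a placeholder glyph. This is a hypothetical reproduction and check, not a confirmed diagnosis:

```shell
# Simulate a variable polluted with a trailing carriage return,
# as happens when a script is saved with CRLF line endings
output_file=$(printf 'backup\r')

# od reveals the invisible \r at the end of the value
printf '%s' "$output_file" | od -c | head -n 1

# Strip any trailing carriage return before building the path
output_file=${output_file%"$(printf '\r')"}
echo "$output_file.tar.gz"        # now a clean name: backup.tar.gz
```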

Fluent-bit, How can I use strftime in path

My log file name contains the current date, like my_log_210616.log, and I need to tail the file in fluent-bit. I tried:
[INPUT]
Name tail
Path /var/log/my-service/my_log_%y%m%d.log
[OUTPUT]
Name stdout
Match *
but it doesn't watch the file. If I replace my_log_%y%m%d.log with my_log_210616.log, it works.
How can I use strftime in the path?
One solution is to use a path that matches any date. Since fluent-bit reads log files from their tail, you won't get data from the older files.
You could also add 'Ignore_Older 24h' to the input config. This ignores files with modification times older than 24 hours. Using 'Ignore_Older' together with a parser that extracts the event time works even better.
You could also do more elaborate filtering by file name in a Lua filter.
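Combining the two suggestions, the input section might look like this. The wildcard path and the 24-hour window are assumptions to adapt to your rotation scheme:

```
[INPUT]
    Name          tail
    Path          /var/log/my-service/my_log_*.log
    Ignore_Older  24h
[OUTPUT]
    Name          stdout
    Match         *
```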

unix file fetch by timestamp

I have a list of files that get added to my work stream. They are CSV files with a datetime stamp indicating when they were created. I need to pick up each file in the order of the datetime in the file name and process it. Here is a sample list:
Workprocess_2016_11_11T02_00_12.csv
Workprocess_2016_11_11T06_50_45.csv
Workprocess_2016_11_11T10_06_18.csv
Workprocess_2016_11_11T14_23_00.csv
How would I compare the files to find the oldest one and work toward the chronologically newest? The files are all dumped on the same day, so I can only go by the timestamp in the file name.
The beneficial aspect of that datetime format is that lexical and chronological order coincide. So all you need is:
for file in *.csv; do
    mv "$file" xyz
    process xyz
done
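A quick way to convince yourself that glob expansion already yields chronological order for this naming scheme (the file names below are taken from the question; the demo directory is an assumption):

```shell
# Create empty files in deliberately shuffled order
mkdir -p /tmp/wp_demo && cd /tmp/wp_demo
touch Workprocess_2016_11_11T14_23_00.csv \
      Workprocess_2016_11_11T02_00_12.csv \
      Workprocess_2016_11_11T06_50_45.csv

# The glob expands in sorted -- here also chronological -- order
for file in Workprocess_*.csv; do
  echo "$file"
done
# The 02_00_12 file is printed first, 14_23_00 last
```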

Extract exactly one file (any) from each 7zip archive, in bulk (Unix)

I have 1,500 7zip archives, each archive contains 2 to 10 files, with no subdirectories.
Each file has the same extension, but the filenames vary.
I only want one file out of each archive, but I'd like to perform this in bulk. I do not care which file is taken out, as long as only one file is taken out. It can be the first file, the newest, the biggest, the smallest, it doesn't matter.
Here's an example:
aa.7z {blah 56.smc, blah 57.smc, 1 blah 58.smc}
ab.7z {xx.smc, xx 1.smc, xx_2.smc}
ac.7z {1.smc}
I want to run something equivalent to:
7z e *.7z # But somehow only extract one file
Thank you!
Ultimately my solution was to extract all files and run the following in the directory:
for n in *; do echo "$n"; done > files.txt
I then imported that list into Excel and split the filenames on a special character dividing the title from the qualifying data in the filename (for example: Some Title (V1) [X2].smc); specifically, I used a bracket delimiter.
Then I removed all duplicates, leaving me with only one edition of each file from the archives. I re-merged the columns (the bracket was deleted during the split, so I wrote a function to add it back whenever the next column had content), re-saved files.txt, and, after a bit of reviewing Stack Overflow for answers, deleted files based on that input file (files.txt). A word of warning: spaces in filenames cause problems with rm and xargs, so I had to wrap the variable in quotes.
Ultimately this still didn't serve me well enough, so I just used a different resource entirely.
Posting this answer so others who find themselves in a similar predicament can find an alternative resolution.
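The quoting problem mentioned above can be avoided by reading files.txt line by line instead of piping it to xargs. A sketch, assuming files.txt holds one filename per line; the demo files and directory are made up for illustration:

```shell
# Set up a throwaway directory with space-containing filenames
mkdir -p /tmp/rm_demo && cd /tmp/rm_demo
touch "blah 56.smc" "blah 57.smc"
printf '%s\n' "blah 56.smc" > files.txt

# Read one filename per line; quoting "$f" preserves embedded
# spaces, and -- stops rm from treating names as options
while IFS= read -r f; do
  rm -- "$f"
done < files.txt

ls    # only "blah 57.smc" remains
```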

list of files with space in the name

I would like to get the list of files with a specific extension in a folder. However, these files have spaces in their names. For example, given files named file test1.txt, file test2.txt, file test3.txt, file test4.txt, if I do
list.files(pattern="file test*.txt")
I get
character(0)
Note: apparently, using simply pattern="file test*" works fine, but I need to match the file extension as well.
Try:
list.files(pattern="file test.*.txt")
Actually, what this says is:
list.files(pattern="file test(.*).txt")
(which also works). . matches any character, and * means the preceding element may occur zero or more times (see ?regex).
In your last example you said that using pattern="file test*" works but you need a way to match the extension as well.
All you have to do is change your code to pattern="file test.*.txt". This matches any filename of the form "file testX.txt", where X is any sequence of characters (including none).
