I am getting data from the server in a file (in format1) every day; however, each day's file contains the data for the last one week.
I have to archive the data for exactly 1.5 months, because this data is picked up to build a graphical representation.
I have tried to merge the files of 2 days and sort them uniquely (code1), but it didn't work because the name of the raw file changes every day. The timestamp in this file is unique, but I am not sure how to deduplicate on the basis of a specific column. Also, is there any way to delete the data older than 1.5 months?
For deletion, the logic I thought of is: today's date minus the least (oldest) date in the file, but again I am unable to fetch that least date.
Format1
r01/WAS2/oss_change0_5.log:2016-03-21T11:13:36.354+0000 | (307,868,305) | OSS_CHANGE |
com.nokia.oss.configurator.rac.provisioningservices.util.Log.logAuditSuccessWithResources | RACPRS RNC 6.0 or
newer_Direct_Activation: LOCKING SUCCEEDED audit[ | Source='Server' | User identity='vpaineni' | Operation
identifier='CMNetworkMOWriterLocking' | Success code='T' | Cause code='N/A' | Identifier='SUCCESS' | Target element='PLMN-
PLMN/RNC-199/WBTS-720' | Client address='10.7.80.21' | Source session identifier='' | Target session identifier='' |
Category code='' | Effect code='' | Network Transaction identifier='' | Source user identity='' | Target user identity='' |
Timestamp='1458558816354']
Code1
cat file1 file2 |sort -u > file3
Data on Day 2 (the name of the input file differs):
r01/WAS2/oss_change0_11.log:2016-03-21T11:13:36.354+0000 | (307,868,305) | OSS_CHANGE |
com.nokia.oss.configurator.rac.provisioningservices.util.Log.logAuditSuccessWithResources | RACPRS RNC 6.0 or
newer_Direct_Activation: LOCKING SUCCEEDED audit[ | Source='Server' | User identity='vpaineni' | Operation
identifier='CMNetworkMOWriterLocking' | Success code='T' | Cause code='N/A' | Identifier='SUCCESS' | Target element='PLMN-
PLMN/RNC-199/WBTS-720' | Client address='10.7.80.21' | Source session identifier='' | Target session identifier='' |
Category code='' | Effect code='' | Network Transaction identifier='' | Source user identity='' | Target user identity='' |
Timestamp='1458558816354']
I wrote an almost identical piece of code a week back.
awk is a good tool if you want to do any operation column-wise.
Also, sort -u will not work here, because the changing file name is part of every line, so otherwise-identical records never compare equal.
Both the unique rows and the least date can be found using awk.
1. To get the unique file content
cat file1 file2 | awk -F '|' '!repeat[$21]++' > file3
Here -F specifies the field separator.
repeat[$21] keys on the 21st field, i.e. the timestamp,
so only the first occurrence of each timestamp is printed and the rest are ignored.
Finally, the unique content of file1 and file2 is available in file3.
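Before relying on $21, it is worth confirming which field number the timestamp actually lands in, since that depends on the exact log layout. A quick check along these lines (the file name is just an example) avoids keying on the wrong column:
awk -F '|' '{print NF}' file1 | sort -u     # how many '|'-separated fields each line has
awk -F '|' '{print $NF}' file1 | head -1    # the last field, which should be the Timestamp=... part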
2. To get the least date and find the difference between two dates
Least_Date=$(awk -F: '{print substr($2,1,10)}' RMCR10.log | sort | head -1)
Today_Date=$(date +%F)
Diff=$(echo "( $(date -d "$Today_Date" +%s) - $(date -d "$Least_Date" +%s) ) / (24*3600)" | bc -l)
Diff1=${Diff/.*}
if [ "$Diff1" -ge 45 ]    # 1.5 months is roughly 45 days
then
    :    # the archive spans more than 1.5 months; trim the old lines here (see the sketch below)
fi
Here ':' is used as the field separator, and substr() then picks out the exact date part; sorting those dates and taking the first line gives the least (oldest) date.
The difference from today's date is computed in epoch seconds with bc, and the decimals are then stripped off.
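Once the file is known to span more than 1.5 months, one possible way to do the actual trimming is to keep only the lines whose date falls within the last 45 days. This is only a sketch under the same assumptions as above (GNU date available, the date sitting right after the first ':' of each line); the file name merged.log is made up:
cutoff=$(date -d "45 days ago" +%Y-%m-%d)    # oldest date we still want to keep
awk -F: -v cutoff="$cutoff" 'substr($2,1,10) >= cutoff' merged.log > merged.tmp && mv merged.tmp merged.log
Because the dates are in ISO YYYY-MM-DD form, a plain string comparison against the cut-off is enough.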
Hope it helps .....
While preparing for my exam, I came across the following statement:
If file1 and file2 are hard linked, and two processes open file1 and file2,
their read/write pointer keeps the same.
Which, according to the answer (no explanation), is wrong. So I searched google, and found something different.
This link: https://www.usna.edu/Users/cs/wcbrown/courses/IC221/classes/L09/Class.html says the read/write pointer is in the (system-wide) open file table.
But this link http://www.cs.kent.edu/~walker/classes/os.f08/lectures/Walker-11.pdf
says the pointer is in the per-process file table.
Which one is true?
IMHO, the read/write offset clearly has to be a per-process property. You could easily crash other processes if this were a system-wide, per-file property. This is my understanding, but I'd rather have it confirmed by an informed source.
I took a look at the 1986 AT&T book "The Design of the Unix Operating System" by Maurice J. Bach, which I consider an informed source.
In topic 2.2.1, An Overview of the File Subsystem, it says:
... Inodes are stored in the file system ... The kernel reads them
into an in-core inode table when manipulating files ... The kernel
contains two other data structures, the file table and the user
file descriptor table. The file table is a global kernel
structure, but the user file descriptor table is allocated per
process ... The file table keeps track of the (read/write) byte
offset ...
This would contradict my statement. But then, clarification can be found in topic 5.1, OPEN, pages 92ff. Figure 5.3 shows an example of a process having done three opens, two of them for the same file, /x/y/z (I simplify the naming here and in the illustration below).
User File
Descriptor Table File Table inode Table
+--------------+ +------------+ +------------+
0| | | | | |
+--------------+ | . | | . |
1| | | . | | . |
+--------------+ | . | | . |
2| | +------------+ | |
+--------------+ +-->| read offset|----+ | |
3| | | +------------+ | | |
+--------------+ | | | | +------------+
4| |---+ | . | +->| inode of |
+--------------+ | . | +--->| /x/y/z |
5| |----+ | . | | +------------+
+--------------+ | +------------+ | | . |
6| |-+ +->| read |----+ | . |
+--------------+ | +------------+ | | | . |
| . | | | . | | | +------------+
| . | | | . | | +->| inode of |
| | | | . | | | /a/b |
+--------------+ | +------------+ | +------------+
+---->|write offset|--+ | . |
+------------+ | . |
| . | | . |
| . | | . |
+------------+ +------------+
The final answer is in the text following figure 5.3 on page 94:
Figure 5.3 shows the relationship between the inode table, file
table, and user file descriptor table structures. Each open
returns a file descriptor to the process, and the corresponding entry
in the user file descriptor table points to a unique entry in the
kernel file table even though one file (/x/y/z) is opened twice.
To answer your question: the read/write offset is kept in the kernel file table, not in a per-process table, and a unique entry in that table is allocated upon each open().
But, why is there a kernel file table? After all, the read/write offsets could have been stored in the per process user file descriptor table, instead of in a kernel table, couldn't they?
To understand why there is a kernel file table, think of what the dup() and fork() functions do with respect to file descriptors: they duplicate the state of an open file, either under a new file descriptor in the same process (dup()), or under the same file descriptor number but in a duplicated user file descriptor table in the new child process (fork()). In both cases, duplicating the state of an open file includes the read/write offset, so for these cases more than one file descriptor will point to a single file table entry.
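The shared offset is easy to observe from a shell, because a redirection such as 4>&3 is exactly a dup() of an already-open descriptor. A small demonstration (the file name and descriptor numbers are arbitrary):
exec 3>offset-demo.txt    # open the file once, on fd 3
exec 4>&3                 # duplicate fd 3: both descriptors now share one file table entry
echo first  >&3
echo second >&4           # continues after "first" instead of overwriting it
exec 3>&- 4>&-            # close both descriptors
cat offset-demo.txt       # prints "first" then "second": the offset was shared
Had the file been opened twice instead (exec 3>offset-demo.txt; exec 4>offset-demo.txt), each open would get its own file table entry with an independent offset (and, with >, would also truncate the file), so the two writes would no longer line up one after the other.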
I have a JSON file with multiple object IDs and I need a query that excludes different IDs based on naming conventions. These are essentially ORs. I thought I had it with this query, but the entries are still appearing in the output.
If I run the query with each pattern separately I can get it to work, but I need to handle a large list.
Works
cat file.json | jq '.interface[] | select(.description | contains ("VLL") | not )'
Not working
cat file.json | jq '.interface[] | select(.description | contains ("VLL"|"2002089"|"otherstuff" ) | not )'
I've tried a few different ways with commas and quoting, but no luck.
Am I far off?
I also plan to run this in a bash script, if that helps (it probably makes it worse).
Thanks
Am I far off?
If you use test/1 instead of contains, and make corresponding adjustments, no:
.interface[]
| select(.description | test ("VLL|2002089|otherstuff" ) | not )
The argument of test is interpreted as a regex. There are of course alternatives, but if using a regex is appropriate, then test would be suitable.
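A quick run against a made-up file.json (the interface descriptions here are just for illustration):
echo '{"interface":[{"description":"VLL-to-siteB"},{"description":"core-uplink"}]}' > file.json
cat file.json | jq '.interface[] | select(.description | test("VLL|2002089|otherstuff") | not)'
Only the "core-uplink" entry is printed; the "VLL-to-siteB" one is dropped because the regex matches anywhere in the string.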
Blacklist of strings
If you have a blacklist of strings and want to use string equality as the criterion, consider:
["VLL","2002089","otherstuff"] as $blacklist
| .interface[]
| select(.description | IN($blacklist[]) | not)
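Note the design difference: IN uses exact string equality, so a description such as "VLL-to-siteB" would not be excluded by this blacklist, whereas the regex-based test filter above would drop it. (IN requires a jq version that ships it; jq 1.6 does.) A tiny check with made-up input:
echo '{"interface":[{"description":"VLL"},{"description":"VLL-to-siteB"}]}' | jq '["VLL","2002089","otherstuff"] as $blacklist | .interface[] | select(.description | IN($blacklist[]) | not)'
This keeps only the "VLL-to-siteB" entry.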
I have a data file extracted from a table. I now have to retain, for each id, only the line with the latest timestamp whenever there are multiple entries for that id.
The other, older entry lines need to be deleted; the file needs to be cleaned up.
Id | timestamp | status
3 | 17-Feb-20 12:30:00:00 PM | E
1 | 16-feb-20 09:30:00:00 Am | L
3 | 17-Feb-20 15:30:00:00 PM | N
2 | 17-Feb-20 10:12:00:00 Am | L
My need is: ids 1 and 2 must be retained as they have only one entry each, while id 3 has two rows and should keep only the row with the 15:30 timestamp, since that is the latest.
I found references to using the 'sed' command to delete a particular line number or a particular line. The thing is, I will be reading the file line by line.
Say on looping the first line I hit is id 3; I would then grep the same file for a matching id. For id 3 I would get another match, compare the timestamps, and since the line I am currently on has the older timestamp, I would need to delete that line and retain the latest one.
Is it possible to do this operation on the same file I am reading through, or should I obviously go with another file?
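A minimal sketch of the clean-up described above, done in one awk pass and written to a new file rather than editing the file being read (the file names, and the assumption that the time values can be compared as a 24-hour clock with the AM/PM suffix ignored, are mine):
awk -F '|' '
NR == 1 { print; next }                                   # keep the header line
{
    id = $1; gsub(/ /, "", id)
    split($2, p, " ")                                      # p[1] = date, p[2] = time
    split(p[1], d, "-")                                    # d[1] = day, d[2] = month name, d[3] = year
    mon = (index("janfebmaraprmayjunjulaugsepoctnovdec", tolower(d[2])) + 2) / 3
    key = sprintf("%02d%02d%02d %s", d[3], mon, d[1], p[2])   # sortable yymmdd hh:mm:ss:cc
    if (!(id in best) || key > bestkey[id]) { best[id] = $0; bestkey[id] = key }
}
END { for (i in best) print best[i] }                      # note: output order of ids is not preserved
' data.txt > cleaned.txt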
Looking to find the object count in the InUseBy element.
aws acm describe-certificate \
--certificate-arn arn:### \
| jq -r '.Certificate | [.CertificateArn, .InUseBy] | @tsv'
| length is what I want to use, but I'm unsure how to limit it to just InUseBy.
[.CertificateArn, .InUseBy | length] applies length to all items; how do I limit it to InUseBy?
You need to use parentheses:
[.CertificateArn, (.InUseBy | length)]
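A quick way to check the grouping without a live aws call, using a made-up certificate object:
echo '{"Certificate":{"CertificateArn":"arn:example","InUseBy":["elb-1","elb-2"]}}' | jq -r '.Certificate | [.CertificateArn, (.InUseBy | length)] | @tsv'
This prints arn:example followed by 2, i.e. length is applied only to the InUseBy array while CertificateArn passes through untouched.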
The below is rewarded with a complaint that Remove Directory requires 1 or 2 arguments and I gave it none. I'm using 2.6.3, and dcsLshLocation is a variable (and adding an x in front doesn't change the error). I'm using the Java version of all this.
*** Settings ***
| Documentation | http://jira.basistech.net:8080/browse/JEST-226
| Resource | src/main/resources/jug-shared-keywords.txt
| Force Tags | integration |
| Suite Precondition | Run Keywords |
| | ... | Validate SUT Installations |
| | ... | Launch Derby Server |
| | ... | Copy file ${jddInstallDir}/conf/jdd-conf-basic.xml to ${jddInstallDir}/conf/jdd-conf.xml
| | ... | Remove Directory | ${dcsLshLocation} |
| Suite Teardown | Run Keywords | Shutdown Derby
| Test Timeout | 20 minutes
When this question was originally written, Run Keywords could only run keywords that do not take arguments. That is no longer true. From the documentation:
Starting from Robot Framework 2.7.6, keywords can also be run with arguments using upper case AND as a separator between keywords. The keywords are executed so that the first argument is the first keyword and proceeding arguments until the first AND are arguments to it. First argument after the first AND is the second keyword and proceeding arguments until the next AND are its arguments. And so on.
The code in the question can thus be expressed like this:
| Suite Precondition | Run Keywords |
| | ... | Validate SUT Installations
| | ... | AND | Launch Derby Server
| | ... | AND | Copy file ${jddInstallDir}/conf/jdd-conf-basic.xml to ${jddInstallDir}/conf/jdd-conf.xml
| | ... | AND | Remove Directory | ${dcsLshLocation}
The following is the original answer to the question, which others may still find useful. It is still relevant for versions of robot framework prior to 2.7.6.
When you use Run Keywords, you cannot run keywords that take arguments. Admittedly the documentation is a bit unclear, but this is what it says:
User keywords must nevertheless be used if the executed keywords need
to take arguments.
What it should say is that, when you use Run Keywords, each argument is the name of a keyword to run. This keyword cannot take any arguments itself because robot can't know where the arguments for one keyword ends and the next keyword begins.
Remember that ... simply means that the previous row is continued on the next, so while it looks like each row is a separate keyword with arguments, it's not. Your example is the same as:
| Suite Precondition | Run Keywords |
| | ... | Validate SUT Installations |
| | ... | Launch Derby Server |
| | ... | Copy file ${jddInstallDir}/conf/jdd-conf-basic.xml to ${jddInstallDir}/conf/jdd-conf.xml
| | ... | Remove Directory |
| | ... | ${dcsLshLocation} |