I'm using Phabricator and I see this for my diff in the web app:
DIFF    | ID    | Base
DIFF 30 | 41275 | xyz123
DIFF 31 | 41276 | xyz123
Is there a way for me to check out what my code was at DIFF 30?
To apply a diff to your local branch, you can use
arc patch <your diff ID>
ref: https://gist.github.com/tony612/f698465a5c74b3cd6d98
found it:
git checkout phabricator/diff/41275
Each arc diff pushes a tag like this one.
I've got this Unix script and I'm trying to translate it to Python. The problem is I don't understand Unix and I have no idea what any of it means.
This is the code:
1 17-23 * * * curl -X PUT -d EM_OperatingMode=10 --header 'Auth-Token: 512532eb-0d57-4a59-8da0-e1e136945ee8' http://192.168.1.70:80/api/v2/configurations
The code you posted is a crontab entry on a Unix system. Let's split it into two parts.
The first part, 1 17-23 * * *, is the schedule. It means the command runs at minute 1 of every hour from 17 through 23 (17:01, 18:01, ..., 23:01). Crontab fields are minute, hour, day of month, month, and day of week. (Refer to crontab.guru.)
The second part, curl -X PUT -d EM_OperatingMode=10 --header 'Auth-Token: 512532eb-0d57-4a59-8da0-e1e136945ee8' http://192.168.1.70:80/api/v2/configurations, is a PUT request to a URL.
curl is a program used to communicate with a server. You use a browser to interact with a server; you can think of curl as a command-line tool that does the same. -X PUT makes this a PUT request, which updates data on the server. Here EM_OperatingMode=10 is the value being sent for the update. --header adds a header to the request; here an authentication token is sent so the server can validate access. Finally, the URL being contacted is http://192.168.1.70:80/api/v2/configurations.
So the value is sent to the server at 192.168.1.70 on the schedule described above. You can use the Python requests library to replicate the request, and keep cron as the scheduler that invokes the Python program on the same schedule.
It seems to be a cron job that runs the given command automatically on the given schedule: "At minute 1 past every hour from 17 through 23."
Similar Python code would be
import requests

requests.put(
    "http://192.168.1.70:80/api/v2/configurations",
    data={"EM_OperatingMode": 10},
    headers={"Auth-Token": "512532eb-0d57-4a59-8da0-e1e136945ee8"}
)
It is a crontab entry, for example:
* * * * * Command_to_execute
| | | | |
| | | | Day of the Week ( 0 - 6 ) ( Sunday = 0 )
| | | |
| | | Month ( 1 - 12 )
| | |
| | Day of Month ( 1 - 31 )
| |
| Hour ( 0 - 23 )
|
Min ( 0 - 59 )
Here is the source: https://www.tutorialspoint.com/unix_commands/crontab.htm
And the second part is curl with some options (flags): https://curl.se/docs/manpage.html
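Putting the two answers together, here is a minimal sketch of a standalone script that the same crontab schedule could invoke instead of curl. The file name update_mode.py and the interpreter path below are placeholders of my choosing, and it assumes the requests library is installed; it only adds a timeout and a status check on top of the snippet above.

# update_mode.py (hypothetical name) - replicates the curl PUT from the crontab entry
import sys
import requests

URL = "http://192.168.1.70:80/api/v2/configurations"
HEADERS = {"Auth-Token": "512532eb-0d57-4a59-8da0-e1e136945ee8"}

def main():
    # same request as: curl -X PUT -d EM_OperatingMode=10 --header 'Auth-Token: ...' <URL>
    resp = requests.put(URL, data={"EM_OperatingMode": 10}, headers=HEADERS, timeout=10)
    if not resp.ok:
        # non-2xx responses end up in cron's mail/log output
        print("Request failed: %s %s" % (resp.status_code, resp.text), file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    main()

The crontab entry would then keep the same schedule and only swap the command, something like 1 17-23 * * * /usr/bin/python3 /path/to/update_mode.py (the path is hypothetical).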
As a prerequisite, I need to fetch last month's date in Unix (Solaris) csh.
set Lmit_Date=`date --date='1 month ago' +%Y%m%d`
The above command fetches last month's date and works fine on a Linux server, but our server is Solaris and the command does not work there.
Can anyone suggest how I can fetch last month's date?
The issue is that you are using a GNU date extension; --date is non-standard.
Moreover, because month lengths vary, the date displayed by GNU date might be unexpected, to say the least...
For example, today is March 31, but the "last month" date is March 2nd according to GNU date:
$ date +%Y%m%d
20160331
$ date --date='1 month ago' +%Y%m%d
20160302
If you still want to use GNU date on Solaris or to find some workarounds, have a look at these replies:
https://stackoverflow.com/a/23507108/211665 and https://stackoverflow.com/a/17817087/211665
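If Python happens to be installed on the Solaris box, here is a minimal sketch of the same computation using only the standard library. Clamping to the previous month's last day (so March 31 maps to February 29 rather than rolling over to March 2) is my assumption about the intended meaning of "last month", since GNU date rolls over as shown above.

# last_month.py (hypothetical name) - prints the date one month ago as YYYYMMDD
import calendar
import datetime

today = datetime.date.today()
# previous month, wrapping the year in January
year, month = (today.year, today.month - 1) if today.month > 1 else (today.year - 1, 12)
# clamp the day to the length of that month (e.g. March 31 -> February 28/29)
day = min(today.day, calendar.monthrange(year, month)[1])
print(datetime.date(year, month, day).strftime("%Y%m%d"))

From csh you could then capture it with set Lmit_Date=`python last_month.py`.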
You should be able to compile coreutils for your Solaris platform, which will provide you with the GNU date utility. But since coreutils replaces the core utilities, as the name says, you may want to install it into a custom path and invoke the GNU date command through that path, say "/opt/coreutils/bin/date".
The other method would be to calculate last month yourself by splitting the date output into variables:
eval `date +"set YEAR=%Y; set MONTH=%m; set DAY=%d"`
Now you can operate on "$YEAR", "$MONTH" and "$DAY". For example:
let 'MONTH--'
if [ "$MONTH" -eq 0 ]; then MONTH=12; let 'YEAR--'; fi
set Lmit_Date=`date --date "${YEAR}-${MONTH}-${DAY}" +"%Y%m%d"`
Kind of. (I'm used to bash, so I don't know whether let is available here; csh does its arithmetic with @ instead, but there are several ways to do shell calculations.)
Also, you need to take care of the number of days per month when reusing the $DAY value.
function last_day {
    # split YYYY-MM-DD into year and month
    y=`echo $1 | cut -f1 -d "-"`
    m=`echo $1 | cut -f2 -d "-"`
    # the last number cal prints for that month is its last day
    d=`cal ${m} ${y} | nawk 'NF{A=$NF}END{print A}'`
    echo "$y $m $d" | nawk '{printf("%s-%02d-%02d",$1,$2,$3);}'
} # last_day
last_day 2023-01-01
will give you 2023-01-31 on non-GNU Solaris.
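For comparison, the same last-day-of-month lookup in Python, where calendar.monthrange does the work of cal plus nawk. This is only a sketch and assumes the input is a YYYY-MM-DD string as in the example above.

import calendar

def last_day(date_str):
    # "2023-01-01" -> "2023-01-31"; monthrange() returns (weekday of the 1st, number of days)
    y, m = (int(part) for part in date_str.split("-")[:2])
    return "%d-%02d-%02d" % (y, m, calendar.monthrange(y, m)[1])

print(last_day("2023-01-01"))  # 2023-01-31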
I am getting data from the server in a file (in Format1) every day; each file contains the data for the last one week.
I have to archive the data for exactly 1.5 months, because this data is picked up to make a graphical representation.
I have tried to merge the files of two days and sort them uniquely (Code1), but it didn't work because the name of the raw file changes every day. The timestamp is unique in this file, but I am not sure how to get unique data based on a specific column. Also, is there any way to delete the data older than 1.5 months?
For deletion, the logic I thought of is to compare today's date with the least (oldest) date in that file, but I am unable to fetch the least date.
Format1
r01/WAS2/oss_change0_5.log:2016-03-21T11:13:36.354+0000 | (307,868,305) | OSS_CHANGE |
com.nokia.oss.configurator.rac.provisioningservices.util.Log.logAuditSuccessWithResources | RACPRS RNC 6.0 or
newer_Direct_Activation: LOCKING SUCCEEDED audit[ | Source='Server' | User identity='vpaineni' | Operation
identifier='CMNetworkMOWriterLocking' | Success code='T' | Cause code='N/A' | Identifier='SUCCESS' | Target element='PLMN-
PLMN/RNC-199/WBTS-720' | Client address='10.7.80.21' | Source session identifier='' | Target session identifier='' |
Category code='' | Effect code='' | Network Transaction identifier='' | Source user identity='' | Target user identity='' |
Timestamp='1458558816354']
Code1
cat file1 file2 |sort -u > file3
Data on Day 2; the input file name differs:
r01/WAS2/oss_change0_11.log:2016-03-21T11:13:36.354+0000 | (307,868,305) | OSS_CHANGE |
com.nokia.oss.configurator.rac.provisioningservices.util.Log.logAuditSuccessWithResources | RACPRS RNC 6.0 or
newer_Direct_Activation: LOCKING SUCCEEDED audit[ | Source='Server' | User identity='vpaineni' | Operation
identifier='CMNetworkMOWriterLocking' | Success code='T' | Cause code='N/A' | Identifier='SUCCESS' | Target element='PLMN-
PLMN/RNC-199/WBTS-720' | Client address='10.7.80.21' | Source session identifier='' | Target session identifier='' |
Category code='' | Effect code='' | Network Transaction identifier='' | Source user identity='' | Target user identity='' |
Timestamp='1458558816354']
I wrote almost the same kind of code a week back.
Awk is a good tool if you want to do any operation column-wise.
Also, sort -u will not work here because the file name at the start of each line changes every day.
Both unique rows and the least date can be found using awk.
1. To get unique file content:
cat file1 file2 |awk -F "\|" '!repeat[$21]++' > file3;
Here -F specifies the field separator.
The repeat array is keyed on the 21st field, which is the timestamp, so only the first occurrence of each timestamp is printed and the rest are ignored.
So finally the unique content of file1 and file2 will be available in file3.
2. To get the least date and find the difference between two dates:
Least_Date=`awk -F: '{print substr($2,1,10)}' RMCR10.log | sort | head -1`
Today_Date=`date +%F`
Diff=`echo "( \`date -d $Today_Date +%s\` - \`date -d $Least_Date +%s\` ) / (24*3600)" | bc -l`
Diff1=${Diff/.*}
if [ "$Diff1" -ge "90" ]
then
    # archive or delete the old data here
fi
Here we have used ':' as the field separator, then substr to extract just the date part, and finally sort and head -1 to find the least value.
The difference from today's date is computed with the bc command-line calculator, and then the decimals are removed.
Hope it helps.
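If Python is an option, here is a rough sketch of both steps in one pass: de-duplicate on the timestamp and drop entries older than 1.5 months. The 45-day cutoff, the script name archive_merge.py and the field positions are assumptions that simply mirror the awk commands above (timestamp in the 21st '|'-separated field, date in the 10 characters after the first ':'); the awk example compares against 90 days instead.

# archive_merge.py (hypothetical name)
# usage: python archive_merge.py file1 file2 > file3
import datetime
import sys

CUTOFF = datetime.date.today() - datetime.timedelta(days=45)  # ~1.5 months

seen = set()
for path in sys.argv[1:]:
    with open(path) as fh:
        for line in fh:
            fields = line.split("|")
            if len(fields) < 21:           # skip lines that do not match the format
                continue
            stamp = fields[20].strip()     # 21st '|' field, the timestamp, as in the awk command
            if stamp in seen:              # keep only the first occurrence of each timestamp
                continue
            seen.add(stamp)
            # the date is the 10 characters after the first ':', e.g. 2016-03-21
            day = datetime.datetime.strptime(line.split(":", 1)[1][:10], "%Y-%m-%d").date()
            if day >= CUTOFF:
                sys.stdout.write(line)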
Due to a few company-specific features, which I need to swap in and out, I sometimes have migrated scripts which are not present in the sql directory when I run "info" or "migrate" at a later time. I just noticed an inconsistency, though, in how this displays:
+----------------+----------------------------+---------------------+---------+
| Version | Description | Installed on | State |
+----------------+----------------------------+---------------------+---------+
...
| 4.1 | Add new reports synonyms | 2013-05-31 16:38:22 | Success |
| 4.1.1 | BRNC Add new reports synon | 2013-05-31 16:38:22 | Missing |
| 4.2 | Convert old DATA to DATA2 | 2013-05-31 16:38:22 | Success |
| 4.2.1 | BRNC Convert old DATA to D | 2013-05-31 16:38:22 | Future |
+----------------+----------------------------+---------------------+---------+
So, "Success" means that scripts have been run, and "Missing" means they were run and are no longer present. But what does "Future" mean?
This is similar but not identical to a question:
state of migration scripts is "future"
which was never officially answered, but where Axel Fontaine said in a comment that this had been fixed. I checked, and my jars (3/18) are dated later than his comment (3/2).
As it currently stands this is what these mean:
missing -> executed, no longer found in configured locations, older than the newest found script
future -> executed, no longer found in configured locations, newer than the newest found script
Come to think of it though, I feel this minor distinction might not be worth a separate state in the info results. I will revisit this in time for 2.2.
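A toy sketch to make that missing/future rule concrete. This is not Flyway code; the helper and the naive version ordering below are made up for illustration and only handle dotted numeric versions like the ones in the table.

def classify(executed_version, found_versions):
    # an executed migration no longer on disk is "missing" if it is older than
    # the newest script still found, and "future" if it is newer
    key = lambda v: tuple(int(p) for p in v.split("."))
    newest_found = max(found_versions, key=key)
    return "future" if key(executed_version) > key(newest_found) else "missing"

found = ["4.1", "4.2"]            # scripts still present in the sql directory
print(classify("4.1.1", found))   # missing (older than 4.2)
print(classify("4.2.1", found))   # future (newer than 4.2)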