I have a script that runs every day at 1:00 AM.
But on every alternate Wednesday I need to change the time to 6:00 AM, which I currently do manually on each preceding Tuesday.
e.g.
Wednesday Nov 09 2016 6.00 AM.
Wednesday Nov 23 2016 6.00 AM.
Wednesday Dec 07 2016 6.00 AM.
The main thing is that on every Wednesday in between, the job should run at the regular time.
Using this bash trick it could be done with 3 cron entries (possibly 2):
# Every day except Wednesday at 1am
0 1 * * 0,1,2,4,5,6 yourCommand
# Every Wednesday at 1am; proceeds only on even weeks
0 1 * * 3 test $((10#$(date +\%W)\%2)) -eq 0 && yourCommand
# Every Wednesday at 6am; proceeds only on odd weeks
0 6 * * 3 test $((10#$(date +\%W)\%2)) -eq 1 && yourCommand
Change the -eq tests to 1 or 0 depending on whether you want to start with an odd or even week. As written it matches your example, because date +%W for Wednesday Nov 09 2016 is 45, which is odd, so the 6:00 AM entry fires.
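You can check which branch a given Wednesday hits from the command line (a quick sketch, assuming GNU date with its -d option):

```shell
# Week number (%W) and parity for each example date; parity 1 selects the 6am entry.
for d in 2016-11-09 2016-11-23 2016-12-07; do
  w=$(date -d "$d" +%W)
  echo "$d week=$w parity=$((10#$w % 2))"
done
```

All three example dates fall in odd-numbered weeks (45, 47, 49), so they match the -eq 1 entry.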
Related
I need to create a basic log file through the use of a crontab job that appends a timestamp, followed by a list of logged in users. It must be at 23:59 each night.
(I have used 18 18 * * * as an example to make sure the job works for now)
So far, I have:
#!/bin/bash
59 23 * * * (date ; who) >> /root/userlogfile.txt
for my crontab entry; the output is:
Fri Dec 9 18:18:01 UTC 2022
root console 00:00 Dec 9 18:15:15
My required output is something similar to:
Fri 09 Dec 23:59:00 GMT 2022
user1 tty2 2017-11-30 22:00 (:0)
user5 pts/1 2017-11-30 20:35 (192.168.1.1)
How would I go about this?
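One source of confusion above: a shebang line belongs in a script file, not in the crontab; the crontab line holds only the schedule plus the command. A minimal sketch of a command that produces output close to the required format (the TZ=GMT setting and the explicit date format string are assumptions; /root/userlogfile.txt is from the question):

```shell
# Crontab entry (edit with `crontab -e`) -- no shebang needed here:
#   59 23 * * * { TZ=GMT date +'\%a \%d \%b \%T \%Z \%Y'; who; } >> /root/userlogfile.txt
# The same command, runnable directly in a shell (no % escaping outside cron):
{ TZ=GMT date +'%a %d %b %T %Z %Y'; who; } >> /root/userlogfile.txt
```

Note that in a crontab, % is special and must be escaped as \%, which is why the crontab line differs from the interactive one.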
I am trying to pull some economic data from Investing.com. Here is a link to the non-farm payroll I am looking to pull.
https://ca.investing.com/economic-calendar/nonfarm-payrolls-227
As you can see, once you click the show more button, more rows are loaded. I would like to scrape all the hidden data in the table.
If you inspect the page you can quite easily see the html tags associated with each row. I was wondering if there was an easy way to scrape the data without using R selenium.
Here is my current code, which only returns the 6 rows initially shown when first entering the site.
library(rvest)

x <- read_html("https://ca.investing.com/economic-calendar/nonfarm-payrolls-227") %>%
  html_nodes('table') %>%
  .[1] %>%
  html_table(fill = TRUE)
print(x)
# Release Date Time Actual Forecast Previous
1 May 03, 2019 (Apr) 08:30 263K 181K 189K NA
2 Apr 05, 2019 (Mar) 08:30 196K 175K 33K NA
3 Mar 08, 2019 (Feb) 09:30 20K 181K 311K NA
4 Feb 01, 2019 (Jan) 09:30 304K 165K 222K NA
5 Jan 04, 2019 (Dec) 09:30 312K 178K 176K NA
6 Dec 07, 2018 (Nov) 09:30 155K 200K 237K NA
Edited for clarity:
I'm using R and I have a set of data that consists of order days:
> orders <- data.frame(order.num=1:4,
+ day = c("Mon", "Mon", "Mon", "Tue"))
> orders
order.num day
1 1 Mon
2 2 Mon
3 3 Mon
4 4 Tue
...
Orders typically come in on a consistent day (Monday in example above), but sometimes they come in on an alternate day (Tuesday in example above).
Here is the actual data, spread into columns using the tidyr::spread function:
Outlet.number Sun Mon Tue Wed Thu Fri Sat
1 1 0 530 162 0 629 49 0
2 2 0 784 123 0 854 65 0
3 3 24 15 483 0 365 0 0
For Outlet 1, the "typical" order days are Monday and Thursday
For Outlet 2, the "typical" order days are Monday and Thursday
For Outlet 3, the "typical" order days are Tuesday and Thursday
I want to be able to predict if an order on an atypical day (e.g. Tue for Outlet 1) is more likely to be associated with the first typical day (Monday) or the second typical day (Thursday)
None of these outlets has any orders on Wednesday, so I was able to hard-code this small set, but for future outlets Wednesday may be either a typical or an atypical day.
Is there a way to ingest the data as shown above and then classify them?
There are a few similar queries to mine but I can't quite figure it out. In Access 2010 I have one table with three columns, day, week and number.
Day Week Number
Monday 1 12
Monday 2 24
Tuesday 2 10
Thursday 1 12
Monday 1 10
Tuesday 2 10
I want to be able to count (sum) the total "number" for Monday in Week 1, Monday in Week 2 etc.
Day Week Total
Monday 1 22
Monday 2 24
Tuesday 2 20
Thursday 1 12
You need a TOTALS Query.
SELECT
yourTable.dayFieldColumn,
yourTable.weekFieldColumn,
Sum(yourTable.numberColumnName) As TotalSum
FROM
yourTable
GROUP BY
yourTable.dayFieldColumn,
yourTable.weekFieldColumn;
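If you want to sanity-check the aggregation outside Access, the same GROUP BY runs almost unchanged in other engines; a sketch with sqlite3 (a different engine, and the table/column names are invented to match the sample data):

```shell
sqlite3 <<'SQL'
CREATE TABLE t (day TEXT, week INTEGER, number INTEGER);
INSERT INTO t VALUES
  ('Monday',1,12), ('Monday',2,24), ('Tuesday',2,10),
  ('Thursday',1,12), ('Monday',1,10), ('Tuesday',2,10);
SELECT day, week, SUM(number) AS Total
FROM t
GROUP BY day, week
ORDER BY day, week;
SQL
```

This prints Monday|1|22, Monday|2|24, Thursday|1|12, Tuesday|2|20, matching the totals in the question.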
I have log files with time stamps. I want to search for text between two time stamps using sed even if the first time stamp or the last time stamp are not present.
For example, if I search between 9:30 and 9:40 then it should return text even if neither 9:30 nor 9:40 is there but the time stamp is between 9:30 and 9:40.
I am using a sed one liner:
sed -n '/7:30:/,/7:35:/p' xyz.log
But it only returns data if both time stamps are present; if the closing time stamp is missing it will print everything from the first match to the end of the file. And if the time is in 12-hour format it will pull data for both AM and PM.
Additionally, I have different time stamp formats for different log files so I need a generic command.
Here are some time format examples:
<Jan 27, 2013 12:57:16 AM MST>
Jan 29, 2013 8:58:12 AM
2013-01-31 06:44:04,883
Some of them contain AM/PM, i.e. 12-hour format, and others use 24-hour format, so I have to account for that as well.
I have tried this as well but it doesn't work:
sed -n -e '/^2012-07-19 18:22:48/,/2012-07-23 22:39:52/p' history.log
With the serious medley of time formats you have to parse, sed is not the correct tool to use. I'd automatically reach for Perl, but Python would do too, and you probably could do it in awk if you put your mind to it. You need to normalize the time formats (you don't say anything about date, so I assume you're working only with the time portion).
#!/usr/bin/env perl
use strict;
use warnings;

use constant debug => 0;

my $lo = "09:30";
my $hi = "09:40";
my $lo_tm = to_minutes($lo);
my $hi_tm = to_minutes($hi);

while (<>)
{
    print "Read: $_" if debug;
    if (m/\D\d\d?:\d\d:\d\d/)
    {
        my $tm = normalize_hhmm($_);
        print "Normalized: $tm\n" if debug;
        print $_ if ($tm >= $lo_tm && $tm <= $hi_tm);
    }
}

# Convert "HH:MM" to minutes since midnight; reject out-of-range values.
sub to_minutes
{
    my($val) = @_;
    my($hh, $mm) = split /:/, $val;
    if ($hh < 0 || $hh > 24 || $mm < 0 || $mm >= 60 || ($hh == 24 && $mm != 0))
    {
        print STDERR "to_minutes(): garbage = $val\n";
        return undef;
    }
    return $hh * 60 + $mm;
}

# Extract the first HH:MM stamp on the line and normalize any AM/PM
# suffix to 24-hour minutes since midnight.
sub normalize_hhmm
{
    my($line) = @_;
    my($hhmm, $ampm) = $line =~ m/\D(\d\d?:\d\d):\d\d\s*(AM|PM|am|pm)?/;
    my $tm = to_minutes($hhmm);
    if (defined $ampm)
    {
        if ($ampm =~ /(am|AM)/)
        {
            $tm -= 12 * 60 if ($tm >= 12 * 60);    # 12:xx AM is 00:xx
        }
        else
        {
            $tm += 12 * 60 if ($tm < 12 * 60);     # 1:xx PM is 13:xx
        }
    }
    return $tm;
}
I used the sample data:
<Jan 27, 2013 12:57:16 AM MST>
Jan 29, 2013 8:58:12 AM
2013-01-31 06:44:04,883
Feb 2 00:00:00 AM
Feb 2 00:59:00 AM
Feb 2 01:00:00 AM
Feb 2 01:00:00 PM
Feb 2 11:00:00 AM
Feb 2 11:00:00 PM
Feb 2 11:59:00 AM
Feb 2 11:59:00 PM
Feb 2 12:00:00 AM
Feb 2 12:00:00 PM
Feb 2 12:59:00 AM
Feb 2 12:59:00 PM
Feb 2 00:00:00
Feb 2 00:59:00
Feb 2 01:00:00
Feb 2 11:59:59
Feb 2 12:00:00
Feb 2 12:59:59
Feb 2 13:00:00
Feb 2 09:31:00
Feb 2 09:35:23
Feb 2 09:36:23
Feb 2 09:37:23
Feb 2 09:35:00
Feb 2 09:40:00
Feb 2 09:40:59
Feb 2 09:41:00
Feb 2 23:00:00
Feb 2 23:59:00
Feb 2 24:00:00
Feb 3 09:30:00
Feb 3 09:40:00
and it produced what I consider the correct output:
Feb 2 09:31:00
Feb 2 09:35:23
Feb 2 09:36:23
Feb 2 09:37:23
Feb 2 09:35:00
Feb 2 09:40:00
Feb 2 09:40:59
Feb 3 09:30:00
Feb 3 09:40:00
I'm sure this isn't the only way to do the processing; it seems to work, though.
If you need to do date analysis, then you need to use one of the date or time manipulation packages from CPAN to deal with the problems. The code above also hard codes the times in the script. You'd probably want to handle them as command line arguments, which is perfectly doable, but isn't scripted above.
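As noted above, awk can also handle the simplest case. A sketch that works only for zero-padded 24-hour stamps (it compares the HH:MM part lexicographically, so it would misread the AM/PM formats; the log file name is illustrative):

```shell
awk -v lo="09:30" -v hi="09:40" '
  # Find the first HH:MM:SS stamp on the line and compare its HH:MM part.
  match($0, /[0-9][0-9]:[0-9][0-9]:[0-9][0-9]/) {
    t = substr($0, RSTART, 5)
    if (t >= lo && t <= hi) print
  }
' history.log
```

Lexicographic comparison is safe here only because both hours and minutes are zero-padded to two digits.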