I'm challenged with a Leave Table setup issue and would like some guidance.
Background: I have a division at work where employees do not accumulate any vacation time in their first year of service. All accrued vacation time is backloaded: you receive the hours the following calendar year based on the previous year's service. I am having issues setting up the accrual service for the First Year Award Values, because when I try to set the "Month Eligible" field to 13, it gives me an error. Screenshots can be provided, or I can try to explain this better, but I'm open to any suggestions since I have a test environment to play around with this setup.
Example 1:
DOH = Jan 1, 2015. On Jan 1, 2016, the member would accrue 10 days based on service from Jan 1, 2015 to Dec 31, 2015; on Jan 1, 2017, the member would accrue 10 days based on service from Jan 1, 2016 to Dec 31, 2016.
The breakdown for the 1st year of service is prorated based on month of hire:
Example 2:
DOH = Feb 1, 2015. On Jan 1, 2016, the member would accrue 9 days based on service from Feb 1, 2015 to Dec 31, 2015; on Jan 1, 2017, the member would accrue 10 days based on service from Jan 1, 2016 to Dec 31, 2016.
Example 3:
DOH = Mar 1, 2015. On Jan 1, 2016, the member would accrue 8 days based on service from Mar 1, 2015 to Dec 31, 2015; on Jan 1, 2017, the member would accrue 10 days based on service from Jan 1, 2016 to Dec 31, 2016.
The breakdown continues in the same way through the 12th month of hire.
Example 4:
DOH = Dec 1, 2015. On Jan 1, 2016, the member would accrue 0 days based on service from Dec 1, 2015 to Dec 31, 2015; on Jan 1, 2017, the member would accrue 10 days based on service from Jan 1, 2016 to Dec 31, 2016.
Will this be part of the "Special Calculation Routine" checkbox?
I suggest using the Service Calc at Year Begin box instead. That will calculate leave accruals based on service as of Jan. 1 of the current year. For the accrual setup, try the following:
Service Units = Months
Accrual Rate Units = Hours per Year (Award Frequency = First Run of Year)
First Year Award Values ==> NOT USED
Accrual Rate Values (You did not indicate subsequent years, so you may need more intervals.)
After Service Interval Accrue Hours At
13 Service Months 10 Hours per Year
Service Bonus Values (Assuming no accrual if hired after October)
After Service Interval Award Bonus Hours
3 Service Months 1.000000
4 Service Months 1.000000
5 Service Months 1.000000
6 Service Months 1.000000
7 Service Months 1.000000
8 Service Months 1.000000
9 Service Months 1.000000
10 Service Months 1.000000
11 Service Months 1.000000
12 Service Months 1.000000
The SBVs plus Service Calc at Year Begin should cover your first-year requirement, but you may need to tweak the setup if I did not understand it correctly.
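As a sanity check, the Service Bonus Value table implements the proration arithmetic from the examples above; here is a small sketch of that arithmetic (plain Python with a hypothetical helper name, one bonus unit per threshold as in the 3-12 table):

```python
def first_year_award(hire_month, bonus_thresholds=range(3, 13)):
    """Units awarded on the Jan 1 following the hire year.

    hire_month: 1 (January) through 12 (December) of the prior year.
    Service months as of Jan 1 = 13 - hire_month; one bonus unit is
    awarded for each Service Bonus Value threshold already reached.
    (Hypothetical helper; thresholds mirror the 3-12 table above.)
    """
    service_months = 13 - hire_month
    return sum(1 for t in bonus_thresholds if service_months >= t)
```

This reproduces the examples: a January hire gets 10, February 9, March 8, down to 0 for November and December hires.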
I have a data frame ordered by month and year. I want to select only an integer number of years: if the data starts in July 2002 and ends in September 2010, then select only the data from July 2002 to June 2010.
And if the data starts in September 1992 and ends in March 2000, then select only the data from September 1992 to August 1999, regardless of any missing months in between.
The code
mydata <- read.csv("E:/mydata.csv", stringsAsFactors=TRUE)
This is the manual selection:
selected.data <- mydata[1:73,] # July 2002 to June 2010
How can I achieve this programmatically?
Here is a base R solution that reproduces your manual subsetting:
mydata <- read.csv("D:/mydata.csv", stringsAsFactors=F)
lookup <-
c(
January = 1,
February = 2,
March = 3,
April = 4,
May = 5,
June = 6,
July = 7,
August = 8,
September = 9,
October = 10,
November = 11,
December = 12
)
mydata$Month <- unlist(lapply(mydata$Month, function(x) lookup[match(x, names(lookup))]))
first.month <- mydata$Month[1]
last.year <- max(mydata$Year)
mydata[1:which(mydata$Month==(first.month -1)&mydata$Year==last.year),]
Basically, I convert the month names to numbers and then subset up to the month preceding the first month of the data, within the last year of the data frame.
Here's a base R one-liner:
result <- mydata[seq_len(with(mydata, which(Month == month.name[match(Month[1],
month.name) - 1] & Year == max(Year)))), ]
head(result)
# Month Year var
#1 July 2002 -91.22997
#2 October 2002 -91.19007
#3 December 2002 -91.05395
#4 February 2003 -91.16958
#5 March 2003 -91.17881
#6 April 2003 -91.15110
tail(result)
# Month Year var
#68 December 2009 -90.92610
#69 January 2010 -91.07379
#70 February 2010 -91.12460
#71 March 2010 -91.10288
#72 April 2010 -91.06040
#73 June 2010 -90.94212
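One caveat with the answers above: they assume the month preceding the start month actually occurs in the last year of the data, which holds for July 2002 to September 2010 but not for the September 1992 to March 2000 example (the cut should be August 1999, not August 2000). A month-index formulation covers both cases; here is a pure-Python sketch of the idea (the same arithmetic ports directly to R):

```python
def whole_years(rows):
    """Keep only a whole number of years of monthly rows.

    rows: chronologically ordered list of (year, month) pairs, month 1-12;
    gaps are allowed. The cut-off is the last month that completes a full
    year counted from the starting month, so July 2002 - September 2010
    is cut at June 2010, and September 1992 - March 2000 at August 1999.
    (Pure-Python sketch, not the original data frame.)
    """
    def idx(y, m):
        return y * 12 + (m - 1)          # months since year 0

    start, last = idx(*rows[0]), idx(*rows[-1])
    k = (last - start + 1) // 12         # complete years available
    end = start + k * 12 - 1             # last month of the k-th year
    return [(y, m) for (y, m) in rows if idx(y, m) <= end]
```

Because the cut is a comparison rather than an exact match, missing months in between do not break it.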
I have cumulative population totals for the end of each month for two years (2016 and 2017). I would like to combine the two years, treat each month's cumulative total as a repeated measure (one per year), and fit a nonlinear growth model to these data. The goal is to determine whether our 2018 cumulative monthly totals are on track to meet our higher 2018 year-end population goal, by raising the model's asymptote to the 2018 year-end goal. Ideally I would also like a confidence interval in the model that reflects the between-year variability at each month.
My columns in my data.frame are as follows:
- Year is year
- Month is month
- Time is the month's number (1-12)
- Total is the month-end cumulative population total
- Norm is the proportion of year-end total for that month
- log is the log-transformed Total
Year Month Total Time Norm log
1 2016 January 3919 1 0.2601567 8.273592
2 2016 February 5887 2 0.3907993 8.680502
3 2016 March 7663 3 0.5086962 8.944159
4 2016 April 8964 4 0.5950611 9.100972
5 2016 May 10014 5 0.6647637 9.211739
6 2016 June 10983 6 0.7290892 9.304104
7 2016 July 11775 7 0.7816649 9.373734
8 2016 August 12639 8 0.8390202 9.444543
9 2016 September 13327 9 0.8846920 9.497547
10 2016 October 13981 10 0.9281067 9.545455
11 2016 November 14533 11 0.9647504 9.584177
12 2016 December 15064 12 1.0000000 9.620063
13 2017 January 3203 1 0.2163458 8.071843
14 2017 February 5192 2 0.3506923 8.554874
15 2017 March 6866 3 0.4637622 8.834337
16 2017 April 8059 4 0.5443431 8.994545
17 2017 May 9186 5 0.6204661 9.125436
18 2017 June 10164 6 0.6865248 9.226607
19 2017 July 10970 7 0.7409659 9.302920
20 2017 August 11901 8 0.8038501 9.384378
21 2017 September 12578 9 0.8495778 9.439705
22 2017 October 13422 10 0.9065856 9.504650
23 2017 November 14178 11 0.9576494 9.559447
24 2017 December 14805 12 1.0000000 9.602720
Here is my data plotted as a scatter plot:
Should I treat the two years as separate models or can I combine all the data into one?
I've been able to calculate the intercept and the growth parameter for just 2016 using the following code:
coef(lm(logit(df_tot$Norm[1:12]) ~ df_tot$Time[1:12]))
and got a non-linear least squares regression for 2016 with this code:
fit <- nls(Total ~ phi1/(1+exp(-(phi2+phi3*Time))), start = list(phi1=15064, phi2 = -1.253, phi3 = 0.371), data = df_tot[c(1:12),], trace = TRUE)
Any help is more than appreciated! Time-series nonlinear modeling is not my strong suit, and googling hasn't gotten me very far.
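Since the asymptote is the quantity you want to fix anyway (the year-end goal), one option is to pool both years and linearize the logistic, as your logit() call already hints: with phi1 fixed, logit(Total/phi1) = phi2 + phi3*Time is an ordinary linear regression, and repeated time points (one per year) pool naturally under least squares. A pure-Python sketch under that assumption (the same idea works with lm() in R):

```python
import math

def fit_logistic_fixed_asymptote(times, totals, asymptote):
    """Estimate phi2, phi3 in Total = asymptote/(1 + exp(-(phi2 + phi3*t)))
    with the asymptote held fixed (e.g. the year-end goal), by linearizing:
        logit(Total / asymptote) = phi2 + phi3 * t
    Points at or above the asymptote are dropped (logit undefined there).
    Repeated time points (one per year) pool naturally in the regression.
    """
    pts = [(t, v / asymptote) for t, v in zip(times, totals)
           if 0 < v / asymptote < 1]
    xs = [t for t, _ in pts]
    ys = [math.log(p / (1 - p)) for _, p in pts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    phi3 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    phi2 = my - phi3 * mx
    return phi2, phi3
```

The resulting phi2 and phi3 also make good starting values for nls() if you prefer to fit the untransformed model; note that December (Norm = 1) has to be excluded from the logit fit either way.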
I want to get data between Month1/Year1 and Month2/Year2, for example from Jan 2016 to Jan 2017.
I have a query like:
select * from customer cr where
EXTRACT(MONTH FROM cr.regdate) between Month1 and Month2 AND
EXTRACT(YEAR FROM cr.regdate) between Year1 and Year2
This query gives the wrong result when I take data from Jan 2016 to Jan 2017. I expect data for Jan 2016, Feb 2016, Mar 2016, and so on through Jan 2017,
but it returns results only for Jan 2016 and Jan 2017. Please guide me.
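The problem is that the two BETWEENs are evaluated independently: for Jan 2016 to Jan 2017, the month condition becomes BETWEEN 1 AND 1, which rejects February through December even though those months fall inside the range. Comparing a single combined month index (year * 12 + month) fixes it; a small Python sketch of the logic:

```python
def in_range_broken(y, m, y1, m1, y2, m2):
    # Mirrors the query above: month and year are checked independently,
    # so Feb 2016 fails the month test for the range Jan 2016 - Jan 2017.
    return m1 <= m <= m2 and y1 <= y <= y2

def in_range_fixed(y, m, y1, m1, y2, m2):
    # Compare a single month index instead: year * 12 + month
    return y1 * 12 + m1 <= y * 12 + m <= y2 * 12 + m2
```

In SQL the equivalent predicate would be EXTRACT(YEAR FROM cr.regdate) * 12 + EXTRACT(MONTH FROM cr.regdate) BETWEEN Year1 * 12 + Month1 AND Year2 * 12 + Month2, though a plain date comparison against the two boundary dates is usually friendlier to indexes.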
I have a specific problem where I have records in my DB table like the following:
LastUpdated
10 January 2017
(The dates are stored in the database as a DateTime type.)
Now I need to fetch the difference in days between today's date and the last one, including today's date. For example, today is the 12th, so the number of days would be 2.
Now the second part of the problem is that I have another table setup like the following:
TransactionDate
1 January 2017
2 January 2017
3 January 2017
4 January 2017
5 January 2017
6 January 2017
7 January 2017
8 January 2017
9 January 2017
10 January 2017
So now, after I perform a LINQ query, the updated results in my DB table would look like the following:
3 January 2017
4 January 2017
5 January 2017
6 January 2017
7 January 2017
8 January 2017
9 January 2017
10 January 2017
11 January 2017
12 January 2017
So basically I'm trying to get the difference between the current date and the last-updated date and add that many days to the transaction details table. After adding them, I want to remove as many of the oldest days as were added, so that the total date span remains 10 days.
Can someone help me out with this?
Edit: this is the code so far:
var usr = ctx.SearchedUsers.FirstOrDefault(x => x.EbayUsername == username);
var TotalDays = (DateTime.Now - usr.LastUpdatedAt).Value.TotalDays;
Is this the correct way of fetching the difference between two dates as mentioned above?
Now after this I perform an HTTP request where I get the remaining two dates and insert it like:
ctx.TransactionDetails.Add(RemainingTwoDates);
ctx.SaveChanges();
Now I have dates spanning from the 1st of January to the 12th of January, but I want to remove the 1st and 2nd of January so that the total range stays at 10 days.
How can I do this?
You can remove transaction dates that are older than 10 days ago.
ctx.TransactionDetails.Add(RemainingTwoDates);
//Now you want to remove those older than 10 days
var today = DateTime.Today;
var tenDaysAgo = today.AddDays(-10);
var oldTransactions = ctx.TransactionDetails.Where(t => t.TransactionDate <= tenDaysAgo).ToList();
foreach (var t in oldTransactions) {
ctx.TransactionDetails.Remove(t);
}
ctx.SaveChanges();
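The overall append-then-trim cycle the question describes (fill in the missing dates up to today, then keep only the newest ten) can be sketched like this in plain Python (hypothetical names, not the actual LINQ entities):

```python
import datetime as dt

def top_up_and_trim(dates, today, window=10):
    """Append the missing dates between the last stored date and today,
    then drop the oldest entries so exactly `window` days remain.
    (Plain-Python sketch of the append-then-trim cycle above.)
    """
    last = max(dates)
    missing = [last + dt.timedelta(days=i)
               for i in range(1, (today - last).days + 1)]
    dates = sorted(dates) + missing
    return dates[-window:]
```

With dates 1-10 January and today = 12 January, this appends the 11th and 12th and drops the 1st and 2nd, matching the tables in the question.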
I am using the sentiment analysis function sentiment_by() from the R package sentimentr (by trinker). I have a data frame containing the following columns:
review comments
month
year
I ran the sentiment_by function on the data frame to find the average polarity score by year and month, and I get the following values.
review_year review_month word_count sd ave_sentiment
2015 March 8722 0.381686065 0.163440921
2015 April 7758 0.387046768 0.158812775
2015 May 7333 0.389256472 0.149220636
2015 November 14020 0.394711478 0.14691745
2016 February 7974 0.400406931 0.142345278
2015 September 8238 0.379989344 0.141740366
2015 February 7642 0.361415304 0.141624745
2015 December 24863 0.387409099 0.141606892
2016 March 8229 0.389033232 0.138552943
2016 January 10472 0.388300946 0.134302612
2015 August 7520 0.3640285 0.127980712
2016 May 3432 0.422246851 0.125041218
2015 June 8678 0.356612924 0.119333949
2015 January 9930 0.351126449 0.119225549
2016 April 9344 0.397066458 0.111879315
2015 July 8450 0.349963536 0.108881821
2015 October 7630 0.38017201 0.1044298
Now I run the sentiment_by function on the comments alone, and then run the following on the resulting data frame to find the average polarity score by year and month:
sentiment_df[,list(avg=mean(ave_sentiment)),by="month,year"]
I get the following results.
month year avg
January 2015 0.110950199
February 2015 0.126943461
March 2015 0.146546669
April 2015 0.148264268
May 2015 0.143924126
June 2015 0.110691204
July 2015 0.106472437
August 2015 0.118976304
September 2015 0.135362187
October 2015 0.111441484
November 2015 0.137699548
December 2015 0.136786867
January 2016 0.128645808
February 2016 0.129139898
March 2016 0.134595706
April 2016 0.12106743
May 2016 0.142801514
As per my understanding, both approaches should return the same results; correct me if I am wrong. The reason I went for the second approach is that I need the average polarity both by month and year, and by month alone, and I don't want to run the method twice because of the additional runtime. Could someone let me know what I am doing wrong here?
Here is an idea: the first function is probably averaging over the individual sentences, while the second takes the average of "ave_sentiment", which is itself already an average. The average of averages is not, in general, equal to the average of the individual elements.
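That size-weighting difference is easy to demonstrate with toy numbers (made-up scores, not sentimentr output): the grand mean weights every element equally, while the mean of group averages weights every group equally, so the two disagree whenever group sizes differ.

```python
# Two "groups" of sentence-level scores with different sizes.
groups = [[0.1, 0.2, 0.3], [0.9]]
all_scores = [s for g in groups for s in g]

# Grand mean: every sentence counts once -> 1.5 / 4 = 0.375
grand_mean = sum(all_scores) / len(all_scores)

# Mean of per-group averages: every group counts once -> (0.2 + 0.9) / 2 = 0.55
mean_of_means = sum(sum(g) / len(g) for g in groups) / len(groups)
```

So if sentiment_by aggregates at the sentence (or word-count-weighted) level while your second pass averages per-comment ave_sentiment values, differing comment lengths per month are enough to produce exactly the discrepancies in your two tables.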