Qualtrics: total duration and branching?

On Qualtrics surveys, is it possible to combine the embedded total-duration variable
sb1 = ${e://Field/Q_TotalDuration}
with a branching condition to move straight to the end of survey?
For example, suppose I want respondents to spend no more than 60 seconds on the survey, and to be moved to the end of the survey when time is up. If I use a branching condition on the sb1 variable after each block to do it, the last response is still recorded even after time is up.
I tried to get around this by pasting the following JavaScript into each question, but it's not working:
var that = this;
var elapsed = parseInt("${e://Field/Q_TotalDuration}") - parseInt("${e://Field/sb1}");
if(elapsed >= 60) { that.clickNextButton() }
Thanks in advance for any suggestions.

Related

Is it possible to use R as a stopwatch to measure the time elapsed between keystrokes?

I am developing an R package to measure the behavioural responses of animals toward odour sources. To achieve this I need to be able to record the amount of time (in seconds) that an individual spends in five predetermined zones and how many times they enter each zone.
I am struggling to find any information that suggests R can be used to do what I need, although I'm sure it can!
Essentially, I want to assign each zone to a key that I can press when an individual enters that zone, and have R measure the total amount of time spent in each zone and the number of times it was entered. I have searched extensively on the forum to see whether something similar has been achieved previously and found two relevant threads:
Time user input from first keystroke in R
How to allow multiple inputs from user using R?
However, neither of these threads fully enables me to measure the required values.
Any help would be greatly appreciated. Many thanks.
So based on the two very helpful threads that you linked, you could try something like:
require(tictoc)  # load required package
while (TRUE) {  # open infinite loop
  tic()  # start timer
  input_state <- readline("State input: ")  # read the entered state
  if (input_state %in% as.character(1:5)) {  # check whether it is an accepted state
    elapsed <- toc()  # if so, stop the timer and record the data
    write.table(cbind(input_state, elapsed$toc - elapsed$tic),
                "results.txt", col.names = FALSE, row.names = FALSE,
                quote = FALSE, append = TRUE)
  } else if (input_state == "t") {  # 't' terminates the session
    break
  } else {  # anything else is not an accepted state
    print("that's not an allowed state - please try another")
  }
}
Then, to get the number of times each state was entered, you can do:
data <- read.table("results.txt", stringsAsFactors = FALSE, header = FALSE)
table(data[, 1])
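The original question also asked for the total time spent in each zone, which the table() call above does not give. Both summaries amount to a group-by over the recorded (state, seconds) pairs; here is the idea sketched in Python (the function name and record layout are illustrative, mirroring the two columns written to results.txt):

```python
from collections import defaultdict

def summarise(records):
    """records: (state, seconds) pairs, one per visit to a zone.
    Returns {state: (entry_count, total_seconds)}."""
    counts = defaultdict(int)
    totals = defaultdict(float)
    for state, secs in records:
        counts[state] += 1      # one more entry into this zone
        totals[state] += secs   # accumulate time spent in it
    return {s: (counts[s], totals[s]) for s in counts}

print(summarise([("1", 2.0), ("3", 1.5), ("1", 0.5)]))
# {'1': (2, 2.5), '3': (1, 1.5)}
```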

Comparing times within two vectors and finding the nearest for each element in R

I am trying to move beyond basic programming towards something more sophisticated. Could you help me adjust this code?
There are two vectors of dates and times: one holds the times when activities happen, the other the times when triggers appear. The aim is to find, for each trigger, the nearest activity date/time after that trigger. The final result is the average of all the differences.
I have this code. It works, but it's very slow on a large dataset.
time_activities <- as.POSIXct(c("2008-09-14 22:15:14", "2008-09-15 09:05:14", "2008-09-16 14:05:14", "2008-09-17 12:05:14"), format = "%Y-%m-%d %H:%M:%S")
time_triggers <- as.POSIXct(c("2008-09-15 06:05:14", "2008-09-17 12:05:13"), format = "%Y-%m-%d %H:%M:%S")
result <- numeric(length(time_triggers))  # pre-allocate the result vector
for (j in 1:length(time_triggers)) {
  for (i in 1:length(time_activities)) {
    if (time_triggers[j] < time_activities[i]) {
      # record the gap to the first activity after this trigger
      result[j] <- ceiling(difftime(time_activities[i], time_triggers[j], units = "mins"))
      break
    }
  }
}
print(mean(as.numeric(result)))
Can I somehow get rid of the loop and do everything with vectors? Maybe you can give me a hint about which function I could use to compare the dates all at once?
delay <- sapply(time_triggers, function(x) {
  d <- difftime(x, time_activities, units = "mins")  # negative when the activity follows the trigger
  max(d[d < 0])  # the closest activity after the trigger
})
mean(delay[is.finite(delay)])
This should do the trick. As always, the apply family of functions is a good replacement for a for loop.
This gives the average number of minutes that an activity occurred after a trigger.
If you want to see what the activity delay was after each trigger (rather than just the mean of all the triggers), you can just remove the mean() at the beginning. The values will then correspond to each value in time_triggers.
UPDATE:
I updated the code to ignore Inf values, as requested. Sadly, this means the code is now two lines rather than one. If you really want, you can make it all one line, but then you will be doing the majority of the computation twice (not very efficient).
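For very large vectors, the sapply approach still computes every pairwise difference. If the activities are sorted, a binary search per trigger finds the same nearest-after activity in O(n log n) total. A sketch of that idea in Python, using the sample dates from the question (variable names are illustrative):

```python
from bisect import bisect_right
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S"
activities = sorted(datetime.strptime(s, fmt) for s in [
    "2008-09-14 22:15:14", "2008-09-15 09:05:14",
    "2008-09-16 14:05:14", "2008-09-17 12:05:14",
])
triggers = [datetime.strptime(s, fmt) for s in
            ["2008-09-15 06:05:14", "2008-09-17 12:05:13"]]

delays = []
for t in triggers:
    i = bisect_right(activities, t)  # first activity strictly after t
    if i < len(activities):
        delays.append((activities[i] - t).total_seconds() / 60.0)

print(sum(delays) / len(delays))  # average delay in minutes
```

Triggers with no later activity are simply skipped, which mirrors dropping the Inf values above.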

R - Cluster x number of events within y time period

I have a dataset of 59k entries recorded over 63 years, and I need to identify clusters of events, the criterion being:
6 or more events within 6 hours
Each event has a unique ID, a time (HH:MM:SS) and a date (DD:MM:YY). The output would ideally have a cluster ID, the events that took place within each cluster, and the start and finish time and date.
Thinking about the problem in R: we would need to look at every date/time and count the number of events in the following 6 hours; if the number is 6 or greater, save the event IDs, and if not, move on to the next date/time and perform the same task. I have taken a data extract that just contains EventID, Date, Time and Year.
https://dl.dropboxusercontent.com/u/16400709/StackOverflow/DataStack.csv
If I come up with anything in the meantime I will post below.
Update: having taken a break to think about the problem, I have a new approach.
Add 6 hours to the date/time of each event, then count the number of events that fall within that start-end window; if there are 6 or more, take the event IDs and assign them a cluster ID. Then move on to the next event and repeat, as a loop over all 59k events.
Don't use clustering. It's the wrong tool, and the wrong term. You are not looking for abstract "clusters", but for something much simpler and much better defined. In particular, your data is one-dimensional, which makes things a lot easier than the multivariate case omnipresent in clustering.
Instead, sort your data and use a sliding window.
If your data is sorted and time[x+5] - time[x] < 6 hours, then the events x through x+5 satisfy your condition.
Sorting is O(n log n), but highly optimized. The remainder is O(n) in a single pass over your data. This will beat every single clustering algorithm, because they don't exploit your data characteristics.
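The sorted-window check above can be sketched as follows, in Python for illustration (assuming the timestamps have already been parsed and sorted; merging overlapping windows into a single cluster ID is left out):

```python
from datetime import datetime, timedelta

def find_clusters(times, k=6, span=timedelta(hours=6)):
    """Return (first, last) index pairs where k consecutive sorted
    events all fall within `span` of each other."""
    return [(x, x + k - 1)
            for x in range(len(times) - k + 1)
            if times[x + k - 1] - times[x] < span]

base = datetime(2020, 1, 1)
# six events 10 minutes apart, then one a day later
times = [base + timedelta(minutes=10 * i) for i in range(6)]
times.append(base + timedelta(days=1))
print(find_clusters(times))  # [(0, 5)]
```

A second pass over the returned pairs would merge overlapping windows into single cluster IDs; the scan itself is the O(n) part that follows the O(n log n) sort.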

VB or macro to exclude period of times from time duration calculation in Excel

I have an Excel table which contains thousands of incident tickets. Each ticket typically spans a few hours or a few days, and I usually calculate the total duration by subtracting the opening date and time from the closing date and time.
However, I would like to exclude out-of-office hours (night time), weekends and holidays from the calculation.
I have therefore created additional reference tables that list the non-working hours (e.g. every day from 7pm until 7am, all day Saturday and Sunday, and the public holidays).
Now I need some sort of VBA macro that would automatically calculate each ticket's "real duration" by removing from the total ticket time any time that falls within that list.
I had a look around this website and other forums, however I could not find what I am looking for. If someone can help me achieve this, I would be extremely grateful.
Best regards,
Alex
You can use the NETWORKDAYS function to calculate the number of working days in the interval. You actually seem to be perfectly set up for it: it takes a start date, an end date and a pointer to a range of holidays. By default it counts all non-weekend days.
For calculating the intraday time, you will need some additional magic. Assuming that tickets are only opened and closed during business hours, it would look like this:
first_day_hrs := dayend - ticketstart
last_day_hrs := ticketend - daystart
inbetween_hrs := (NETWORKDAYS(ticketstart, ticketend, rng_holidays) - 2) * (dayend - daystart)
total_hrs := first_day_hrs + inbetween_hrs + last_day_hrs
Of course, in reality the names should refer to Excel cells; I recommend using lists and/or defined names.
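To check the arithmetic, the same scheme can be sketched outside Excel. A minimal Python version under the same assumption that tickets open and close during business hours (the 07:00-19:00 window mirrors the question; all names, and the pure-Python stand-in for NETWORKDAYS, are illustrative):

```python
from datetime import date, datetime, time, timedelta

DAY_START = time(7, 0)   # assumed business day: 07:00-19:00
DAY_END = time(19, 0)

def workdays(start, end, holidays=frozenset()):
    """Count non-weekend, non-holiday days from start to end inclusive
    (a stand-in for NETWORKDAYS)."""
    days, d = 0, start
    while d <= end:
        if d.weekday() < 5 and d not in holidays:
            days += 1
        d += timedelta(days=1)
    return days

def business_hours(opened, closed, holidays=frozenset()):
    """Duration between two datetimes, counting business hours only;
    assumes both fall within business hours on working days."""
    day_len = datetime.combine(date.min, DAY_END) - datetime.combine(date.min, DAY_START)
    if opened.date() == closed.date():
        return closed - opened
    first = datetime.combine(opened.date(), DAY_END) - opened
    last = closed - datetime.combine(closed.date(), DAY_START)
    inbetween = workdays(opened.date(), closed.date(), holidays) - 2
    return first + last + max(inbetween, 0) * day_len

print(business_hours(datetime(2021, 3, 1, 9, 0), datetime(2021, 3, 3, 12, 0)))
# 1 day, 3:00:00  (Mon 10h + Tue 12h + Wed 5h = 27 business hours)
```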

How to calculate time span from timestamps?

I have quite an interesting task at work: I need to find out how much time a user spent doing something, and all I have are the timestamps of their saves. I know for a fact that the user saves after each small portion of work, so the timestamps are not far apart.
The obvious solution would be to find out how much time one small item could possibly take, then go through the sorted timestamps: if the difference between the current one and the previous one is more than that, the user took a coffee break; if it's less, we just add the difference to the total. Simple example code to illustrate:
var prev_timestamp = null;
var total_time = 0;
foreach (timestamp in timestamps) {
    if (prev_timestamp != null) {
        var diff = timestamp - prev_timestamp;
        if (diff < threshold) {
            total_time += diff;
        }
    }
    prev_timestamp = timestamp;
}
The problem is, while I know roughly how much time one small portion takes, I don't want to depend on that number. What if some user is just that much slower than my prediction? I don't want them to be left without a paycheck. So I was wondering: is there some clever mathematical solution to this problem that works without knowing in advance what time interval is acceptable?
PS. Sorry for the misunderstanding: of course no one would pay people based on these numbers, and even if they did, they would understand that it is just an approximation. But I'd like to find a solution that produces numbers as close to real life as possible.
You could take the median TimeSpan and then discard those TimeSpans that are off by, say, more than 50%.
But this algorithm should IMHO only be used to estimate hours spent per project, not for payroll.
You need to look either at the standard deviation across the group of all users, or at the variance in the intervals for a single user, or better, at a combination of the two over your sample set.
Grab all periods and look at the average? If some are far outside the average span, you could discard them or use an adjusted value for them in the average.
I agree with Groo that using something based only on the 'save' timestamps is NOT what you should do - it will NEVER give you the actual time spent on the tasks.
The clever math you seek is called "standard deviation".
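The median-plus-cutoff idea suggested above can be sketched as follows, in Python; the 50% cutoff and the decision to drop only unusually long gaps are assumptions for illustration, not part of any answer here:

```python
from statistics import median

def estimate_work_time(timestamps, cutoff=0.5):
    """Sum the gaps between consecutive saves, discarding gaps more
    than `cutoff` (50%) above the median gap - i.e. likely breaks."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if not gaps:
        return 0
    m = median(gaps)
    return sum(g for g in gaps if g <= m * (1 + cutoff))

# four quick saves, a long break, then one more save
print(estimate_work_time([0, 10, 20, 30, 1000, 1010]))  # 40
```

As the answers note, this only approximates working time; the 970-unit break is excluded, but a user who genuinely works slowly across the board still keeps all their gaps, since the cutoff scales with their own median.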
