How to define a section between ta.crossunder and ta.crossover to get max / min prices - crossover

My goal is to get the maximum value of the section between the ta.crossover and ta.crossunder.
Could you please help me figure out which function I should use for that?
Thanks a ton in advance!
I'm kinda lost in the process. I have the high and low values for the crosses themselves, but not for the section between them.

Related

Modify sliderInput legend

To select a certain period of time, I use the following sliderInput:
[sliderInput not shown]
In the 'legend' below, I would like to display only dates every five days, so in this case '2016-06-01', '2016-06-06', '2016-06-11', '2016-06-16', etc.
I have not found any possibility to do this in the documentation or on forums.
Any help would be appreciated.
Thanks in advance
https://shiny.rstudio.com/reference/shiny/latest/sliderInput.html
step
Specifies the interval between each selectable value on the slider (if NULL, a heuristic is used to determine the step size). If the values are dates, step is in days; if the values are times (POSIXt), step is in seconds.
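For illustration, a rough sketch (the input id, label, and date range here are placeholders): with Date values and step = 5, the slider only snaps every five days, so the selectable values become '2016-06-01', '2016-06-06', '2016-06-11', and so on; whether the tick legend under the slider labels exactly those dates also depends on the widget's grid.
library(shiny)

ui <- fluidPage(
  sliderInput(
    "period", "Period:",
    min        = as.Date("2016-06-01"),
    max        = as.Date("2016-07-01"),
    value      = c(as.Date("2016-06-01"), as.Date("2016-06-16")),
    step       = 5,                  # with Date values, step is in days
    timeFormat = "%Y-%m-%d"
  )
)

server <- function(input, output) {}

shinyApp(ui, server)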

How can I set max word count?

How can I set a limit on the counts that a Sphinx search returns?
For example, I have a faceted search over 2100 products with 190 filters (price, color, etc.) and the result time is 0.004 seconds. Very good for me.
But there's something I've wondered about.
Faceted search example
Blue(1700)
Yellow(676)
Green(224)
I want this:
Blue(999+) <- Sphinx should count up to a maximum of 1000 and no further. How can I do this? Is it possible?
Yellow(676)
Green(224)
Would it improve performance?
Thank you.
There is no setting for this.
And even if there was, I doubt it would improve performance. In the counting loop it would have to add a conditional to check whether the count is above a threshold and, if so, do nothing, otherwise increment the counter. It's less work to just increment anyway.
max_matches affects the number of groups returned more than the count within each group.
You can do it in application code for display purposes if you like.
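To illustrate the "do it in application code" idea, a rough sketch in R (the language used elsewhere on this page); counts is a hypothetical named vector standing in for whatever facet counts your search layer returns:
counts  <- c(Blue = 1700, Yellow = 676, Green = 224)
# Cap only the displayed value at 999+; the real count is untouched.
display <- ifelse(counts > 999, "999+", as.character(counts))
paste0(names(counts), "(", display, ")")   # "Blue(999+)" "Yellow(676)" "Green(224)"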
You can certainly set a limit on the number of results returned in Sphinx. With PHP, I can refer you to this:
http://php.net/manual/en/sphinxclient.setlimits.php
Otherwise, check out the setLimits() function in the Sphinx API here: http://sphinxsearch.com/docs/current.html

no rows to aggregate in R

I have a column in my data that is split by poor, Average and Good. I'm looking to find the Max and the Min number in a different column when one of those categories is selected.
This is what I have for the code:
aggregate(as.numeric(data_gt$Category), list(exptCount=data_gt$Employment_Rate),max)
Is there an easier way to do this, even if it takes more commands to individually get the max and min for each category?
Thanks for your help, I'm new to R so still learning the basics.
You can get this error message if you specify an incorrect variable name (e.g. data_gt$Category instead of data_gt$category).
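For the stated goal (the min and max of the numeric column within each category), a rough base-R sketch, assuming Employment_Rate is the numeric column and Category holds the poor/Average/Good labels:
# Min and max of Employment_Rate per Category in one call;
# FUN returns a named vector, so both show up as columns.
aggregate(
  Employment_Rate ~ Category,
  data = data_gt,
  FUN  = function(x) c(min = min(x), max = max(x))
)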

MDX for finding the time at which the max value occurred

I am new to MDX and I am trying to find a way to write a query that returns the date and time value from my datetime dimension at which a measure's max and min values occurred. I already get the max and min of my actual values by adding them to the cube as measures (right-clicking the actual value and selecting max and min in its properties). But I don't know how to get the time at which they happened within the start and end period of the query. My query looks like this right now; I need to add two more measures to show the date and time when the max value actual and the min value actual occurred.
Select
  { [Measures].[ItemKey], [Measures].[UTC],
    [Measures].[Value Actual], [Measures].[Min Value Actual], [Measures].[Max Value Actual] } on columns,
  { [Dim_Item].[ItemId].&[63678], [Dim_Item].[ItemId].&[63710] } on rows
from [Energy Aggregator]
Where (
  [Dim_DateTimeLocal].[CalenderLocalDateTime].[HourofDay].&[26]&[2012]&[12]&[21]&[6]&[0] :
  [Dim_DateTimeLocal].[CalenderLocalDateTime].[HourofDay].&[26]&[2012]&[12]&[25]&[3]&[0].lag(1)
)
I have been stuck on this for a while, any help would be greatly appreciated.
Thanks
-Sarah
Try creating a calculated member that's the TOPCOUNT (1) of the date dimension using the value measure. Might need some work to get it to play nice with filters & slicers.

How to calculate time span from timestamps?

I have quite an interesting task at work - I need to find out how much time a user spent doing something, and all I have are the timestamps of his saves. I know for a fact that the user saves after each small portion of work, so the timestamps are not far apart.
The obvious solution would be to find out how much time one small item could possibly take and then just go through the sorted timestamps: if the difference between the current one and the previous one is more than that, it means the user had a coffee break; if it's less, we can just add this difference to the total sum. Simple example code to illustrate that:
var prev_timestamp = null;
var total_time = 0;
foreach (timestamp in timestamps) {
    if (prev_timestamp != null) {
        var diff = timestamp - prev_timestamp;
        // Only count gaps small enough to be "work", not breaks.
        if (diff < threshold) {
            total_time += diff;
        }
    }
    prev_timestamp = timestamp;
}
The problem is, while I know roughly how much time is spent on one small portion, I don't want to depend on it. What if some user is just that much slower than my prediction? I don't want him to be left without a paycheck. So I was thinking: could there be some clever math solution to this problem that would work without knowing what time interval is acceptable?
PS. Sorry for the misunderstanding; of course no one would pay people based on these numbers, and even if they did, they would understand that it is just an approximation. But I'd like to find a solution that produces numbers as close to real life as possible.
You could get the median TimeSpan and then discard those TimeSpans which are off by, say, more than 50%.
But this algorithm should IMHO only be used to estimate hours spent per project, not for payroll.
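As an illustration of the median idea, a rough sketch in R (the language used elsewhere on this page); timestamps is assumed to be a vector of POSIXct save times, and the 1.5x cutoff is just one reading of "off by more than 50%":
estimate_time_spent <- function(timestamps) {
  # Gaps between consecutive saves, in seconds.
  gaps <- as.numeric(diff(sort(timestamps)), units = "secs")
  med  <- median(gaps)
  # Treat gaps more than 50% above the median as breaks and drop them.
  sum(gaps[gaps <= 1.5 * med])
}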
You need to look either at the standard deviation for the group of all users, or at the variance in the intervals for a single user, or better, at a combination of the two for your sample set.
Grab all periods and look at the average? If some are far outside the average span you could discard them or use an adjusted value for them in the average.
I agree with Groo that using something based only on the 'save' timestamp is NOT what you should do - it will NEVER provide you with the actual time spent on the tasks.
The clever math you seek is called "standard deviation".
