How to calculate luminance in HSL

Just checked this website http://www.workwithcolor.com/hsl-color-schemer-01.htm and I am curious how they get those numbers for Lum (luminance). I cannot find a proper formula online that reproduces those results. Thanks
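The site doesn't document its formula, but a common way to get a luminance number from HSL is to convert to RGB and take a weighted sum of the channels. A minimal R sketch, assuming the Rec. 601 weights (the site may use a different variant, e.g. Rec. 709's 0.2126/0.7152/0.0722):

# Standard HSL -> RGB conversion; h in [0, 360), s and l in [0, 1]
hsl_to_rgb <- function(h, s, l) {
  ch <- (1 - abs(2 * l - 1)) * s                 # chroma
  x  <- ch * (1 - abs((h / 60) %% 2 - 1))
  m  <- l - ch / 2
  base <- switch(floor(h / 60) + 1,
                 c(ch, x, 0), c(x, ch, 0), c(0, ch, x),
                 c(0, x, ch), c(x, 0, ch), c(ch, 0, x))
  base + m                                       # r, g, b in [0, 1]
}

# Weighted-sum luminance; 0.299/0.587/0.114 are the Rec. 601 weights
luminance <- function(h, s, l) {
  sum(c(0.299, 0.587, 0.114) * hsl_to_rgb(h, s, l))
}

luminance(0, 1, 0.5) * 100   # pure red -> about 29.9 on a 0-100 scale

If the site's numbers don't match, try the Rec. 709 weights instead; the two variants differ most for saturated greens and blues.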

Related

Measuring deviation in a music waveform in R?

I have posted this question in the R tag but I am open to solutions in other languages.
Let's say you have some waveforms. The first is just a bar: it is completely horizontal, so it has no deviation. The other waveforms look like these:
Now I am able to get these waveforms into a uniform box so that they are all the same pixel size and resolution. My first idea was to quantify the amount of whitespace the waveform uses up within one of these uniform boxes, using the code found here:
Measuring whitespace in a jpeg
Now however I want to measure the deviation between waveforms. That is, how could I quantify how "jumpy" a waveform is? Looking at the picture above, the second waveform seems the most homogeneous, and the third waveform seems to display the most variation, but I am unsure about how to quantify this. Any suggestions would be greatly appreciated.
I would recommend starting by getting familiar with the packages tuneR and seewave; you can import and extract a lot of parameters with these two packages. In particular you could use the function acoustat from seewave. Here is a worked example with data from the package:
library(seewave)  # provides the tico example data, cutw() and acoustat()
data(tico)
note <- cutw(tico, from = 0.5, to = 0.9, output = "Wave")  # cut out one note
a <- acoustat(note)  # time and frequency statistics for the cut
a will give you 10 acoustic parameters from the sound. You could also use other packages like soundecology, which extracts some further variables; in particular, its function acoustic_diversity measures sound complexity.
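If you want a single "jumpiness" number rather than a full parameter set, one simple option (an illustration, not the only reasonable metric) is the variability of the amplitude envelope's successive changes; a flat bar gives roughly zero, a volatile waveform something larger:

library(seewave)
data(tico)
e <- env(tico, plot = FALSE)          # amplitude envelope of the waveform
jumpiness <- sd(diff(as.numeric(e)))  # spread of the sample-to-sample changes
jumpiness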

Approximating two different curves in R

I have two different density plots in R: one of them is the observed data (x1), and the other is randomly generated data from a Poisson distribution with the observed mean (x2). I would like to bring the curves closer together, i.e. make the expected curve look more like the observed data, as it over- and under-estimates in certain areas. How do I go about doing this? I know you can get the absolute difference between the curves by using
abs(x1 - x2)
However, I'm not too sure how to proceed. Anybody have any ideas?
I think if you want to find an analytical solution, you might just have to play with the functions for a while. Otherwise, it seems you could use the calculus of variations: take the difference between the areas under your two functions and minimize it (set the derivative to zero). Formally, you would check the second derivative to confirm whether you have found a maximum, minimum, or inflection point, though you can skip that here if the function fits the data. I'm not sure what the best program would be for finding an analytical solution, but maybe that will put you on the right track. Just an idea to bounce around.
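As a more concrete route: since the data are counts, you can skip kernel densities entirely, compare the observed relative frequencies with the Poisson pmf, and let optimize() find the lambda that minimizes the total absolute difference. A sketch (the fake x1 below is only a stand-in for the observed data):

# Total absolute difference between observed frequencies and a Poisson pmf
discrepancy <- function(lambda, x) {
  support <- 0:max(x)
  obs <- tabulate(x + 1, nbins = length(support)) / length(x)  # relative freqs
  sum(abs(obs - dpois(support, lambda)))
}

x1 <- rpois(200, 4)                                           # fake observed data
optimize(discrepancy, interval = c(0.5, 10), x = x1)$minimum  # near mean(x1)

The minimizing lambda gives the "closest" Poisson in this metric; plotting dpois(support, lambda) over the observed frequencies then shows where the fit still over- or under-shoots.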

Dijkstra's algorithm cannot deal with negative weights, when do you see negative weights in the real world?

I can't think of a concrete instance in which you'd have a negative weight. You cannot have a negative distance between two houses, and you cannot go back in time. When would you have a graph with a negative edge weight?
I found that the Bellman-Ford algorithm was originally used to deal with routing in ARPANET, but again, I can't imagine where you'd run into a route with a negative weight; it just doesn't seem possible. I could just be thinking too hard about this. What would be a simple example?
Suppose that walking a distance takes a certain amount of food. But along some paths there is food you can gather, so you might gain food by following those paths.
When doing routing, a negative weight might be assigned to a link to make it the default path. You could use this if you have a primary and a backup line and for whatever reason you don't want to load balance between them.
I guess you might get negative weights where you've already got a system with non-negative weights and a path comes along that is cheaper than all existing paths, and for some reason it's expensive to reweight the network.
Even if there were an example, you could probably normalize it to be all positive. Any actual representation of a negative weight is relative to some zero. I guess what I'm saying is that there probably isn't an application of negative weights that can't be handled using exclusively positive weights.
EDIT: After thinking about this a little more, I suppose you could have situations where a given path has a negative weight. In this context, assuming the negative weight is bad, you would need a situation where the only possible way to reach your desired endpoint includes at least one point in the graph where you're REQUIRED to take the negative path (as in, no other option is available to reach your goal). But if the graph hasn't been traversed, how would you know that this is true?
EDIT (AGAIN): @Jim, I think you're right. The choke point isn't really relevant. I guess I was too quick to assume it was, because one question that pops into my mind when introducing negative edges is: if it is possible to traverse the graph without taking ANY negative edge, then what are the negative edges doing there in the first place? But this doesn't hold up very well, because, outside of hindsight, how would you ever know whether a graph could be traversed without crossing a negative edge?
Also worth noting, according to the Wikipedia page for Dijkstra's algorithm:
Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1956 and published in 1959, is a graph search algorithm that solves the single-source shortest path problem for a graph with nonnegative edge path costs, producing a shortest path tree. This algorithm is often used in routing and as a subroutine in other graph algorithms.
So, even though this conversation is useful and thought-provoking, maybe the title of the question should be "What is the proper algorithm to use for traversing a graph with negative edges?" Dijkstra's algorithm was intended to find the shortest path. But if you introduce positive and negative weights, doesn't the goal change from finding the shortest path to finding the MOST positive one, regardless of how many edges are on your chosen path? And if it does, what is your exit condition? The only way you could know you've reached the optimal solution would be if you happened across a path that included all positive edges without any negative ones, and wouldn't that scenario only occur by chance? So if introducing positive and negative weights changes the goal to be the most positive (or negative, depending on how you want to frame it), wouldn't this problem be doomed to O(n!) and therefore be best solved by a decision-making algorithm (like alpha/beta) that produces the best outcome given a restriction on the total number of edges you're allowed to take?
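For reference, the standard answer to that re-titled question is Bellman-Ford, which handles negative edges in O(V*E) rather than O(n!), and reports a negative cycle instead of looping forever. A minimal sketch (edge list as a data frame; the function name and layout are mine, not from any package):

bellman_ford <- function(edges, n, src) {
  dist <- rep(Inf, n)
  dist[src] <- 0
  for (i in seq_len(n - 1)) {                  # relax every edge n-1 times
    for (j in seq_len(nrow(edges))) {
      e <- edges[j, ]
      if (dist[e$from] + e$w < dist[e$to]) dist[e$to] <- dist[e$from] + e$w
    }
  }
  for (j in seq_len(nrow(edges))) {            # one extra pass detects negative cycles
    e <- edges[j, ]
    if (dist[e$from] + e$w < dist[e$to]) return(NULL)  # no well-defined shortest paths
  }
  dist
}

edges <- data.frame(from = c(1, 1, 2, 3), to = c(2, 3, 3, 4), w = c(4, 5, -2, 1))
bellman_ford(edges, n = 4, src = 1)   # 0 4 2 3: the -2 edge shortens the path to 3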
If you're trying to find the quickest way to swim across a series of linked pools in a water park, and it has flumes, the flumes could play the role of negative-weight edges.

Anyone know of the logic to compute the Version of a QR code needed to encode data?

The spec has 4 of these tables:
http://www.denso-wave.com/qrcode/vertable1-e.html
to handle versions 1-40
I'm wondering if anyone has coded something to calculate the version needed for a given string of data. None of the libraries I've seen for encoding the data offer this.
http://code.google.com/p/jsqrencode/downloads/list
Inside is genframe, which finds the smallest version that a string will fit in.
It doesn't really use a formula; it simply tests versions linearly (maybe a binary search would be faster). There is no algorithm or equation, nor is a compact one really possible, since the tables use fixed values and aren't algorithmically generated.
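For illustration, the linear scan genframe performs amounts to walking a capacity table until the data fits. The capacities below are my excerpt of the byte-mode values at error-correction level M for versions 1-5 only; a real implementation needs the full 40-row tables from the linked spec for each mode and EC level, so double-check the numbers against them:

# Byte-mode capacities at EC level M, versions 1..5 (excerpt of the spec table)
byte_capacity_M <- c(14, 26, 42, 62, 84)

smallest_version <- function(n_bytes, capacities = byte_capacity_M) {
  v <- which(capacities >= n_bytes)[1]   # first (smallest) version that fits
  if (is.na(v)) stop("data too long for the versions in the table")
  v
}

smallest_version(nchar("HELLO WORLD"))   # 11 bytes -> version 1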

m-estimate for continuous values

I'm building a custom regression tree and want to use an m-estimate for pruning.
Does anyone know how to calculate it?
http://www.ailab.si/blaz/predavanja/UISP/slides/uisp07-RegTrees.ppt might help (slide 12: what should Em look like?)
There are a lot of m-estimates. They all boil down to recasting your estimation problem as a minimization problem. If you use squared error as the function you're minimizing, you just get the sample mean. If you use the absolute value of the error, you get the sample median. The idea is to use a function that is a compromise between these two, so that you get some of the efficiency of the mean and some of the robustness of the median.
Once you've picked your function, finding an m-estimate is just an optimization problem, so your question really boils down to finding optimization software. If your optimization problem is convex (and you can pick your m-estimator so that it is), there is a lot of high-quality software out there.
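As a concrete example, here is a minimal sketch using the Huber loss as the compromise function (the cutoff delta = 1.345 is a conventional but arbitrary choice) and base R's optimize() as the optimization software:

# Huber loss: quadratic near zero (mean-like), linear in the tails (median-like)
huber <- function(r, delta = 1.345) {
  ifelse(abs(r) <= delta, 0.5 * r^2, delta * (abs(r) - delta / 2))
}

# The m-estimate of location minimizes the summed loss over candidate centers
m_estimate <- function(x, delta = 1.345) {
  optimize(function(mu) sum(huber(x - mu, delta)), interval = range(x))$minimum
}

x <- c(rnorm(50), 25)                 # 50 clean points plus one gross outlier
c(mean(x), median(x), m_estimate(x))  # the m-estimate resists the outlier

This objective is convex in mu, so optimize() over the data range finds the global minimum.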
