Is there an alternate form of the following Laplace transform (for a band-pass filter) that would be easier to process? G(s) = As/[(1+Bs)(1+Cs)]

Standard tables of transforms do not contain this form; I believe it needs to be broken into partial fractions?
I have tried an approximation where B = C to get a ballpark response, but the rigorous solution is evading me.
When I put this into WolframAlpha, the result is nonfunctional.
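In case it helps, here is a sketch of the partial-fraction route (assuming B ≠ C, and using the standard table entry 1/(1+Ts) ↔ (1/T)e^(-t/T)):

$$G(s) = \frac{As}{(1+Bs)(1+Cs)} = \frac{A}{B-C}\left(\frac{1}{1+Cs} - \frac{1}{1+Bs}\right)$$

$$g(t) = \mathcal{L}^{-1}\{G(s)\} = \frac{A}{B-C}\left(\frac{1}{C}\,e^{-t/C} - \frac{1}{B}\,e^{-t/B}\right), \qquad t \ge 0$$

In the degenerate case B = C (the ballpark approximation above), the repeated-pole table entry gives g(t) = (A/B^2)(1 - t/B)e^(-t/B).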

Why Do User Defined Contrasts in R need to be provided as an INVERSE matrix of weights?

I would like to use some user defined contrasts in R in addition to the default contrast codes (contr.treatment / contr.sum / contr.helmert). However, the guidance I have read indicates that these need to be provided in an inverse matrix. Could anybody explain why?
For example, the guidance here: https://stats.idre.ucla.edu/r/library/r-library-contrast-coding-systems-for-categorical-variables/ states:
The final contrast matrix (or coding scheme) turns out to be the inverse of mat transposed.
Similarly, this site: https://rstudio-pubs-static.s3.amazonaws.com/65059_586f394d8eb84f84b1baaf56ffb6b47f.html writes:
While there are a handful of automatic contrast functions in R (what I’ve been using so far), you will sometimes find yourself wanting to run comparisons that are not included there. When that happens, you can specify them yourself. You need to be careful, though, because the contrasts() function is a sneaky little bastard, as noted above. To apply contrast weights, you’ll need to give it the inverse of your matrix of weights.
Neither explains why. In addition, computing an inverse matrix messes up the contrast coefficients so that they become difficult to interpret as they are no longer a standard unit apart, so I'd like to know why it's necessary.
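For concreteness, here is a minimal sketch of the recipe those pages describe, for a hypothetical 4-level factor, with the desired weights stored as rows (if you store them as columns, you take the inverse of the transpose instead, which matches the "inverse of mat transposed" wording above):

# Hypothetical 4-level factor. Rows of 'weights' are the comparisons we want the
# regression coefficients to estimate; the first row is the intercept (grand mean).
weights <- rbind(intercept = c( 1/4, 1/4, 1/4, 1/4),
                 g2_vs_g1  = c(-1,   1,   0,   0),
                 g3_vs_g2  = c( 0,  -1,   1,   0),
                 g4_vs_g3  = c( 0,   0,  -1,   1))

# The model matrix maps coefficients to group means (mu = [1 | C] %*% b), so the
# weights the coefficients actually estimate are the rows of the inverse of [1 | C].
# To make the coefficients estimate *your* weights, invert and drop the intercept column:
coding <- solve(weights)[, -1]

# f <- factor(mydata$group)      # hypothetical data
# contrasts(f) <- coding
# lm(y ~ f, data = mydata)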

Linear programming using blocking theory R

The following linear programming problem is not in canonical form. I am really stuck trying to put it into regular form and feed it into the normal lp() function.
Does anyone have experience with such a weird form?
B and A are the blocker and antiblocker, respectively, which are simply two sets of inequalities.
I don't know what the "normal lp() function" is. Let's assume this is the lp function from the lpSolve package.
This function does not expect a canonical form. (Canonical usually means every constraint has the same fixed direction, e.g. Ax <= b; lp() allows a different direction for each constraint.)
lp() just wants one big constraint matrix: each column is an individual variable and each row is an individual constraint. This is conceptually simple, but often tedious in practice. The best thing to do is to get a large piece of paper and draw the layout of the LP matrix: which variables and constraints go where.
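As a minimal sketch (a made-up two-variable toy model, not the blocker/antiblocker problem itself), the call looks like this:

library(lpSolve)                 # install.packages("lpSolve")

# Toy LP:  max 3*x1 + 2*x2
#          s.t.  x1 + x2 <= 4
#                x1 - x2 >= -1
#                x1, x2  >= 0   (lp() assumes non-negative variables by default)
obj <- c(3, 2)
con <- rbind(c(1,  1),           # each row is one constraint, each column one variable
             c(1, -1))
dir <- c("<=", ">=")             # directions may differ per row -- no canonical form needed
rhs <- c(4, -1)

sol <- lp(direction = "max", objective.in = obj,
          const.mat = con, const.dir = dir, const.rhs = rhs)
sol$solution                     # optimal x1, x2
sol$objval                       # optimal objective value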
For some classes of models there are easier-to-use tools for expressing an LP, such as OMPR or CVXR.

How can I do blind fitting on a list of x, y value pairs if I don't know the form of f(x) = y?

If I have a function f(x) = y that I don't know the form of, and if I have a long list of x and y value pairs (potentially thousands of them), is there a program/package/library that will generate potential forms of f(x)?
Obviously there's a lot of ambiguity to the possible forms of any f(x), so something that produces many non-trivial unique answers (in reduced terms) would be ideal, but something that could produce at least one answer would also be good.
If x and y are derived from observational data (i.e. experimental results), are there programs that can create approximate forms of f(x)? On the other hand, if you know beforehand that there is a completely deterministic relationship between x and y (as in the input and output of a pseudo-random number generator), are there programs that can create exact forms of f(x)?
Soooo, I found the answer to my own question. Cornell has released a piece of software for doing exactly this kind of blind fitting called Eureqa. It has to be one of the most polished pieces of software that I've ever seen come out of an academic lab. It's seriously pretty nifty. Check it out:
It's even got turnkey integration with Amazon's ec2 clusters, so you can offload some of the heavy computational lifting from your local computer onto the cloud at the push of a button for a very reasonable fee.
I think that I'm going to have to learn more about GUI programming so that I can steal its interface.
(This is more of a numerical-methods question.) If there is some kind of observable pattern (you can kind of see the function), then yes, there are several ways you can approximate the original function, but they'll be just that: approximations.
What you want to do is called interpolation. Two very simple (and not very good) methods are Newton's and Lagrange's methods of interpolation. They both work on the same principle but are implemented differently (Lagrange's is iterative, Newton's is recursive, for one).
If there's not much going on between any two of your data points (i.e., the actual function doesn't have any "bumps" whose "peaks" are not represented by one of your data points), then spline interpolation is one of the best choices you can make. It's a bit harder to implement, but it produces nice results.
Edit: Sometimes, depending on your specific problem, the methods above might be overkill. You may find that linear interpolation (where you just connect the points with straight lines) is a perfectly good solution to your problem.
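As a rough sketch of those options in R (base R's approxfun and splinefun; the data here are just made-up sample points):

# Made-up samples of an unknown smooth function
x <- c(0, 1, 2, 3, 4, 5)
y <- c(0.0, 0.8, 0.9, 0.1, -0.7, -1.0)

lin <- approxfun(x, y)     # piecewise-linear interpolation (connect the dots)
spl <- splinefun(x, y)     # cubic-spline interpolation

lin(2.5)                   # estimate the function between two sample points
spl(2.5)

# Compare the two interpolants against the sample points
xx <- seq(0, 5, length.out = 200)
plot(x, y, pch = 19)
lines(xx, lin(xx), lty = 2)
lines(xx, spl(xx))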
It depends.
If you're using data acquired from the real world, then statistical regression techniques can provide you with some tools to evaluate the best fit; if you have several hypotheses for the form of the function, you can use statistical regression to discover the "best" fit, though you may need to be careful about over-fitting a curve -- sometimes the best fit (highest correlation) for a specific dataset completely fails to work for future observations.
If, on the other hand, the data was generated synthetically (say, you know it came from a polynomial), then you can use polynomial curve-fitting methods that will give you the exact answer you need.
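A minimal sketch of that synthetic case in R, assuming the data are known to come from a quadratic:

# Toy data generated from a known quadratic (no noise)
x <- 1:10
y <- 2 + 3 * x - 0.5 * x^2

fit <- lm(y ~ poly(x, 2, raw = TRUE))   # raw = TRUE gives ordinary polynomial coefficients
coef(fit)                               # recovers 2, 3, -0.5 up to rounding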
Yes, there are such things.
If you plot the values and see that there's some functional relationship that makes sense, you can use least squares fitting to calculate the parameter values that minimize the error.
If you don't know what the function should look like, you can use simple spline or interpolation schemes.
You can also use software to guess what the function should be. Maybe something like Maxima can help.
Wolfram Alpha can help you guess:
http://blog.wolframalpha.com/2011/05/17/plotting-functions-and-graphs-in-wolframalpha/
Polynomial Interpolation is the way to go if you have a totally random set
http://en.wikipedia.org/wiki/Polynomial_interpolation
If your set is nearly linear, then regression will give you a good approximation.
Creating the exact form from the x's and y's is mostly impossible.
Notice that what you are trying to achieve is at the heart of many machine learning algorithms, and therefore you might find what you are looking for in some specialized libraries.
A list of x/y values N items long can always be reproduced exactly by a polynomial of degree N-1 (assuming no two x values are the same). See this article for more details:
http://en.wikipedia.org/wiki/Polynomial_interpolation
Some lists may also match other function types, such as exponential, sinusoidal, and many others. It is impossible in general to find the 'simplest' matching function; the best you can do is go through a list of common ones (exponential, sinusoidal, etc.) and, if none of them match, fall back to the interpolating polynomial.
I'm not aware of any software that can do this for you, though.
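For what it's worth, a quick R sketch of that degree-(N-1) claim, with made-up points:

# Exact interpolation: a degree N-1 polynomial through N points
x <- c(1, 2, 4, 7)                      # N = 4 points -> cubic
y <- c(3, 1, 6, 2)
deg <- length(x) - 1

fit <- lm(y ~ poly(x, degree = deg, raw = TRUE))
all.equal(unname(fitted(fit)), y)       # TRUE: the polynomial passes through every point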

R script - nls function

Can anyone give me a good explanation for what the parameter "algorithm" does in the nls function in R?
Also, how does the formula work? I know it uses a tilde, but I can't really find a down-to-earth explanation of it.
Also, how important are the start values? Do I need to try multiple start values, or am I guaranteed that nls will find the correct parameters regardless of the start values I use?
In brief:
nls() is going to vary the parameters to try to minimize the squared error between your model and your data. There are several good methods it can try to find the minimum; the details under "algorithm" in ?nls (and, more generally, under "method" in ?optim) provide some good info and references.
In general, for nonlinear models, your results can be sensitive to the initial guess. You should try several different guesses to make sure that the outputs are close. If your results are very sensitive to your guess, you can try re-parameterizing, using a different algorithm, or rethinking your model.
As for the formula, I'd echo the previous answer: work through the examples at the bottom of ?nls and then try to ask a more specific question.
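A minimal nls() sketch on toy data (the exponential model, the data, and the start values below are all made up for illustration):

# Toy data from a known exponential decay, plus a little noise
set.seed(1)
d <- data.frame(x = seq(0, 5, by = 0.1))
d$y <- 2 * exp(-1.3 * d$x) + rnorm(nrow(d), sd = 0.05)

# The formula reads: model y as a * exp(-b * x); a and b are the parameters nls estimates
fit <- nls(y ~ a * exp(-b * x), data = d,
           start = list(a = 1, b = 1),   # starting guesses matter for nonlinear fits
           algorithm = "port")           # or leave it at the default Gauss-Newton
summary(fit)
coef(fit)                                # should be close to a = 2, b = 1.3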

Anyone know of the logic to compute the Version of a QR code needed to encode data?

The spec has 4 of these tables:
http://www.denso-wave.com/qrcode/vertable1-e.html
to handle versions 1-40
I'm wondering if anyone has coded something to calculate the version needed for a given string of data. None of the libraries I've seen for encoding the data offer this.
http://code.google.com/p/jsqrencode/downloads/list
Inside is genframe, which finds the smallest version that a string will fit in.
It doesn't really use a formula; it simply tests linearly (maybe a binary search would be faster). There is no algorithm or closed-form equation, nor is a compact one really possible, since the tables use fixed values and aren't generated algorithmically.
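A rough sketch of that linear scan in R (the capacities below are only an illustrative excerpt for byte mode at error-correction level L; the full 40-version table has to come from the spec/Denso Wave pages):

# Illustrative excerpt of the capacity table (byte mode, level L) -- check against the spec
capacity_L_byte <- c(`1` = 17, `2` = 32, `3` = 53, `4` = 78)

smallest_version <- function(data, capacities = capacity_L_byte) {
  n <- nchar(data, type = "bytes")
  for (v in seq_along(capacities)) {        # linear scan: first version the data fits in
    if (n <= capacities[v]) return(as.integer(names(capacities)[v]))
  }
  stop("data too long for the versions in this table")
}

smallest_version("hello world, this is a QR payload")   # 33 bytes -> version 3 here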
