Calculate minimum QRcode version from input data

I started to study ISO/IEC 18004 - the "QRcode specification".
When encoding data as a QRcode (generating a QRcode), the following information must be specified:
Mode (what kind of data, e.g. numeric, alphanumeric, binary, ...)
Error correction level (L, M, Q or H)
Version (the "size" of the QRcode)
I am looking for a mathematical (equation-based) way of determining the version required to hold the provided data with the specified mode and error correction level. The standard itself contains a look-up table, which is also available from various web sources such as this or this (they are just look-up tables).
A quote from this very related SO answer:
Any formula for the data capacity will be necessarily awkward and unenlightening since many of the parameters that determine the structure of QR Code symbols have been manually chosen and therefore implementations must generally resort to including tables of constants for these non-computed values.
As per my current understanding of the standard, it should be possible to determine the data capacity (and therefore minimum version) required based on the input data, mode and ECC level. The answer even expressed the calculations for the maximum capacity:
DataModules = Rows × Columns − ( FinderModules + AlignmentModules + TimingPatternModules ) − ( FormatInformationModules + VersionInformationModules )
UsableDataBits = DataModules − ErrorCorrectionBits
I don't understand where the "awkwardness" that would force resorting to a look-up table comes from. As far as I understand, the sizes of the timing patterns etc. are fixed per version.
Could somebody further explain this?

zxing provides an implementation for that:
https://github.com/zxing/zxing/blob/master/core/src/main/java/com/google/zxing/qrcode/encoder/Encoder.java
/**
 * Decides the smallest version of QR code that will contain all of the
 * provided data.
 *
 * @throws WriterException if the data cannot fit in any version
 */
private static Version recommendVersion(ErrorCorrectionLevel ecLevel,
                                        Mode mode,
                                        BitArray headerBits,
                                        BitArray dataBits)
The Encoder class provides the algorithm ("mathematical way") of determining the minimum version you are looking for.
Whether this implementation is in any way "awkward" or not, I cannot say.
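Roughly speaking, the approach is trial and error over versions: for each version, compute how many bits the payload needs (the character count indicator width grows with the version) and compare that against the capacity for the chosen error correction level. Below is a rough Python sketch of that idea, not zxing's code; CAPACITY_BITS is a made-up stand-in for the capacity tables from the standard, filled in only for version 1 here.

# Not zxing's code: a sketch of the "try versions in increasing order" idea.
CAPACITY_BITS = {
    (1, "L"): 152, (1, "M"): 128, (1, "Q"): 104, (1, "H"): 72,
    # ... entries for versions 2..40 would come from the standard's tables ...
}

def char_count_bits(mode: str, version: int) -> int:
    """Character count indicator width in bits (per-version ranges from the standard)."""
    small, medium, large = {
        "numeric": (10, 12, 14),
        "alphanumeric": (9, 11, 13),
        "byte": (8, 16, 16),
    }[mode]
    return small if version <= 9 else medium if version <= 26 else large

def minimum_version(data_bits: int, mode: str, ec_level: str) -> int:
    """Smallest version whose data capacity holds mode indicator + count + payload."""
    for version in range(1, 41):
        needed = 4 + char_count_bits(mode, version) + data_bits  # 4-bit mode indicator
        if needed <= CAPACITY_BITS.get((version, ec_level), 0):
            return version
    raise ValueError("data cannot fit in any version")

The only "awkward" part is that the capacity per (version, EC level) cannot be computed from a neat formula and has to come from the tables; the search itself is trivial.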

Related

Simple function to generate a random number sequence without knowing the previous number but knowing the current index (no variable assignment)?

Is there any (simple) random generation function that can work without variable assignment? Most functions I read look like this current = next(current). However currently I have a restriction (from SQLite) that I cannot use any variable at all.
Is there a way to generate a number sequence (for example, from 1 to max) with only n (current number index in the sequence) and seed?
Currently I am using this:
cast(((1103515245 * Seed * ROWID + 12345) % 2147483648) / 2147483648.0 * Max as int) + 1
with Max being 47 and ROWID being n. However, for some seeds the repeat rate is too high (3 unique values out of 47).
In my requirements, repetition is ok as long as it's not too much (<50%). Is there any better function that meets my need?
The question has sqlite tag but any language/pseudo-code is ok.
P.S.: I have tried using linear congruential generators with some a/c/m triplets and Seed * ROWID as the seed, but it does not work well; it's even worse.
EDIT: I currently use this one, but I do not know where it's from. The rate looks better than mine:
((((Seed * ROWID) % 79) * 53) % "Max") + 1
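For reference, the repeat rate of a candidate formula can be checked mechanically. Here is a small hypothetical Python snippet (the seed values are arbitrary) that counts the distinct outputs of the formula above over ROWID = 1..47:

def distinct_values(seed: int, max_value: int = 47) -> int:
    """Count distinct outputs of ((((seed*rowid) % 79) * 53) % max_value) + 1."""
    return len({((((seed * rowid) % 79) * 53) % max_value) + 1
                for rowid in range(1, max_value + 1)})

print([distinct_values(seed) for seed in (1, 2, 12345)])  # 47 would mean no repeats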
I am not sure if you still have the same problem but I might have a solution for you.
What you could do is use pseudo-random m-sequence generators based on shift registers. You just have to take a high enough order for your primitive polynomial, and then you don't really need to store any variables.
For more info you can check the wiki page
What you would need to code is just the primitive-polynomial shifting equation, and I have checked in an online editor that it should be very easy to do. I think the easiest way for you would be to work in binary and use PRBS sequences; depending on how many elements you have, you can choose the sequence length. For example, below is the implementation for a period of 2^15 - 1 = 32767 (PRBS15), with the primitive polynomial taken from the wiki page (there you can find primitive polynomials all the way up to PRBS31, which would give a period of 2^31 - 1, about 2.15e9).
Basically what you need to do is:
SELECT (((ROWID << 1) | (((ROWID >> 14) & 1) <> ((ROWID >> 13) & 1))) & 0x7fff)
The beauty of this approach is that if you take a PRBS whose period is longer than your largest ROWID value, you will get a unique pseudo-random index. Very simple. :)
If you need help with searching for primitive polynomials you can see my github repo, which deals exactly with finding primitive polynomials and unique m-sequences. It is currently written in Matlab, but I plan to write it in Python in the next few days.
Cheers!
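To see what the step above does, here is the same shift written out in Python (a sketch; the taps used in the SQL correspond to the primitive polynomial x^15 + x^14 + 1), together with a check that it visits every non-zero 15-bit state exactly once before repeating:

def prbs15_step(state: int) -> int:
    """One step of the x^15 + x^14 + 1 LFSR: new bit = bit 14 XOR bit 13."""
    feedback = ((state >> 14) ^ (state >> 13)) & 1
    return ((state << 1) | feedback) & 0x7FFF

# The 2**15 - 1 = 32767 non-zero 15-bit states form a single cycle.
state, seen = 1, set()
while state not in seen:
    seen.add(state)
    state = prbs15_step(state)
print(len(seen))  # 32767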
What about using a good hash function and mapping the result into the [1...Max] range?
Along these lines (in pseudocode); sha1 was added to SQLite 3.17.
sha1(ROWID) % Max + 1
Or use any external C code for hash (murmur, chacha, ...) as shown here
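The same idea in Python terms, purely for illustration (not SQLite syntax; mixing Seed and ROWID into the hashed string is my own choice):

import hashlib

def pseudo_random(seed: int, rowid: int, max_value: int) -> int:
    """Hash (seed, rowid) and map the digest into the range [1, max_value]."""
    digest = hashlib.sha1(f"{seed}:{rowid}".encode()).digest()
    return int.from_bytes(digest, "big") % max_value + 1

print([pseudo_random(42, rowid, 47) for rowid in range(1, 11)])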
A linear congruential generator with appropriately-chosen parameters (a, c, and modulus m) will be a full-period generator, such that it cycles pseudorandomly through every integer in its period before repeating. Although you may have tried this idea before, have you considered that m is equivalent to max in your case? For a list of parameter choices for such generators, see L'Ecuyer, P., "Tables of Linear Congruential Generators of Different Sizes and Good Lattice Structure", Mathematics of Computation 68(225), January 1999.
Note that there are some practical issues to implementing this in SQLite, especially if your SQLite version supports only 32-bit integers and 64-bit floating-point numbers (with 52 bits of precision). Namely, there may be a risk of—
overflow if an intermediate multiplication exceeds 32 bits for integers, and
precision loss if an intermediate multiplication results in a greater-than-52-bit number.
Also, consider why you are creating the random number sequence:
Is the sequence intended to be unpredictable? In that case, a linear congruential generator alone is not enough, and you should generate unique identifiers by other means, such as by combining unique numbers with cryptographically random numbers.
Will the numbers generated this way be exposed in any way to end users? If not, there is no need to obfuscate them by "shuffling" them.
Also, depending on the SQLite API you're using (for your programming language), there may be a way to write a custom function to convert the seed and ROWID to a random unique number. The details, however, depend heavily on the specific SQLite API. Another answer shows an example for Perl.
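For what it's worth, the full-period property is easy to see with a toy example. This is a sketch only; the specific a, c, m below are small illustrative values chosen to satisfy the Hull-Dobell full-period conditions, not parameters taken from the paper cited above.

from itertools import islice

def lcg_sequence(a: int, c: int, m: int, seed: int):
    """Yield LCG states x_{n+1} = (a*x_n + c) mod m, starting from seed."""
    x = seed % m
    while True:
        yield x
        x = (a * x + c) % m

# Toy parameters m=64, a=5, c=3 satisfy the Hull-Dobell conditions:
# gcd(c, m) = 1; a-1 divisible by every prime factor of m; a-1 divisible
# by 4 since m is. Good parameters for a given modulus come from the tables.
states = list(islice(lcg_sequence(a=5, c=3, m=64, seed=0), 64))
assert len(set(states)) == 64  # full period: every residue 0..63 appears once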

Trouble implementing a very simple mass flow source

I am currently learning Modelica by trying some very simple examples. I have defined a connector Incompressible for an incompressible fluid like this:
connector Incompressible
  flow Modelica.SIunits.VolumeFlowRate V_dot;
  Modelica.SIunits.SpecificEnthalpy h;
  Modelica.SIunits.Pressure p;
end Incompressible;
I now wish to define a mass or volume flow source:
model Source_incompressible
  parameter Modelica.SIunits.VolumeFlowRate V_dot;
  parameter Modelica.SIunits.Temperature T;
  parameter Modelica.SIunits.Pressure p;
  Incompressible outlet;
equation
  outlet.V_dot = V_dot;
  outlet.h = enthalpyWaterIncompressible(T); // quick'n'dirty enthalpy function
  outlet.p = p;
end Source_incompressible;
However, when checking Source_incompressible, I get this:
The problem is structurally singular for the element type Real.
The number of scalar Real unknown elements are 3.
The number of scalar Real equation elements are 4.
I am at a loss here. Clearly, there are three equations in the model - where does the fourth equation come from?
Thanks a lot for any insight.
Dominic,
There are a couple of issues going on here. As Martin points out, the connector is unbalanced (you don't have matching "through" and "across" pairs in that connector). For fluid systems, this is acceptable. However, intensive fluid properties (e.g., enthalpy) have to be marked as so-called "stream" variables.
This topic is, admittedly, pretty complicated. I'm planning on adding an advanced chapter to my online Modelica book on this topic but I haven't had the time yet. In the meantime, I would suggest you have a look at the Modelica.Fluid library and/or this presentation by one of its authors, Francesco Casella.
That connector is not a physical connector. You need one flow variable for each potential variable. This is the OpenModelica error message if it helps a little:
Warning: Connector .Incompressible is not balanced: The number of potential variables (2) is not equal to the number of flow variables (1).
Error: Too many equations, over-determined system. The model has 4 equation(s) and 3 variable(s).
Error: Internal error Found Equation without time dependent variables outlet.V_dot = V_dot
This is because the unconnected connector will generate one equation for the flow:
outlet.V_dot = 0.0;
This means outlet.V_dot is replaced in:
outlet.V_dot = V_dot;
And you get:
0.0 = V_dot;
But V_dot is a parameter and can not be assigned to in an equation section (needs an initial equation if the parameter has fixed=false, or a binding equation in the default case).

Which algorithm is used by the rnorm function

Which algorithm is used by the rnorm function by default to generate standard-normally distributed random numbers?
See ?RNGkind. The default is an inversion algorithm:
normal.kind can be "Kinderman-Ramage", "Buggy Kinderman-Ramage" (not
for set.seed), "Ahrens-Dieter", "Box-Muller", "Inversion" (the
default), or "user-supplied". (For inversion, see the reference in
qnorm.) The Kinderman-Ramage generator used in versions prior to 1.7.1
(now called "Buggy") had several approximation errors and should only
be used for reproduction of old results. The "Box-Muller" generator is
stateful as pairs of normals are generated and returned sequentially.
The state is reset whenever it is selected (even if it is the current
normal generator) and when kind is changed.
You can change the algorithm by
RNGkind(normal.kind = "Box-Muller")
You can find what is currently set by looking at RNGkind()[2].
The other answer is sufficient, but left me with some more questions; in particular, I didn't see anywhere in the documentation* what on earth the "Inversion" algorithm is, so I dived into the source code, which also gives academic references to the papers originating the other possible algorithms, to figure out what exactly is being done.
case INVERSION:
#define BIG 134217728 /* 2^27 */
    /* unif_rand() alone is not of high enough precision */
    u1 = unif_rand();
    u1 = (int)(BIG*u1) + unif_rand();
    return qnorm5(u1/BIG, 0.0, 1.0, 1, 0);
So it seems that, at base, the default "Inversion" algorithm generates a high-precision uniform floating-point number (looks like 53 bits, i.e. the mantissa size of 64-bit floating-point numbers), then sends it to the qnorm5 function, which is the quantile function (inverse CDF) of the normal distribution.
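The same recipe in Python, purely as an illustrative sketch (not R's code; statistics.NormalDist().inv_cdf plays the role of qnorm5 and random.random() the role of unif_rand):

import random
from statistics import NormalDist

BIG = 134217728  # 2**27, as in the R source above

def rnorm_inversion() -> float:
    """Build a higher-precision uniform from two draws, then invert the normal CDF."""
    u1 = random.random()
    u1 = int(BIG * u1) + random.random()
    # u1/BIG lies in [0, 1); hitting exactly 0 is astronomically unlikely here
    return NormalDist().inv_cdf(u1 / BIG)

print([round(rnorm_inversion(), 3) for _ in range(5)])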
As to how the qnorm5 function works (given there is no closed form for the Normal CDF nor inverse CDF), I haven't had much luck cracking what seems to be the source code here, but they do give further academic references, namely Beasley, J. D. and S. G. Springer (1977) and Wichura, M.J. (1988); the former being typically used for small quantiles of the CDF and the latter for large (z>7 or so).
It may also be interesting to note that (as of this writing) this algorithm appears to be shared by the Julia language, which also shares the qnorm5 code used by R.
*To be fair, in retrospect, Wichura is mentioned in ?qnorm which is referenced above. Still it's worthwhile to spell things out in this thread, I think.

arithmetic library for tracking worst case error

Is there any library or tool that allows for knowing the maximum accumulated error in arithmetic operations?
For example if I make some iterative calculation ...
myVars = initialValues;
while (notEnded) {
myVars = updateMyVars(myVars)
}
... I want to know at the end not only the calculated values, but also the potential error (the range of possible values if the result of each individual operation took the range limits of each operand).
I have already written a Java class called EADouble.java (EA for Error Accounting) which holds and updates the maximum positive and negative errors along with the calculated value, for some basic operations, but I'm afraid I might be reinventing a square wheel.
Any libraries/whatever in Java/whatever? Any suggestions?
Updated on July 11th: Examined existing libraries and added link to sample code.
As commented by fellows, there is the concept of interval arithmetic, and there was a previous question ( A good uncertainty (interval) arithmetic library? ) on the topic. There are just a couple of small issues with my intent:
I care more about the "main" value than about the upper and lower bounds. However, adding that extra value to an open library should be straightforward.
Accounting for the error as an independent floating-point value might allow for finer accuracy (e.g. for addition the upper bound would be incremented by just half an ULP instead of a whole ULP). A minimal sketch of this idea appears after the library list below.
Libraries I had a look at:
ia_math (Java. Just would have to add the main value. My favourite so far)
Boost/numeric/Interval (C++, Very complex/complete)
ErrorProp (Java, accounts value, and error as standard deviation)
The sample code (TestEADouble.java) runs a ballistic simulation and a calculation of the number e without problems. However, those are not very demanding scenarios.
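To make the bookkeeping concrete, here is a minimal sketch of the idea (a main value plus independent worst-case bounds) in plain Python. The class and method names are made up, and a real implementation would additionally widen the bounds by the rounding error of each operation, e.g. by half an ULP as mentioned above.

from dataclasses import dataclass

@dataclass
class ErrorTracked:
    value: float  # the "main" value
    lo: float     # worst-case lower bound
    hi: float     # worst-case upper bound

    @classmethod
    def exact(cls, x: float) -> "ErrorTracked":
        return cls(x, x, x)

    def __add__(self, other: "ErrorTracked") -> "ErrorTracked":
        return ErrorTracked(self.value + other.value,
                            self.lo + other.lo,
                            self.hi + other.hi)

    def __mul__(self, other: "ErrorTracked") -> "ErrorTracked":
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return ErrorTracked(self.value * other.value, min(corners), max(corners))

# Iterate an update and watch how wide the worst-case interval grows.
x = ErrorTracked(1.0, 1.0 - 1e-12, 1.0 + 1e-12)
for _ in range(1000):
    x = x * ErrorTracked.exact(1.001) + ErrorTracked.exact(0.5)
print(x.value, x.hi - x.lo)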
Probably way too late, but have a look at BIAS/Profil: http://www.ti3.tuhh.de/keil/profil/index_e.html
Pretty complete, simple, accounts for computer error, and if your errors are centered it gives easy access to your nominal value (through Mid(...)).

Math question regarding Python's uuid4

I'm not great with statistical mathematics, etc. I've been wondering, if I use the following:
import uuid
unique_str = str(uuid.uuid4())
double_str = ''.join([str(uuid.uuid4()), str(uuid.uuid4())])
Is double_str string squared as unique as unique_str or just some amount more unique? Also, is there any negative implication in doing something like this (like some birthday problem situation, etc)? This may sound ignorant, but I simply would not know as my math spans algebra 2 at best.
The uuid4 function returns a UUID created from 16 random bytes and it is extremely unlikely to produce a collision, to the point at which you probably shouldn't even worry about it.
If for some reason uuid4 does produce a duplicate, it is far more likely to be a programming error, such as a failure to correctly initialize the random number generator, than genuine bad luck. In that case the approach you are using will not make it any better: an incorrectly initialized random number generator can still produce duplicates even with your approach.
If you use the default implementation random.seed(None) you can see in the source that only 16 bytes of randomness are used to initialize the random number generator, so this is an issue you would have to solve first. Also, if the OS doesn't provide a source of randomness, the system time will be used, which is not very random at all.
But ignoring these practical issues, you are basically along the right lines. To use a mathematical approach we first have to define what you mean by "uniqueness". I think a reasonable definition is the number of ids n you need to generate before the probability of generating a duplicate exceeds some probability p. An approximate formula for this is:
n ≈ sqrt(2 * d * ln(1 / (1 - p)))
where d is 2**(16*8) for a single randomly generated uuid and 2**(16*2*8) with your suggested approach. The square root in the formula is indeed due to the Birthday Paradox. But if you work it out, you can see that if you square the range of values d while keeping p constant then you also square n.
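Plugging in the two values of d (a quick illustrative computation; p = 0.5 is an arbitrary choice):

from math import log, sqrt

def ids_before_collision(d: float, p: float = 0.5) -> float:
    """Approximate number of random ids from d possibilities before the
    chance of at least one duplicate reaches p (birthday bound)."""
    return sqrt(2 * d * log(1 / (1 - p)))

single = ids_before_collision(2 ** (16 * 8))      # one uuid4: 16 random bytes
double = ids_before_collision(2 ** (16 * 2 * 8))  # two uuid4s concatenated
print(f"{single:.3g}")  # about 2.2e19
print(f"{double:.3g}")  # about 4.0e38, roughly the square of the single-uuid figure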
Since uuid4 is based off a pseudo-random number generator, calling it twice is not going to square the amount of "uniqueness" (and may not even add any uniqueness at all).
See also When should I use uuid.uuid1() vs. uuid.uuid4() in python?
It depends on the random number generator, but it's almost squared uniqueness.
