Is Double Now Fully Equivalent to Real in QML?

One peculiar aspect of QML, the JavaScript-heavy declarative user interface language, is its inclusion of seemingly redundant decimal number types.
The language declares two basic types for decimal numbers, whose names resemble those in many other programming languages: real and double.
The table in the current Documentation (v5.8) describes double as...
Number with a decimal point, stored in double precision.
...and for real...
Number with a decimal point
Now the odd bit. The prior passage might lead one to believe that, as in most languages, these are indeed different types distinguished by floating-point precision (see: float and double in C/C++).
But that assumption turns out to be incorrect. The documentation for real states:
Note: In QML all reals are stored in double precision, IEEE floating
point format.
This basic type is provided by the QML language.
Okay, a bit peculiar (why have two types then?), but assuming that is wholly accurate, presumably it would make sense to pick one (perhaps whichever is standard in the codebase you're working on) and go with it.
But reportedly there's an unexpected issue, as noted by Kent Hansen on QtDeclarative:
The type "real" was documented to be a single-precision float, but
that's incorrect. It's always been double.
However, signal parameters of type "real" would be mapped to the C++
type "qreal", which can be either float or double depending on the
platform.
Since JavaScript floating point numbers have double precision, QML
should use the same, to avoid potential loss of precision.
That makes me question whether the statement given in the Qt Documentation is wholly accurate. Hansen's post was 4 years ago, back when Qt/QML v5.1-2 were the latest versions. Now we're at v5.6-8, so I'm wondering if what he reports is still an issue.
Can anybody answer whether this or any other inconsistencies still exist in the latest Qt/QML release versions (v5.6-8, as of Feb. 2017)?
Is this why two separate basic types for decimal numbers are included, even though the documentation seemingly indicates them to be identical?
Is there any other reason to prefer real over double, or vice versa, in QML?

It's because of the tradition of keeping backward compatibility.
In Qt 4, real mapped to qreal, which was a float on some platforms (notably ARM). In Qt 5 the old code with real and double still runs, but the implementation of the QML engine doesn't need to bother with floats any more.
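If you want to check this against your own build, here is a minimal C++ sketch (assuming a default-configured Qt 5.2 or later, i.e. not configured with -qreal float):

#include <QtGlobal>
#include <type_traits>

// In a default Qt 5.2+ build, qreal is typedef'ed to double on all
// platforms, so QML's real and double share the same storage type.
static_assert(std::is_same<qreal, double>::value,
              "qreal is double unless Qt was configured with -qreal float");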

Related

In Ada, it seems to be the general practice to declare specific subtypes, but why?

In this question, I asked how to define an unlimited upper bound for a range (turns out that the answer was fairly obvious, but not to someone new to Ada). In the answer, it was suggested to create a Specific Subtype for this.
A specific subtype of the sort referred to in the question would look like this:
subtype Speed is Float range 0.0 .. Float'Last;
Additionally, I've noticed that a good portion of the code in this Ada project declares specific types, like Feet_Float and Meters_Float. Why is this the preferred practice, as opposed to just putting a range constraint on a basic float member variable in the class/package?
Ada doesn't prefer subtypes - Ada programmers do.
New types and subtypes (they are different, and both have their uses) help catch so many errors at so little cost or time penalty that it's a mystery why good type systems fell so far out of fashion.
For example, recognise that the index for any array belongs to a subtype (possibly anonymous, but accessible as myArray'range, as in for i in myArray'range loop ... end loop; or subtype myIndextype is myArray'range; theIndex : myIndextype;) and you'll see that every buffer overflow vulnerability - or attack - ever written was simply a type error - or could have been, in Ada.
When you get a bug past the compiler, the first time your executable falls over with an Exception : Constraint_Error pointing spookily close to the mistake, you'll start to get a sense of the value of range-constrained types.
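There is no direct C++ equivalent of a range-constrained subtype, but here is a rough sketch of the same effect, borrowing the Speed example above, with Ada's implicit check written out as an explicit constructor check:

#include <stdexcept>

// A loose C++ analogue of: subtype Speed is Float range 0.0 .. Float'Last;
// What Ada's compiler and runtime check automatically must be coded by hand.
class Speed
{
public:
    explicit Speed(double value) : m_value(value)
    {
        if (value < 0.0)
            throw std::out_of_range("Speed out of range"); // ~ Constraint_Error
    }
    double value() const { return m_value; }
private:
    double m_value;
};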
To expand on this a bit, I'll refer to a couple more Q&As.
First note that the compiler you're probably using, GNAT, may not be strictly Ada compliant unless you add a couple of optional flags on the command line (or in the project file), as described in the first example. Recent versions have turned some of them on by default.
Here's an example of a subtype being declared, used, and going out of visible scope (in a declare block), where the range of the subtype is unknown until runtime. Unlike in many dynamically typed languages, this is both fast and safe, because the relevant storage is usually on the stack, if you're interested in the implementation details.
And an example of how not to use a declare block.
Here's an extreme example of not only declaring subtypes but telling the compiler how to pack them in storage. Common in embedded programming, either where space is tight (I have a complete digital watch in a processor with 1kbyte memory!) or for accessing specific bits in hardware registers. (Note that this example would be cleaner if updated to use Ada-2012 aspects.)
And this Q&A briefly covers the difference between new types and subtypes, for someone coming from Java. (I'm a little disappointed none of the Java experts managed an answer before it was closed, describing how they would handle the same issues)
The declaration of specific types has the following benefits:
Specific types prevent inappropriate mixing of abstractions. For instance, dividing the distance between the Moon and Earth in meters by the gross national product of Belgium expressed in Euros.
The name of the type more clearly documents the intended use of the type.
Use of ranged types clearly documents the valid values for instances of the type.
The paradigm in Ada is to model the problem in the solution. One aspect of this is to model the range, accuracy, and precision of values in the problem space with appropriate scalar type and subtype definitions.
Why should one do this? McCormick, in analyzing why students with C experience and no Ada experience were able to complete his real-time software course's project in Ada but not in C, found that the most important features of Ada were:
Modeling of scalar objects.
Strong typing.
Range constraints.
Enumeration types.
McCormick's paper
McCormick's site

Using float or double for own QML classes

I've created some components and helper methods using C++ and Qt to be used in my QML GUI. Some of those methods calculate GUI elements (size, transformations, scene graph item creation). The target is a 32-bit ARM using OpenGL / eglfs to render the output.
OpenGL uses float; QML uses real (defined as double).
What should I use in C++ so as not to lose any precision?
If you use qreal (double) then you shouldn't lose any precision. If you git grep float in qtbase.git, you can find out a bit more:
dist/changes-5.2.0-****************************************************************************
dist/changes-5.2.0-* Important Behavior Changes *
dist/changes-5.2.0-****************************************************************************
dist/changes-5.2.0-
dist/changes-5.2.0- - Qt is now compiled with qreal typedef'ed to double on all
dist/changes-5.2.0: platforms. qreal was a float on ARM chipsets before. This guarantees more
dist/changes-5.2.0- consistent behavior between all platforms Qt supports, but is binary
dist/changes-5.2.0- incompatible to Qt 5.1 on ARM. The old behavior can be restored by
dist/changes-5.2.0: passing -qreal float to configure.
It's interesting to note that within Qt, there are large pieces of code that have switched to using only float:
commit 51d40d7e9bdfc63c5109aef5b732aa2ba10f985a
Author: Sean Harmer <sean.harmer@kdab.com>
Date: Mon Aug 20 20:55:40 2012 +0100
Make gui/math3d classes use float rather than qreal
This corrects the mismatch between using floats for internal storage
and qreal in the API of QVector*D which leads to lots of implicit
casts between double and float.
This change also stops users from being surprised by the loss of
precision when using these classes on desktop platforms and removes
the need for the private constructors taking a dummy int as the final
argument.
The QMatrix4x4 and QQuaternion classes have been changed to use float
for their internal storage since these are meant to be used in
conjunction with the QVector*D classes. This is to prevent unexpected
loss of precision and to improve performance.
The on-disk format has also been changed from double to float thereby
reducing the storage required when streaming vectors and matrices. This
is potentially a large saving when working with complex 3D meshes etc.
This also has a significant performance improvement when passing
matrices to QOpenGLShaderProgram (and QGLShaderProgram) as we no
longer have to iterate and convert the data to floats. This is
an operation that could easily be needed many times per frame.
This change also opens the door for further optimisations of these
classes to be implemented by using SIMD intrinsics.
As it says, this was to avoid surprise loss of precision as the functions took qreal (double) but stored their data as float. If you use qreal everywhere in your own code, you won't have this problem.
The other reason was performance. I can't comment too much on that, but I'd say that it's more important for Qt to optimise for such things than it is for most users of Qt.
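As an illustration, here is a minimal sketch of exposing a qreal property from C++ to QML (the class and property names are made up for the example):

#include <QObject>

class ScaleHelper : public QObject
{
    Q_OBJECT
    // qreal is double in a default Qt 5.2+ build, matching QML's real,
    // so no precision is lost crossing the C++/QML boundary.
    Q_PROPERTY(qreal scale READ scale WRITE setScale NOTIFY scaleChanged)
public:
    explicit ScaleHelper(QObject *parent = nullptr) : QObject(parent) {}
    qreal scale() const { return m_scale; }
    void setScale(qreal scale)
    {
        if (m_scale == scale)
            return;
        m_scale = scale;
        emit scaleChanged();
    }
signals:
    void scaleChanged();
private:
    qreal m_scale = 1.0; // convert to float only at the OpenGL boundary
};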

Program extraction using native integers/words (not bignums) from Isabelle theory

This question comes from a context where Isabelle is used with formal software development in mind, more than pure maths theorization (and from a standalone developer's perspective).
It seems that, at best, SML programs generated from an Isabelle theory use SML's IntInf.int, not the native integer type Int.int, even if Code_Target_Int, Code_Binary_Nat or Code_Target_Nat is used. Investigation of these theories' sources seems to confirm that this is all they can do. Native platform integers may be required for multiple reasons, including efficiency and the case where the SML imperative program is to be optionally translated into an imperative language subset (e.g. C or Ada), which is relevant when the theory relies on the Imperative_HOL theory. The codegen.pdf document which comes with the Isabelle distribution did not help with this, except in suggesting the first of the options below.
Options may be:
Not using Isabelle's int and nat, and re-creating a new numeric type from scratch, then using the code_printing commands (with type_constructor and constant) to give it the native platform representation and operations (which implies including range limitations in the theory in some way): this must be tedious, although hopefully not error-prone, thanks to the formal environment. Note this does not seem feasible with Isabelle's own int and nat… it makes code generation fail, and nothing tells you which constants are missing in the code_printing command.
If the SML program is to be compiled directly (e.g. with MLton), tweaking the SML environment with a replacement IntInf structure: this may be unsafe or not feasible, and still requires embedding the range limitations in the theory, so the previous option may be better than this one.
Editing the generated program to change IntInf into Int: easy, but is it safe? (At least IntInf implements the same signature as Int does, so maybe it is.) As above, this requires specifying bounds in the theory in some way, which is acceptable.
Diving into Isabelle internals: surely unreasonable, even worse than the second option.
There exists a Word theory, but according to some readings, it seems not suited to this purpose.
Are there other known options not listed here? Are there any comments on the listed options?
If there are no ready-made solutions (I feel there are none at the time of writing), what hints or leads would be best to know about? (e.g. links to documents, mentions of concepts).
Update
Points #2 and #3 of the list may be OK (if they really are OK) only if there is a single integer type. If the program uses more than one, they are not applicable.
Directly generating native words from Isabelle int would be unsound, because your formalisation would not take overflow into account where it exists in reality.
It looks like the AFP entry Native_Word does what you want, though:
http://afp.sourceforge.net/entries/Native_Word.shtml

Main difference between OpenCL and OpenCL Embedded profile

Recently I have been seeing OpenCL EP support on some development boards like the ODROID-XU. One thing I know is that OpenCL EP is for ARM processors, but in what features does it differ from desktop OpenCL?
The main differences are enumerated below (as of OpenCL 1.2):
64-bit integer support is optional.
Support for 3D images is optional.
Support for 2D image array writes is optional. If the cles_khr_2d_image_array_writes extension is supported by the embedded profile, writes to 2D image arrays are supported.
There are some limitations on the available channel data types for images and image arrays (in particular, images with channel data types of CL_FLOAT and CL_HALF_FLOAT only support CL_FILTER_NEAREST sampler filter modes).
There are limitations on the sampler addressing modes available to images and image arrays.
There are some floating-point rounding changes that you may need to take into account.
Floating-point addition, subtraction, and multiplication will always be correctly rounded; other operations, such as division and square root, have varying accuracies. There are tons of other floating-point things to watch out for as well.
Conversions between integer data types and floating-point types are limited in precision (but there are exceptions).
In short, the main differences here are in floating-point accuracy. In other words, the embedded profile need not adhere to the IEEE 754 floating-point specification, which may be a problem if you are doing lots of numerical calculations which rely on it. Quoted from the specification:
This relaxation of the requirement to adhere to IEEE 754 requirements for basic floating-point operations, though extremely undesirable, is to provide flexibility for embedded devices that have a lot stricter requirements on hardware area budgets.
There is also something that is not mentioned in section 10 but is worth noting: while desktop profiles must have a compiler available to compile OpenCL kernels, embedded profiles need not provide one. This can be seen through the clGetDeviceInfo documentation, which states:
CL_DEVICE_COMPILER_AVAILABLE: Return type: cl_bool
Is CL_FALSE if the implementation does not have a compiler available to compile the program source. Is CL_TRUE if the compiler is available. This can be CL_FALSE for the embededed (sic) platform profile only.
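As an example, here is a minimal C++ sketch that queries both the profile string and compiler availability through the standard API (error handling omitted for brevity):

#include <CL/cl.h>
#include <cstdio>

void print_device_profile(cl_device_id device)
{
    // Returns "FULL_PROFILE" or "EMBEDDED_PROFILE".
    char profile[64] = {0};
    clGetDeviceInfo(device, CL_DEVICE_PROFILE,
                    sizeof(profile), profile, NULL);

    // CL_FALSE is only permitted for embedded profile devices.
    cl_bool has_compiler = CL_FALSE;
    clGetDeviceInfo(device, CL_DEVICE_COMPILER_AVAILABLE,
                    sizeof(has_compiler), &has_compiler, NULL);

    printf("profile: %s, online compiler: %s\n",
           profile, has_compiler ? "yes" : "no");
}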
For a complete and detailed list of the OpenCL Embedded Profile specification, fire up your PDF reader, download the OpenCL spec (whichever version you are developing for), and find the relevant section.
Section 10 of the standard answers your question. This section is entirely dedicated to the OCL embedded profile, and starts by enumerating the restrictions that this profile implies.

Do people use the Hungarian Naming Conventions in the real world? [closed]

Is it worth learning the convention or is it a bane to readability and maintainability?
Considering that most people who use Hungarian Notation are following the misunderstood version of it, I'd say it's pretty pointless.
If you want to use the original definition of it, it might make more sense, but other than that it is mostly syntactic sugar.
If you read the Wikipedia article on the subject, you'll find two conflicting notations, Systems Hungarian Notation and Apps Hungarian Notation.
The original, good, definition is the Apps Hungarian Notation, but most people use the Systems Hungarian Notation.
As an example of the two, consider prefixing variables with l for length, a for area and v for volume.
With such notation, the following expression makes sense:
int vBox = aBottom * lVerticalSide;
but this doesn't:
int aBottom = lSide1;
If you're mixing the prefixes, they're to be considered part of the equation, and volume = area * length is fine for a box, but copying a length value into an area variable should raise some red flags.
Unfortunately, the other notation is less useful, where people prefix the variable names with the type of the value, like this:
int iLength;
int iVolume;
int iArea;
Some people use n for number, or i for integer, f for float, s for string, etc.
The original prefix was meant to be used to spot problems in equations, but it has somehow devolved into making the code slightly easier to read, since you don't have to go look for the variable declaration. With today's smart editors, where you can simply hover over any variable to find the full type and not just an abbreviation for it, this type of hungarian notation has lost a lot of its meaning.
But, you should make up your own mind. All I can say is that I don't use either.
Edit: Just to add a short note: while I don't use Hungarian Notation, I do use a prefix, and it's the underscore. I prefix all private fields of classes with a _ and otherwise spell their names as I would a property, titlecase with the first letter uppercase.
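In C++, that convention might look something like this (a small illustrative sketch; names invented):

#include <string>

class Person
{
public:
    // accessor spelled like a property, first letter uppercase
    const std::string &FirstName() const { return _firstName; }
    void SetFirstName(const std::string &name) { _firstName = name; }

private:
    std::string _firstName; // leading underscore marks the private field
};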
The Hungarian Naming Convention can be useful when used correctly; unfortunately, it tends to be misused more often than not.
Read Joel Spolsky's article Making Wrong Code Look Wrong for appropriate perspective and justification.
Essentially, type-based Hungarian notation, where variables are prefixed with information about their type (e.g. whether an object is a string, a handle, an int, etc.), is mostly useless and generally just adds overhead with very little benefit. This, sadly, is the Hungarian notation most people are familiar with. However, the intent of Hungarian notation as envisioned is to add information on the "kind" of data the variable contains. This allows you to partition kinds of data from other kinds of data which shouldn't be allowed to be mixed together except, possibly, through some conversion process. For example, pixel-based coordinates vs. coordinates in other units, or unsafe user input versus data from safe sources, etc.
Look at it this way: if you find yourself spelunking through code to find out information about a variable, then you probably need to adjust your naming scheme to contain that information; this is the essence of the Hungarian convention.
Note that an alternative to Hungarian notation is to use more classes to show the intent of variable usage rather than relying on primitive types everywhere. For example, instead of having variable prefixes for unsafe user input, you can have a simple string wrapper class for unsafe user input and a separate wrapper class for safe data. This has the advantage, in strongly typed languages, of having the partitioning enforced by the compiler (even in less strongly typed languages you can usually add your own tripwire code), but it adds a not insignificant amount of overhead.
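A minimal C++ sketch of that wrapper-class approach (type and function names are invented for the example):

#include <string>

// Distinct wrapper types partition unsafe user input from sanitized data;
// the compiler then rejects any accidental mixing of the two kinds.
struct UnsafeString { std::string value; };
struct SafeString   { std::string value; };

// The only way to obtain a SafeString is through the sanitizer.
SafeString sanitize(const UnsafeString &input)
{
    std::string cleaned = input.value;
    // ... escape or strip dangerous characters here ...
    return SafeString{cleaned};
}

void render(const SafeString &html); // accepts only sanitized data

// render(UnsafeString{"<script>"}); // compile error: wrong kind of string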
I still use Hungarian Notation when it comes to UI elements, where several UI elements are related to a particular object/value, e.g.,
lblFirstName for the label object, txtFirstName for the text box. I definitely can't name them both "FirstName" even if that is the concern/responsibility of both objects.
How do others approach naming UI elements?
I think hungarian notation is an interesting footnote along the "path" to more readable code and, if done properly, is preferable to not doing it.
In saying that though, I'd rather do away with it, and instead of this:
int vBox = aBottom * lVerticalSide;
write this:
int boxVolume = bottomArea * verticalHeight;
It's 2008. We don't have 80 character fixed width screens anymore!
Also, if you're writing variable names which are much longer than that, you should be looking at refactoring into objects or functions anyway.
It is pointless (and distracting) but is in relatively heavy use at my company, at least for types like ints, strings, booleans, and doubles.
Things like sValue, iCount, dAmount or fAmount, and bFlag are everywhere.
Once upon a time there was a good reason for this convention. Now, it is a cancer.
Sorry to follow up with a question, but does prefixing interfaces with "I" qualify as hungarian notation? If that is the case, then yes, a lot of people are using it in the real world. If not, ignore this.
I see Hungarian Notation as a way to circumvent the capacity of our short-term memories. According to psychologists, we can store approximately seven plus-or-minus two chunks of information. The extra information added by a prefix helps us by providing more details about the meaning of an identifier even with no other context. In other words, we can guess what a variable is for without seeing how it is used or declared. This can be avoided by applying OO techniques such as encapsulation and the single responsibility principle.
I'm unaware of whether or not this has been studied empirically. I would hypothesize that the amount of effort increases dramatically when we try to understand classes with more than nine instance variables or methods with more than nine local variables.
When I see Hungarian discussion, I'm glad to see people thinking hard about how to make their code clearer and how to make mistakes more visible. That's exactly what we should all be doing!
But don't forget that you have some powerful tools at your disposal besides naming.
Extract Method: If your methods are getting so long that your variable declarations have scrolled off the top of the screen, consider making your methods smaller. (If you have too many methods, consider a new class.)
Strong typing: If you find that you are taking zip codes stored in an integer variable and assigning them to a shoe size integer variable, consider making a class for zip codes and a class for shoe sizes. Then your bug will be caught at compile time, instead of requiring careful inspection by a human. When I do this, I usually find a bunch of zip code- and shoe size-specific logic that I've peppered around my code, which I can then move into my new classes. Suddenly all my code gets clearer, simpler, and protected from certain classes of bugs. Wow.
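A minimal C++ sketch of that zip code / shoe size idea (names invented for the example):

#include <iostream>
#include <string>

// Wrapping each concept in its own type turns unit mix-ups into
// compile-time errors instead of bugs needing human inspection.
struct ZipCode  { std::string digits; };
struct ShoeSize { double eu; };

void ship_to(const ZipCode &zip)
{
    std::cout << "shipping to " << zip.digits << '\n';
}

int main()
{
    ZipCode  zip{"90210"};
    ShoeSize size{42.5};
    ship_to(zip);
    // ship_to(size); // compile error: ShoeSize is not a ZipCode
}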
To sum up: yes, think hard about how you use names in code to express your ideas clearly, but also look to the other powerful OO tools you can call on.
Isn't scope more important than type these days, e.g.
l for local
a for argument
m for member
g for global
etc
With modern techniques of refactoring old code, searching and replacing a symbol because you changed its type is tedious; the compiler will catch type changes, but it often will not catch incorrect use of scope. Sensible naming conventions help here.
I don't use a very strict sense of hungarian notation, but I do find myself using it sparingly for some common custom objects to help identify them, and I also tend to prefix GUI control objects with the type of control that they are. For example, labelFirstName, textFirstName, and buttonSubmit.
I use Hungarian Naming for UI elements like buttons, textboxes and labels. The main benefit is grouping in the Visual Studio Intellisense popup. If I want to access my labels, I simply start typing lbl... and Visual Studio will suggest all my labels, nicely grouped together.
However, after doing more and more Silverlight and WPF stuff, leveraging data binding, I don't even name all my controls anymore, since I don't have to reference them from code-behind (since there really isn't any code-behind anymore ;)
What's wrong is mixing standards.
What's right is making sure that everyone does the same thing.
int Box = iBottom * nVerticalSide
The original prefix was meant to be used to spot problems in equations, but it has somehow devolved into making the code slightly easier to read, since you don't have to go look for the variable declaration. With today's smart editors, where you can simply hover over any variable to find the full type and not just an abbreviation for it, this type of hungarian notation has lost a lot of its meaning.
I'm breaking the habit a little bit, but prefixing with the type can be useful in JavaScript, which doesn't have strong variable typing.
When using a dynamically typed language, I occasionally use Apps Hungarian. For statically typed languages I don't. See my explanation in the other thread.
Hungarian notation is pointless in type-safe languages. e.g. A common prefix you will see in old Microsoft code is "lpsz", which means "long pointer to a zero-terminated string". Since the early 1700s we haven't used segmented architectures where short and long pointers exist, the normal string representation in C++ is always zero-terminated, and the compiler is type-safe so it won't let us apply non-string operations to the string. Therefore none of this information is of any real use to a programmer - it's just more typing.
However, I use a similar idea: prefixes that clarify the usage of a variable.
The main ones are:
m = member
c = const
s = static
v = volatile
p = pointer (and pp=pointer to pointer, etc)
i = index or iterator
These can be combined, so a static member variable which is a pointer would be "mspName".
Where are these useful?
Where the usage is important, it is a good idea to constantly remind the programmer that a variable is (e.g.) a volatile or a pointer
Pointer dereferencing used to do my head in until I used the p prefix. Now it's really easy to know when you have an object (Orange) a pointer to an object (pOrange) or a pointer to a pointer to an object (ppOrange). To dereference an object, just put an asterisk in front of it for each p in its name. Case solved, no more deref bugs!
In constructors I usually find that a parameter name is identical to a member variable's name (e.g. size). I prefer "mSize = size;" to "size = theSize" or "this.size = size". It is also much safer: I don't accidentally write "size = 1" (setting the parameter) when I meant to say "mSize = 1" (setting the member).
In loops, my iterator variables are all meaningful names. Most programmers use "i" or "index" and then have to make up new meaningless names ("j", "index2") when they want an inner loop. I use a meaningful name with an i prefix (iHospital, iWard, iPatient) so I always know what an iterator is iterating.
In loops, you can mix several related variables by using the same base name with different prefixes: Orange orange = pOrange[iOrange]; This also means you don't make array indexing errors (pApple[i] looks ok, but write it as pApple[iOrange] and the error is immediately obvious).
Many programmers will use my system without knowing it: by adding a lengthy suffix like "Index" or "Ptr" - but there isn't any good reason to use a longer form than a single character IMHO, so I use "i" and "p". Less typing, more consistent, easier to read.
This is a simple system which adds meaningful and useful information to code, and eliminates the possibility of many simple but common programming mistakes.
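A short C++ sketch of this scheme in use (names invented for the example):

struct Patient { int id; };

class Ward
{
public:
    explicit Ward(int size) : mSize(size) {} // mSize vs size: no ambiguity

    void treat(Patient *pPatients, int nPatients)
    {
        // iPatient says what is being iterated, so pApple[iOrange]-style
        // indexing errors stand out immediately.
        for (int iPatient = 0; iPatient < nPatients; ++iPatient)
        {
            Patient *pPatient = &pPatients[iPatient];
            // ... treat *pPatient ...
        }
    }

private:
    int mSize;                  // m = member
    static const char *mspName; // m + s + p = static member pointer
};

const char *Ward::mspName = "General";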
I've been working for IBM for the past 6 months and I haven't seen it anywhere (thank god because I hate it.) I see either camelCase or c_style.
thisMethodIsPrettyCool()
this_method_is_pretty_cool()
It depends on your language and environment. As a rule I wouldn't use it, unless the development environment you're in makes it hard to find the type of the variable.
There are also two different types of Hungarian notation. See Joel's article. I can't find it (his names don't exactly make them easy to find); anyone have a link to the one I mean?
Edit: Wedge has the article I mean in his post.
The original form (the Right Hungarian Notation :)), where the prefix means the kind of value stored by the variable (i.e. length, quantity), is OK, but not necessary in all types of applications.
The popular form (the Wrong Hungarian Notation), where the prefix means the type (String, int), is useless in most modern programming languages.
Especially with meaningless names like strA. I can't understand why people use meaningless names with long prefixes which add nothing.
I use type-based prefixes (Systems HN) for components (e.g. editFirstName, lblStatus, etc.) as it makes autocomplete work better.
I sometimes use Apps HN for variables where the type information is insufficient, i.e. fpX indicates a fixed-point variable (int type, but one that can't be mixed and matched with a plain int), rawInput for user strings that haven't been validated, etc.
Being a PHP programmer, where the language is very loosely typed, I don't make a point of using it. However, I will occasionally identify something as an array or as an object, depending on the size of the system and the scope of the variable.
