Given a statement such as int myNum = 5 + (5 * 9) or any other mathematical operation, what portions of it, if any, are evaluated by the compiler, and which are performed at runtime? Obviously, constantly changing variables cannot be simplified at compile time, but certain operations might be. Does the compiler even bother to do any such simplification (such as turning the above statement into int myNum = 50;)? Does this even matter in terms of load, speed, or any other objective measurement?
Detail is key here; please expound upon your thoughts as much as possible.
I mean this to apply to any arithmetical operation.
Check out constant folding.
Constant folding is the process of simplifying constant expressions at compile time. Terms in constant expressions are typically simple literals, such as the integer 2, but can also be variables whose values are never modified, or variables explicitly marked as constant.
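As a quick C sketch (the variable names are just for illustration), an optimizing compiler will typically reduce the literal-only expression to a single constant, while anything depending on runtime input is left for runtime:

#include <stdio.h>

int main(void) {
    /* Only literals are involved: most compilers fold this to 50 at
       compile time, so no add or multiply is executed at runtime. */
    int myNum = 5 + (5 * 9);

    int x;
    if (scanf("%d", &x) != 1)
        return 1;

    /* x is only known at runtime, but the (5 * 9) subterm can still
       be folded to 45 before the runtime addition. */
    int y = x + (5 * 9);

    printf("%d %d\n", myNum, y);
    return 0;
}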
As written in the heading, my question is, why does TCP/IP use big endian encoding when transmitting data and not the alternative little-endian scheme?
RFC 1700 stated it must be so (and defined network byte order as big-endian).
The convention in the documentation of Internet Protocols is to
express numbers in decimal and to picture data in "big-endian" order
[COHEN]. That is, fields are described left to right, with the most
significant octet on the left and the least significant octet on the
right.
The reference they make is to Cohen, D., "On Holy Wars and a Plea for Peace", Computer. The abstract can be found at IEN-137 or on this IEEE page.
Summary:
Which way is chosen does not make too much
difference. It is more important to agree upon an order than which
order is agreed upon.
It concludes that either the big-endian or the little-endian scheme would have worked. Neither is inherently better or worse, and either can be used in place of the other as long as it is applied consistently across the whole system/protocol.
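For illustration, a small C sketch using the standard htonl/ntohl helpers shows what "network byte order" means in practice: the most significant octet is placed first in memory, regardless of the host's native order.

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl, ntohl */

int main(void) {
    uint32_t host = 0x0A0B0C0D;
    uint32_t net  = htonl(host);   /* convert to big-endian network order */
    const unsigned char *p = (const unsigned char *)&net;

    /* Prints "0a 0b 0c 0d" on any host: most significant octet first. */
    printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);

    /* Round trip back to host byte order. */
    printf("0x%08x\n", ntohl(net));
    return 0;
}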
I know that a GPU commonly has many compute units or CUDA cores, which makes it suitable for compute-intensive algorithms.
But why does it have so many more cores than a CPU? When rendering an image, which kinds of algorithms are parallelizable?
This huge number of compute units is necessary for fast processing of frames when applying shaders.
This type of computation is highly parallelizable because each shader is applied n times (often once per pixel), usually independently, across the same frame.
Note that each compute-unit is made of many shader-cores.
This is why shader support is a prerequisite for OpenCL: it implies dedicated cores for the rendering job, cores that can be "hijacked" to do other things => this is called GPGPU.
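To make the "once per pixel, independently" point concrete, here is a plain C sketch (the function and buffer names are hypothetical): every iteration of the loop reads and writes only its own pixel, which is exactly the shape of work a GPU can spread across thousands of cores, one work-item per pixel.

#include <stddef.h>

/* Brighten an 8-bit grayscale frame. No iteration depends on another,
   so on a GPU this loop body would become the kernel, launched once
   per pixel instead of iterated sequentially on one core. */
void brighten(const unsigned char *in, unsigned char *out,
              size_t width, size_t height, int delta) {
    for (size_t i = 0; i < width * height; i++) {
        int v = in[i] + delta;
        out[i] = (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v));
    }
}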
With the rise of Scala.React I was wondering whether Qt's Signals & Slots mechanism would become obsolete when using Qt as a GUI framework for a Scala program. How do the two approaches compare in each of the following categories?
ease of coding, regarding conciseness and clarity
expressiveness: Does any technique provide possibilities that the other one does not (like with WPF's coerce mechanism of dependency properties)?
compile time type safety, e.g. when using QtScript to define Signals & Slots
performance - But would it actually matter in a GUI?
Suppose Scala.React was already in a completed state and well documented: When would you prefer one approach over the other?
The following code compiles with the Intel/nVidia OpenCL compilers (both based on LLVM):
struct Foo { float2 bar; };
void baz() {
    global struct Foo* foo;
    ((float*)(&foo->bar))[1] = 1;
}
The AMD compiler says invalid type conversion, and accepts the code only with the global qualification as:
((global float*)(&foo->bar))[1] = 1;
Which of them is right according to the specification? (And should I report the non-conforming compiler(s) somewhere?)
The OpenCL spec allows nearly infinite flexibility when it comes to casting pointers. Basically, the rule is that you, the programmer, know what you are doing for your particular hardware. It doesn't address the specific issue of casting across memory spaces, so this should probably be considered undefined behavior. The differences between vendors are to be expected.
As the CL spec matures over time, you can expect issues like the above to be explicitly addressed, I'd guess.
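Until then, the conservative option is to keep the address space in the cast, which is the form the AMD compiler insists on. A minimal sketch (foo is made a kernel argument here just so the snippet stands on its own):

struct Foo { float2 bar; };

kernel void baz(global struct Foo* foo) {
    /* The cast preserves the global qualifier instead of dropping it. */
    ((global float*)(&foo->bar))[1] = 1.0f;
}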
We are developing software where the users have to assign priority numbers (integers) to jobs. Our basic question is whether a high priority should be expressed with a high or a low number (and vice versa for low priority). This is software to be used by other developers, and we want to do what is most natural to the majority.
We have looked around and found both systems are being used and we also have different opinions internally.
What do you prefer?
Sorry if the tag is not good, but it is the best I could find.
It really depends on how you present the value to the user. If you present it as a kind of ranking, then 1 would be "better"/higher ranked than 2. But if you instead present it as a kind of "weight", then 2 would be "heavier" than 1. So it's more a matter of how you present it (be sure to be consistent with your choice).
Personally, I have the feeling that the ranking system is more intuitive than a "weight" system. You normally think about the most important things first and want to put them at the front, or handle them "first". Thus starting with 1 for high priorities and going to larger numbers for lower priorities seems more natural.
I think that DarkDust does a good job of capturing the ambiguity that explains why people do it differently. My advice is, whichever convention you choose, annotate the word "priority" with an indication of the convention you chose. Say "priority rank" if lower values proceed first, and "priority weight" or "priority level" if higher values proceed first. Avoid "priority number", as you are implicitly talking about a quantity with a direction.
If you're adding numbers for more priority levels, then it's easier to add higher numbers at the end of the queue. If someone wants to make something a higher priority than an existing 1, then you have to shuffle the numbers. Of course, this is pretty implementation specific.
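Whatever convention you settle on, the ordering ends up encoded in a single comparison. A small C sketch (the field and function names are ours, not part of the question) showing that the two conventions differ only in which way the comparator points:

#include <stdio.h>
#include <stdlib.h>

struct job { const char *name; int priority; };

/* "Rank" convention: 1 is most urgent, so smaller numbers sort first. */
static int by_rank(const void *a, const void *b) {
    int pa = ((const struct job *)a)->priority;
    int pb = ((const struct job *)b)->priority;
    return (pa > pb) - (pa < pb);
}

/* "Weight" convention: bigger numbers sort first. */
static int by_weight(const void *a, const void *b) {
    return -by_rank(a, b);
}

int main(void) {
    struct job jobs[] = { {"backup", 3}, {"alert", 1}, {"report", 2} };
    qsort(jobs, 3, sizeof jobs[0], by_rank);   /* or by_weight for the other convention */
    for (int i = 0; i < 3; i++)
        printf("%s (priority %d)\n", jobs[i].name, jobs[i].priority);
    return 0;
}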