Priority values [closed] - math

We are developing software in which users have to assign priority numbers (integers) to jobs. Our basic question is whether a high priority should be represented by a high or a low number (and vice versa for a low priority). This software is to be used by other developers, and we want to do what feels most natural to the majority.
We have looked around and found that both systems are in use, and we also have differing opinions internally.
What do you prefer?
Sorry if the tag is not good, but it is the best I could find.

It really depends on how you present the value to the user. If you present it as a kind of ranking, then 1 would be "better"/higher ranked than 2. But if you instead present it as a kind of "weight", then 2 would be "heavier" than 1. So it's more a matter of how you present it (just be sure to be consistent with your choice).
But personally, I have the feeling that a ranking system is more intuitive than a "weight" system. You normally think about the most important things first, and you want to put them at the front, or handle them "first". Thus starting with 1 for high priorities and going to larger numbers for lower priorities seems more natural.

I think that DarkDust does a good job of capturing the ambiguity that explains why people do it differently. My advice is, whichever convention you choose, annotate the word "priority" with an indication of that convention. Say "priority rank" if lower values run first, and "priority weight" or "priority level" if higher values run first. Avoid "priority number", as you are implicitly talking about a quantity with a direction.

If you're adding numbers for more priority levels, then it's easier to add higher numbers at the end of the queue. If someone wants to make something a higher priority than an existing 1, then you have to shuffle the numbers. Of course, this is pretty implementation-specific.
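To make the trade-off concrete, here is a minimal sketch in C (the struct and the job names are made up for illustration): the chosen convention only surfaces in the comparator, so switching between rank-style and weight-style ordering is a one-line change.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical job record; "priority" is just an integer field. */
    struct job {
        const char *name;
        int priority;
    };

    /* Rank convention: smaller number runs first (1 before 2). */
    static int by_rank(const void *a, const void *b) {
        return ((const struct job *)a)->priority
             - ((const struct job *)b)->priority;
    }

    /* Weight convention: larger number runs first. Flipping the
       comparator is the only change needed. */
    static int by_weight(const void *a, const void *b) {
        return ((const struct job *)b)->priority
             - ((const struct job *)a)->priority;
    }

    int main(void) {
        struct job jobs[] = { {"backup", 3}, {"deploy", 1}, {"cleanup", 2} };
        size_t n = sizeof jobs / sizeof jobs[0];

        qsort(jobs, n, sizeof jobs[0], by_rank);   /* deploy, cleanup, backup */
        for (size_t i = 0; i < n; i++)
            printf("rank:   %s (%d)\n", jobs[i].name, jobs[i].priority);

        qsort(jobs, n, sizeof jobs[0], by_weight); /* backup, cleanup, deploy */
        for (size_t i = 0; i < n; i++)
            printf("weight: %s (%d)\n", jobs[i].name, jobs[i].priority);
        return 0;
    }

Whichever direction you pick, keeping the comparison in exactly one place like this makes the convention easy to document and, if necessary, to change later.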

Related

Memorizing code: pros, how do you do it? [closed]

I apologize that this isn't a technical question, but as a programming student, I find it difficult at times to remember various arguments (especially in BASH scripting), so my question to the pros is, do you use references and "cheat sheets" or is it all from memory?
We don't memorize the way we memorize the multiplication table. We memorize the way we learn a musical instrument: by constant use.
We memorize only one command, man, and bookmark only one URL, The Portable Operating System Interface (POSIX), which provides the cheat sheets for shell and kernel programming alike. We learn C from only one book, Kernighan & Ritchie's The C Programming Language, and Unix/network programming only from W. Richard Stevens' books.
Everything else is expendable :-)
Well, I guess there is no real way to memorize things; you need to really understand the commands. To be honest, a lot of people still need to look in the man pages to find all the options of a particular command. The first step is feeling comfortable with bash and its commands, and having a general appreciation of how bash commands work.
Then there is good old-fashioned repeated use, which will make you more familiar with the commands until they become second nature. So I guess the answer is to get a general appreciation, and to keep learning and applying what you learn on a regular basis.
I personally keep a cheat sheet of the commands that I use on my setup.
The more you code, the less you will need to look things up. Google is your friend.
Firstly, try to focus on remembering concepts rather than variable or function names.
Secondly, cheat sheets and references are a good choice for a beginner; you can even print them out and keep them nearby.

Why are standards often closed? [closed]

I understand why standards can be open while their implementations can be closed. However, I have a problem understanding the inverse. For example, the C++ standard is commercial, yet some of its implementations (e.g. gcc and clang) are open source. I believe PDF is like this as well.
More generally, would a closed standard not prohibit its broad use, which is one of the objectives of a standard? In reality, who benefits from what, and why are closed standards used?
In fact, the C++ standard isn't actually closed (its source is on GitHub …). You are confusing "closed" with "published commercially".
That difference stems from the unfortunate fact that maintaining and publishing standards documents simply costs money, and organisations such as ISO want to get paid for doing (part of) this work.
The situation is very similar to patent offices, and even more so to publishing in research: almost all research is open – for just about any definition of the word – yet the publications are more often than not hidden behind paywalls, because the publishing houses pursue a business model that charges per view (in addition to an upfront fee paid by the researchers).
On a personal note, I believe that this is a perverse situation, an ugly, anachronistic hold-over from a time before the Internet, when publishing a manuscript actually cost money. I've got some more things to say on this topic, but the moderators would censor them. ;-)

Why is network-byte-order defined to be big-endian? [closed]

As written in the heading, my question is: why does TCP/IP use big-endian encoding when transmitting data, and not the alternative little-endian scheme?
RFC 1700 stated that it must be so (and defined network byte order as big-endian).
The convention in the documentation of Internet Protocols is to express numbers in decimal and to picture data in "big-endian" order [COHEN]. That is, fields are described left to right, with the most significant octet on the left and the least significant octet on the right.
The reference they make is to Cohen, D., "On Holy Wars and a Plea for Peace", Computer. The abstract can be found at IEN-137 or on this IEEE page.
Summary: Which way is chosen does not make too much difference. It is more important to agree upon an order than which order is agreed upon.
It concludes that either the big-endian or the little-endian scheme would have been possible. Neither scheme is better or worse, and either can be used in place of the other as long as it is applied consistently across the whole system/protocol.
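As an illustration of what the convention means in practice, here is a small C sketch using the POSIX htonl/ntohl helpers (the value is an arbitrary example): whatever the host's native byte order, the serialized form puts the most significant octet first.

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>  /* htonl, ntohl (POSIX) */

    int main(void) {
        uint32_t host = 0x0A0B0C0DU;
        uint32_t wire = htonl(host);  /* big-endian on the wire */
        const unsigned char *p = (const unsigned char *)&wire;

        /* Prints "0a 0b 0c 0d" on any host: most significant octet
           first, per the network byte order convention. */
        printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);

        /* ntohl restores the host representation: 0x0a0b0c0d again. */
        printf("round trip: 0x%08x\n", (unsigned)ntohl(wire));
        return 0;
    }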

Which is better: camelCase, a hyphen "-", or an underscore "_" when naming classes & IDs in special cases? [closed]

I have been writing CSS for 3 years. I always name my classes & IDs in a way that is easy for other developers to understand. But now I am having a discussion with a colleague about how to name classes & IDs when special cases come up.
By special cases I mean the times when (it happens to everyone) we have no obvious name for a class or ID, so we name it after its parent, or from a combination of two names, etc., like .panel_slider or .panelSlider.
My colleague says that for these special cases it is better to put an underscore "_" or a hyphen "-" between the words, so it is easy to write and understand, like
.panel_slider or .panel-slider.
I say it is better to use camelCase, as we do in JS: it does not take the extra character of an underscore "_" or hyphen "-", and it is also easy to write and understand,
like .panelSlider.
So, my question is: which naming convention is better?
a) .panel_slider
b) .panel-slider
c) .panelSlider
I agree with #Ianzz: when you look at Twitter Bootstrap, you notice they use neither camelCase nor underscores (e.g. <div class="text-warning">). The guys at Twitter Bootstrap are all about best practices, so it doesn't hurt to follow their notation. But as everybody else said, there isn't really a rule.

What arithmetical operations, if any, are performed by the compiler? [closed]

Given an operation such as int myNum = 5 + (5 * 9), or any other mathematical operation, what portions of this statement, if any, are performed by the compiler, and which are performed at runtime? Obviously, variables whose values change cannot be simplified at compile time, but certain operations might be. Does the compiler even bother with such simplification (such as turning the above statement into int myNum = 50;)? Does this even matter in terms of load, speed, or any other objective measurement?
Detail is key here; please expound upon your thoughts as much as possible.
I mean this to apply to any arithmetical operation.
Check out constant folding.
Constant folding is the process of simplifying constant expressions at compile time. Terms in constant expressions are typically simple literals, such as the integer 2, but can also be variables whose values are never modified, or variables explicitly marked as constant.
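A short C sketch of the question's own example (the compiler flags mentioned in the comments are just one way to check this): the first initializer is built entirely from constants, so a typical compiler folds it to 50 at compile time, while the second depends on a runtime input, so only its constant subexpression can be folded.

    #include <stdio.h>

    int main(void) {
        /* All operands are compile-time constants, so a typical
           compiler folds the whole expression to 50 before code
           generation; inspecting the assembly (e.g. with gcc -O1 -S)
           shows an immediate 50, not a multiply and an add. */
        int myNum = 5 + (5 * 9);

        /* x is only known at runtime, so the addition must happen at
           runtime, but the constant subexpression 5 * 9 can still be
           folded to 45. */
        int x;
        if (scanf("%d", &x) != 1)
            return 1;
        int atRuntime = x + (5 * 9);

        printf("%d %d\n", myNum, atRuntime);
        return 0;
    }

At these sizes the difference is unmeasurable, but folding matters in aggregate: it removes work from every execution of the statement and enables further optimizations, at no runtime cost.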
