The IEEE has a long list of standards for almost every step within the software engineering process. How many of you have seen a reference to such standards in the documentation you read?
I think the idea of combining the suggestions of many veterans is a good one, but I have the feeling that not many projects ever cite even a single one of those documents. Maybe only the huge ones do?
Since the standards have to be paid for, I do not expect to ever see them cited in open source applications. My question is directed at those of you working with proprietary source code.
What exactly are you expecting? The average open source developer might not have access to IEEE standards, but the standards permeate the entire computer industry. For example, IEEE 754 specifies the standard for floating-point computation used by most modern systems, including every one of the numerous open source JavaScript implementations.
The reason the usage of such standards isn't very visible has nothing to do with open or closed source; it is a function of how low-level most IEEE standards are. Most programmers work at much higher levels than the IEEE standards, many of which are only of interest to hardware and driver developers. I expect the number of developers deterred from starting open source projects by a lack of access to the standards to be quite small.
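To make the IEEE 754 point concrete, here is a minimal Java sketch (the class name is mine) showing the binary64 rounding behaviour that any IEEE 754-compliant platform exhibits:

    public class Ieee754Demo {
        public static void main(String[] args) {
            // 0.1 and 0.2 have no exact binary64 representation, so the sum
            // rounds to the nearest representable double.
            double sum = 0.1 + 0.2;
            System.out.println(sum);        // prints 0.30000000000000004
            System.out.println(sum == 0.3); // prints false

            // The bit layout itself is defined by IEEE 754 and is directly accessible.
            System.out.println(Long.toHexString(Double.doubleToLongBits(0.1)));
        }
    }

You rarely see the standard cited anywhere, yet every one of those lines depends on it.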
Never. The larger the project, the larger the cost. The larger the cost, the more important it is to get it done and sell it. Standards are just a set of ideals--they don't sell software for you.
Recently, I was reading a paper titled "On the Effectiveness of Concern Metrics to Detect Code Smells: An Empirical Study".
I come from a non-English-speaking country, and I cannot quite understand what "concern metrics" means in the field of software engineering.
Isn't it referring to the relationship between objects?
I have some understanding of Java and C#, so perhaps someone could use Java to give me an example.
Thanks.
As the paper's abstract says: "While traditional metrics quantify properties of software modules, concern metrics quantify concern properties, such as scattering and tangling." Are you familiar with the concept of a cross-cutting concern? This question provides examples of concerns: Cross cutting concern example. Try reading papers on aspect-oriented programming (AOP) to pick up more of the concepts and better understand the relationship between concerns and code. The metrics are attempts to quantify, for instance, how scattered a concern (e.g. login) is across the source code.
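Since you asked for a Java example, here is a small, purely hypothetical sketch of a scattered concern. The class and method names are mine; the point is that the logging concern shows up in every module, tangled with the business logic, which is exactly the kind of property a concern metric tries to quantify:

    import java.util.logging.Logger;

    // The logging concern is "scattered": it appears in both classes,
    // "tangled" with their business logic instead of living in one module.
    class OrderService {
        private static final Logger LOG = Logger.getLogger("app");

        void placeOrder(String id) {
            LOG.info("placing order " + id);   // logging concern
            // ... business logic for orders ...
        }
    }

    class PaymentService {
        private static final Logger LOG = Logger.getLogger("app");

        void charge(String account, double amount) {
            LOG.info("charging " + account);   // same concern, different module
            // ... business logic for payments ...
        }
    }

A concern metric such as degree of scattering would report that the logging concern touches two of the two modules, whereas a traditional metric such as lines of code says nothing about that.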
How are versions numbered? What is the proper idea behind going to the next version, increments, etc.?
For example, I often see v0.1, v0.2, v0.34567, etc. I assume these are pieces of software that are in beta and haven't reached their first release yet.
But there is also a lot of software at versions like v0.10.11, etc. How do those work?
There is no specific standard - anybody can follow any scheme (or lack of scheme). It's up to corporate policy, development standards, or whatever guidelines you are under.
There are some popular standards out there. We try to follow the Semantic Versioning standard. The basic tenets include (quoted):
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes
MINOR version when you add functionality in a backwards-compatible manner
PATCH version when you make backwards-compatible bug fixes.
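As an illustration of those rules (this helper is my own toy sketch, not part of the SemVer specification), bumping a version according to the kind of change could look like this in Java:

    // Toy sketch of the MAJOR.MINOR.PATCH rules quoted above.
    public class SemVer {
        final int major, minor, patch;

        SemVer(int major, int minor, int patch) {
            this.major = major;
            this.minor = minor;
            this.patch = patch;
        }

        SemVer bumpMajor() { return new SemVer(major + 1, 0, 0); }         // incompatible API change
        SemVer bumpMinor() { return new SemVer(major, minor + 1, 0); }     // backwards-compatible feature
        SemVer bumpPatch() { return new SemVer(major, minor, patch + 1); } // backwards-compatible bug fix

        @Override
        public String toString() { return major + "." + minor + "." + patch; }

        public static void main(String[] args) {
            SemVer v = new SemVer(1, 4, 2);
            System.out.println(v.bumpPatch()); // 1.4.3
            System.out.println(v.bumpMinor()); // 1.5.0
            System.out.println(v.bumpMajor()); // 2.0.0
        }
    }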
Links:
Semantic Versioning: http://semver.org/
Other versioning schemes: http://en.wikipedia.org/wiki/Software_versioning#Schemes
There are competing standards, which saddens me greatly, especially in a world where git is popular.
SemVer, as mentioned, helps a great deal, but a lot of popular software doesn't use it.
Unfortunately, this doesn't help much when dealing with distros, which apply patches to specific versions of software, effectively changing its version.
The closest to "proper" I have seen yet is done by NixOS. Each version of their software is hashed, as are all patches applied, and each end result has a different hash, like any change in Git.
The resulting output will be different as well, uniquely identifying it against others.
Until that method is adopted, it's a free-for-all, and versioning is not a consistent thing.
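Purely to illustrate the content-addressing idea (this is not how Nix actually derives its store paths, and the names below are mine), here is a Java sketch that hashes the source plus any applied patches, so any change produces a different identifier:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // Illustration only: derive an identifier from the exact inputs that
    // produced a build, so builds with different patches never share an id.
    public class ContentId {
        static String idFor(String source, String... patches) throws Exception {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            sha.update(source.getBytes(StandardCharsets.UTF_8));
            for (String patch : patches) {
                sha.update(patch.getBytes(StandardCharsets.UTF_8));
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : sha.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(idFor("libfoo-1.2.3 sources"));
            System.out.println(idFor("libfoo-1.2.3 sources", "distro-security.patch"));
        }
    }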
I have some software to which we added support for an open common file format (.iwb). The government organisation that initiated that work has been cut in the cutbacks.
Now a not-for-profit organisation has taken up the mantle; however, it's going to cost, and once you pay you are not allowed to reveal the "materials" you gain.
http://www.imsglobal.org/iwbcff/jointheIWBCFFIalliance.cfm
I understand people need to be paid, but the whole not-sharing thing makes it feel like it's going against what a standard is meant for.
What's a good strategy:
Pay up and shut up (there might be plenty of closed standards that work in this way)
Fork the standard to an organisation that will not require people to pay to read it
Drop the file format
Stay behind the curve and reverse engineer the files
Any standard that is not freely accessible is no standard at all but is instead a proprietary format. I'd say either:
Petition them to open the standard up
Drop your support for it (and tell your customers why you have to)
Fork an earlier open version and create a free version of the standard
Paying for access to a standard sounds like a horrible idea because:
It encourages this behavior
It's likely to just be wasted money because others won't want to pay either, and a standard used by no one is not a standard.
Publish the last version you had access to.
State that you support that version of the standard.
I'm looking for the best way to interpret the standard (well, standardish) Ethernet PHY registers, to determine the speed that an Ethernet link is actually running at. (e.g. 10/100/1000 and full/half-duplex)
I daresay that this is to be found in the source of things like Linux, and I'm just off to look there now, but if anyone has a good reference I'd be interested.
What I'm interested in is if it actually linked and what it linked at, rather than the vast sea of possibilities that each end has advertised at the outset.
Thanks for the answer. It's intended as a language and platform agnostic question, because pretty much all MII/GMII Ethernet PHYs have the same basic registers. I happen to be on an embedded platform.
But I found a sensible sequence, which was good enough for my restricted application, by looking at various bits of Linux driver source. It's basically the following (sketched in code after the list):
Check for link-up in the basic status register (0x1)
If the link is up, check for negotiation-complete in the basic status register (0x1)
If negotiation is complete, check for 1G in the 1000M status register (0xa)
If you haven't got 1G, then you've got 100M. (That's not a general rule, but it applies in this application.)
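A Java-flavoured sketch of that sequence, with a hypothetical readPhyRegister standing in for whatever MDIO access your platform provides. The bit positions are my assumption based on the common clause-22 MII register layout (BMSR bit 2 = link status, bit 5 = autonegotiation complete, 1000BASE-T status bits 11/10 = link partner gigabit full/half duplex), so check them against your PHY's datasheet:

    // Sketch only: Mdio.readPhyRegister is a stand-in for a platform-specific
    // MDIO read; bit positions assume the usual clause-22 register layout.
    public class PhySpeed {

        interface Mdio { int readPhyRegister(int reg); }

        static String linkSpeed(Mdio phy) {
            int bmsr = phy.readPhyRegister(0x1);               // basic status register
            if ((bmsr & (1 << 2)) == 0) return "no link";      // bit 2: link status
            if ((bmsr & (1 << 5)) == 0) return "negotiating";  // bit 5: autoneg complete

            int stat1000 = phy.readPhyRegister(0xA);           // 1000M status register
            if ((stat1000 & (1 << 11)) != 0) return "1000 Mb/s full duplex";
            if ((stat1000 & (1 << 10)) != 0) return "1000 Mb/s half duplex";

            // As in the list above: in this restricted application, anything
            // that isn't gigabit is assumed to be 100 Mb/s.
            return "100 Mb/s";
        }
    }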
Maybe this was really a hardware question rather than a software one...
To help you look at how the Linux kernel does it: while each driver can do its own thing, there is a generic version that is supposed to be used when the chip follows the standard closely enough: Generic Media Independent Interface device support.
How many people actually write an SDD document before writing a single line of code?
How do you handle large CSCIs?
What standard do you use for SDD content?
What tailoring have you done?
I certainly have. Historically and on recent projects.
Years ago I worked in organisations where templates were everything.
Then I worked other places where the templates were looser or non-existent or didn't fit the projects I was working on.
Now the content of the software design is pretty much governed by what I need to describe to get the idea across to the audience.
"before writing a single line of code" there wouldn't be a a lot of detail. The documents I produce before I start coding are meant to get the idea of what we need to build across to the affected teams and senior management so they introduce high level architecture, functionality, technologies, risks and scope. Those last two are really important. The rest is to show other teams where you need to interface with them and to leave managers with a lingering notion that cool stuff is happening.
Most big software companies have their own practices. For example, Motorola has detailed documentation for every aspect of the software development process. There are standard templates for each type of document. Having strict standards lets them effectively maintain a huge number of documents and integrate them with different tools. Each document gets a tracking number from a special document-tracking system. They even have a system (the last time I saw it, it was at an early stage of development) for automatic requirements tracking - you can say which line of code relates to a given requirement/design guideline.
I would suppose that most people who write SDD documents and use terminology like CSCI are following a specific software development methodology and most likely are working for some serious government customer. They usually tend to take their preparations quite seriously, and the documents are ready and approved before any development starts.
In an Agile process, the development and the design document could be developed in parallel. This means that there will be plenty of refactoring to be done, but it usually delivers very good results in the end.
In more formal processes (like RUP), an SAD document is mostly created during the elaboration/prototyping phase, based on the team's research.