How to measure SW API? [closed] - software-design

I know how to measure module implementation using LOC, SLOC or any other metrics.
But I would like to know: is it possible to "measure" several different APIs in order to find "the best one"?
For example:
Nucleus RTOS:
STATUS NU_Create_Semaphore(NU_SEMAPHORE *semaphore, CHAR *name, UNSIGNED initial_count, OPTION suspend_type);
Posix: int sem_init(sem_t *sem, int pshared, unsigned int value);
For example, we might find that creating a semaphore in Nucleus RTOS uses more stack than the POSIX variant. Can we then conclude that POSIX is the better API in this case, if we use "stack size" as the measure?
Or is this analysis just stupid?
And if the above is not stupid, then I wonder further: it is "easy" to compare APIs that cover the same functions (create semaphore, create threads, etc.), but how do you measure APIs that provide the same functionality through functions that are not "equal"?
I can imagine writing a test with the same functionality against each API, something like the sketch below.
After comparing several such tests by different metrics (memory consumption, LOC, SLOC, etc.), can I conclude that one API is better than the other?
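For concreteness, here is a minimal sketch of what the POSIX half of one such test might look like (the Nucleus half would do the same work through NU_Create_Semaphore and its companions); this is an illustration, not a full benchmark:

    #include <semaphore.h>
    #include <stdio.h>

    int main(void) {
        sem_t sem;
        /* create a semaphore with initial count 1; pshared = 0 means
           it is visible only to threads of this process */
        if (sem_init(&sem, 0, 1) != 0) {
            perror("sem_init");
            return 1;
        }
        sem_wait(&sem);     /* acquire */
        sem_post(&sem);     /* release */
        sem_destroy(&sem);
        return 0;
    }

Compiling both variants and comparing stack high-water marks, code size, and LOC would give you the kind of numbers the question is after.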
TIA

Find the APIs that provide the capabilities/functions you need.
From those, use the ones that are the simplest.
Long term, simplicity and maintainability are far more important than performance, especially if this API is not from an app-local library but a remote service.

It really depends on your judging criteria.
The best option is to list all the available APIs for the functionality you need,
then fix judging criteria based on your requirements.
These can be time complexity (the order of the functions), space complexity, ease of use, your own understanding of the API, reusability in other modules, or whatever your application requires; make the judgement based on them, as in the sketch below.
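For illustration only, here is a toy weighted-scoring sketch of that judgement; every weight and rating below is made up:

    #include <stdio.h>

    int main(void) {
        const char *apis[] = { "Nucleus", "POSIX" };
        /* criteria: ease of use, memory footprint, reusability */
        const double weights[] = { 0.5, 0.3, 0.2 };   /* hypothetical priorities */
        const double scores[2][3] = {
            { 3, 4, 4 },   /* Nucleus, rated 1-5 per criterion (invented) */
            { 4, 5, 3 },   /* POSIX */
        };
        for (int a = 0; a < 2; a++) {
            double total = 0;
            for (int c = 0; c < 3; c++)
                total += weights[c] * scores[a][c];
            printf("%s: %.2f\n", apis[a], total);
        }
        return 0;
    }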

Related

Is it good to have pointers in programming languages such as Golang, C, or C++? [closed]

Most modern programming languages, such as Java, do not support pointers.
But in Golang, Google introduced pointers again.
So I just want to understand: how do pointers affect a programming language?
Is there any kind of security threat because of pointers?
If pointers were dropped for security reasons, then why do we have the world's most secure systems on Linux and UNIX (both built in C)?
Technically, all languages use pointers. When you create an instance of an object in Java, C#, or JavaScript and pass it to a method, you are actually passing a pointer to the piece of memory that contains that object, and it is through that pointer that you manipulate the object's data. Imagine a language where you could not pass by reference; you wouldn't be able to do much, now would you? Whenever you pass by reference in any language, you are, in the most basic of terms, using glorified pointers.
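A minimal C sketch of that point: what a managed language does implicitly is what C spells out explicitly, handing the callee the address of the caller's data:

    #include <stdio.h>

    struct point { int x, y; };

    /* the callee receives the address of the caller's data and
       mutates it in place, just like "passing an object" */
    static void move_right(struct point *p) {
        p->x += 1;      /* changes the caller's struct through the pointer */
    }

    int main(void) {
        struct point p = { 0, 0 };
        move_right(&p);
        printf("%d %d\n", p.x, p.y);    /* prints: 1 0 */
        return 0;
    }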
What you probably mean, however, is "why expose pointers to people; they are so much more complicated", or something of that sort... And the answer is probably speed and tradition. Some of the fastest languages we have are C and C++... They've been around for a relatively long time. They are tried and true, and people know how to use them. Most software is based off of them in one way or another. Most operating systems are written in C or some variation thereof. Even other programming languages.
As for your Go example, we have already had a question on that.
C/C++ pointers support arithmetic; you can operate on the address itself.
Example:

    void f(int *a) {
        a++;    /* pointer arithmetic: a now points past the int the caller passed in */
    }

Operating directly on addresses like this is dangerous.
Go pointers, by contrast, do not support arithmetic.
So the two share the same name and the same meaning, "pointer", but they differ in how they can be used.
The 'modern' comparison of Java and C# to C++ is the worst thing a programmer can make. Java and C# are managed languages, which means the memory is not managed by the programmer at all (and managing memory is the main purpose of pointers). C++ is an unmanaged language, and that is why C++ is so much faster than any managed language. Almost every modern PC game you will ever see is made using C++ because it runs faster than any managed language.
Pointers make call-by-reference easier, but they are more vulnerable to breaches, because through pointers we can directly access memory locations, and that can be a security concern.
Those problems can be coded around defensively, but that requires users to be knowledgeable and diligent.
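A classic illustration (a deliberately broken sketch, not production code):

    #include <string.h>

    /* nothing stops the copy from running past the end of buf,
       clobbering whatever sits next to it on the stack --
       possibly a saved return address */
    void vulnerable(const char *input) {
        char buf[8];
        strcpy(buf, input);   /* no bounds check: more than 7 chars overflow buf */
    }

This is exactly the class of bug that managed languages rule out by not exposing raw pointers.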

Differences in safety-critical SW development [closed]

When developing safety-critical software under quality standards (e.g., IEC 61508 or DO-178C), developers have to take care of many things. I know that the verification in each development step is quite time-consuming and expensive. Moreover, I know that restricted subsets of programming languages are used.
But I am interested in the concrete differences from a "normal" SW development process. I mean, in the standard V-Model, verification and testing should also be part of each development step. What do I have to consider when gathering requirements? What do I have to consider in SW design?
It isn't so much a change in the "V-Model" that helps verify a critical system; it's what you do at each step along the way.
For example you may prefer to plan your development using waterfall in order to have verification steps and controlled transition periods. This has the benefit of staying in line with any government regulations that may be in place.
While developing, it is common to use a limited subset of assemblies (APIs) in order to prevent developers from performing dangerous operations. This type of restriction can also ensure that developers use the APIs correctly, for example by making clean-up of objects a requirement.
Once the product has been developed, you'll likely have gone through all of the testing phases. It is common in industry to develop test fixtures in order to verify the system and generate data that proves to the government or customers that your system does what it says.
In general, this topic is very deep. You did mention standards; one more is the ISO 2008 standard. I think what you should keep in mind is that the process doesn't change much (the life-cycle model stays generally the same), but what you do at each step of the model will change depending on the project. You can take classes on project management; in fact, it is a track and sometimes a full degree program. So there is plenty to learn about process and how to manage different projects.
Googling system critical projects and project management will likely generate a trove of knowledge.
Hope that helps shed some light on the subject.
EDIT: Finding requirements, as in a waterfall process, is very time-consuming. It involves understanding the customer's needs and goals, of course. In general you have to spend a lot of time in this area, both for regulatory reasons and for the software architecture. It's not really a different technique... Be explicit; understanding the requirements is the most critical part. "The system shall recover from 90-second timeouts within 5 seconds of resetting." <- it's like all other requirements in SW engineering: explicit and testable, objective not subjective. Think Grammar-Nazi level of scrutiny.
One example of a safety-critical system is Lockheed's F-35... The system requirements manuals are huge, and the process to make a change requires meetings and quite a bit of paperwork.

How easy is it to fake asynchronicity? [closed]

Clearly I don't understand the big deal about "asynchronous" environments (such as NodeJS) versus "synchronous" ones.
Let's say you're trapped in a synchronous environment. Can't your main loop just say:
while (1) {
    events << check_for_stuff_from_the_outside_world();
    for e in events { e.process() }
}
What's wrong with doing that, how is that not an asynchronous environment, how are asynchronous environments different?
Yes, this is more or less what Node.js does, except that instead of check_for_stuff_from_the_outside_world(), it should really be check_for_stuff_from_the_outside_world_plus_follow_on_stuff_from_previous_events(); and all of your events must also be written in such a way that, instead of completing their processing, they simply do a chunk of their work and then call register_stuff_for_follow_up(follow_on_event). In other words, you actually have to write all of your code to interact with this event framework; it can't be done "transparently", with only the main loop having to worry about it.
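To make that concrete, here is a bare-bones sketch in C of such a loop; the names (register_follow_up, step1, step2) are hypothetical, not any real framework's API:

    #include <stdio.h>

    typedef void (*callback)(void *state);

    #define MAX_EVENTS 64
    static struct { callback fn; void *state; } queue[MAX_EVENTS];
    static int head, tail;

    /* handlers don't block; they register the next chunk of work */
    static void register_follow_up(callback fn, void *state) {
        queue[tail].fn = fn;
        queue[tail].state = state;
        tail = (tail + 1) % MAX_EVENTS;
    }

    static void step2(void *state) {
        printf("finished request %d\n", *(int *)state);
    }

    static void step1(void *state) {
        printf("started request %d, pretending to wait on I/O...\n", *(int *)state);
        register_follow_up(step2, state);   /* resume later instead of blocking */
    }

    int main(void) {
        int id = 1;
        register_follow_up(step1, &id);
        while (head != tail) {              /* the event loop itself */
            callback fn = queue[head].fn;
            void *state = queue[head].state;
            head = (head + 1) % MAX_EVENTS;
            fn(state);
        }
        return 0;
    }

Note that step1 had to be split in two by hand; that is the "all of your code must interact with the framework" cost described above.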
That's a big part of why Node.js is JavaScript; most languages have pre-existing standard libraries (for I/O and so on) that aren't built on top of asynchronous frameworks. JavaScript is relatively unusual in expecting each hosting environment to supply a library that's appropriate for its own purposes (e.g., the "standard library" of browser JS might have almost nothing in common with the "standard library" of a command-line JS environment such as SpiderMonkey), which gave Node.js the flexibility to design libraries that worked together with its event loop.
Take a look at the example on the Wikipedia page:
https://en.wikipedia.org/wiki/Nodejs#Examples
Notice how the code is really focused on the functionality of the server - what it should do. Node.js basically says, "give me a function for what you want to do when stuff arrives from the network, and we'll call it when stuff arrives from the network", so you're relieved of having to write all the code that deals with managing network connections, etc.
If you've ever written network code by hand, you know that you end up writing the same stuff over and over again, but it's also non-trivial code (in both size and complexity) if you're trying to make it professional quality, robust, highly performant, and scalable... (This is the hidden complexity of check_for_stuff_from_the_outside_world() that everyone keeps referring to.) So Node.js takes on the responsibility of doing all of that for you (including handling the HTTP protocol, if you're using HTTP), and you only need to write your server logic.
So it's not that asynchronous is better, per se. It just happens to be the natural model for the functionality they're providing.
You'll see the asynchronous model come up in a lot of other places too: event-based programming (which is used in a lot of GUI stuff), RPC servers (e.g., Thrift), REST servers, just to name a few... and of course, asynchronous I/O. ;)

How do we deal with prototyping in Scrum? [closed]

We are new to Scrum and part way through the first sprint we have realised that one of the team members (a developer) needs to do some investigation into how navigation should work (from a user perspective) in the application.
So at the end of this investigation we should have a proposal or prototype of how something should work, but it won't have actually been coded into the application.
So my question is: how should we deal with something like this in terms of sprint planning? I don't really see it as being a user story, but what is it, and how is it treated in Scrum? Does something need to be added to the planning board for the investigation?
Thanks
Paul.
Try to treat prototyping like any other requirement as much as possible. Think about what you want to achieve, create a user story, define one or several tasks, and estimate them during sprint planning. Think of the development team as the user in this case. Definitely have it on the planning board and track progress in the daily Scrum meetings. If you have problems estimating the tasks, define them as "time-boxed", i.e. with a fixed time budget, to prevent "endless" work without results.
Although you already got a solution, I just wanted to add something here.
Such prototyping/research work is termed a "spike" in the Agile world.
Here, the team dedicates some members to such spikes, but only so much as to understand the feasibility of the user story and be in a position to help the entire team estimate it.
Scrum is an organizational process rather than a development model like prototype-driven development. This means that different X-driven development models, like TDD or even prototype-driven development (PDD), can easily be incorporated.
To incorporate PDD into Scrum, one can set several milestones that are prototype versions. Scrum can then be used normally, treating each prototype as a whole new project. This works well for a complex prototype.
However, if creating a prototype is very easy and a single person can do it in one or two sprints' worth of time, it might be useful to retain a prototype specialist who, much like an application specialist, monitors the work of the rest of the team to check consistency with the ultimate goal. Unlike the application specialist, though, a prototype specialist can iteratively provide new prototypes, guiding the work of the rest of the team in a practical manner.

ASP.NET application performance [closed]

I have an ASP.NET 4.0 application with an .mdf file in my App_Data folder where I store some data. There is a "User" table with 15 fields and an "Answers" table with about 30 fields. In most scenarios on my website, the user retrieves some data from the "User" table and writes some data to the "Answers" table.
I want to test the performance of my application when about 10,000 users use the system. What will happen if 10,000 users log in and use the system at the same time, and how will performance be affected? And what is the best practice for testing the performance of ASP.NET pages in general?
Any help will be appreciated.
Thanks in advance.
It reads like performance testing/engineering is not your core discipline. I would recommend hiring someone to either run this effort or assist you with it. Performance testing is a specialized development practice with specific requirement sets, tool expertise and analytical methods. It takes quite a while to become effective in the discipline even in the best case conditions.
In short, you begin with your load profile. You progress to definitions of the business process in your load profile. You then select a tool that can exercise the interfaces appropriately. You will need to set a defined initial condition for your testing efforts. You will need to set specific, objective measures to determine system performance related to your requirements. Here's a document which can provide some insight as a benchmark on the level of effort often required, http://www.tpc.org/tpcc/spec/tpcc_current.pdf
Something which disturbs me greatly is your use case of "at the same time," which is a practical impossibility for systems where the user agent is not synchronized to a clock tick. Users can be close, concurrent within a defined window, but true simultaneity is exceedingly rare.
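To illustrate that last point, here is a small sketch (hypothetical, using POSIX threads) that releases several simulated "users" together and prints their actual start times; the timestamps will spread across a window rather than coincide:

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define USERS 8

    static void *user(void *arg) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);   /* when did this "user" really start? */
        printf("user %ld started at %lld.%09ld\n",
               (long)(intptr_t)arg, (long long)ts.tv_sec, ts.tv_nsec);
        return NULL;
    }

    int main(void) {
        pthread_t t[USERS];
        for (intptr_t i = 0; i < USERS; i++)
            pthread_create(&t[i], NULL, user, (void *)i);
        for (int i = 0; i < USERS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }

Real load tools work the same way: they aim for concurrency within a defined window, not true simultaneity.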
