How easy is it to fake asynchronicity? [closed]

Clearly I don't understand the big deal about "asynchronous" environments (such as NodeJS) versus "synchronous" ones.
Let's say you're trapped in a synchronous environment. Can't your main loop just say:
while(1) {
events << check_for_stuff_from_the_outside_world();
for e in events {e.process()}
}
What's wrong with doing that, how is that not an asynchronous environment, how are asynchronous environments different?

Yes, this is more or less what Node.js does, except that instead of check_for_stuff_from_the_outside_world(), it should really be check_for_stuff_from_the_outside_world_plus_follow_on_stuff_from_previous_events(); and all of your events must also be written in such a way that, instead of completing their processing, they simply do a chunk of their work and then call register_stuff_for_follow_up(follow_on_event). In other words, you actually have to write all of your code to interact with this event framework; it can't be done "transparently", with only the main loop having to worry about it.
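To make the "do a chunk of work, then register follow-on stuff" idea concrete, here is a minimal TypeScript sketch; every name in it (registerFollowUp, handleUploadChunk, processOneChunk, checkForStuffFromOutsideWorld) is hypothetical and not part of Node.js or any real framework:

// Hypothetical event-loop-friendly handler: instead of blocking until all the
// work is finished, it does one small piece and registers a follow-on event.
type Task = () => void;
const pending: Task[] = [];
function registerFollowUp(task: Task): void { pending.push(task); }

function processOneChunk(i: number): void {
  // hypothetical unit of work, small enough not to stall the loop
}

function handleUploadChunk(chunkIndex: number, totalChunks: number): void {
  processOneChunk(chunkIndex);                 // do one small piece of work...
  if (chunkIndex + 1 < totalChunks) {
    // ...then yield back to the loop and ask to be called again later
    registerFollowUp(() => handleUploadChunk(chunkIndex + 1, totalChunks));
  }
}

// The main loop from the question then becomes roughly:
// while (true) {
//   const events = checkForStuffFromOutsideWorld().concat(pending.splice(0));
//   for (const e of events) e();
// }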
That's a big part of why Node.js is JavaScript; most languages have pre-existing standard libraries (for I/O and so on) that aren't built on top of asynchronous frameworks. JavaScript is relatively unusual in expecting each hosting environment to supply a library that's appropriate for its own purposes (e.g., the "standard library" of browser JS might have almost nothing in common with the "standard library" of a command-line JS environment such as SpiderMonkey), which gave Node.js the flexibility to design libraries that worked together with its event loop.

Take a look at the example on the Wikipedia page:
https://en.wikipedia.org/wiki/Nodejs#Examples
Notice how the code is really focused on the functionality of the server - what it should do. Node.js basically says, "give me a function for what you want to do when stuff arrives from the network, and we'll call it when stuff arrives from the network", so you're relieved of having to write all the code to deal with managing network connections, etc.
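For reference, the Wikipedia example amounts to something like the following (a minimal sketch using Node's built-in http module; the port and response text are arbitrary):

// You supply the callback; Node.js owns the sockets, the event loop, and the
// HTTP parsing, and invokes your function once per incoming request.
import * as http from "http";

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello World\n");
});

server.listen(8000); // arbitrary port for this sketch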
If you've ever written network code by hand, you know that you end up writing the same stuff over and over again, but it's also non-trivial code (in both size and complexity) if you're trying to make it professional quality, robust, highly performant, and scalable... (This is the hidden complexity of check_for_stuff_from_the_outside_world() that everyone keeps referring to.) So Node.js takes responsibility for doing all of that for you (including handling the HTTP protocol, if you're using HTTP) and you only need to write your server logic.
So it's not that asynchronous is better, per se. It just happens to be the natural model to fit the functionality they're providing.
You'll see the asynchronous model come up in a lot of other places too: event-based programming (which is used in a lot of GUI stuff), RPC servers (e.g., Thrift), REST servers, just to name a few... and of course, asynchronous I/O. ;)

Related

Is it good to have pointers in programming languages such as Golang, C, or C++? [closed]

Most modern programming languages, such as Java, do not support pointers.
But in Go, Google introduced pointers again.
So I just want to understand: how do pointers affect a programming language?
Is there any kind of security threat because of pointers?
If this is about security, then why are the world's most secure systems built on Linux and UNIX (both written in C)?
Technically, all languages use pointers. When you create an instance of an object in Java or C# or Javascript and pass it to a method you are actually passing a pointer to the piece of memory that contains that object by which you will manipulate the data on that piece of memory. Imagine a language where you could not pass by reference. You wouldn't be able to do much, now would you? Whenever you pass by reference in any language, you're - in the most basic of terms - using glorified pointers.
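A quick TypeScript illustration of that point (any reference-based language would behave the same way): the callee receives a reference to the same underlying object, so its mutation is visible to the caller - the "glorified pointer" behaviour described above.

// Caller and callee both refer to the same piece of memory.
interface Account { balance: number; }

function deposit(account: Account, amount: number): void {
  account.balance += amount; // mutates the object the caller passed in
}

const acct: Account = { balance: 100 };
deposit(acct, 50);
console.log(acct.balance); // prints 150: the caller sees the change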
What you probably mean, however, is "why expose pointers to people; they are so much more complicated", or something of that sort... And the answer is probably speed and tradition. Some of the fastest languages we have are C and C++... They've been around for a relatively long time. They are tried and true, and people know how to use them. Most software is based off of them in one way or another. Most operating systems are written in C or some variation thereof. Even other programming languages.
As for your Go example, we have already had a question on that.
In C/C++, a pointer can be operated on directly.
Example
void f(int* a) {
    a++;  /* pointer arithmetic: a now points to the next int */
}
Direct manipulation like this is dangerous.
A Go pointer, by contrast, cannot be operated on this way; Go has no pointer arithmetic.
So both languages use the same name, "pointer", and mean the same thing, but they differ in how pointers can be used.
The 'modern' comparison of Java and C# to C++ is the worst comparison a programmer can make. Java and C# are managed languages, which means the memory is not managed by the programmer at all (and manual memory management is one of the main uses of pointers). C++ is an unmanaged language, and that is why C++ is so much faster than any managed language. Almost every modern PC game you will ever see is made using C++ because it runs faster than any managed language.
Pointers make call-by-reference easier, but they are more vulnerable to misuse: through pointers we can directly access memory locations, and thus they can be a security concern.
Those problems can be prevented with defensive coding, but that requires programmers to be knowledgeable and diligent.

Coding speed, programming speed tradeoff: C++ or Java with native code [closed]

I wanted to explore neural nets by writing a program to play with different types of networks. So far I've written a basic perceptron in C++. As I understand it, neural networks can require a lot of computing power to do even fairly minor tasks, so optimization is a concern, or at least I want to take the idea seriously without going to an extreme like GPU programming.
I'm comfortable programming in Java, less so with C++, but would like to get more experience with it anyway.
My Question:
Given that I could write the main program faster in Java but am concerned about speed, does it make more sense to write the main program in Java and the more intensive parts in C++, or to write the entire program in C++? I have no user interface requirements that would favor one language over the other (i.e. I'm not planning to stick it in a web app).
No one else would be using this, as there are already more professional open source versions available (FANN/Encog). This is purely for my entertainment/learning. I would like to learn more C++, so I want to write at least some of it with that (though if someone feels it makes more sense to write it completely in Java, I would be interested in knowing why.)
This is a fairly subjective question and isn't really a good fit for SO, but I'll give it a shot.
Depending on your implementation, you may find yourself far more limited by things like bandwidth and latency. If I were you, I'd write it as much as possible in a language you were comfortable with, profile it, and come back and rewrite the slow parts in something faster, with optimization in mind.
This is the procedure of optimization I'd follow for projects expected to be CPU bound in general.
Since you're doing it to grasp C++ better, it might benefit you to use C++ instead of java for the whole thing, and optimize parts of it later, but a neural net might not be a very "beginner friendly" project. It's possible you'd end up with a train wreck caused by a design better suited for another language. You'd still probably learn from it, but you wouldn't be able to use it after you finished.
You're going to spend most of your time doing easily-vectorised floating-point arithmetic. You really shouldn't use Java for this; it has no facilities for vector arithmetic and its optimiser does not (or, perhaps, did not?) try to vectorise code itself. A modern C++ compiler will do a much better job.

Is there a MDSD/MDA success story for a real world application? [closed]

I am currently facing a situation where I, as an advocate of test-driven development, have to compete with an advocate of model-driven software development (MDSD) / model-driven architecture (MDA).
In my opinion, code generation is a valuable tool in my toolbox and I make heavy use of templates and automation when needed. I also create diagrams in UML when I think this helps to understand the inner workings or to discuss architecture at the whiteboard. However, I strongly doubt that creating software via UML (creating statecharts and sequence diagrams to produce working code, not only skeletons of code) is more efficient for multi-tier applications (database layer, business/domain layer and a GUI, maybe even distributed). It seems to me that when it comes to MDSD, the CASE tooling suddenly isn't just a tool anymore but the thing to satisfy: as I see it, on the one hand MDSD developers profit from the higher abstraction UML gives them, but at the same time they struggle with modifying the code generator/template/engine to fulfill needs that might be easily implemented (and tested) with another tool out of their toolbox (Visual Studio, Eclipse, ...).
All this makes me wonder whether there has been a success story (success being that the product was rolled out in time, within budget, with only a few bugs, and with parts of the software reused later on) for a real-world application which fulfills these criteria and has been developed using a strict model-driven approach:
it has nothing to do with the Object Management Group (OMG) or with consultants related to MDSD/MDA/SOA
the application is not related to Business Process Modelling and is not a CASE tool itself
the application is actively used by end users
it has at least three tiers, including a user interface which goes beyond displaying raw table values and is not one of the common MDA/MDSD examples ("how to model a coffee machine, traffic light, dishwasher").
A tiny, but nevertheless useful testimonial on the use of MDSD has been posted on the Model Driven Software Network:
http://www.modeldrivensoftware.net/profiles/blogs/viva-mdd-follow-up-building-a?xg_source=activity
It is a relatively small app being developed, but still a good example of MDSD in action.
More success stories are listed at Metacase's site (http://www.metacase.com/cases/index.html). Metacase sells MetaEdit+, which implements DSM (Domain-Specific Modeling). DSM is just a form of MDSD.
I am also developing ABSE (Atom-Based Software Engineering), another form of MDSD, very close to DSM. ABSE is outlined at http://www.abse.info.
I used MDA and code generation on an embedded system project using 4 processors connected via CAN. We had over 20 axes of motion and many, many sensors. The system was highly robust and maintainable as the mechanical components were evaluated and modified.
We worked in the models and generated code so the models were always up-to-date. We did a careful domain analysis to achieve subject matter isolation. The motor control required very high performance and so was not modeled or generated. Our network drivers were also hand-coded, and we wrote interfaces that allowed bridge services to send events to any service anywhere in the system as needed (although this was tightly controlled so as to minimize interprocessor dependencies).
Using the method took a bit of discipline, but having working models was great because they can be reviewed by non-software types.
Version control and differencing of the models was a bit of a challenge but we had a small, localized team so we were able to avoid merge issues.
The good people at Pathfinder Solutions (our tool vendor) can help mentor you through the project.
You could also take a look at the slides from previous Code Generation conferences. Several of these talks were from successful case studies e.g. http://www.codegeneration.net/cg2009/slides.php
I am working on a legacy modernization project using an MDA tool named Bluage. It's for a big healthcare organization and it's in production, so I would say it's successful. MDA is a good fit for legacy modernization because it can generate a KDM model from technologies like Pacbase that are going out of support.
I worked on a MDSD system that generated admin style web apps in Google Closure. I believe that your question is compelling. Too much complexity and your MDSD system is too hard to use. Too simple and you won't generate apps that are useful in the real world. Where MDSD really shines is in saving developer time typing lots of plumbing style code but how can MDSD remain effective over multiple releases? Requirements can go in many directions. That is the real challenge. I recently blogged about my MDSD lessons learned on that project.

scrum and refactoring [closed]

If everything in Scrum is about functional things that a user can see, is there really any place for refactoring code unrelated to any new functional requirements?
I don't think that this has as much to do with Scrum as it does with project management philosophy.
Regardless of whether a project uses Scrum or not, many project managers do not like developers spending time on "unnecessary" things like code refactoring or restructuring that doesn't directly advance one of the outstanding functional requirements. It's not "work that yields results" like normal development, it's "work that prevents a delay of results later". Given the typically short time-lines used for Sprints, the benefit is often hard to see and nearly impossible to quantify.
Keeping code maintainable needs to be an item on your burn-down list (if you use a Scrum). It is just as important as new development. While it may not seem like something that is "visible to the user", ignoring it increases your technical debt. Down the road when the technical debt piles up enough that your code's lack of maintainability slows down development, the delays in new feature development will be visible to customers.
It's all a matter of management/philosophy. Instead of looking at refactoring and maintainability enhancements as "extra" work that doesn't impact customers, it should be viewed as a time investment to prevent customer-visible delays (and potentially bugs as well) down the road. Developers can sometimes see these benefits more clearly than managers can; if your manager doesn't understand the disadvantages of neglecting maintainability, you might want to grab several other developers and have a chat with your manager.
I think there is a fair case to be made for technical-debt refactoring where the effort/cost of maintaining the code is as high as, or even higher than, the cost of refactoring it to improve quality or make it work properly - specifically, to give it a higher degree of maintainability.
E.g.: if the software is so problematic that you are losing customers, or money, you'd act fast to fix it. Some might argue this is a business requirement of its own, but it's often not placed front and centre on small to mid-sized development projects, which instead focus on the technicalities of creating apps rather than the impact of the quality of the app on the bottom line.
I think you are probably talking about large scale refactoring rather than the continuous refactoring you would do whilst in the whole red-green-refactor cycle.
My approach would be something like this: if refactoring an old feature makes it easier to add a new feature, then go ahead and do it. But in some ways you are right; if there is no pressure on a particular unit to change (i.e. it is completely finished, will never change again, and will never impact other modules) then there is no practical need to refactor. However, I rarely find a module that is quite so finalised.
If everything in Scrum is all about functional things that a user can see (...)
Any project and methodology should be about generating business value; you rarely do things just for the fun of it in a business environment. That said, I see quality in Scrum (and other Agile methods) as a way to avoid killing your velocity in the long run and, ultimately, to achieve hyper-productivity. I thus believe that a typical "Definition of Done" should include something like "no increase of technical debt" (put your quality standards in there). If you think a new feature will impact existing code that should be refactored, include this cost in the estimate (or create a refactoring item in your Product Backlog) and explain things to your Product Owner. Because in the end, it's up to the Product Owner to prioritize items and to decide if quality can be sacrificed temporarily (if your business dies because you don't release a feature, what is the point of refactoring existing code?). But he must be aware that this can't be a long-term strategy or it will kill the team's velocity.
bta: Regardless of whether a project uses Scrum or not, many project managers do not like developers spending time on "unnecessary" things like code refactoring or restructuring that doesn't directly advance one of the outstanding functional requirements.
Definitely a noteworthy observation; my solution to this would be as follows:
Perform regular code reviews. Every code review should recommend actions to improve on deficiencies in the code.
There is now a requirement for jobs which improve code quality. Build these into the sprint and track them in the same way as any other job.
If your manager needs any more convincing, cast 'the maintainer' as a user, and describe some user stories for them - and then 'features' are things like 'the code is fully commented with xml doc comments' and 'the code does not produce any warnings from ReSharper'
If you can justify it as part of the process of completing other tasks by identifying issues/risks with current sets of code, and it is a better end result, go for it. But don't get overzealous and screw the timelines/budget.

What are some ways to optimize your use of ASP.NET caching? [closed]

I have been doing some reading on this subject, but I'm curious to see what the best ways are to optimize your use of the ASP.NET cache and what some of the tips are in regards to how to determine what should and should not go in the cache. Also, are there any rules of thumb for determining how long something should stay in the cache?
Some rules of thumb
Think in terms of cache miss to request ratio each time you contemplate using the cache. If cache requests for the item will miss most of the time then the benefits may not outweigh the cost of maintaining that cache item
Contemplate the query expense vs cache retrieval expense (e.g. for simple reads, SQL Server is often faster than distributed cache due to serialization costs)
Some tricks
gzip strings before sticking them in cache. Effectively expands the cache and reduces network traffic in a distributed cache situation
If you're worried about how long to cache aggregates (e.g. counts) consider having non-expiring (or long-lived) cached aggregates and pro-actively updating those when changing the underlying data. This is a controversial technique and you should really consider your request/invalidation ratio before proceeding but in some cases the benefits can be worth it (e.g. SO rep for each user might be a good candidate depending on implementation details, number of unanswered SO questions would probably be a poor candidate)
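As a language-agnostic sketch of that last trick (written in TypeScript here purely for illustration; the names and the Map-based cache are made up, and ASP.NET's real cache API differs), the idea is to pay the update cost on the write path instead of recomputing the aggregate on every read:

// Long-lived cached aggregate, updated proactively when the underlying data
// changes rather than expiring and being recomputed on demand.
const aggregateCache = new Map<string, number>();

function recomputeRepFromDb(userId: string): number {
  return 0; // placeholder for the expensive query against the real store
}

function getUserRep(userId: string): number {
  let rep = aggregateCache.get(`rep:${userId}`);
  if (rep === undefined) {
    rep = recomputeRepFromDb(userId);          // rare: only on a cold cache
    aggregateCache.set(`rep:${userId}`, rep);
  }
  return rep;                                  // cheap read the rest of the time
}

function onRepChanged(userId: string, delta: number): void {
  const current = getUserRep(userId);
  aggregateCache.set(`rep:${userId}`, current + delta); // write path keeps it fresh
}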
Don't implement caching yet.
Put it off until you've exhausted all the indexing, query tuning, page simplification, and other more pedestrian means of boosting performance. If you flip caching on before it's the last resort, you're going to have a much harder time figuring out where the performance bottlenecks really live.
And, of course, if you have the backend tuned right when you finally do turn on caching, it will work a lot better for a lot longer than it would if you did it today.
The best quote I've heard about performance tuning and caching is that it's an art, not a science; sorry, I can't remember who said it, but the point here is that there are so many factors that can have an effect on the performance of your app that you need to evaluate each situation case by case and make considered tweaks to that case until you reach a desired outcome.
I realise I'm not giving any specifics here, but I don't really think you can.
I will give one previous example though. I worked on an app that made a lot of calls to web services to build up a client profile, e.g.:
GET client
GET client quotes
GET client quote
Each object returned by the web service contributed to a higher-level object that was then used to build the resulting page. At first we gathered up all the objects into the master object and cached that. However, when things were not as quick as we would like, we realised it would make more sense to cache each called object individually; this way it could be re-used on the next page the client sees (see the sketch after this list), e.g.:
[Cache] client
[Cache] client quotes
[Cache] client quote
GET client quote upgrades
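A rough sketch of that per-object approach (TypeScript, with all names invented for illustration; the actual app used the ASP.NET cache and real web-service calls rather than these placeholders):

// Cache each service response under its own key so later pages can reuse
// whichever pieces they need, instead of one monolithic profile object.
const cache = new Map<string, unknown>();

async function getCached<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  if (cache.has(key)) {
    return cache.get(key) as T;        // reuse a previously fetched object
  }
  const value = await fetcher();       // hit the web service only on a miss
  cache.set(key, value);
  return value;
}

// Usage: each object is cached and reused independently, e.g.
// const client = await getCached(`client:${id}`, () => fetchClient(id));
// const quotes = await getCached(`client-quotes:${id}`, () => fetchClientQuotes(id));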
Unfortunately there are no pre-established rules... but to give you some common-sense guidance, I would say that you can easily cache:
Application Parameters (list of countries, phone codes, etc...)
Any other application non-volatile data (list of roles even if configurable)
Business data that is often read and does not change much (or not a big deal if it is not 100% accurate)
What you should not cache:
Volatile data that change frequently (usually the business data)
As for the cache duration, I tend to use different durations depending on the type of data and its size. Application Parameters can be cached for several hours or even days.
For some business data, you may want to have smaller cache duration (minutes to 1h)
One last thing is always to challenge the amount of data you manipulate. Remember that the end-user won't read thousands of records at the same time.
Hope this will give you some guidance.
It's very hard to generalize this sort of thing. The only hard-and-fast rule to follow is not to waste time optimizing something unless you know it needs to be done. Then the proper course of action is going to be very much dependent on the nitty gritty details of your application.
That said... I'll almost always cache global application parameters in some easy-to-use object. This is certainly more of a programming convenience than an optimization.
The one time I've written specific data caching code was for an app that interfaced with a very slow accounting database, and then it was read-only for data that didn't change very often. All writes went to the DB. With SQL Server, I've never run into a situation where the built-in ASP.NET-to-SQL Server interface was the slow part of the equation.
