ASP.NET hourglass on loading page

At my new job there is a web application written in Visual Basic .NET that uses the ASP.NET Web Forms framework to produce and render web pages.
It runs on a Windows server with Microsoft IIS as the application host. The project is developed in Microsoft Visual Studio 2010
and uses an InterSystems Caché database. The application has a layered architecture (interface, business, and data access layers).
We use Firefox 78.1.0esr (64-bit) as our browser (internal policy).
Users complain that they don't know when a page is loading or a request is being processed.
Apparently in the past Firefox visualized an hourglass when the page was loading.
What is the easiest way to visualize an hourglass for each request (independent of the page)?
It's a very large application.

For postbacks, you could use the asp:UpdateProgress control.
See: https://learn.microsoft.com/en-us/dotnet/api/system.web.ui.updateprogress?view=netframework-4.8
It allows you to display anything you want while the postback is being processed. I assume you could also use a CSS class that turns your pointer into an hourglass if you wanted.
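For illustration, a minimal markup sketch of that approach. It assumes a ScriptManager is already on the page, and the control IDs, DisplayAfter value, and CSS class are made up for this example:

    <asp:ScriptManager ID="ScriptManager1" runat="server" />

    <asp:UpdatePanel ID="MainPanel" runat="server">
        <ContentTemplate>
            <%-- This button triggers an asynchronous postback. --%>
            <asp:Button ID="SaveButton" runat="server" Text="Save" />
        </ContentTemplate>
    </asp:UpdatePanel>

    <%-- Shown automatically while the associated UpdatePanel's async
         postback is in flight; hidden again when it completes. --%>
    <asp:UpdateProgress ID="SaveProgress" runat="server"
                        AssociatedUpdatePanelID="MainPanel"
                        DisplayAfter="250">
        <ProgressTemplate>
            <div class="busy-indicator">Processing, please wait&hellip;</div>
        </ProgressTemplate>
    </asp:UpdateProgress>

Note that UpdateProgress only appears during asynchronous postbacks from an UpdatePanel; a plain full-page postback or navigation will not show it, so on its own it is not a page-independent solution.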

Well, as noted, you don't mention any performance issue for a specific task,
and what the browser displays (or not) is a hugely different issue.
All browsers I know of still display a loading animation, and that includes Edge (now based on the Chromium engine) and Firefox.
I am not aware of any changes in this area. However, the comments here suggesting Ajax have it exactly backwards.
Introducing Ajax calls will for sure remove the browser's "waiting" or spinning animation that most browsers show while a page is loading, because the browser no longer sees a full page load.
Next up:
Performance:
Does a page on the site without any data load fast? In other words, can you add a test page to the site with just "hello world" on it?
Does that page load fast, or do all pages load slowly? Or are only the pages with data operations, the pages that pull data, slow?
Did the site run fast at one time, and is it now, with more data, running slow? Or was it always slow? You really need to answer these questions.
Since you are using a post-SQL (non-SQL) database, data in that system is actually saved in a format very similar to JSON or XML. The multi-value format used in the Caché database was invented around 1970, and that database stores records as strings.
In effect, much like Google or even SharePoint, you can have millions and millions of documents, and such systems even on modest processors and memory can easily run, say, a motor vehicle department for a whole country of 100 million people.
What they do amazingly well:
If you need to pull a patient record, or say a customer invoice, such systems can pull that whole invoice out of the database with one disk seek, even with a billion records on file. They are blisteringly fast when retrieving what amounts to a master/child record, compared to a SQL system.
However, while those systems can represent a whole invoice with one "string" (just as you can with XML or JSON), what they do very poorly is "row" processing. Get a record, modify that record, save it back? Fantastic performance.
Row processing, though, or using SQL statements to update a lot of rows? Really slow. In fact, things are even worse, because the ODBC driver is translating SQL statements into operations on the string-based database.
So I would ensure the data files are sized correctly. If the system used to run fast but is now slow, the data files (their base size) need to be re-sized; that reduces the huge mass of what are called "linked frames", and performance should increase hugely. But you have to check the Caché database, and that assumes a few years of experience with multi-value database systems, which is a basic requirement here.
So, do all web pages load slowly, even ones without data? Then it can't be the database.
Or does this occur only on some database pages, say those that involve updates?
Did the developers use the "data cube"-like data objects, or do they use the SQL (ODBC) translator to deal with the database?
Was the system fast at one time, or was it always slow?
Do pages without much (or any) database work run fast?
And very important:
Does the site run fast with, say, 5 users, but then run slow with 50 users?
(Then again, I can't imagine you haven't asked these simple questions, which anyone would ask when attempting to evaluate performance. You can have "average" doctors, but when you have a special medical mystery you need a Dr. House, the best of the best. The same goes for computers. Perhaps the original developers did a poor job; then again, maybe they did a really good job and the database or software has simply been outgrown by high server loads during peak times. Either way, these basic performance questions have to be asked first.)
Out of one of our computing science classes, only about 2-3 students out of roughly 80 would "naturally" write the fastest-running code. (These days, unfortunately, tracking metrics like code execution time is often not even considered anymore, but it should be.) The same goes for Formula 1 racing: two teams can each have a 400-million-dollar budget and hire the best talent money can buy, yet one team blows away the competition. Hiring some developers to build software? Sure, there are lots of developers. Hiring developers with top-notch performance in mind? That is when you seek out the big guns, the gurus, the top talent.
And there are other basic questions, such as:
Does the system run well on the test or development box?
Did the system run well at one time, but has it become slower over time?
Or was it always slow? (And in that case, what prompted the idea that performance now needs to be fixed, compared to no one doing anything about it 5 years ago?)
When the site was becoming slow, were the developers of the site contacted? (Why, or why not?)
But talking about a cube-like "no-SQL" database, and then also introducing a simple browser question about an animation icon that all browsers have? Those are two very different questions, and one has to wonder how both issues got mixed into the same post.
All current browsers I am aware of do have a "wait" type of animation, but such animation has very little to do with optimizing application and database performance. Toss in that you are using a so-called post-relational, or multi-value, database, and you are introducing an area of expertise that most posters here likely don't have (hence the silly suggestions about Ajax and the like).
I have 10+ years of experience with those multi-value databases, and as noted, they are not fast at row processing, but at pulling, updating, and saving a record they can easily beat SQL-based systems performance-wise. The fine art of performance tuning is, without question, a job for the top talent in our industry.
So: was the system fast at one time, and is it slow now?
Are all pages slow, even those without data from the database?
Or are only some pages with data operations slow?
Are they using Caché data objects, or are they using the database provider and SQL?
In short, what type of data provider(s) are being used in this application?

Related

Real life experience developing with Meteor

I'm working on a project where we have to decide soon whether to invest in our current technology stack (LAMP based) to improve it and make it more flexible in support of our time to market, or whether to change to a different stack in the hope that it would make our development faster, more efficient, and possibly more fun.
One framework we're looking at is Meteor. So I'm wondering: Does anyone have real life experience with starting or shifting a medium-sized project to Meteor (3 developers, couple of hundred active users, mostly short-lived small pieces of user-generated content that are viewed by all users and need to be updated instantly)? Do you have metrics on productivity, code quality, code efficiency that you could share? Or just overall a feeling for how it went? How happy are you with Meteor when working using it for more than just a week or two? How is maintainability over a longer period? How well does it scale up?
Would appreciate any insight!
I'll try to be as fact based as possible to keep this objective:
I switched from Django to Meteor, PostGreSQL to MongoDB.
Switching stacks has a huge cost. A new language, syntax, patterns, and maybe even IDE. Online courses to be taken, a solid node.js foundation, curiosity about io.js, ES6, and Mongo 3.0. A refresher on how JavaScript treats Dates and numbers, and how to use JavaScript to query mongo.
On top of that, you'll want your developers to peek under the hood to see the Meteor magic so they understand fibers, reactivity, DDP, and minimongo. All these things will cost each developer at LEAST 160 hours, yet they are necessary to be a competent developer. Skip these steps, and you've got a team of monkeys pulling levers.
To answer your questions:
Productivity? It will hit rock bottom along with code quality, then slowly climb, and possibly exceed the previous mark (IF it's something the developers enjoy). This is because client and server are in the same language and just a file away. Debugging messages and stack traces are pretty good, and hot code reloads, although still not great, are decent.
Code quality has absolutely nothing to do with the framework.
Code efficiency is good because reactivity is handled behind the scenes most of the time, and fibers make it possible to write server code in a synchronous fashion. This increases code readability.
Maintainability is another word for code quality.
Scalability is more of a question about node.js, but will work for the VAST majority of projects. An honest critique of node's shortcomings is here: https://medium.com/code-adventures/farewell-node-js-4ba9e7f3e52b

How to store huge amount of data in database

I have a simple, basic question. Assume I have a large website like Facebook or Gmail. Such a site probably saves hundreds of gigabytes of information every day. My question is: how do these sites save this much information in their database, given database capacity limits? Is there only one database? Is there only one server for the site? If there are multiple servers and databases, how do they communicate with each other?
They are clearly not using one computer...
The systems behind such large sites are very complex and distributed across datacenters. See http://royal.pingdom.com/2010/06/18/the-software-behind-facebook/
Take a look at this site for info on various architectures employed by those sites (and this site): http://highscalability.com/all-time-favorites/
Most of these sites have gone with a strategy called NoSQL: they don't use traditional relational databases, but instead have created their own object-persistence frameworks. This strategy works well at large scale because it drops a number of constraints that would seriously impact the performance of traditional DB methods. However, this generally comes at the cost of lower reliability, which is considered acceptable for those sites' scenarios.
P.S. If your question is out of general interest, then no worries. If you're trying to build a highly scalable application, hold off and consider it for a moment: are you going to be serving a significant percentage of the population of the world, or are you writing a site for maybe a few thousand users? If it's the latter, you don't need Facebook-style scaling; invest your effort and resources elsewhere. If it's the former, start small and then evolve your system, bringing in investment and expertise as your user base grows.

What cache strategy do I need in this case ?

I have what I consider to be a fairly simple application. A service returns some data based on another piece of data. A simple example, given a state name, the service returns the capital city.
All the data resides in a SQL Server 2008 database. The majority of this "static" data will rarely change. It will occasionally need to be updated and, when it does, I have no problem restarting the application to refresh the cache, if one is implemented.
Some data, which is more "dynamic", will be kept in the same database. This data includes contacts, statistics, etc. and will change more frequently (anywhere from hourly to daily to weekly). This data will be linked to the static data above via foreign keys (just like a SQL JOIN).
My question is: what exactly am I trying to implement here, and how do I get started doing it? I know the static data should be cached, but I don't know where to start with that. I tried searching but came up with so much material that I'm not sure where to begin. Recommendations for tutorials would also be appreciated.
You don't need to cache anything until you have a performance problem. Only once you have a noticeable problem, and have measured your application tiers to determine that your database is in fact the bottleneck (which it rarely is), should you start looking into caching data. Caching is always a tradeoff: memory vs. CPU vs. real-time data availability. There is no reason to make your application more complicated than it needs to be.
An extremely simple 'win' here (I assume you're using WCF here) would be to use the declarative attribute-based caching mechanism built into the framework. It's easy to set up and manage, but you need to analyze your usage scenarios to make sure it's applied at the right locations to really benefit from it. This article is a good starting point.
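If you go that route, a minimal sketch might look like the following. It assumes a .NET 4 WCF REST service hosted with ASP.NET compatibility enabled (webHttpBinding with the webHttp behavior); the service, operation, and cache-profile names are all made up for illustration:

    // StateService, GetCapital, and "StaticLookups" are hypothetical names;
    // only the attributes are real framework types.
    using System.ServiceModel;
    using System.ServiceModel.Activation;
    using System.ServiceModel.Web;

    [ServiceContract]
    [AspNetCompatibilityRequirements(
        RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
    public class StateService
    {
        // Responses are cached per the "StaticLookups" output-cache profile,
        // declared under system.web in web.config, e.g.:
        //   <caching>
        //     <outputCacheSettings>
        //       <outputCacheProfiles>
        //         <add name="StaticLookups" duration="3600" varyByParam="state" />
        //       </outputCacheProfiles>
        //     </outputCacheSettings>
        //   </caching>
        [OperationContract]
        [WebGet(UriTemplate = "capital/{state}")]
        [AspNetCacheProfile("StaticLookups")]
        public string GetCapital(string state)
        {
            // Only hits the database on a cache miss.
            return LookupCapitalFromDatabase(state);
        }

        private string LookupCapitalFromDatabase(string state)
        {
            return "Sacramento"; // stand-in for the real SQL Server query
        }
    }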
Beyond that, I'd recommend looking into one of the many WCF books that deal with higher-level concepts like caching and try to figure out if their implementation patterns are applicable to your design.

How to increase my Web Application's Performance?

I have an ASP.NET web application (Visual Studio 2008) using MS SQL Server 2005. I want to increase the performance of the web site. Does anyone know of an article with step-by-step instructions for doing that, both in SQL (indexes, etc.) and in the code?
Performance tuning is a very specific process. I don't know of any articles that discuss directly how to achieve this, but I can give you a brief overview of the steps I follow when I need to improve performance of an application/website.
Profile.
Start by gathering performance data. At the end of the tuning process you will need some numbers to compare to actually prove you have made a difference. This means you need to choose some specific processes that you monitor and record their performance and throughput.
For example, on your site you might record how long a login takes. You need to keep this very narrow: pick a specific action that you want to record, and time it. (Use a tool to do the timing, or put some Stopwatch code in your app to report times.) Also, don't just run it once; run it multiple times. Try to ensure you know the full environment setup so you can duplicate it again at the end.
Try to make this as close to your production environment as possible. Make sure your code is compiled in release mode, and running on real separate servers, not just all on one box etc.
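As a concrete illustration, a minimal Stopwatch harness that times one chosen action over several runs; PerformLogin here is a placeholder for whatever action you pick:

    using System;
    using System.Diagnostics;

    class LoginTimer
    {
        static void Main()
        {
            const int runs = 20;
            long totalMs = 0;

            // Warm-up run so JIT compilation doesn't skew the numbers.
            PerformLogin();

            for (int i = 0; i < runs; i++)
            {
                Stopwatch sw = Stopwatch.StartNew();
                PerformLogin();
                sw.Stop();
                totalMs += sw.ElapsedMilliseconds;
                Console.WriteLine("Run {0}: {1} ms", i + 1, sw.ElapsedMilliseconds);
            }

            Console.WriteLine("Average over {0} runs: {1} ms", runs, totalMs / runs);
        }

        static void PerformLogin()
        {
            // Placeholder: drive the real action here (e.g. an HTTP request
            // against the site), not a simulated delay.
            System.Threading.Thread.Sleep(50);
        }
    }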
Instrument.
Now that you know what action you want to improve, and you have a target time to beat, you can instrument your code. This means injecting (manually or automatically) extra code that times each method call, or even each line, and records times and/or memory usage right down the call stack.
There are lots of tools out there that can help you with this and automate some of it (Microsoft's CLR Profiler (free), Red Gate ANTS (commercial), the higher editions of Visual Studio have profiling built in, and loads more). But you don't have to use automatic tools; it's perfectly acceptable to just use the Stopwatch class to time each block of your code, as in the sketch below. What you are looking for is a bottleneck. The likelihood is that you will find a high proportion of the overall time is spent in a very small bit of code.
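For the manual route, one low-tech pattern is a disposable timing scope; this helper is just an illustration, not part of any framework:

    using System;
    using System.Diagnostics;

    // Wrap any suspect block in a using statement and its elapsed time
    // is reported when the block exits.
    sealed class TimedBlock : IDisposable
    {
        private readonly string _name;
        private readonly Stopwatch _sw = Stopwatch.StartNew();

        public TimedBlock(string name) { _name = name; }

        public void Dispose()
        {
            _sw.Stop();
            // Swap Console for your logging framework in a real app.
            Console.WriteLine("{0}: {1} ms", _name, _sw.ElapsedMilliseconds);
        }
    }

    class Example
    {
        static void Main()
        {
            using (new TimedBlock("LoadCustomers"))
            {
                // ... the code you suspect is the bottleneck ...
                System.Threading.Thread.Sleep(100);
            }
        }
    }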
Tune.
Now that you have some timing data, you can start tuning.
There are two approaches to consider here. First, take an overall perspective: consider whether you need to redesign the whole call stack. Are you repeating something unnecessarily? Or are you doing something you don't need to do at all?
Second, now that you have an idea of where your bottleneck is, you can try to figure out ways to improve that bit of code. I can't offer much advice here, because it depends on what your bottleneck is, but look for ways to optimise it. Perhaps you need to cache data so you don't have to loop over it twice. Or batch up SQL calls so you make just one round trip. Or tighten your query filters so you return less data. A sketch of the batching idea follows below.
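As an example of the batching point, a sketch that replaces N single-row queries with one IN query; the table, column, and parameter names are hypothetical:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    class CustomerLookup
    {
        // Before: one round trip per id. After: one round trip total.
        public static Dictionary<int, string> GetNames(
            SqlConnection conn, IList<int> ids)
        {
            Dictionary<int, string> names = new Dictionary<int, string>();
            if (ids.Count == 0)
                return names;

            // Build "@p0, @p1, ..." so each id is passed as a parameter.
            List<string> placeholders = new List<string>();
            SqlCommand cmd = conn.CreateCommand();
            for (int i = 0; i < ids.Count; i++)
            {
                string p = "@p" + i;
                placeholders.Add(p);
                cmd.Parameters.AddWithValue(p, ids[i]);
            }
            cmd.CommandText =
                "SELECT Id, Name FROM Customers WHERE Id IN (" +
                string.Join(", ", placeholders.ToArray()) + ")";

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    names[reader.GetInt32(0)] = reader.GetString(1);
            }
            return names;
        }
    }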
Re-profile.
This is the most important step that people often miss out. Once you have tuned your code, you absolutely must re-profile it in the same environment that you ran your initial profiling in. It is very common to make minor tweaks that you think might improve performance and actually end up degrading it because of some unknown way that the CLR handles something. This is much more common in managed languages because you often don't know exactly what is going on under the covers.
Now just repeat as necessary.
If you are likely to be performance tuning often I find it good to have a whole batch of automated performance tests that I can run that check the performance and throughput of various different activities. This way I can run these with every release and record performance changes each release. It also means that I can check that after a performance tuning session I know I haven't made the performance of some other area any worse.
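For instance, one such automated check might look like this sketch; the action and its budget are placeholders, and a real suite would drive the actual site and record results per release:

    using System;
    using System.Diagnostics;

    class PerfTests
    {
        static void Main()
        {
            // Placeholder action and budget.
            AssertFasterThan("Login", 500, delegate
            {
                System.Threading.Thread.Sleep(120);
            });
        }

        static void AssertFasterThan(string name, int maxMs, Action action)
        {
            Stopwatch sw = Stopwatch.StartNew();
            action();
            sw.Stop();
            Console.WriteLine("{0}: {1} ms (budget {2} ms)",
                name, sw.ElapsedMilliseconds, maxMs);
            if (sw.ElapsedMilliseconds > maxMs)
                throw new Exception(name + " exceeded its performance budget.");
        }
    }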
When you are profiling, don't just think about the time to run a single action. Also consider profiling under load, with lots of users logged in. Sometimes apps perform great when there's just one user connected, but when they hit a certain number of users suddenly the whole thing grinds to a halt, perhaps because they are suddenly spending more time context switching or swapping memory in and out to disk. If it's throughput you want to improve, you need to figure out what is causing the limit on throughput.
Finally, check out this huge MSDN article on Improving .NET Application Performance and Scalability. Specifically, you might want to look at chapters 6 and 17.
I think the best we can do from here is give you some pointers:
query less data from the sql server (caching, appropriate query filters)
write better queries (indexing, joins, paging, etc; see the paging sketch after this list)
minimise any inappropriate blockages such as locks between different requests
make sure session-state hasn't exploded in size
use bigger metal / more metal
use appropriate looping code etc
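On the paging point, a sketch of SQL Server 2005-style paging using ROW_NUMBER(), wrapped in ADO.NET; the table, column, and method names are hypothetical:

    using System.Data.SqlClient;

    class PagedQuery
    {
        // Returns one page of customers instead of the whole table, so the
        // web server materialises only the rows it will actually render.
        public static SqlDataReader GetPage(
            SqlConnection conn, int pageIndex, int pageSize)
        {
            SqlCommand cmd = conn.CreateCommand();
            cmd.CommandText =
                "SELECT Id, Name FROM (" +
                "  SELECT Id, Name, ROW_NUMBER() OVER (ORDER BY Name) AS rn" +
                "  FROM Customers" +
                ") AS numbered " +
                "WHERE rn BETWEEN @from AND @to";
            cmd.Parameters.AddWithValue("@from", pageIndex * pageSize + 1);
            cmd.Parameters.AddWithValue("@to", (pageIndex + 1) * pageSize);
            return cmd.ExecuteReader();
        }
    }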
But to stress: from here, anything is guesswork. You need to profile to find the general area of the suckage, and then profile more to isolate the specific area(s); but start by looking at:
sql trace between web-server and sql-server
network trace between web-server and client (both directions)
cache / state servers if appropriate
CPU / memory utilisation on the web-server
I think first of all you have to find your bottlenecks and then try to improve those.
That helps you focus your work exactly where you have a serious problem.
In addition, you need to improve your connection handling to the DB, for example by using lazy initialization and a singleton pattern, and by creating batch requests instead of single requests.
That helps you decrease the number of DB connections.
Check your cache and use suitable loop structures.
Another thing is to use appropriate types; for example, if you need an int, don't create a long, and so on.
At the end you can use a profiler (especially for SQL) and check whether your queries are implemented as well as possible.

How many apps should an internal development group be building/maintaining?

I've always been of the opinion an internal development group should really only be building/maintaining three applications.
An internal composite/pluggable/extendable application.
The company website.
(Optional) A mobile version of #1 for field employees.
I'm a consultant, and everywhere I go, my clients have dozens of one-off applications in the web and on the desktop for every need no matter how related to the others. Someone comes to IT and says "I need this", and IT developers turn around and write another one-off ASP.NET application, or another WinForms app.
What's your opinion? Should I embrace the "as many apps as we want/need" movement? I assume it's common; but is it sensible?
EDIT:
A colleague pointed out that it depends on the focus of the development - are you making apps or are you making a system? I guess to me, internal development is about making a system; development of shippable software products, like MS Word, iTunes, and Photoshop, is about making apps.
All of them?
Wow do I ever agree with you. The problem is that many one-off applications will (at some point) each have many one-off maintenance requests. Anything from business rule updates to requests for new reports. At some point the ratio of apps that need to be maintained to available development staff is going to be stretched/taxed.
From my perhaps (limited?) vantage point, I'm starting to think #1 and #3 could be boiled down to SharePoint. Most one-off applications where I work (a large, 500+ attorney law firm) consist of one or more of the following:
A wiki
A blog
Some sort of list (or lists joined together in some type of relationship), which can be sorted and arranged in different ways.
A report (either a SharePoint data view or a SQL Server report works just fine)
Or, the user just wants to "make a web page" and add content to it. But only they should be able to edit it. Except when they're out of the office, and then, etc...
Try to build any one of the above using [name your technology], and you've got lots of maintenance cycles to look forward to (versus a relatively minor SharePoint change).
If I could restate what I think is your point: why not put most of your dev cycles to work improving and maintaining a single application that can support most of your business' one-off needs, rather than cranking out an unending stream of smallish speciality apps?
This question depends on so many things and is subjective besides. I've worked at companies that needed several different apps because we did business in discrete silos. In such a case, one internal group may build several apps but not maintain them, with another group responsible for maintenance.
Also, what do you mean by "app"? If you broaden the term enough, then you could say "it's all just one big app".
In short, I think the main consideration is the capacity of the group and what business needs are.
I think each internal development team should own a system, which may contain multiple applications within it. To take a few examples of what I mean by systems:
ERP - If you are a manufacturer of products, you may need a system to keep track of inventory, accounting of books and money, and other planning elements. There is a wide range of scales for such systems, but I suspect in most cases some customization is done, and that is where a team is used. The team may end up doing that over and over if the company is successful and a new system is needed to replace the previous one, as these can take years to get fully up and running. The application for the shop floor is likely not the same one the CFO needs in order to write the quarterly earnings numbers, to give two examples here.
CRM - How about tracking all customer interactions within an organization, which can be useful for sales and marketing departments? Again, there are many different solutions, and generally customizations are done, which takes another team. The sales team may have one view of the data, but if the company has a support arm, it may want different data about a customer.
CMS - Now, here I can see your three applications making sense, but note what else there is beyond simple content.
I don't think I'd want to work where everything is a home-grown solution and no outside code is used at all. Lots of code out there can be put to rather good use, such as tools, but also components like DB servers or development IDEs.
So what's the alternative to several one-off applications? One super-huge application that runs anything and everything? That seems even worse to me...
