A while ago I wrote a webserver which I'm using on a site of mine.
When I navigate to another page in Chrome while the images from this homemade webserver are still loading, Chrome caches them in their half-loaded state.
Is this a known bug in Chrome, or an issue with my implementation of the HTTP protocol?
My webserver uses ETags for caching.
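For context, the conditional-GET flow my server implements looks roughly like this (a simplified Python sketch, not my actual code; photo.jpg is a placeholder asset):

```python
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

# photo.jpg is a placeholder asset, not a real file from my site.
IMAGE = open("photo.jpg", "rb").read()
ETAG = '"%s"' % hashlib.md5(IMAGE).hexdigest()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Conditional GET: if the client's cached copy matches, answer 304.
        if self.headers.get("If-None-Match") == ETAG:
            self.send_response(304)
            self.send_header("ETag", ETAG)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("ETag", ETAG)
        self.send_header("Content-Type", "image/jpeg")
        # Content-Length must match the bytes actually written; if the
        # connection drops early, the client can tell the body is short.
        self.send_header("Content-Length", str(len(IMAGE)))
        self.end_headers()
        self.wfile.write(IMAGE)

HTTPServer(("", 8000), Handler).serve_forever()
```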
First Rule of Programming: It's your fault.
Start with your code, and investigate further and further outward until you have definitive evidence of where the problem lies.
You need to apply this rule here. What are the chances that Chrome, when communicating with Apache, would exhibit this kind of bug deep into its sixth (at least) major version?
I would put a traffic analyser onto your server and view the exchanges carefully. Next I would compare them with those from a well-established web server like Apache and note any differences.
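One concrete way to do that comparison is to fetch the same image from your server and from an Apache-served copy, then diff the headers that matter for caching. A minimal sketch in Python (the URLs are hypothetical placeholders); if Content-Length doesn't match the bytes actually received, that alone could explain a half-loaded cached image:

```python
import requests  # third-party: pip install requests

# Hypothetical URLs: one image served by your server, one by Apache.
URLS = [
    "http://my-homemade-server.example/img/photo.jpg",
    "http://apache-reference.example/img/photo.jpg",
]

for url in URLS:
    resp = requests.get(url)
    print(url)
    # Headers worth comparing for a caching bug.
    for name in ("ETag", "Content-Length", "Cache-Control",
                 "Connection", "Transfer-Encoding"):
        print(f"  {name}: {resp.headers.get(name)}")
    print(f"  actual body bytes: {len(resp.content)}")
```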
This is the graph for one of my sites, https://www.alebalweb-blog.com: the first line of the Firefox developer tools -> Network panel. I'm not sure that the Blocked and Waiting entries are "normal".
Waiting: I suspect it's the server's fault. It's a small VPS on Vultr running Ubuntu 18.04. The other day I updated to php7.4-fpm, and I haven't enabled OPcache, memcached, APCu or anything else yet, because (unfortunately) my sites are small, less than a thousand visits a day, and I don't know if it makes sense to enable caching systems. Could they also affect indexing and ranking on search engines?
Even though Yandex and Bing generate a lot of work for my little server... maybe they are the ones who would benefit most from a cache?
Blocked is more confusing, and I'm not sure it's me. Doesn't everything there happen before the request even reaches my server? Maybe it's Vultr's fault? Maybe NameSilo's (where the domains are registered)? Maybe mine, some Apache configuration or something else? Maybe these are normal values? I have no idea.
Can anyone help me understand whether these are normal values? And if they are not, how I can improve them?
-------------------------update------------------------
I have read the pages you suggested; even there, people do not seem to have understood much or found a solution....
I did some things on my little server: blocked Yandex, enabled OPcache, installed memcached.
The intent is to stabilize things, so I can begin to understand what is going on.
I have done many other tests these days, and I have seen results like these:
This is another site, but it is on the same server; the highlighted entry is Matomo (statistics). The tracking JavaScript is on a subdomain, but still on the same server.
The difference is enormous, and the tests were done within seconds of each other.
So at this point maybe the question is: do you have any suggestions on what else I can do to start making sense of this?
At least to understand whether these timings are caused by me, my server, my sites' scripts, the browsers, the connection, or something else.
None of what you've posted looks very bad, but your server is sometimes taking > 6 s to respond to the initial connection request. There are probably a lot of small things wrong that you can fix; I would start by looking at this question, which addresses the same problem I'm seeing with your site.
The timing looks a bit large to me.
It seems the server does not respond for about 150 ms (Blocked), especially on the main page.
Then it takes up to 150 ms for TLS setup, 200 ms to load content, and so on.
But this is not stable.
Sometimes it took about 800 ms to receive the homepage; sometimes the whole thing took less than 200 ms.
Most likely these are server issues (your virtual server shares a physical machine with other servers).
And just for reference:
What does "Blocked" really mean in the Firefox developer tools Network monitoring?
Also, here are some general things to try while troubleshooting:
I suggest creating a local (localhost) version of the site, then (a timing sketch follows this list):
Check the time actually required to render the homepage (in the server log)
Temporarily remove gzip compression
Temporarily remove HTTPS
Temporarily remove output buffering in PHP (hopefully your code does not need it)
Check whether any "post-processing" content hooks are active in PHP
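For the first check, you don't need fancy tooling to time the local copy. A minimal sketch using Python and the third-party requests library; the localhost URL is a placeholder for wherever your local copy runs:

```python
import time

import requests  # third-party: pip install requests

URL = "http://localhost:8080/"  # placeholder for your local copy

def time_request(url: str) -> None:
    # Measure time to first byte (headers) and total download time.
    start = time.perf_counter()
    with requests.get(url, stream=True) as resp:
        ttfb = time.perf_counter() - start   # measured once headers arrive
        body = resp.content                  # drain the body
        total = time.perf_counter() - start
    print(f"status={resp.status_code} ttfb={ttfb * 1000:.0f} ms "
          f"total={total * 1000:.0f} ms size={len(body)} bytes")

for _ in range(5):  # repeat: single samples are noisy
    time_request(URL)
```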
I often work remotely, on the train or in places where I have no internet connection or an unstable one. Our app loads some fonts, CSS and JS from different CDNs (Google and Microsoft). When I'm offline I don't have access to these files and can't work properly.
Even worse, when I have a bad internet connection, my browser waits until it hits a timeout, and this slows everything down.
Is there a solution where I can set up a local fallback for some URLs and serve this content when no internet connection is available?
I'm on OS X, and maybe there is some proxy tool out there I don't know about that can handle such a thing. BTW: HTTP would be enough, so no dealing with SSL would be necessary for development.
There's a great answer to a similar question on the Webmasters Stack Exchange site. In short, you can use Charles Proxy to redirect certain requests to a local file. It should work well, as long as you don't have a massive list of assets (or dynamic requests).
Alternatively, you could just use a build script of some sort (depends on your toolchain) to rewrite the asset URLs to local versions (and of course make sure they're pointing to the proper versions when committing code).
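As a sketch of that second approach, here's roughly what such a build step could look like, assuming the CDN files have already been downloaded locally; the URL-to-path mapping below is hypothetical:

```python
#!/usr/bin/env python3
"""Sketch of a build step that swaps CDN asset URLs for local copies.

Assumes the CDN files have already been downloaded into ./vendor/ and
that the mapping below matches your actual asset URLs (hypothetical).
"""
from pathlib import Path

# Hypothetical mapping: CDN URL -> local fallback path
CDN_TO_LOCAL = {
    "https://ajax.googleapis.com/ajax/libs/jquery/3.7.1/jquery.min.js":
        "/vendor/jquery.min.js",
    "https://fonts.googleapis.com/css?family=Roboto":
        "/vendor/roboto.css",
}

def rewrite(html_dir: str = "public") -> None:
    # Rewrite every HTML file under html_dir in place.
    for page in Path(html_dir).rglob("*.html"):
        text = page.read_text(encoding="utf-8")
        for cdn_url, local_path in CDN_TO_LOCAL.items():
            text = text.replace(cdn_url, local_path)
        page.write_text(text, encoding="utf-8")

if __name__ == "__main__":
    rewrite()
```

Run in reverse (or guarded by an environment flag) before committing, so the checked-in pages keep pointing at the CDN versions.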
I have a newly deployed MVC app on a Win2008 server box.
I am trying to troubleshoot some very strange IE6 behaviour over HTTPS. If an IE6 user connects to the webserver over HTTPS, a simple postback or AJAX call takes around a minute to complete; no errors are raised in the browser, it just sits there ticking away for about a minute, then completes as expected (on both the server and the client). The same postback or AJAX call over HTTP completes in under 2 seconds.
There are no errors or events raised on the server, so I am flying blind here.
Has anyone experienced this behaviour before? Any ideas? With no errors or events to work with, I'm not sure where to start. Any other browser over HTTPS works fine; it's just IE6.
cheers
andrew
A quick follow-up on this one. On further investigation, the issue was only occurring on Windows 2000 IE6 machines; XP with IE6 was fine. I guess from these results there must be something in the encryption/decryption framework on Windows 2000 conflicting with the IIS7 server.
I have managed to convince the Windows 2000 IE6 users that it's time to upgrade!
This brings up another question: when, if at all, do you think it's acceptable to block certain versions of software from your web apps?
andrew
Two Win2003 servers running ASP.NET share the same SQL Server; one is DEV, the other is LIVE, and they are clones of each other. The dev box is going really slow, and I noticed it even happens on a 404 response. When I browse to a fake URL on either domain to get a 404, the dev box takes about 1.4 seconds and the live box about 200 ms, so it isn't recent code changes. Is there some IIS configuration or web.config setting that would cause this?
(I did a traceroute to both and it turned out equal)
It could be a lot of things:
The DEV machine is resolving, or trying to resolve, the clients' DNS names.
The DEV machine has to perform a DNS query for the DB machine (a DNS timing sketch follows this list).
The DEV machine is not using connection pooling (check the connection string).
Is the KeepAlive setting the same on both machines?
Is there any AD authentication involved? Could that be slower from DEV?
What if you do the POST locally on DEV? Is it still slow?
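To quickly rule the DNS theories in or out, you could time name resolution from the DEV box itself. A minimal sketch in Python; the host names below are hypothetical placeholders for your client, DB and web machines:

```python
import socket
import time

# Hypothetical host names; substitute your DB, client, and web machines.
HOSTS = ["dev-web01", "sqlserver01", "10.0.0.15"]

for host in HOSTS:
    start = time.perf_counter()
    try:
        infos = socket.getaddrinfo(host, None)
        elapsed = (time.perf_counter() - start) * 1000
        print(f"{host}: resolved in {elapsed:.0f} ms -> {infos[0][4][0]}")
    except socket.gaierror as exc:
        elapsed = (time.perf_counter() - start) * 1000
        print(f"{host}: FAILED after {elapsed:.0f} ms ({exc})")
```

A slow or failing lookup here (seconds rather than milliseconds) would point straight at the first two items above.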
Are you on the actual dev console?
If so, is it a Firefox or WebKit IPv6 issue? Many of us devs have IPv6 available on our own boxes, and some browsers are pretty slow with it if it's not set up completely. Try using IE and see if you get a fast response on your dev box.
When I access local apps in Chrome, it takes several seconds for the page to display. Same with Firefox until I disabled IPv6 in its config.
Since even a 404 Not Found is slow, that says it probably isn't related to your DB, unless perhaps you're doing some sort of logging or other DB access from Global.asax or an HttpModule.
Have you looked in the Windows error log to see if any errors are being reported?
If ping and tracert from your client to both servers looks OK, and if it also looks OK from the web servers to the DB, then you might look at things like:
Hardware problems (flakey network cables are a common culprit). Maybe try swapping your live and dev machines, and see if the problem stays on the same hardware. Flakey disks can also cause slowness as the controller retries.
IIS-related configuration errors. If the site works and is just slow, you might look at your back-end logging, tracing, etc, if you have any.
You might also look into upgrading to Win 2008. IIS 7 has some much-improved debug facilities, including things like Failed Request Tracing.
Last year we developed an intranet web site using WAP, with ASP.NET on the server side. The site is already in production and is considered successful. We use low-end handsets with a built-in Openwave version 6 browser.
Now we have updated the application to use XHTML-MP, because we think this is the mobile technology that will be supported in the future. But the performance is much worse. We tested both applications at the same time on the same module, and the new application is 10 seconds slower (on average) than the old one. We eliminated several possibilities, such as redirects, and we already compress the pages (both applications' pages are 2 KB in size). During the test, the XHTML-MP application often got network errors such as "Cannot resolve host name" and "Request Time Out", but the WAP application did not, on the same device and browser. Both applications use the same proxy. We tested both using direct access and using a proxy (WAP gateway).
We put a logger in our application that tracks how long the application takes to execute on the server, and it was less than a second.
We have already invested so much time and money in this, but we can't figure out the cause of the problem.
Does this mean that rendering XHTML-MP takes longer than rendering WAP on the Openwave browser? And why haven't I seen any documents on the Internet that mention this? Is XHTML-MP the recommended way to develop new mobile web applications?
Any help and suggestions are very much appreciated.
ucin
May I ask how much CSS formatting you are doing? It's recommended that you don't use CSS extensively to format the page, since many handsets don't have enough power to process it (at least a few years ago, that was the case).
This is obviously very device (or device range) specific; could you tell us which devices struggle to format XHTML?
If so, is it not possible to serve WML to these old troublesome devices? You could look at their user-agent string, for example, to decide what markup to send them.
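As a sketch of that idea, the detection could look roughly like this; the user-agent substrings are illustrative, and a real implementation would consult a device database such as WURFL:

```python
# Sketch: pick markup based on the User-Agent header. The substrings
# below are illustrative placeholders, not a complete detection list.
def choose_markup(user_agent: str) -> str:
    legacy_tokens = ("UP.Browser/6", "OpenWave", "Openwave")  # hypothetical
    if any(token in user_agent for token in legacy_tokens):
        return "wml"       # serve WML to old, troublesome devices
    return "xhtml-mp"      # default to XHTML-MP for everything else

print(choose_markup("OpenWave/6.2 UP.Browser/6.2"))   # -> wml
print(choose_markup("Mozilla/5.0 (Linux; Android)"))  # -> xhtml-mp
```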