The issue: a content editor saves a new content item and gets a 404 on the proper-looking URL for the new object. If they then refresh, the item is there, perfectly normal.
This happens for multiple Archetypes-based content types, and we've seen it on at least two different sites. We've seen it on Plone 3.x and 4.0.3. Here's what these sites have in common:
HAProxy load balancing (with and without session affinity)
Multiple ZEO clients
Using either ZODB 3.9.7 or 3.8.4
The issue happens only some of the time, maybe for 1 out of 4 content items
Has anyone seen anything like this?
I do not have an answer for you; this should not really happen. I certainly have not seen this.
You'll need to gather more information to troubleshoot this; that perhaps requires interactive access to experts, and SO is not the place for such troubleshooting.
All I can do is advise that you gather as much information as possible, including a full trail of the user's interaction from the various logs, including HAProxy and the ZEO server.
It may require additional instrumentation at the server level (when the NotFound error occurs, dump additional information about what is present, etc).
Some recommendations/questions:
Verify that the content object was, indeed, created.
Check that the content views are correct (the ones that are declared in profiles/default/types/yourtype.xml).
Does it happen when adding content directly to a Plone instance (without caching and load balancing)?
Does it happen when adding content to a Plone instance with load balancing, but without caching? And so on...
Maybe not an elegant one, but you might try inserting print statements or pdb breakpoints in the code so you can track whether a content object was, indeed, created (see the sketch below). Do this only as a desperate method of "instrumentation".
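A minimal sketch of such instrumentation, assuming a hypothetical zope.lifecycleevent subscriber registered for your type (the subscriber name and logger are illustrative, not from the original setup):

# Hypothetical IObjectCreatedEvent subscriber, registered via ZCML.
import logging

logger = logging.getLogger('content-creation-debug')

def log_object_created(obj, event):
    # Log the id and full path so the ZEO client that handled the save
    # can be correlated with the later 404 in the access logs.
    logger.info('created %s at %s', obj.getId(),
                '/'.join(obj.getPhysicalPath()))
    # import pdb; pdb.set_trace()  # uncomment for interactive inspection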
Yes, we have recently started seeing the same issue. We have almost the same setup: HAProxy (no session affinity).
I'm wondering, since the pattern seems to be HAProxy... perhaps it's an issue with a request being redistributed after a timeout?
Updated:
We had this issue. It happens when you have a redirect back to the changed object after a save. This is because the second request hits another ZEO client, which doesn't realise it's out of date.
The only solution we found was to add temporary session affinity in HAProxy (20s) during any POST (see the sketch below). Not ideal, but it does work. I was just searching for a better solution, which is how I found this old post.
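A minimal sketch of what that could look like in haproxy.cfg, assuming a backend of two ZEO clients (names, addresses and the stick-table approach are illustrative; verify the syntax against your HAProxy version):

backend zeoclients
    balance roundrobin
    # remember the client's address for 20s once it issues a POST, so the
    # redirect-after-save GET lands on the same ZEO client
    stick-table type ip size 200k expire 20s
    acl is_post method POST
    stick store-request src if is_post
    stick match src
    server client1 127.0.0.1:8081 check
    server client2 127.0.0.1:8082 check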
EDIT1: intershop.urlrewrite.CheckSource is turned off already
We have recently been having quite big problems with URL rewrite rules not being loaded in test and production multi-node environments.
The problem started happening after introducing another organization and its related application onto the servers.
Since then we have tried multiple changes and debugging methods to try to figure it out, but without any result.
A major complication is that it doesn't happen all the time, and a server restart can fix it, but not always.
Here are the details so far of how the problem manifests (this has been going on for more than a month now on our production system):
Most of the time it starts happening after deploying new code and starting up the server.
Then multiple people from multiple computers and locations try opening the website; some open it and others get either a 404 or a "URL invalid" page, so it's roughly 50/50.
On a PC where someone successfully opens the page, trying again in incognito mode may again yield a 404 (probably because it connects to another node/appserver).
Usually the problem is resolved either by a full server restart or by restarting a single node (no code or configuration changes), although this is not reliable: on the last occurrence we tried multiple restarts and it didn't help, yet after a few days one of the team members restarted only a single node for debugging purposes and then it started working normally again.
After setting up more detailed log messages and turning on debug messages for the URL rewrite classes, we have concluded that the rule loading fails. We came to this conclusion because we added a debug message at the very start of our applyExpand() method, and it is never shown.
All of this leads to the conclusion that the iterator on line 149 is empty.
Please advise on possible causes of this problem and how to resolve it.
The rule loading is implemented so that it's possible to edit/add/remove rules on-the-fly without need to restart the web server. This happens when the property intershop.urlrewrite.CheckSource is set to true. For this, the last-modified time of the file is evaluated. Maybe this doesn't work correctly.
I would recommend setting this property to false and testing again whether the problems still occur.
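For example (a sketch; where exactly this property lives depends on your installation's configuration files):

intershop.urlrewrite.CheckSource=false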
With the help of IS support, we managed to figure out that the problem was that the URL rewrite rules lived in a cartridge that wasn't part of every possible application on the server, which resulted in undefined behaviour when loading them (they would load on one appserver and not on another).
The fix was to add a new common cartridge, shared by all possible applications, which holds the urlrewrite rules and is therefore guaranteed to be loaded on server startup.
For a long time I've been updating ASP.NET pages on the server and have never found the correct way to make changes to files like CSS and images visible.
I know that if I append something to the URL, the browser will think the file is a different one:
<img src="/images/myLogo.png?v=1"/>
or perhaps changing its name:
<img src="/images/myLogo.v1.png"/>
Unfortunately neither looks like the correct way. In cases where I'm using App_Themes, the files in this folder are automatically injected into the page in a way that I can't easily change the URL.
So my question is:
When I'm publishing the ASP.NET application on the server, what is the correct way to signal to IIS (so it can notify the browser) that a file was changed? Isn't it automatic? Should I change some configuration in IIS, or perhaps add some "decoration" in the code?
I've already tried many questions here on SO, like "ASP.NET - Invalidate browser cache", "How to refresh the browser cache of an image?", "Handle cached images? How to get the browser to show the new version?", and even "What is an elegant way to force browsers to reload cached CSS/JS files?", but none of them takes any approach other than handling it manually in the code instead of via IIS or ASP.NET configuration.
The closest I could find is "Asking browsers to cache our images (ASP.NET/IIS)", where they set expiration, but not based on whether the files were updated: they cache the files for a fixed number of days or hours, so the files are re-fetched even when no changes were made.
I want to know if IIS or ASP.NET offers something for this: automatically telling the browser that a file has changed. Is it possible/built in?
The options you have to update a browser-side cached item are:
1. Change the file name.
2. Add a URL parameter.
3. Place it in cache for a limited time (e.g. for a couple of hours).
4. Compare the date-time of creation (Last-Modified).
5. Signal with an ETag.
With the first three you avoid one server call for each item, but the third option loads the file again after some time.
With the others you have to make one call to the server to check whether the item needs to be loaded again.
So you cannot have it all here; there is no single correct way, and you need to choose what is best for you and what you can do. The fastest from the client's perspective are options (1) and (2), and they can be automated, as in the sketch below.
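A minimal sketch of such automation, using a hypothetical helper (not a built-in API) that appends the file's last write time as the version, so the URL changes automatically whenever the file is republished:

// Hypothetical helper; all names are illustrative.
using System;
using System.IO;
using System.Web;

public static class StaticUrl
{
    public static string Versioned(string virtualPath)
    {
        // Map the virtual path to disk and use the last write time as version.
        string physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
        long version = File.GetLastWriteTimeUtc(physicalPath).Ticks;
        return VirtualPathUtility.ToAbsolute(virtualPath) + "?v=" + version;
    }
}

Used in markup as <img src='<%= StaticUrl.Versioned("~/images/myLogo.png") %>' />, which keeps option (2) maintenance-free.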
The direct answer to your question is to use an ETag, or a date-time comparison of the file's creation; but that way you still pay one call to the server per item, you only save the size of what travels back.
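Note that for static files IIS already sends ETag and Last-Modified headers and answers conditional requests with 304 Not Modified, so options (4) and (5) are largely built in; what you configure is how long the browser may skip revalidation altogether. A web.config sketch for IIS 7 (the max-age value is illustrative):

<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
  </staticContent>
</system.webServer>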
Some more links:
HTTP ETag
How do I support ETags in ASP.NET MVC?
Configuring ETags with Http module in asp.net
How to control web page caching, across all browsers?
Jquery getScript caching
and you can find even more.
I have a simple blank page without any source code. The page is also taking a very long time to load. I am not able to understand the reason behind this.
The domain is getting a high number of requests.
What exact settings need to be changed in IIS 7.0 so that it will be faster?
Please help.
ASP.NET pages always have an initial delay when the first request is made after the file has been created/edited/uploaded, because the server needs to recompile them; however, it shouldn't be more than 2-3 seconds in practice, and it does not affect subsequent page loads.
The only thing I can think of is an overloaded server. If you're on a shared hosting package, then I recommend you find another ISP. If not, then I'm afraid there's a lot more to it than just a "make pages load faster" switch hidden away.
Is there a way to programmatically prime the asp.net output cache? I've investigated the caching API and can't seem to find an obvious way to do this. Has anyone tried something like this? If so, what method did you use?
I gave some thought to this last year and ended up concluding that it was not that important in my case; but if it's important for your website, all you have to do is simply request the web pages from somewhere like the Application_Start event (after all your startup code has run). But you shouldn't stop there!
The cache will eventually expire, and to avoid that you should set up some way to cache the pages again before any client requests them.
Make the output cache dependent on some other object in the cache, and set an expiration callback.
Thus, when that cache object expires, so do your pages, and you should make HTTP requests to the pages you want to re-cache, and so on (a sketch follows).
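A minimal sketch of that idea, assuming a hypothetical dependency key and page URL (Response.AddCacheItemDependency and the Cache.Insert callback overload are real ASP.NET APIs; the wiring here is illustrative):

// In the page you want to control, tie its output cache to a cache key:
//   Response.AddCacheItemDependency("recachePages");
// Then manage that key with an expiration callback:
using System;
using System.Net;
using System.Web;
using System.Web.Caching;

public static class OutputCachePrimer
{
    public static void InsertDependencyKey()
    {
        HttpRuntime.Cache.Insert(
            "recachePages",                 // hypothetical key name
            DateTime.UtcNow,
            null,
            DateTime.UtcNow.AddMinutes(18), // expire before the output cache would
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnKeyRemoved);
    }

    private static void OnKeyRemoved(string key, object value,
                                     CacheItemRemovedReason reason)
    {
        // The dependent pages' output cache is now invalid; re-request them.
        using (var client = new WebClient())
            client.DownloadString("http://example.com/Default.aspx"); // hypothetical URL
        InsertDependencyKey();  // re-insert the key so the cycle repeats
    }
}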
I'm answering this question, but the amount of effort involved, and the question marks I still have in my mind, lead me to advise against going through with this...
UPDATE
The only kind of dependency you can declare in the OutputCache directive is a SQL dependency. Use it if you want, but if you need your output cache to depend on some other business object, this might get very difficult. You could set up a database object, make your output cache depend on it, and expire it yourself using some kind of timer.
Man, the longer I write, the more solutions and difficulties I find! I can't write a book about something that is not worth your precious time. Believe me, the usefulness of this will be nearly zero.
Priming the cache is, as others have suggested, as easy as requesting the pages you want cached. If you do this programmatically, it will only request the HTML and not all the linked resources (CSS, JavaScript, images...), which is a good thing, as it avoids wasted bandwidth.
For many websites, the cached items that carry the biggest performance penalties are common to many or all pages. For example, the navigation system of a large CMS or storefront may query the database and do a bunch of rendering work which can then be cached for all pages. A big part of the initial load in ASP.NET also happens when the website is first accessed and loaded into memory. Both of these issues can be addressed by calling even a single page on your site, but there is nothing stopping you from making a list of URLs and calling each one periodically.
If your cache policy is set for a 20-minute timeout, maybe request each page once every 17-18 minutes.
Here are some resources with source code to help you get started:
Good Simple Primer on requesting web URL in C#
Website Monitoring Windows Service
Asynchronous Website Monitor
As I mentioned before, you can easily extend these to "foreach" over an array or list of URLs to be requested, as in the sketch below.
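A minimal sketch of that extension, assuming a hypothetical list of URLs and the 17-minute interval suggested above:

using System;
using System.Net;
using System.Threading;

class CachePrimer
{
    static readonly string[] Urls =
    {
        "http://example.com/",              // hypothetical URLs
        "http://example.com/Products.aspx",
    };

    static void Main()
    {
        // Re-request each page a little before a 20-minute output cache expires.
        var timer = new Timer(_ =>
        {
            foreach (var url in Urls)
            {
                try { using (var client = new WebClient()) client.DownloadString(url); }
                catch (WebException) { /* priming is best-effort; log and move on */ }
            }
        }, null, TimeSpan.Zero, TimeSpan.FromMinutes(17));

        Console.ReadLine();  // keep the process alive; a real monitor would be a service
        GC.KeepAlive(timer);
    }
}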
I'm more into the LAMP stack, but I've been asked to work on a site running Windows Server 2008 and IIS. I'm a beginner with IIS, so please be patient with me, and ask me to provide more information if that is needed.
I read the answer here (Slow first page load on asp.net site), but it seems that if I visit the site with one browser, the first page takes a long time to load and all subsequent pages are fast; if I then open another browser, the same thing happens. So it's not something that is saved on the server, but per session?
Is there a way to have the application running at all times?
Right now it is taking 12 to 15 seconds for the first page to load.
I have access to the WebControlCenter and FTP.
I would look in the Global.asax file and see if there is anything going on when a session is started. There is usually a method in there called Session_Start that is called whenever a session starts. It might also be that the site is configured in debug mode; you can change the web.config setting to false (see the fragment below), which has a big impact on performance.
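The relevant web.config fragment:

<configuration>
  <system.web>
    <compilation debug="false" />
  </system.web>
</configuration>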
I'm familiar with the phenomenon described in the question you've linked to, but what you're describing does seem a bit odd.
Firstly, try Jeff's suggestion and see if there's indeed something at the beginning of the session that slows things down.
If not, try answering these:
1. Is the first page always slow, or only on first access to it?
2. What happens if you open another tab in the same browser (not a different browser)?
3. It's possible that the page contains some heavy resources (images, script files, etc.) which are only downloaded on the first access. Try tracing the HTTP responses you get and see what their sizes are.
4. Try enabling trace on your web page to see which events are taking the longest time (in an .aspx page you need to add Trace="true" to the @ Page directive; see the sketch below).
Hope one of these helps...
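For point 4, the directive would look something like this (the other attributes are illustrative):

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" Trace="true" %>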
Have you tried an HTTP debugger here? Lots of things could be going on, but the fact that you get different behavior with different browsers indicates it is probably some particular resource that is overweight.