I'm using Nginx and the tx_news extension. When I'm working in the backend with news records (cut, edit, close, ...), I sometimes get a 502 Bad Gateway error after about a second. I don't understand what's going wrong here. Sometimes it works fine, and sometimes I get the error repeatedly. It only happens in combination with tx_news; the rest of the backend works fine. Maybe this is just an Nginx configuration problem (connection limits or something like that). Does anybody here have an idea?
Every hint helps :-)!
Check the consistency of your database records on such a page.
I used to get 502 Bad Gateway errors on Nginx after copying a page with fluidcontent elements, because somehow an FCE ended up as its own parent, which caused an infinite loop when rendering the page module. But all I saw was the bad gateway page. This only happened with a specific version of fluidcontent.
Maybe some relations in your news records became circular after pasting? It's worth checking.
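If you want to check for that kind of loop yourself, one quick way is to export the (uid, parent) pairs from the relevant table and look for parent chains that come back to themselves. A minimal sketch, assuming you can dump those pairs (table and column names vary by extension, so the data here is purely illustrative):

```python
def find_circular_records(rows):
    """rows: iterable of (uid, parent_uid) pairs; parent_uid is None/0 for roots.

    Returns the set of uids whose parent chain loops back on itself
    (including records that are their own parent).
    """
    parent = {uid: pid for uid, pid in rows}
    circular = set()
    for uid in parent:
        seen = set()
        cur = uid
        while cur in parent and parent[cur]:
            if cur in seen:          # revisited a node: the chain loops
                circular.add(uid)
                break
            seen.add(cur)
            cur = parent[cur]
    return circular

# Example: record 7 is its own parent, like the fluidcontent case above.
print(find_circular_records([(1, 0), (2, 1), (7, 7)]))  # -> {7}
```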
What status code should I return while updating and redeploying the website, so that search engines understand what's happening?
Currently I use code 503, but Google registers it as a site error!
503 is the most appropriate HTTP response code for the scenario you describe. There are no 4xx or 3xx errors that indicate "try again later".
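If you do keep serving a 503 while reloading, send a Retry-After header with it; crawlers generally treat a 503 plus Retry-After as "temporarily down, come back later" rather than a permanent fault. A minimal sketch of a maintenance responder using Python's standard library (the port and retry interval are placeholders):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 503 = temporarily unavailable; Retry-After hints when to retry
        self.send_response(503)
        self.send_header("Retry-After", "3600")  # seconds
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Down for maintenance, back soon.\n")

HTTPServer(("", 8080), MaintenanceHandler).serve_forever()
```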
But let's step back:
It shouldn't be a problem if Google occasionally fails to crawl your website. If your business model is significantly impacted by this, I would suggest that there is something a bit wrong with your model. (And besides, that is a matter for discussion between you and Google!)
It would be more of a problem if your website's (real) users saw 503s.
There is an obvious solution for you: reimplement the mechanism for reloading your website so that you don't need to send a 5xx response while reloading.
For example, implement a pair of sites and flip between them using a load balancer or similar. That way you can be updating one site while the other is serving requests. When the new site is ready, switch it into production at the load balancer.
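A minimal sketch of that flip at the filesystem level, assuming the web server's document root is a symlink (all paths and the reload command are assumptions for illustration; with a real load balancer you would drain one backend instead):

```python
import os
import subprocess

def flip_live_site(new_release: str, live_link: str = "/var/www/live") -> None:
    """Atomically repoint the 'live' symlink at the freshly updated copy."""
    tmp = live_link + ".new"
    os.symlink(new_release, tmp)   # create a pointer to the updated site
    os.replace(tmp, live_link)     # rename(2): atomic swap on POSIX
    subprocess.run(["systemctl", "reload", "nginx"], check=True)

# Update /var/www/site_b while /var/www/site_a serves traffic, then:
# flip_live_site("/var/www/site_b")
```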
There are probably other ways to do this ... depending on how your site works.
Some users of our app consistently get an NMARoutingErrorNetworkCommunication error whenever they try to start navigation.
They have retried the routing without success. They have switched from Wi-Fi to cellular, tried different locations and different routing parameters, and switched from Truck to Car routing, all with no success.
On top of this, everything else works. The map screen is fine, Google routes fine, and Apple routes fine.
From what I can tell, this is a rejection by the server for routing. Is there any more explanation of what this bug is, or how it might be debugged? It affects probably 2% or less of our users, making it hard to reproduce. But since it's an error from the HERE servers themselves, it makes me wonder how much control I have to fix it.
NMARoutingErrorNetworkCommunication - The online route calculation request failed because of a networking error. The route calculation request should be retried.
It's possible that the request coming from your application has some incorrect parameters that make it take too long to reach our end, or that the routing request is being blocked. Could you share more details, with a code snippet showing how your end makes the request?
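Since the documented advice is simply to retry, a client-side mitigation worth trying is a retry loop with exponential backoff around the route calculation. A sketch in Python for brevity (the callable and the error type are stand-ins for the actual SDK call and NMARoutingErrorNetworkCommunication):

```python
import random
import time

class TransientRoutingError(Exception):
    """Stand-in for the SDK's network-communication routing error."""

def calculate_route_with_retry(request_route, max_attempts=4):
    """request_route: callable that performs one routing request and
    raises TransientRoutingError on a network failure."""
    for attempt in range(max_attempts):
        try:
            return request_route()
        except TransientRoutingError:
            if attempt == max_attempts - 1:
                raise  # give up and surface the error to the caller
            # exponential backoff with jitter: ~1s, ~2s, ~4s
            time.sleep(2 ** attempt + random.random())
```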
I'm working on a small web application, and I'm trying to decide if I should make the effort to emit semantically appropriate HTTP status codes from within the application.
I mean, it makes sense for the web server itself to emit proper response codes. 500 Internal Server Error for a misconfigured Apache or 404 Not Found for a missing index.php or whatever all make sense, since there's nothing else the server can really do.
It also makes sense to manipulate the browser with 303 See Other or other HTTP mechanisms which actually produce behavior.
But if all that happened is a missing GET parameter, for example, is there any reason to go out of my way to return 400 Bad Request? Or how about 404 Not Found, if my application is handling all the routing by itself? From what I can tell there isn't any behavior associated with either of those error codes.
My general opinion: provide codes if the code provides actionable data for the user.
If all you're doing is presenting content, then in most cases I think it's less important. If YouTube fails to load a video, I mostly care about the fact that I can't watch my video. That it failed with a 418 status might be intellectually interesting, but it doesn't really provide me with any helpful information (even assuming a non-silly failure code).
On the other hand, if you're allowing some kind of user interaction with a server, then the codes become much more important. I might actually care about why my request failed, because I'm now in a position to do something about it.
That said, some codes are actionable even for plain content. Take 410 Gone, for example: if my request failed for that reason but I just got back a generic "stuff broke" message, I'd probably repeat the request a bunch of times, get nowhere, and give up in frustration. Knowing that the thing I'm looking for no longer exists is a pretty useful thing to know.
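To make the missing-parameter and 410 cases concrete, here is a minimal sketch (Flask and all route/ID names are chosen purely for illustration):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
REMOVED_IDS = {"42"}  # hypothetical: ids of deliberately deleted items

@app.route("/items")
def get_item():
    item_id = request.args.get("id")
    if item_id is None:
        # missing required GET parameter -> 400 Bad Request
        return jsonify(error="missing required parameter 'id'"), 400
    if item_id in REMOVED_IDS:
        # deliberately removed -> 410 Gone (actionable: stop retrying)
        return jsonify(error="this item was permanently removed"), 410
    return jsonify(id=item_id)
```

A client (or a developer reading the response) can then react differently to "you called me wrong" versus "this no longer exists", instead of seeing one generic failure.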
I think it's very important for a web service to respond with appropriate codes, as sometimes the developer using the service won't know what's wrong or why the app stopped working unless they can see the status code.
We recently migrated our application (IIS server + DB server) to AWS and also modified the network architecture a little. The entry point of the system is an Astaro firewall (we use the AWS AMI), which also hosts the SSL certificate of the web server. Everything related to the firewall was done by a vendor, and we only have some read-only privileges.
We are getting 403 errors in a few situations, but I will explain one, as they all may be related.
We have a form which queries the database and returns a report in HTML format (the report also has some checkboxes to do updates). The first time the form is submitted, we always get the report back. If we post the form again, updated with new data, it fails with a 403 error. We noted that it doesn't fail when the first query returned a very low number of rows (or none).
Looking at the details of the POSTs in the developer tools, the only apparent difference between a working request and one that gets a 403 reply is the size of the data posted. The second POST is always bigger because it contains the data of the first report (the page also has checkboxes on the rows).
Also, looking at the IIS logs, we don't see any trace of the failing POST. Nothing at all.
This problem happens only in production; in the dev environment everything works flawlessly. The only difference is that production sits behind the firewall/SSL, while development is all open. This is why we think it may be related to SSL.
The vendor is not the most helpful, so we are looking for help to pinpoint the issue and trying to take the situation into our own hands.
Any input appreciated.
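One way to test the size hypothesis without the vendor's help is to replay the POST with payloads of increasing size and watch where the 403 starts. A rough sketch (the URL and field name are placeholders for your actual form endpoint):

```python
import requests

URL = "https://example.com/report.aspx"  # placeholder endpoint

# If 403s begin consistently above some size, that points to a
# request-size limit in the firewall rather than in IIS, which would
# also explain why the failing POSTs never appear in the IIS logs.
for size_kb in (1, 8, 32, 64, 128, 256, 512):
    body = {"data": "x" * (size_kb * 1024)}
    resp = requests.post(URL, data=body, timeout=30)
    print(f"{size_kb:>4} KB -> HTTP {resp.status_code}")
```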
I already posted this in the old forum, so I hope this will be fine here too.
Suddenly, users in one location are getting errors on the CMS side; if they work from elsewhere, there are no problems. I know forum usage is low, but if I'm going to slap the network people silly I need some pointers.
Users get several errors while the homepage loads.
Err 1: A few times: JavaScript alert -
[synchronizer] unable to get client-side resource with ID xxxx
Err 2: Sometimes:
Unspecified error. on /library/javascript/mdvc.js
Err 3: Several times:
A GUI system error occured. Details:[CmdsHTTPDone]
<tcmapi:Response xmlns:tcmapi="http://www.tridion.com/ContentManager/5.0/TCMAPI" success="false" actionWF="false" ID="WebGUIResponder.aspx"><tcmapi:Error><tcm:Line Cause="true" xmlns:tcm="http://www.tridion.com/ContentManager/5.0"><![CDATA[Request message cannot be empty. ]]></tcm:Line></tcmapi:Error></tcmapi:Response>
Err 4: Sometimes we also get "permission denied" errors on TaskBarControl.js or other scripts.
In the end, all views are empty.
When trying to use a web proxy tool (Fiddler2) to see what is sent and received, the user does NOT get any problems: they can log in and use the CMS without any issues. As long as the local web proxy tool is running, the user has no problems with the CMS. As soon as the tool is shut down, the same problems come back.
So we cannot even debug with this tool, as we don't know what impact Fiddler has on the connection that makes it work. It's just in one location, for both Prod and Test (same issues), while DEV is still fine. So my deduction is that "some rule in the local network" is wrong, but how do I proceed?
The CME GUI loaded in the browser regularly checks back with the CME server. This looks like the browser cannot get a connection to the CME server.
For further troubleshooting, you can try a full reload (Ctrl-F5) in the web browser to see whether it is indeed a connection issue.
If it is a connection issue it might not be Tridion related at all.
This is probably a proxy issue -- especially since you say that you cannot reproduce it using Fiddler. Fiddler works by acting as a proxy, so that would explain the lack of symptoms when using it.
You can try just using your browser's developer tools (press F12). Then watch for any requests that come back with a different status code than 200 or 304. You can then show this to your network team who can hopefully troubleshoot the issue from there.
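If you want to repeat that check from the affected network without a browser, a small script can flag anything that doesn't come back as 200 or 304 (the URLs are placeholders you would copy from the network tab):

```python
import requests

# Placeholders: URLs observed in the browser's network tab for the CME.
urls = [
    "http://cms.example.com/WebUI/WebGUIResponder.aspx",
    "http://cms.example.com/library/javascript/mdvc.js",
]
for url in urls:
    resp = requests.get(url, timeout=10)
    if resp.status_code not in (200, 304):
        print(f"suspect: {url} -> HTTP {resp.status_code}")
```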