I have a file outside a WordPress install which contains a form that submits to itself. I can access the file and fill out the form. The form submits and the page reloads as expected (there is no validation), but when using JavaScript to submit the form I receive a WordPress 404 error. The URL of the file stays the same when receiving the 404 error. If I refresh the page it works fine (no 404 error).
I don't know what the difference would be between the two methods of submitting the form. Why would WordPress get involved in one but not the other?
I guess a simple solution would be to update my .htaccess mod_rewrite rules to explicitly ignore the file; could anyone help with that?
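Something like this is what I have in mind, placed before the standard WordPress block (an untested sketch; standalone-form.php is a placeholder for the actual filename):

    # Untested sketch: serve the file directly and stop rewriting before
    # the WordPress catch-all rule runs. Filename is a placeholder.
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteRule ^standalone-form\.php$ - [L]
    </IfModule>
    # ... standard WordPress rewrite rules follow ...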
Any other suggestions as to the differences between the two methods (form submit vs. JavaScript submit) would be greatly appreciated; I just can't think of why this would happen.
I tracked the issue down to the form processing. Looking in the logs, I found that a "Premature end of script headers" error was throwing a 500 Internal Server Error, which in turn produced a 404 while Apache tried to use an ErrorDocument to handle the request, and that 404 was being handled by WordPress. The premature end of script was caused by some MySQL connection code, but in other cases it could be caused by a mailer or other form-processing script. I hope that helps others who run into this problem.
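For illustration, here is a minimal sketch (not my exact code; credentials are placeholders) of connection code that fails cleanly instead of dying before any headers are sent, which is what triggers the "Premature end of script headers" error:

    <?php
    // Sketch: connect, and on failure send a complete response instead of
    // letting a fatal error kill the script mid-headers. Placeholders only.
    $db = mysqli_connect('localhost', 'db_user', 'db_pass', 'db_name');
    if (!$db) {
        header('Content-Type: text/plain', true, 500);
        echo 'Database connection failed: ' . mysqli_connect_error();
        exit;
    }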
The form is created using Contact Form 7, but the submit action is handled with AJAX and a PHP file that makes a database connection. The PHP file returns a 500 (Internal Server Error), and the AJAX function throws: Uncaught TypeError: $(...).AjaxDebug is not a function.
In the debug file generated by WP there is nothing related to this error.
Can someone help me understand this problem?
I disabled the plugins one by one, hoping the problem was a conflict, but nothing changed.
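For context, the debug file mentioned above is the one produced by WordPress's standard debug constants in wp-config.php, along these lines:

    // Standard WordPress debug constants in wp-config.php: errors are
    // written to wp-content/debug.log instead of being shown on screen.
    define('WP_DEBUG', true);
    define('WP_DEBUG_LOG', true);
    define('WP_DEBUG_DISPLAY', false);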
Did you clear the cache?! Use a plugin like Super Cache. Good luck.
I'm having a few issues with my site where certain pieces of script were failing within the application, involving checks on things such as cgi.script_name. I've managed to trace this back to an issue caused by my web.config file. I ran a cfdump inside a cfsavecontent block within Application.cfm, wrote the result to a file in my site structure, and loaded the HTML page.
I found that at the time the dump was performed, the cgi.script_name value was /errorpages/404.cfm, yet the page loaded normally and cgi.script_name read /cart.cfm. I'm very confused. I've currently disabled ANY error setup and it works fine, but within Error Pages on IIS7 I had originally set up the 404 to execute /errorpages/404.cfm, and under the error feature settings, to throw a custom error page when it encountered an error. Does anybody know where the hell I am going wrong?
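For clarity, the diagnostic was roughly this (a sketch; the output path is a placeholder):

    <!--- Sketch for Application.cfm: capture a dump of the CGI scope and
          write it to a file for inspection. Output path is a placeholder. --->
    <cfsavecontent variable="cgiDump">
        <cfdump var="#cgi#">
    </cfsavecontent>
    <cffile action="write" file="C:\temp\cgi_dump.html" output="#cgiDump#">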
When I purposefully throw a PHP error from within embedded PHP code in the PHP filter module, Drupal displays the message "The website encountered an unexpected error. Please try again later."
We like to send users who hit unanticipated programmer errors to an error-handling page, so that they don't land on a dead error page without us getting notified, and I'm trying to find out how to intercept this in Drupal. I've tried searching within Drupal for where this error string gets output, with no luck.
How exactly does Drupal handle errors occurring within embedded php code, or more directly: how can I make it redirect to another page or catch the error in another way?
Thanks
That specific error comes as a result of Drupal explicitly setting the PHP exception handler to a custom function (done in _drupal_bootstrap_configuration()).
The exception handler itself is _drupal_exception_handler(), which invokes _drupal_log_error(), and that's where the error page is generated.
I've never tried it, but I reckon you'd get away with implementing hook_boot() and using set_exception_handler() to provide your own implementation of the core Drupal functions to theme that error page differently.
It might seem a bit of a long way round, but since _drupal_log_error() doesn't invoke any hooks (it probably happens too early on for that anyway), I can't see any other way to do it without editing the core files.
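Untested, but the shape of it would be something like this sketch (the module name mymodule and the redirect path are hypothetical):

    <?php
    /**
     * Implements hook_boot().
     *
     * Runs after core's _drupal_bootstrap_configuration(), so this call
     * replaces the exception handler that core registered.
     */
    function mymodule_boot() {
      set_exception_handler('mymodule_exception_handler');
    }

    /**
     * Custom uncaught-exception handler: log the error so we still get
     * notified, then send the user to a friendly page.
     */
    function mymodule_exception_handler($exception) {
      watchdog('mymodule', 'Uncaught exception: @message',
        array('@message' => $exception->getMessage()), WATCHDOG_ERROR);
      // By the time an uncaught exception fires, the full bootstrap has
      // usually completed; the redirect path is hypothetical.
      header('Location: /friendly-error');
      exit;
    }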
Currently we are using Kentico CMS for our web site, and we used to have a page called pages/page1.aspx. We removed that page, but every day the Google, Bing and Yahoo search robots try to read it. Because the page doesn't exist, the CMS throws the following error (in the log):
Event URL: /pages/page1.aspx
URL referrer:
User agent: Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)
Message: The file '/pages/page1.aspx' does not exist.
Stack Trace:
at System.Web.UI.Util.CheckVirtualFileExists(VirtualPath virtualPath)
// and the rest of the stacktrace
When we get too many of these errors the whole site crashes (we have to clear the .NET temp files and restart the app pool). Basically, I can go to a page that doesn't exist, hit refresh many times, and take the site down. Extremely bad. But first things first: how can I get the bots to stop trying to access this page?
Thanks in advance.
If it's just a single page, or a few pages that are causing this, modify robots.txt to tell the legitimate search engines not to check it.
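For example, a robots.txt in the site root along these lines (the path is taken from the error log above):

    # Ask well-behaved crawlers to stop requesting the removed page
    User-agent: *
    Disallow: /pages/page1.aspx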
I'd also check what HTTP response you're sending when the page is not found. You might be sending something that causes the spider to think it should keep checking. Instead of a 404, maybe you should try permanently redirecting to your home page?
Finally, WTF? I'd talk to the Kentico folks about this bug.
I think that you have a configuration error. While a robots.txt file would hopefully correct this issue, bots can choose to ignore that file.
A better solution would be to setup your error pages correctly. What happens when you go to a page that doesn't exist? It sounds like your system is showing a yellow screen, which is an unhandled exception bubbling all the way up to the user. I would check your error page setup so that users (and robots) get redirected to a 404 error page. I'm guessing that when Yahoo and others see that 404 page, they will stop trying to index it.
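As a rough sketch, the ASP.NET side of that could look like this web.config fragment (the error page path is a placeholder, and Kentico's own error page settings may take precedence):

    <!-- Sketch: show a friendly error page instead of the unhandled
         exception screen. The error page itself should set
         Response.StatusCode = 404 so crawlers see a real 404. -->
    <configuration>
      <system.web>
        <customErrors mode="On" redirectMode="ResponseRewrite">
          <error statusCode="404" redirect="~/SpecialPages/NotFound.aspx" />
        </customErrors>
      </system.web>
    </configuration>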
Have you tried using a robots.txt file?
Our website keeps logging a "page not found" error under the Drupal Recent Log Entries, but we do not know where it is coming from. When clicking into the detail of the error, the location is marked as "http:///*", the user is "Guest", and the severity is "warning". Does anybody have ideas on where we can find out what's causing these errors to be logged?
Page not found warnings like this can be caused by any kind of code that triggers an HTTP request, such as loading a .js or .css file or an image. A way to debug this is the quick-and-dirty method described by Lullabot: add a debug_backtrace() at the point where the warning is registered (in your case probably the watchdog() function; see the comment by Randy Fay).
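In Drupal 6/7 that hack would look something like this (a temporary core edit for debugging only; the log path is a placeholder):

    // Sketch: paste near the top of watchdog() in includes/bootstrap.inc
    // and remove when done. Appends the full call stack to a file so you
    // can see what generated the request for the bad URL.
    if ($type == 'page not found') {
      error_log(print_r(debug_backtrace(), TRUE), 3, '/tmp/watchdog_backtrace.log');
    }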
PS. #Arvind: If you're using the Drupal API correctly, there shouldn't be too many drupal_goto calls in your code.
Do you have a typo in your code? Check whether you have mistakenly commented something out, because I see /* in the link.
This has to be a local error. I would check the drupal_goto() calls in your code.
This is my guess, and I'm no Drupal expert. Let us know if it helps.
atb!
You might have an issue with your favicon like I did, although I don't recall it generating errors in my watchdog log.
It's what triggered a lot of page-not-found entries on my setup.
Basically, you normally put your favicon in your site root. However, some browsers (I believe IE6) don't check the root for your favicon, but your theme folder.
I just duplicated my favicon in my theme folder, and my problem was solved.
Worth a try.