Bad requests for WordPress RSS and author URLs

On a popular WordPress site, I'm getting a constant stream of requests for these paths (where author-name is the first and last name of one of the WordPress users):
GET /author/index.php?author=author-name HTTP/1.1
GET /index.rdf HTTP/1.0
GET /rss HTTP/1.1
The first two URLs don't exist, so the server is constantly returning 404 pages. The third is a redirect to /feed.
I suspect the requests are coming from RSS readers or search engine crawlers, but I don't know why they keep using these specific, nonexistent URLs. I don't link to them anywhere, as far as I can tell.
Does anybody know (1) where this traffic is coming from and (2) how I can stop it?

Check Apache logs to get the "where" part.
Stopping random internet traffic is hard. You can try serving a different error code (a 410 Gone, say) in the hope that the clients give up, but they probably won't.
Most of my sites see these requests too, and most of the time I can trace them to Asia or the Americas. Blocking the IPs works, but if the requests are few and far between, that's just a waste of resources.
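If you do try answering those exact paths with a different code, a minimal mod_rewrite sketch (assuming Apache with .htaccess enabled; the rules would need to sit above the standard WordPress block) might look like this:

    <IfModule mod_rewrite.c>
        RewriteEngine On
        # /index.rdf doesn't exist on this site: answer 410 Gone instead of a full 404 page
        RewriteRule ^index\.rdf$ - [G,L]
        # /author/index.php?author=... doesn't exist either
        RewriteRule ^author/index\.php$ - [G,L]
    </IfModule>

The /rss request already redirects to /feed, so that one is probably fine to leave alone.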

Related

Website nginx flooded with no such file or directory request with a strange request, how do I block them?

So, checking the website logs, I've found about 2,000 requests per day with different base URLs but two different types of trailing string. Here are the examples:
*var/www/vhosts/domain.com/httpdocs/random-slug/*],thor-cookies,div.cookie-alert,div.cookie-banner,div.cookie-consent,div.cookie-content,div.cookie-layer,div.cookie-notice,div.cookie-notification,div.cookie-overlay,div.cookieHolder,div.cookies-visible,div.gdpr,div.js-disclaimer,div.privacy-notice,div.with-cookie,.as-oil-content-overlay
Second one:
*var/www/vhosts/domain.com/httpdocs/random-slug/*],sibbo-cmp-layout,thor-cookies,div.cookie-alert,div.cookie-banner,div.cookie-consent,div.cookie-consent-popup,div.cookie-content,div.cookie-layer,div.cookie-notice,div.cookie-notification,div.cookie-overlay,div.cookie-wrapper,div.cookieHolder,div.cookies-modal-container,div.cookies-visible,div.gdpr,div.js-disclaimer,div.privacy-notice,div.v-cookie,div.with-cookie,.as-oil-content-overlay,
I tried to Google them and found random websites like Binance. From the content, the string seems to refer to a cookie-consent overlay, but I don't have one on my website, so I'm wondering why I'm getting this many requests, all failing with (2: No such file or directory).
So I'm wondering if anyone knows what this is, and whether I can block requests like these two directly, to stop them flooding the nginx error log.
I searched around for a solution; the only thing that came to mind was an nginx rule that returns a 410, but this case is peculiar because of the ] that separates the slug from the not-found part, and I don't know how to write it. Also, if I visit the URL up to the slug, the page actually works, so an even better option might be to redirect to the slug just before the bracket.
Thanks.
It's some cybercriminal, or probably just a script kiddie, sending probe URLs to sites on every IP address they can think of, looking for servers that might be vulnerable to some exploit or other.
All public-facing web sites get some of this garbage. You can't make it go away, unfortunately. It's almost as old as the web, but considerably stupider.
You CAN keep your software up to date so it's not YOUR site where they find a vulnerability.
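That said, if the goal is mainly to keep these probes out of error.log, the 410 idea from the question can be written as an nginx location with a regex. This is only a sketch, and it assumes the bracket-and-selector tail appears literally in the request URI (check access.log to confirm exactly how it arrives):

    # Match the "],thor-cookies,..." / "],sibbo-cmp-layout,..." tails and answer
    # them without touching the filesystem, so nothing lands in error.log.
    location ~ "\],(sibbo-cmp-layout|thor-cookies)" {
        return 410;   # or "return 444;" to close the connection with no response at all
    }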

Nginx not logging all access in access.log (missing data of redirected requests)

In nginx I have a redirect of all incoming HTTP traffic to the same URL over HTTPS.
When I check the access log I only see the 301, but not the follow-up request, which could be a 200, a 404 or anything else.
How can I see that information in the logs of Nginx?
All I want to see is what happens after the redirect, because the redirect may work while the target URL does not, and right now the only way I can tell what works is by trying it myself (which doesn't rule out someone else getting a different response a moment later).
The follow-up requests should be in the same access.log file; the only caveat is that they will not appear directly after each other.
The 301 response is returned to the browser, which then decides whether or not to follow the proposed URL, and that doesn't happen immediately. So after the initial 301 log record, the follow-up request may be logged 10, 100 or even 1,000 unrelated log lines later. It all depends on traffic and how many log entries a single page generates.
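One way to see the 301 and its follow-up together is to filter the log by client address. A quick sketch, assuming the default combined log format, the stock log path and a placeholder client IP:

    # Pull every line for one client, then print time, method, path and status.
    grep '203.0.113.7' /var/log/nginx/access.log | awk '{print $4, $6, $7, $9}'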
I'll post a separate answer to this, because I think someone may find it useful.
The problem is not with nginx or its logging. If you access the problematic URL from a browser, the request is recorded properly. The entries that seem to be missing correspond to requests sent by external applications, scanners or other software that doesn't behave like a web browser and doesn't follow the redirect, so there is simply no second request to log.

What happens if a 302 URI can't be found?

If I make an HTTP request to get index.html on http://www.example.com but that URL has a 302 re-direct in place that points to http://www.foo.com/index.html, what happens if the redirect target (http://www.foo.com/index.html) isn't available? Will the user agent try the original URL (http://www.example.com/index.html) or just return an error?
Background to the question: I manage a legacy site that supports a few existing customers but doesn't allow new sign-ups. Pretty much all the pages are redirected (using 302s rather than 301s, for some unknown reason...) to a newer site. This includes the sign-up page. On one of the pages that isn't redirected there is still a link to the sign-up page, which itself links through to a third-party payment page (i.e. on another domain). Last week our current site went down for a couple of hours, and in that period someone successfully signed up through the old site. The only way I can imagine this happened is that, if a 302 doesn't find its intended URL, some (all?) user agents bypass the redirect and go to the originally requested URL.
By the way, I'm aware there are many better ways to handle the particular situation we're in with the two sites. We're on it! This is just one of those weird situations I want to get to the bottom of.
You should receive a 404 Not Found status code.
Since HTTP is a stateless protocol, there is no real connection between two requests of a user agent. The redirection status codes are just a way for servers to politely tell their clients that the resource they were looking for is somewhere else now. The clients, however, are in no way obliged to actually request the resource from that other URL.
Oh, the signup page is at that URL now? Well then I don't want it anymore... I'll go and look at some kittens instead.
Moreover, even if the client decides to request the new URL (which it usually does ^^), this can be considered a completely new exchange between server and client. Neither server nor client should remember that there was a previous request which resulted in a redirection status code. Instead, the current request should be treated as if it were the first (and only) request. And what happens when you request a URL that cannot be found? You get a 404 Not Found status code.
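You can watch this happen from the client side with curl. A quick sketch using the hypothetical URLs from the question:

    # -L follows redirects, -D - prints each response's headers, -o discards the bodies.
    curl -sS -o /dev/null -D - -L "http://www.example.com/index.html" | grep -iE '^(HTTP/|Location:)'
    # Expected output, roughly:
    #   HTTP/1.1 302 Found
    #   Location: http://www.foo.com/index.html
    #   HTTP/1.1 404 Not Found
    # curl reports the 404 from the redirect target; it never falls back to the
    # originally requested URL on its own.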

Redirect loop in ASP.NET app when used in America

I have a bunch of programs written in ASP.NET 3.5 and 4. I can load them fine (I'm in England) and so can my England based colleagues. My American colleagues however are suffering redirect loops when trying to load any of the apps. I have tried myself using Hide My Ass and can consistently recreate this issue.
I'm stumped. What could be causing a redirect loop for users in a specific country?!
The apps are hosted on IIS 6 on a dedicated Windows Server 2003. I have restarted IIS with no luck.
Edit
I should have made it clear that unfortunately I do not have access to the machines in the US to run Firefox with Firebug, or Fiddler. The message I get in Chrome is "This webpage has a redirect loop".
When you say "a redirect loop", do you mean a redirect as in an http redirect? Or do you mean you have a TCP/IP routing loop?
A TCP/IP loop can be positively identified by performing a ping from one of the affected client boxes. If you get a "TTL expired" or similar message then this is routing and unlikely to be application related.
If you really meant an http redirect, try running Fiddler, or even better, HttpWatch Pro and looking at both the request headers, and the corresponding responses. Even better - try comparing the request/response headers from non-US working client/servers to the failing US counterparts
You could take a look with Live HTTP Headers in Firefox and see what it's trying to redirect to. It could be trying to redirect to a URL based on the visitor's language/country, or perhaps the DNS is not fully propagated...
If you want to post the URL, I could give you the redirect trace.
What could be causing a redirect loop for users in a specific country?!
Globalization / localization related code
Geo-IP based actions
Using different base URLs in each country, and then redirecting from one to itself. For example, if you used uk.example.com in the UK, and us.example.com in the US, and had us.example.com redirect accidentally to itself for some reason.
Incorrect redirects on 404 Not Found errors.
Spurious meta redirect tags
Incorrect redirects based on authentication errors
Many other reasons
I have tried myself using Hide My Ass and can consistently recreate this issue.
I have restarted IIS with no luck.
I do not have access to the machines in the US to run Firefox Firebug/Fiddler.
The third statement above doesn't make sense in light of the other two. If you can restart IIS or access the sites through a proxy, then you can run Fiddler, since it's a client-side application. Looking at the generated HTML and the corresponding HTTP headers will be the best way to diagnose your problem.

Are there any safe assumptions to make about the availability of a URL?

I am trying to determine if there is a way to check the availability of a potentially large list of URLs (> 1000000) without having to send a GET request to every single one.
Is it safe to assume that if http://www.example.com is inaccessible (as in unable to connect to the server, or the DNS request for the domain fails), or I get a 4XX or 5XX response, then anything from that domain will also be inaccessible (e.g. http://www.example.com/some/path/to/a/resource/named/whatever.jpg)? Would a 302 response (say, for whatever.jpg) be enough to invalidate the first assumption? I imagine subdomains should be considered distinct, as http://subdomain.example.com and http://www.example.com may not resolve to the same IP?
I seem to be able to think of a counter example for each shortcut I come up with. Should I just bite the bullet and send out GET requests to every URL?
Unfortunately, no: you cannot infer anything from 4xx or 5xx or any other codes.
Those codes are for individual pages, not for the server. It's quite possible that one page is down and another is up, or that one has a 500 server-side error and another doesn't.
What you can do is use HEAD instead of GET. That retrieves the response headers for the page but not the page content. This saves time server-side (because it doesn't have to render the page) and for yourself (because you don't have to download and then discard the content).
Also, I suggest you use keep-alive to speed up repeated requests to the same server. Many HTTP client libraries will do this for you.
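A minimal sketch of that approach in Python with the requests library (the URL list and the timeout are placeholders):

    # HEAD requests over keep-alive connections: requests.Session() reuses TCP
    # connections to the same host, so many URLs on one domain don't each pay
    # the connection-setup cost.
    import requests

    urls = [
        "http://www.example.com/",                        # placeholder URLs
        "http://www.example.com/some/path/to/a/resource/named/whatever.jpg",
    ]

    with requests.Session() as session:
        for url in urls:
            try:
                resp = session.head(url, timeout=10, allow_redirects=False)
                print(url, resp.status_code)
            except requests.RequestException as exc:      # DNS failure, refused connection, timeout...
                print(url, "unreachable:", exc)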
A failed DNS lookup for a host (e.g. www.example.com) should be enough to invalidate all URLs for that host. Subdomains or other hosts would have to be checked separately though.
A 4xx code might tell you that a particular page isn't available, but you couldn't make any assumptions about other pages from that.
A 5xx code really won't tell you anything. For example, it could be that the page is there, but the server is just too busy at the moment. If you try it again later it might work fine.
The only assumption you should make about the availability of a URL is that "getting a URL can and will fail".
It's not safe to assume that a subdomain request will fail just because a request to the parent domain did, if only because between your two requests your network connection can go up, down or generally misbehave. It's also possible for the domains to change between requests.
Even ignoring all connection issues, you are still dealing with a live website that can and will change constantly. What is true now might not be true in five minutes, when they decide to alter their page structure or change the way they display a particular page. Your best bet is to assume any GET can fail.
This may seem like an extreme viewpoint, but these events will happen. How you handle them will determine the robustness of your program.
First, don't assume anything based on a single page failing. I have seen many cases where IIS continues to serve static content but cannot serve any dynamic content.
You have to treat each host name as unique: you cannot assume subdomain.example.com and example.com point to the same IP, and even if they do, there is no guarantee they are the same site. IIS, again, has host headers that allow you to run multiple sites on a single IP address.
If the connection to the server actually fails, then there's no reason to check URLs on that server. Otherwise, you can't assume anything.
In addition to what everyone else is saying, use HEAD requests instead of GET requests. They function the same, but the response doesn't contain the message body, so you save everyone some bandwidth.
