Odd requests from Mobile Safari in access.log - mobile-safari

In the access.log of a web server I see the following 9 requests:
Mozilla/5.0 (iPad; CPU OS 10_3_1 like Mac OS X) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.0 Mobile/14E304 Safari/602.1 | /example_page.php
MobileSafari/602.1 CFNetwork/811.4.18 Darwin/16.5.0 | /apple-touch-icon-152x152-precomposed.png
MobileSafari/602.1 CFNetwork/811.4.18 Darwin/16.5.0 | /apple-touch-icon-152x152.png
MobileSafari/602.1 CFNetwork/811.4.18 Darwin/16.5.0 | /apple-touch-icon-precomposed.png
MobileSafari/602.1 CFNetwork/811.4.18 Darwin/16.5.0 | /apple-touch-icon.png
MobileSafari/602.1 CFNetwork/811.4.18 Darwin/16.5.0 | /apple-touch-icon-152x152-precomposed.png
MobileSafari/602.1 CFNetwork/811.4.18 Darwin/16.5.0 | /apple-touch-icon-152x152.png
MobileSafari/602.1 CFNetwork/811.4.18 Darwin/16.5.0 | /apple-touch-icon-precomposed.png
MobileSafari/602.1 CFNetwork/811.4.18 Darwin/16.5.0 | /apple-touch-icon.png
The same 9 requests, from the same IP address and for the same URLs, are repeated several times a day. There are no other requests from that IP.
It seems like JavaScript is not executed on the example_page.php page during these requests. What do these requests mean?
Thanks in advance

This is normal behavior for a Safari browser. The browser is looking for icons that might look better than the normal favicon in order to show your icon in various contexts (like in bookmark links, or on the default home screen showing frequently visited sites). See more information about this behavior here...
Apple Developer - Configuring Web Applications
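If you would rather these probes not end in 404s, you can place icon files at those exact paths or declare them explicitly in the page head; a minimal sketch (the file names here are only examples):
<link rel="apple-touch-icon" href="/apple-touch-icon.png">
<link rel="apple-touch-icon" sizes="152x152" href="/apple-touch-icon-152x152.png">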

Related

Nginx proxy_pass, limit active concurrent connections

I have a service running on a low-end machine (behind Nginx) and the CPU performance is rather weak. One of the APIs needs a lot of CPU time, so I need to limit the maximum number of concurrent requests. But if a request is cached, it can be answered much faster.
What I want to do is limit the maximum number of concurrent connections sent to the backend service for that particular API. I researched limit_req and limit_conn, but neither of them fits my case. With limit_req it is not easy to pick a value: it may allow too much load (too many cache misses) or too little (when most of the requests are cached). And limit_conn drops the excess requests, whereas I want them to be queued.
Currently I'm using the Apache2 MPM settings for this, but they limit all requests.
Is it possible to make Nginx keep a maximum number of connections to the backend and make the others wait?
Nginx Cache Based
If many of the requests try to access the exact same data, then you can use the cache locking mechanism so that only one of them is passed to the backend while the others wait for the cached response; that at least prevents overloading the server with duplicate work.
proxy_cache_lock on;
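As a rough sketch of what that could look like (the cache zone name, path and the /slow-api location below are placeholders, not taken from your setup):
proxy_cache_path /var/cache/nginx keys_zone=api_cache:10m;
location /slow-api/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_cache api_cache;
    proxy_cache_valid 200 1m;
    # only one request per cache key goes upstream; identical
    # concurrent requests wait for the cached response instead
    proxy_cache_lock on;
    proxy_cache_lock_timeout 30s;
}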
I do not know of another solution for your situation. Holding requests back once N of them have already been sent to the service does not seem to be an option by default. If you had multiple such servers, you could set nginx up as a load balancer, but that's quite a different concept.
Apache2 Request Based
With Apache you can specify the maximum number of client connections that can be handled at once. By setting this value to a very small number, even 1, additional requests are automatically queued.
MaxRequestWorkers 1
(In older versions, before 2.3.13, the directive was called MaxClients.)
This is a server-wide setting, so all connections are affected. It is therefore important that you run a separate instance for that specific service and route all access to it through that one dedicated Apache2 server, as in the layout and sketch below.
O Internet
|
v
+-------------------+
| | proxy based on URL or such
| Main HTTP Server +---------------+
| | |
+---------+---------+ |
| v
| +-----------------------------------+
v | |
+-------------------+ | Apache with MaxRequestWorkers=1 |
| | | |
| Main Services | +---------+-------------------------+
| | |
+-------------------+ |
v
+--------------------------+
| |
| Slow Service Here |
| |
+--------------------------+
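A minimal sketch of such a dedicated Apache2 instance (prefork MPM with mod_proxy enabled; the ports and backend address are assumptions, not taken from your setup):
# one worker only: further connections wait in the listen backlog
ServerLimit 1
MaxRequestWorkers 1
Listen 8081
<VirtualHost *:8081>
    ProxyPass "/" "http://127.0.0.1:9000/"
    ProxyPassReverse "/" "http://127.0.0.1:9000/"
</VirtualHost>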

Is a "Hard Coded" user agent enough for a program to work on multiple computers?

I'm using idHttp to log in to some sites and download a few files, and I was wondering: since my program is going to run on multiple computers with different Windows versions and software, when I say for example:
idHttp.userAgent := 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36 OPR/38.0.2220.41';
Is that enough? Or do I have to somehow extract the correct user-agent information for that computer from somewhere and send that? I mean, is a hard-coded user agent the way to go, and enough for a program to be compatible with multiple computers?
login to some sites and download a few files
By this you're most likely dealing with cookies. That is a difference from, e.g., search engines, which want to index the internet and more or less request anything, without having credentials to log in anywhere.
my program is going to be run on multiple computers with different windows and softwares
This is irrelevant to your program.
'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36 OPR/38.0.2220.41'
With this the server expects you to be able to behave just like the internet browsers you're naming, which you obviously won't.
In your case you don't have an interactive internet browser - you have an automated bot, and those should have an appropriate user agent. If you read https://en.wikipedia.org/wiki/User_agent#Format_for_automated_agents_.28bots.29 you'll see that a user agent like the following would be more fitting for your program: website owners can identify you (which can have both advantages and disadvantages) and can also look up more about your purpose under the URI you're giving them:
MyProgram/1.0 (+http://myprogram.org/what_i_am_doing.html)

google pagespeed service returns 503 however site is online

I have a strange problem, because I don't know how to get at the background request.
I have a newly updated site, "whitestudio.org", and I am trying to test it in the service mentioned above. However, Google returns a 503 response. I don't know what to do.
I also tried to test it from the command line, which succeeded. Can anyone help me?
curl -H "User-Agent: Mozilla/5.0 (compatible; GoogleBot/2.1; +http://www.google.com/bot.htm)" whitestudio.org
By the way, the PageSpeed bot's user agent for mobile is:
Mozilla/5.0 (iPhone; CPU iPhone OS 8_3 like Mac OS X) AppleWebKit/537.36 (KHTML, like Gecko; Google Page Speed Insights) Version/8.0 Mobile/12F70 Safari/600.1.4
(Noted for those who want to test in the future.)
In the end, the error was related to an incorrectly configured "A" record (DNS configuration); we had migrated from one host to another.
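For reference, one quick way to check what an A record actually resolves to (querying a public resolver directly so local caches don't hide a stale answer):
dig +short A whitestudio.org @8.8.8.8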

Getting 'Coikoe' in http request instead of 'Cookie'

Our website is receiving HTTP requests from a user which contain a 'Coikoe' header instead of 'Cookie'.
The HTTP request object received from Firefox is shown below:
com.pratilipi.servlet.UxModeFilter doFilter: REQUEST : GET http://www.pratilipi.com/books/gujarati HTTP/1.1
Host: http//www.pratilipi.com
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Referer: http://www.pratilipi.com/?action=login
Coikoe: _gat=1; visit_count=1; page_count=2
X-AppEngine-Country: XX
X-AppEngine-Region: xx
X-AppEngine-City: XXXXXX
X-AppEngine-CityLatLong: 12.232344,12.232445
The HTTP request object received from Google Chrome is shown below:
com.pratilipi.servlet.UxModeFilter doFilter: REQUEST : GET http//www.pratilipi.com/books/hindi HTTP/1.1
Host: http//www.pratilipi.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36
Referer: http//www.pratilipi.com
Accept-Language: en-US,en;q=0.8,ta;q=0.6
Coikoe: _gat=1; visit_count=1; page_count=1
X-AppEngine-Country: XX
X-AppEngine-Region: xx
X-AppEngine-City: xxxxxx
X-AppEngine-CityLatLong: 12.232344,12.232445
The user is on a Windows 8 system.
Question: Why is this happening and how can I solve it? I have never seen anything like this before. Has anyone come across anything like this?
Thank You
This user is probably behind some sort of privacy proxy.
The same thing happens to the Connection request header, as explained in Cneonction and nnCoection HTTP headers: the proxy mangles the header so it won't be recognized by the receiver, but because it merely shuffles some letters around, the TCP packet's checksum stays the same.
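To illustrate why that trick is "free" for the proxy: the TCP checksum is a ones'-complement sum of 16-bit words (RFC 1071), and swapping two bytes that sit an even distance apart never moves a byte between the high and low lanes, so the sum is unchanged. A small standalone C sketch (not from the original post):
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* ones'-complement sum of 16-bit words, as used by the TCP/IP checksum */
static uint32_t ocsum(const char *s)
{
    uint32_t sum = 0;
    size_t len = strlen(s);
    for (size_t i = 0; i + 1 < len; i += 2)
        sum += ((uint8_t)s[i] << 8) | (uint8_t)s[i + 1];
    if (len & 1)
        sum += (uint8_t)s[len - 1] << 8;
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);
    return sum;
}

int main(void)
{
    /* 'o' and 'i' are two positions apart, so they share a byte lane
       and swapping them does not change the sum */
    printf("Cookie -> 0x%04x\n", (unsigned)ocsum("Cookie"));
    printf("Coikoe -> 0x%04x\n", (unsigned)ocsum("Coikoe"));
    return 0;
}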
I'm gonna give a rather speculative answer based on some online research.
I went through all the specifications for cookies right from the early drafts and there doesn't seem to be anything about coikoe or misspelling cookies.
I found another user (Pingu) who complained about the same thing on Twitter. His relevant tweets:
(1) Weird problem: have a device that changes "Cookie" to "Coikoe" in TCP stream and don't know which it is. No deep packet inspection in place.
(2) There is a Linksys Wifi Router, a Cisco Switch adding a VLAN-Tag and a Linux box routing the VLAN to Internet router. Nothing else. #Coikoe
I then asked him about it earlier today. This was his reply:
It must have been something with my routing and iptables setup on the Linux box to allow the guests only limited access.
I can remember the problem. But do not remember how I solved it. It happened from Clients connected to my Guest WiFi.
Given my understanding from your discussion in the comments earlier, I'm guessing that the router sends a coikoe header instead of a cookie if the user has limited connectivity and/or problems with the access point.
Also see this Ruby code, which shows how they handled the misspelled cookie header:
def set_cookie_header
  request.headers["HTTP_COOKIE"] ||= request.headers["HTTP_COIKOE"]
end
I looked at lots of other popular forums like Reddit, 4chan, Stack Overflow, Facebook and Google, but I could not find anything else. Good luck with your problem.
Well, this looks like a typo. Just to confirm, run the following PowerShell command in the project directory:
Get-ChildItem -recurse | Select-String -pattern "Coikoe" | group path | select name
and I hope you will be able to find the mistake you have made.

Are UDP packets ephemeral or will the server keep received packets until read?

I have a backend process that does work on my database. It runs on a separate computer so that the frontend works miracles (in terms of speed, at least). That backend process creates a UDP server and listens for packets on it.
On the frontend computer, I create child processes from a server. Each child may create data in the database that requires the backend to do some more work. To let the backend know, I send a PING using a UDP client connection.
Front End / Backend Setup Processing
+-------+ +---------+ +----------+
| | | | | Internet |
| Front | PING | Backend | | Client |
| End |-------->| | +----------+
| | | | HTTP Request |
+-------+ +---------+ v
^ ^ +----------+
| | | FrontEnd |--------+
| | +----------+ PING |
v v HTTP Response | v
+---------------------------+ v +---------+
| | +----------+ | Backend |
| Cassandra Database | | Internet | +---------+
| | | Client |
+---------------------------+ +----------+
Without the PING, the backend ends its work and falls asleep until the next PING wakes it up. There is a failsafe, though: I put a timeout of 5 minutes on the wait so the backend wakes up once in a while no matter what.
My question here is about the UDP stack. I understand it is a FIFO, but I am wondering about two parameters:
How many PING can I receive before the FIFO gets full?
May I receive a PING and lose it if I don't read it soon enough?
The answer to these questions can help me adjust the current waiting loop of the backend server. So far I have assumed that the FIFO had a limit and that I may lose some packets, but I have not implemented a way to allow for packets disappearing (i.e. someone sends a PING, but the backend takes too long before checking the UDP stack again and thus the network decides that the packet has now timed out and removes it from under my feet.)
Update: I added a simple processing diagram above to show what happens and when (it is time-based from top to bottom).
How many PING can I receive before the FIFO gets full?
It depends on the size of your socket receive buffer.
May I receive a PING and lose it if I don't read it soon enough?
Yes and no. Datagrams which have been received and which fit into the socket receive buffer remain there until they have been read or the socket is closed. You can tune the size of the socket receive buffer within limits. However, a datagram that arrives when the socket receive buffer is full is dropped.
You can set the default buffer size on your system with sysctl, or set it per socket using setsockopt with the SO_RCVBUF option.
int n = 512 * 1024; // 512K
if (setsockopt(my_socket, SOL_SOCKET, SO_RCVBUF, &n, sizeof(n)) == -1) {
    perror("Failed to set buffer size, using default");
}
There is also a maximum set on the system that you can't go over. On my machine the default receive buffer size is 208K and the maximum is 4M:
# sysctl net.core.rmem_max
net.core.rmem_max = 4194304
# sysctl net.core.rmem_default
net.core.rmem_default = 212992
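In the backend's waiting loop it is therefore enough to drain whatever is already queued before going back to sleep; datagrams that arrived while you were busy are still sitting in the receive buffer. A rough sketch using a non-blocking read (my_socket is assumed to be your bound UDP socket; MSG_DONTWAIT and errno come from <sys/socket.h> and <errno.h>):
char buf[64];
ssize_t len;
/* read every PING currently queued in the receive buffer */
while ((len = recvfrom(my_socket, buf, sizeof(buf), MSG_DONTWAIT, NULL, NULL)) > 0) {
    /* each datagram is one PING; schedule the pending work once */
}
if (len == -1 && errno != EAGAIN && errno != EWOULDBLOCK)
    perror("recvfrom");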
