I am running a Google Compute Engine instance. About every ten seconds, I get a request from a link-local address (such as 169.254.169.254) requesting metadata from my instance. The request is on the computeMetadata path, suggesting that Google is trying to get metadata from my instance.
Why am I receiving these requests? Do I have Compute Engine configured incorrectly? Right now my app returns a 404; should it do something else?
This is the full request:
010.240.059.243.48574-169.254.169.254.00080: GET /computeMetadata/v1beta1/instance/network-interfaces/0/public-endpoint-ips?alt=text&wait_for_change=true&timeout_sec=60&last_etag=NONE HTTP/1.1
Accept-Encoding: identity
Host: metadata
Connection: close
User-Agent: Python-urllib/2.7
The images provided by default on GCE will automatically configure themselves based on data returned by the metadata server.
This particular request is to find IPs that are forwarded to this instance as part of load balancing. Basically, the script at /usr/share/google/google_daemon/manage_addresses.py continually waits for new IP addresses to be forwarded to this instance. Once it notices a new incoming IP (as indicated by the metadata server), it configures the instance's network stack to respond to that IP.
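For what it's worth, the poll itself is easy to reproduce. Below is a minimal sketch of the same kind of "hanging GET" in Lua with LuaSocket; it assumes the current v1 API (which, unlike the v1beta1 endpoint in the capture above, requires a Metadata-Flavor header), and it assumes forwarded-ips is the v1 counterpart of the v1beta1 public-endpoint-ips path:
local http = require "socket.http"
local ltn12 = require "ltn12"

http.TIMEOUT = 90 -- let the long poll outlast the default 60-second client timeout

-- The request blocks until the value changes or timeout_sec elapses.
local chunks = {}
local ok, code = http.request{
  url = "http://169.254.169.254/computeMetadata/v1/instance/"
     .. "network-interfaces/0/forwarded-ips/"
     .. "?alt=text&wait_for_change=true&timeout_sec=60&last_etag=NONE",
  headers = { ["Metadata-Flavor"] = "Google" }, -- required by the v1 API
  sink = ltn12.sink.table(chunks),
}
print(code, ok and table.concat(chunks) or "request failed")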
The question in my mind is: why are you seeing these? Are you doing something interesting to capture the requests sent to that address? These should be completely transparent to any application.
Related
I have a VPN Docker container set up using Gluetun, which runs an HTTP proxy. I'm trying to see if it's possible to do a Lua http.request to retrieve both my direct (local) external IP AND my tunnelled external IP.
I've found a few pages that help explain how I might do this, but I'm not sure how to retrieve both continuously. The main page being:
Fetching page of url using luasocket and proxy
Here is my current code.
local url = require "socket.url"
local http = require "socket.http"
print("----------EXTERNAL IP DIRECT---------------")
local result, status = http.request("http://api.ipify.org/")
print(result, status)
print("---------EXTERNAL IP VIA PROXY-------------")
http.PROXY="http://192.168.102.134:8888/" -- locally hosted http proxy, no name/password
local result1, status1 = http.request("http://api.ipify.org/")
print(result1, status1)
The first time I run this, I get the following.
---------EXTERNAL IP DIRECT---------------
2.234.10.99 200
---------EXTERNAL IP VIA PROXY-------------
192.168.102.107 200
That returns my external IP fine, but the proxied request returns the local IP of the machine I'm running the code on, which surprised me. Also, every subsequent run of the code returns my local IP for both:
----------EXTERNAL IP DIRECT---------------
192.168.102.107 200
---------EXTERNAL IP VIA PROXY-------------
192.168.102.107 200
Observations:
When I set the http.PROXY value, it seems to be retained for all subsequent requests.
I can see the requests logged successfully by the Gluetun HTTP proxy container, so they are being passed through OK.
Does anyone have any ideas on how Lua can act as an HTTP client to retrieve my tunnel's external IP?
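One thing worth trying (a sketch, not a confirmed fix for the Gluetun behaviour): LuaSocket's generic http.request form accepts a per-request proxy field, which avoids mutating the module-level http.PROXY, so the direct and proxied lookups cannot leak into one another:
local http = require "socket.http"
local ltn12 = require "ltn12"

-- Fetch the external IP as seen by api.ipify.org, optionally through a proxy.
-- A nil proxy means a direct connection.
local function external_ip(proxy)
  local chunks = {}
  local ok, code = http.request{
    url = "http://api.ipify.org/",
    proxy = proxy,
    sink = ltn12.sink.table(chunks),
  }
  if not ok then return nil, code end
  return table.concat(chunks)
end

print("direct: ", external_ip())
print("proxied:", external_ip("http://192.168.102.134:8888/"))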
I'm working on an application which sends HTTP messages to and from a router's web server.
The problem I'm facing is with HTTP Basic authentication.
RFC 7617 states:
"the server can reply with a challenge using the 401 (Unauthorized) status code"
What I've seen from browser HTTP captures is that this isn't the case for every router. For example, the TP-Link TL-WR840N doesn't send me a 401, and I can get the resource by simply sending an HTTP request with the correct credentials in the form base64(username:pass), as shown below.
GET //main/ddos.htm?_=1572950350469 HTTP/1.1
Host: 192.168.0.1
Accept: */*
Connection: keep-alive
Referer: http://192.168.0.1
Cookie: Authorization=Basic YeRtaW46YWRtaW5AMTIz
It gives me the requested content if the password is correct; otherwise it redirects me to the login page. (Why doesn't this router follow the 401 protocol?)
I have another TP-Link router, a TL-WR841N, which doesn't take credentials (in the HTTP message) in the form base64(username:pass) like the previous router; instead it takes credentials in the form base64(user):md5(password). I have two questions about this router (and all routers in general):
I want to know how the router communicates the credential scheme to the browser, so that I can embed that logic in my application. I have inspected the HTTP messages (in Chrome/Firefox) but couldn't find the message where the scheme is being communicated.
When I log in to the TL-WR841N, unlike with the previous model, the web browser shows some SessionID in the URL, e.g. www.192.168.0.1/SessionID/path/to/resource. I would like to know how this SessionID is communicated to the browser.
People who write router maintenance applications, as well as people who design graphics card driver installer screens (looking at you, AMD), do not adhere to any guidelines, best practices or protocols whatsoever.
But they don't need to, either. They've written an application that happens to use HTTP, but you're not obliged to use all of HTTP. They write the front-end as well as the back-end, so they can closely control their server as well as their client.
The client is most likely a couple of dumb HTML pages that make some requests using JavaScript.
If they were to decide that the web interface authenticates to the server with a request header that literally states LetMeIn: true, then that would work as well.
HTTP does not mandate that the server return a 401 when that header is missing or set to false, so they don't have to.
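For the TL-WR840N-style scheme in the capture above, reproducing it outside a browser is straightforward: it is just base64(username:password) stuffed into a cookie. Here is a minimal sketch with LuaSocket; the host, path and credentials are illustrative, modelled on the capture:
local http = require "socket.http"
local mime = require "mime"
local ltn12 = require "ltn12"

-- base64(username:password); mime.b64 pads the result when given one string
local credential = (mime.b64("admin:admin@123"))

local chunks = {}
local ok, code = http.request{
  url = "http://192.168.0.1/main/ddos.htm",
  headers = {
    ["Cookie"]  = "Authorization=Basic " .. credential,
    ["Referer"] = "http://192.168.0.1",
  },
  sink = ltn12.sink.table(chunks),
}
print(code, ok and table.concat(chunks) or "request failed")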
My understanding so far is that when someone tries to access a web page, the following happens:
An HTTP request is formed
A new socket is opened
The HTTP request is sent
If everything went OK, the web browser accepts the HTTP response and builds a DOM tree out of the received HTML. If any resources are missing, a new HTTP request needs to be made for each one separately.
Each of those HTTP requests requires opening another socket (establishing a new virtual connection with the server).
Q: How is that efficient? I understand those resources could be located on another host (which would indeed require a new TCP connection), but if they are all on the same host, wouldn't it be way more efficient to transfer all the data within a single TCP connection?
Each of those HTTP requests requires opening another socket (establishing a new virtual connection with the server).
No, it doesn't. HTTP/1.1 uses persistent connections by default, and HTTP/1.0 before it had the unofficial Connection: keep-alive header, which accomplished the same thing nearly twenty years ago.
Q: How is that efficient?
It isn't, and that's why it doesn't happen.
I understand those resources could be located on another host (which would indeed require a new TCP connection), but if they are all on the same host, wouldn't it be way more efficient to transfer all the data within a single TCP connection?
Yes, and that is what happens by default.
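To make the reuse visible, here is a sketch that sends two requests over one TCP connection with LuaSocket. The host is illustrative, and the response parsing is deliberately simplified: it assumes the server sends a Content-Length header and honours keep-alive:
local socket = require "socket"

local c = assert(socket.connect("example.com", 80))

-- Send one GET and read exactly one response, leaving the connection open.
local function get(path)
  assert(c:send("GET " .. path .. " HTTP/1.1\r\n"
             .. "Host: example.com\r\n"
             .. "Connection: keep-alive\r\n\r\n"))
  local length = 0
  repeat -- read the status line and headers up to the blank line
    local line = assert(c:receive("*l"))
    length = tonumber(line:match("^[Cc]ontent%-[Ll]ength:%s*(%d+)")) or length
  until line == ""
  return assert(c:receive(length)) -- read the body, but do not close the socket
end

print(get("/"))
print(get("/")) -- the second request reuses the connection: no new TCP handshake
Both bodies arrive over the same socket; a packet capture would show a single three-way handshake for the pair of requests.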
So, a DNS server recognizes https://www.google.com as 173.194.34.5
What does, say, https://www.google.com/images/srpr/logo11w.png look like to a server? Or are URL strings machine readable?
Good question!
When you access a URL, a DNS lookup is first done on the host part (www.google.com); after that, the browser looks at the protocol and connects using it (https in this case).
After connecting, the browser will tell the server:
"Hi! I'm trying to connect to www.google.com and I would like the resource /images/srpr/logo11w.png). This looks like this on the protocol:
GET /images/srpr/logo11w.png HTTP/1.1
Host: www.google.com
The Host part is an HTTP header. There are usually more headers.
So the short answer is:
The server will get access to both the hostname and the full path the browser tried to access.
https://www.google.com/images/srpr/logo11w.png
consists of several parts:
protocol (https)
address of the server (www.google.com, that gets translated to IP)
path to the resource (/images/srpr/logo11w.png; in this example it looks like an image in a directory srpr, which is in a directory images in the root of the website)
The server processes the path to the resource the user requested (via the GET method) according to various rules and returns a response.
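LuaSocket's url module (already required in the Lua snippet earlier on this page) can split a URL into exactly these parts, for example:
local url = require "socket.url"

local parsed = url.parse("https://www.google.com/images/srpr/logo11w.png")
print(parsed.scheme) --> https
print(parsed.host)   --> www.google.com
print(parsed.path)   --> /images/srpr/logo11w.png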
I'm trying to develop an application that listens on a specific port (for example 9999) on localhost. How could I retrieve the URL when a user types 127.0.0.1:9999/somedir in their web browser?
To retrieve the URL you would have to implement some pieces of the HTTP protocol.
The HTTP/1.1 specification, RFC 2616, is the official documentation of the HTTP protocol.
If you just want the path of the entered URL, you only need to parse part of the request data. The following is an example of an HTTP request made by a browser:
GET /index.html HTTP/1.1
Host: www.example.com
The first word on the first line is the method (the command to be performed). Next comes the path on the server, and then the protocol and its version. The next line (in this example) specifies the host; this allows a single server to serve many web sites, a feature called virtual hosting.
It is important to note that the lines of the HTTP request and response are each terminated by the \r\n characters.
Take a look at the HTTP Protocol on Wikipedia. It is a good start to implement some very basic functionality.
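As a concrete starting point, here is a minimal sketch of such a listener with LuaSocket. It accepts one connection at a time, pulls the path out of the request line, drains the headers, and answers with a plain-text response (error handling is mostly omitted):
local socket = require "socket"

local server = assert(socket.bind("127.0.0.1", 9999))
while true do
  local client = server:accept()
  client:settimeout(5)
  local request_line = client:receive("*l")      -- e.g. "GET /somedir HTTP/1.1"
  if request_line then
    local method, path = request_line:match("^(%u+)%s+(%S+)")
    repeat                                       -- skip the headers up to the blank line
      local line = client:receive("*l")
    until line == nil or line == ""
    client:send("HTTP/1.1 200 OK\r\n"
             .. "Content-Type: text/plain\r\n"
             .. "Connection: close\r\n\r\n"
             .. "You asked for " .. tostring(path)
             .. " via " .. tostring(method) .. "\n")
  end
  client:close()
end
Point a browser at http://127.0.0.1:9999/somedir and the page will echo the path back; everything beyond the request line is simply discarded.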