I was trying to send an HTTP POST request with certain parameters to a third-party API which would return data. I was trying to get it to work but was having issues. As part of my research into resolving the problem I started to read about cross-domain HTTP requests. There was site after site on how to perform cross-domain HTTP requests and why some methods were good and others bad. However, it was all written in a way that suggested that cross-domain requests weren't the 'done' thing.
Now, please excuse my ignorance as I'm very new to all this, but this confused me somewhat. Surely cross domain HTTP requests are the whole point of HTTP requests? Someone writes some script to which one can send HTTP requests (with the proper credentials to authorise access) and the script can talk to the underlying application, do some processing based on the parameters sent by the requester and return some data.
Of course I know you can have scripts in your own website (from the same domain) to which you can send information and get results returned, such as validation scripts.
In essence my question is: "Are cross domain HTTP requests not the norm?".
I appreciate that my question is more of a discussion rather than a problem with a specific answer but I'd appreciate any help that can be offered.
Short answer: it's normal. You can put the API and authorization for one project on different domains and consume them from another domain that is just the frontend. Why would it be abnormal?
I have a simple HTTP server where you can create and manage todos. You can also add plugins in order to, for example, send an email to the people who starred a todo when that todo has been completed. I currently check for all enabled plugins through a query to the database, and then query each API endpoint for the different plugins (Gmail, Notion, Trello, etc.). After this is finished, I send a response back to the user. This is a problem, because it means I rely on the speed of the external APIs I am requesting for my response. If the Notion API is slow, then my endpoint is also slow.
Is there a way to first send a response after, for example, the server marks the todo as completed, but then send a different response after all the plugins have been queried (Gmail, Notion, Trello, etc)? Would I have to use web sockets? Or is the way I currently handle external API queries the only way to do it?
You are right in thinking that you want to decouple customer requests from the backend processing (reaching out to the other providers); WebSockets are one option for doing that. HTTP/2 streams are another option. And, of course, polling is also a way (simple, but not very efficient).
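As a minimal sketch of the "respond first, process later" idea in an Express-style handler (the route and the markTodoCompleted/getEnabledPlugins/notifyClient helpers are hypothetical stand-ins for your own code):

const express = require('express');
const app = express();
app.use(express.json());

app.post('/todos/:id/complete', async (req, res) => {
  await markTodoCompleted(req.params.id);        // your own fast DB write (hypothetical helper)
  res.status(202).json({ status: 'completed' }); // respond before touching any external API

  // Fan out to the plugin APIs after the response has already been sent.
  const plugins = await getEnabledPlugins(req.params.id);                      // hypothetical DB query
  const results = await Promise.allSettled(plugins.map((p) => p.notify(req.params.id)));
  notifyClient(req.params.id, results); // push over a WebSocket, or store for polling (hypothetical)
});

The second "response" then arrives out of band (a WebSocket push, or a status endpoint the client polls) rather than on the original HTTP request.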
With the new HTTP reporting headers being developed and refined, it seems more important than ever to be able to tell/validate where the reports are coming from.
For example, someone attempting to "hack" the site can very easily flood the reporting endpoint with false reports, drowning out the details of what they're attempting. It's also a vector for a DDoS attack.
Is there some mechanism for doing this aside from obfuscation?
Do the User Agents sign their reports?
Any advice would be much appreciated!
I took a quick glance through the standard draft for the Report-To header, but it doesn't seem to touch on it.
One thought on application-level mitigation: record the IPs of all clients that are connected and authenticated and only accept reports from IPs that are whitelisted in this way. This assumes that the browser sends its reports direct from the client machine (I believe this is the case, but can anyone confirm?).
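A rough sketch of that allowlist idea in Express (the authedIps set and storeReport are hypothetical; a real implementation would persist entries and expire them):

const express = require('express');
const app = express();

const authedIps = new Set(); // filled in by your login handler (hypothetical)

// Accept any content type, since browsers send Reporting API payloads
// as application/reports+json rather than plain application/json.
app.post('/reports', express.json({ type: () => true }), (req, res) => {
  if (!authedIps.has(req.ip)) return res.sendStatus(403); // ignore reports from unknown clients
  storeReport(req.body);                                  // hypothetical persistence
  res.sendStatus(204);
});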
When working with Firebase (Firebase cloud function in this case), we have to pay for every byte of bandwidth.
So I wonder: how can we deal with the case where someone somehow finds out our endpoint and then intentionally sends continuous requests (via a script or tool)?
I did some searching on the internet but didn't see anything that can help, except for this one, which was not really useful.
Since you didn't specify which type of request, I'm going to assume that you mean http(s)-triggers on firebase cloud functions.
There are multiple limiters you can put in place to 'reduce' the bandwidth consumed by the requests. I'll write down a few that come to mind.
1) Limit the type of requests
If all you need is GET and, say for example, you don't need PUT, you can start off by returning a 403 for those before you go any further in your cloud function.
// Bail out early so the rest of the function never runs for PUTs.
if (req.method === 'PUT') { return res.status(403).send('Forbidden!'); }
2) Authenticate if you can
Follow Google's example here and allow only authorized users to use your HTTPS endpoints. You can achieve this simply by verifying tokens, as in this SOF answer to this question.
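A condensed sketch of that check inside an HTTPS-triggered function, using the Firebase Admin SDK (error handling trimmed; the function name 'api' is a placeholder):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.api = functions.https.onRequest(async (req, res) => {
  const header = req.headers.authorization || '';
  if (!header.startsWith('Bearer ')) return res.status(403).send('Unauthorized');
  try {
    const decoded = await admin.auth().verifyIdToken(header.slice('Bearer '.length));
    res.send(`Hello ${decoded.uid}`); // only authorized users get this far
  } catch (err) {
    res.status(403).send('Unauthorized');
  }
});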
3) Check for origin
You can try checking the origin of the request before going any further in your cloud function. If I recall correctly, cloud functions give you full access to the HTTP request/response objects, so you can set the appropriate CORS headers and respond to pre-flight OPTIONS requests.
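Something along these lines (a sketch; the allowed origin is a placeholder for your own site):

const functions = require('firebase-functions');

exports.api = functions.https.onRequest((req, res) => {
  const allowedOrigin = 'https://yourapp.example.com'; // hypothetical frontend origin
  if (req.headers.origin && req.headers.origin !== allowedOrigin) {
    return res.status(403).send('Forbidden'); // unknown origin: stop before any real work
  }
  res.set('Access-Control-Allow-Origin', allowedOrigin);
  if (req.method === 'OPTIONS') {
    res.set('Access-Control-Allow-Methods', 'GET');
    return res.status(204).send('');
  }
  // ... the actual work of the function goes here
  res.send('ok');
});

Keep in mind the Origin header is trivial to forge from a script, so treat this as a filter against casual abuse rather than real protection.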
Experimental Idea 1
You can hypothetically put your functions behind a load balancer / firewall, and relay-trigger them. It would more or less defeat the purpose of cloud functions' scalable nature, but if a form of DoS is a bigger concern for you than scalability, then you could try creating an app engine relay, put it behind a load balancer / firewall and handle the security at that layer.
Experimental Idea 2
You can try applying DNS-level attack-prevention solutions to your problem by putting something like Cloudflare in between. Use a CNAME and Cloudflare Page Rules to map URLs to your cloud functions. This could hypothetically absorb the impact. Like this:
*function1.mydomain.com/* -> https://us-central1-etc-etc-etc.cloudfunctions.net/function1/$2
Now if you go to
http://function1.mydomain.com/?something=awesome
you can even pass the URL params to your functions, a tactic I read about in this Medium article during the summer when I needed something similar.
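On the function side, the forwarded query params arrive as usual (a sketch, assuming the page-rule mapping above):

const functions = require('firebase-functions');

exports.function1 = functions.https.onRequest((req, res) => {
  // With the page rule in place, ?something=awesome survives the CNAME hop.
  res.send(`something = ${req.query.something}`);
});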
Finally
In an attempt to make the questions on SOF more linked, and help everyone find answers, here's another question I found that's similar in nature. Linking here so that others can find it as well.
Returning a 403 or an empty body on unsupported methods will not do much for you. Yes, you will waste less bandwidth, but Firebase will still bill you for the request; the attacker could just send millions of requests and you will still lose money.
Also, authentication is not a solution to this problem. First of all, any auth process (creating a token, verifying/validating a token) is costly, and again, Firebase has thought of this and will bill you based on the time it takes for the function to return a response. You cannot afford to use auth to prevent continuous requests.
Plus, a smart attacker would not just go for a request which returns 403. What stops the attacker from hitting the login endpoint a million times? And if he provides correct credentials (which he would do if he were smart), you will waste bandwidth by returning a token each time; also, if you are re-generating tokens, you would waste time on each request, which would further hurt your bill.
The idea here is to block this attacker completely (before going to your api functions).
What I would do is use Cloudflare to proxy my endpoints. In my API I would define a max_req_limit_per_ip and a time_frame, save each request IP in the db, and on each request check whether that IP went over the limit for the given time frame; if so, you just use the Cloudflare API to block that IP at the firewall.
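A rough sketch of that flow (saveRequest/countRequests are hypothetical DB helpers; the fetch call targets Cloudflare's IP Access Rules endpoint, and the token/zone values are placeholders; fetch is the Node 18+ global):

const CF_API_TOKEN = process.env.CF_API_TOKEN; // your Cloudflare API token (placeholder)
const ZONE_ID = process.env.CF_ZONE_ID;        // your Cloudflare zone id (placeholder)

const MAX_REQ_LIMIT_PER_IP = 100;     // max_req_limit_per_ip
const TIME_FRAME_MS = 60 * 60 * 1000; // time_frame: 1 hour

async function checkAndMaybeBlock(ip) {
  await saveRequest(ip, Date.now());                                 // hypothetical DB write
  const count = await countRequests(ip, Date.now() - TIME_FRAME_MS); // hypothetical DB query
  if (count <= MAX_REQ_LIMIT_PER_IP) return;
  // Block the offender at Cloudflare's edge so requests stop reaching you at all.
  await fetch(`https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/firewall/access_rules/rules`, {
    method: 'POST',
    headers: { 'Authorization': `Bearer ${CF_API_TOKEN}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      mode: 'block',
      configuration: { target: 'ip', value: ip },
      notes: 'exceeded max_req_limit_per_ip',
    }),
  });
}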
Tip:
max_req_limit_per_ip and time_frame can be customized for different requests (see the sketch after the examples below).
For example:
an ip can hit a 403 10 times in 1 hour
an ip can hit the login successfully 5 times in 20 minutes
an ip can hit the login unsuccessfully 5 times in 1 hour
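Expressed as configuration, those examples might look something like this (the names and numbers are purely illustrative):

const LIMITS = {
  forbidden403: { maxReq: 10, timeFrameMs: 60 * 60 * 1000 }, // 10 403s per hour
  loginSuccess: { maxReq: 5,  timeFrameMs: 20 * 60 * 1000 }, // 5 successful logins per 20 minutes
  loginFailure: { maxReq: 5,  timeFrameMs: 60 * 60 * 1000 }, // 5 failed logins per hour
};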
There is a solution for this problem: you can protect the HTTPS endpoint by verifying who is calling it.
Only users who pass a valid Firebase ID token as a Bearer token in the Authorization header of the HTTP request or in a __session cookie are authorized to use the function.
Checking the ID token is done with an ExpressJs middleware that also passes the decoded ID token in the Express request object.
Check this sample code from firebase.
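Condensed, the middleware in that sample has roughly this shape (a sketch, assuming admin and app are initialized as in the sample; see the linked code for the full version with cookie parsing):

// Express-style middleware: verify the ID token, then expose it on req.user.
const validateFirebaseIdToken = async (req, res, next) => {
  const header = req.headers.authorization || '';
  const idToken = header.startsWith('Bearer ')
    ? header.slice('Bearer '.length)
    : req.cookies && req.cookies.__session; // the __session cookie fallback
  if (!idToken) return res.status(403).send('Unauthorized');
  try {
    req.user = await admin.auth().verifyIdToken(idToken); // decoded token for downstream handlers
    next();
  } catch (e) {
    res.status(403).send('Unauthorized');
  }
};

app.use(validateFirebaseIdToken);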
Putting access-control logic in your function is standard practice for Firebase, BUT the function still has to be invoked to access that logic.
If you don't want your function to fire at all except for authenticated users, you can take advantage of the fact that every Firebase Project is also a Google Cloud Project -- and GCP allows for "private" functions.
You can set project-wide or per-function permissions outside the function(s), so that only authenticated users can cause the function to fire, even if they try to hit the endpoint.
Here's documentation on setting permissions and authenticating users. Note that, as of writing, I believe using this method requires users to use a Google account to authenticate.
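For instance, from the CLI you can strip public access and grant invoker rights to a specific account (the function and account names below are placeholders):

gcloud functions remove-iam-policy-binding my-function --member="allUsers" --role="roles/cloudfunctions.invoker"
gcloud functions add-iam-policy-binding my-function --member="user:jane@example.com" --role="roles/cloudfunctions.invoker"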
I set up a free domain on 000webhost.com
I am using this as a web server to receive data from SIM908+arduino setup and store it in the database. Then display it on a web page.
I am sending the data from the SIM908 using HTTP GET requests. Basically I am sending two pieces of information, one is the location (lat and long) and other is a string. Both are sent using GET requests. The problem is very unusual so bear with me. EVERYTHING WORKS FINE, for a while. After several GET requests are sent, for some reason, 000webhost just deactivates my domain. I simply cannot access it. Every time I try to browse to the page it times out. It remains like this for around 7-8 hours after which the domain works fine again. I tried another hosting byethost.com, but GET requests from the SIM908 do not work there at all. Everything is 100% OK. The code, arduino setup everything is fine. My question is why is 000webhost stopping my domain? Really need a good answer or at least some direction, i am completely lost.
NOTE: Please don't suggest the POST method unless you explicitly know how to perform a POST operation using SIM908 AT commands; as far as I know it's not possible.
You are using a free web host, which has limitations. They will block you if your site is getting too many requests. Just read the limitations on free accounts for that host.
Look for a better free service or buy one. There is no issue with the SIM908 or the Arduino.
The following hosting service providers might be better than the one you are currently using in terms of limitations
Host Buddy (you would get two months free)
Free Hostia
Free Hosting .eu
I've been trying to follow good RESTful API practices when designing APIs. One of them, which happens to be very simple and common, keeps proving hard to follow:
Use GET http verb to retrieve resources
Why? Consider you have a URI to get account information like this:
http://www.example.com/account/AXY_883772
Where AXY_883772 is an account id in a bank system. Security auditing will raise a warning stating that:
Account ID will appear on HTTP ACCESS LOGS
Account ID might get cached in the browser's history (even though it is unlikely that anyone would use a browser regularly to access a RESTful API)
And they end up by "recommending" that POST verb should be used instead.
So, my question is:
What can we do about it? Just follow security recommendations and avoid using GET most of the time? Use some kind of special Apache/IIS/NGINX access-log configuration to avoid logging access to certain URLs?
If you have sensitive information in your URLs and you are logging URLs, you are logging sensitive information.
So there are two obvious solutions:
Don't log the url
Use a different url that doesn't contain the sensitive information
The last one could be implemented by using some different, opaque id that your server maps back to the normal id.
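A sketch of that mapping approach (in-memory here for brevity; loadAccount and the route shape are hypothetical):

const express = require('express');
const crypto = require('crypto');
const app = express();

const tokenToAccount = new Map(); // in production this would live in your datastore

// Hand out an opaque, random id that stands in for the real account id.
function opaqueIdFor(accountId) {
  const token = crypto.randomBytes(16).toString('hex');
  tokenToAccount.set(token, accountId);
  return token;
}

// GET /account/3f2a9c... : logs now record the token, never AXY_883772.
app.get('/account/:token', (req, res) => {
  const accountId = tokenToAccount.get(req.params.token);
  if (!accountId) return res.sendStatus(404);
  res.json(loadAccount(accountId)); // hypothetical lookup by the real id
});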
If neither of those solutions is an option for you, then you cannot use GET, and therefore it's not good RESTful design.
I realize all these things are probably already obvious to you; But it's the most accurate answer I could give.
It's worth noting that this doesn't just apply to GET; it would actually also be the case for PUT, DELETE, and often POST.
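As for the asker's idea of tweaking the web server's access log: the first option ("don't log the url") is indeed a one-liner in, for example, nginx (a sketch; the path is a placeholder):

location /account/ {
    access_log off;  # requests under /account/ never hit the access log
}

Just remember this only silences your own server; any proxies or other intermediaries in front of it may still log the full URL.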