How to create a 503 error in s3fs-fuse - http

I am trying to replicate a scenario where the s3fs command errors out with a 503 status code for the request made. How can I recreate this scenario?

I created a node.js application which returns a 503 status code.
How to specify HTTP error code?
Now, when I use the node.js application's endpoint in the s3fs command, I get the desired result.
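Any small HTTP server that always answers 503 works for this; below is a minimal sketch, shown in Python rather than node.js, with a placeholder port and an illustrative error body.
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlwaysUnavailable(BaseHTTPRequestHandler):
    def _respond(self, include_body=True):
        self.send_response(503)  # Service Unavailable
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        if include_body:
            # Body shaped roughly like an S3 error document, purely illustrative.
            self.wfile.write(b"<Error><Code>SlowDown</Code></Error>")
    def do_HEAD(self):
        self._respond(include_body=False)
    # s3fs also issues GET/PUT/POST/DELETE requests; answer them all the same way.
    do_GET = do_PUT = do_POST = do_DELETE = _respond

HTTPServer(("127.0.0.1", 8080), AlwaysUnavailable).serve_forever()
s3fs can then be pointed at that endpoint with something like s3fs mybucket /mnt/test -o url=http://127.0.0.1:8080 -o use_path_request_style (bucket name and mount point are placeholders).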

Related

Azure VM IIS HTTP Status Codes Changed

I have a couple of Azure VMs behind a Basic Load Balancer with an HTTP URL Health Probe for the Backend Pool. To mark a server down, that URL returns Status Code 503 (Service Unavailable), but when I call that page from those VMs, the Status Code returned is 403. That still has the desired effect, I suppose, of marking the server down - but I don't understand why the code I set has changed.
This is from an ASP.NET web forms application on the VMs. Looking at developer tools in the browser, from my local machine or from a Dev server on our local network that page returns Status Code 503, but when calling that page from the VMs in Azure, the Status Code is 403.
Here's where I set the Status Code in that page:
Response.Clear()
Response.StatusCode = 503
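' Flush() on the next line sends the buffered status and headers to the client immediately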
Response.Flush()
I suppose I should mention that my local machine is a Windows 10 box, and the server VM is Windows Server 2016. Both are running IIS 10. The application is compiled with .NET Framework 4.6.
(Dev tools screenshots from my localhost and from the server in Azure omitted.)
Why the change? Anything I can do to stop this behavior?
So today I tried enabling Failed Request Tracing, but either something wasn't set up correctly, or the error was being handled elsewhere, and it didn't result in any failed requests being logged.
Since I wasn't getting any failed requests logged, I opened up Process Monitor and could see that immediately after the call to my Health Probe page, I was getting a call to my custom HTTP Error page. That page must have been what was giving the 403 (I don't know why, because that page works correctly for other HTTP errors, showing a friendly error message and logging the error to my custom error tracking solution).
I was going to change the Status Code to see if there was something special with the 503 that I was setting that was handled differently in IIS, but that got me thinking about how I was setting the status code...
In my research today, I saw this page https://www.leansentry.com/HowTo/AspNet-Response-Flush-Poor-Performance which cautions against using Response.Flush(). The code that I had implemented was in the Page_PreRender method, so there's not really a need to Flush there anyway.
I removed the Response.Flush() and, of course, my troubles went away.
The Health Probe page no longer triggers an Error from the Azure VM, and therefore, the status code that I get in my client browser is the 503 that I set in code.
So I guess this case is closed. I will need to figure out why the HTTP Error page was throwing a 403 instead of returning the friendly error message, but that should be easy enough...

403 Forbidden Error when calling an AWS API Gateway in Python

I set up a REST API in AWS with a PUT method to upload files to an S3 bucket. The "Authorization" field in the Method Request is set to NONE. I'm calling the API in Python like so:
file = {"file": open('file.jpg', 'rb')}
requests.put(https://api-id.execute-api.us-east-1.amazonaws.com/Prod/bucketname/filename, files=file)
However, each time this command runs, it returns the error:
"403 Client Error: Forbidden for url: https://api-id.execute-api.us-east-1.amazonaws.com/Prod/bucketname/filename"
This doesn't make sense to me; authorization is set to NONE, so anybody should be able to call the API - why am I getting "Forbidden"? Also, the request works perfectly fine in Postman - I am able to call the API and upload the file and it returns "200 Successful".
I've searched other posts on Google and StackOverflow to no avail. What is going on?
Figured it out: I was sending binary files up to the gateway without adding those file types to the "Binary Media Types" section in the API's Settings. For some reason this resulted in a 403 Forbidden error (even though it wasn't an authentication issue at all).
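For what it's worth, the binary media types can also be added outside the console; a rough boto3 sketch, assuming the API id placeholder from the URL above and the Prod stage:
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

# Register image/jpeg as a binary media type; "/" is escaped as "~1" in patch paths.
apigw.update_rest_api(
    restApiId="api-id",  # placeholder: the id from the question's URL
    patchOperations=[{"op": "add", "path": "/binaryMediaTypes/image~1jpeg"}],
)

# The change only takes effect for requests made after the API is redeployed to its stage.
apigw.create_deployment(restApiId="api-id", stageName="Prod")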

How to handle the push data

I don't understand the explanation on the website.
https://developer.foursquare.com/overview/realtime
I am working with classic ASP; do they refer to a query string or a form when they say "parameter"?
I received the following error: "Your Server returned: 502 Bad Gateway."
Bad gateway means that your script is trying and failing to connect to an external URL.
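As for the "parameter": the push arrives as a regular HTTP POST, so in classic ASP you would typically read it from Request.Form rather than the query string. Purely for illustration (in Python rather than classic ASP), a minimal push endpoint could look like the sketch below; the "checkin" field name is an assumption to verify against the linked docs.
from flask import Flask, request

app = Flask(__name__)

@app.route("/push", methods=["POST"])
def push():
    # Read the form-encoded POST parameter carrying the pushed JSON payload.
    checkin_json = request.form.get("checkin")  # field name assumed, check the docs
    # ... process or store checkin_json here ...
    # Respond quickly with a 2xx so the push is treated as delivered.
    return "", 200

app.run(port=8080)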

Is Heroku overriding my 500 error message?

I'm trying to return a custom error message with HTTP 500 responses. When testing it locally it provides the custom response, but when running on Heroku it gives a generic "Internal Server Error" response. Does Heroku override 500 error responses? And if so, is there a way to have it use the custom one I sent?
You need to upload your custom error messages and maintenance pages to an external source, such as S3 for example. You then need to add the location of your error pages to your app's config so they can be served when needed.
heroku config:add \
ERROR_PAGE_URL=http://s3.amazonaws.com/your_bucket/your_error_page.html \
MAINTENANCE_PAGE_URL=http://s3.amazonaws.com/your_bucket/your_maintenance_page.html
See the Heroku docs for more info: https://devcenter.heroku.com/articles/error-pages

Gradle failing to download dependency when HEAD request fails

I have set up a dependency in my Gradle build script; the dependency is hosted on Bitbucket.
Gradle fails to download it, with the error message:
Could not HEAD 'https://bitbucket.org/....zip'. Received status code 403 from server: Forbidden
I looked into it, and it seems that this is because:
Bitbucket redirects to an Amazon URL
the Amazon URL doesn't accept HEAD requests, only GET requests
I was able to verify that by testing the URL with curl: I also got a 403 Forbidden when sending a HEAD request with curl.
Otherwise, it could be because Amazon doesn't accept the signature in the HEAD request, which should be different from the one for GET, as explained here.
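For reference, the curl check can also be reproduced from Python; a small sketch with requests, using a placeholder URL since the real one is elided above:
import requests

# Placeholder for the Amazon URL that Bitbucket redirects to.
url = "https://s3.amazonaws.com/some-bucket/some-artifact.zip"

head = requests.head(url, allow_redirects=True)
get = requests.get(url, allow_redirects=True, stream=True)

print("HEAD:", head.status_code)  # 403 Forbidden in the case described above
print("GET: ", get.status_code)   # the plain GET download succeeds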
Is there a way around this? Could I tell Gradle to skip the HEAD request and go straight to the GET request?
I worked around the problem by using the gradle-download-task plugin and manually implementing caching, as explained here.
