There is a third-party website that uses HTTPS and whose start page performs a POST upon login. I inspected that POST request in my browser and was then able to recreate it manually with Fiddler's composer. Thus, depending on the credentials, I could either successfully or unsuccessfully log in with Fiddler. The return code is always 302, with the redirect (the "Location" header) pointing either to the user management page or to a login-failed page, respectively.
However, when I create that request using the Retrofit library, it does not work. I get response code 200, which in this specific case is not to be considered a success.
In order to inspect the POST request built by Retrofit, I directed it to Fiddler (http://localhost:8888) instead of the third-party URL. If I copy that request into the composer and adjust the URL to the third-party one, the request does work. I.e., I could not find anything wrong with the request built by Retrofit.
Does anybody have an idea what could be wrong?
My code is written in Kotlin, but should be easily understandable if you know Java:
import okhttp3.ResponseBody
import retrofit2.Call
import retrofit2.Retrofit
import retrofit2.http.*
interface MyApi {
    @POST("<relative login url>")
    @FormUrlEncoded
    @Headers(
        //...
    )
    fun login(
        @Field("username") username: String,
        @Field("password") password: String
    ): Call<ResponseBody>
}
fun main(args: Array<String>) {
    val baseUrl = "https://<url>"
    val retrofit = Retrofit.Builder().baseUrl(baseUrl).build()
    val myApi = retrofit.create(MyApi::class.java)
    val code = myApi.login("<username>", "<password>").execute().code()
    println(code)
}
As already stated in the comments, but to make it clearer for others too, here is an answer.
When using Retrofit with OkHttp, redirects are followed by default, because the default OkHttp client is configured to follow them. This is why you never get a 302: the redirect is followed automatically and you receive the 200 from the URL you were redirected to.
You can disable this behaviour by building your retrofit instance with a properly configured okhttp client:
OkHttpClient client = new OkHttpClient.Builder()
.followRedirects(false)
.followSslRedirects(false)
.build();
Retrofit retrofit = new Retrofit.Builder()
.client(client)
// other configurations for retrofit
.build();
Notice that here we create an instance of Retrofit with a client configured not to follow redirects. This will effectively make you receive the 302 and other redirect codes.
(Note that I didn't fully configure the Retrofit instance here, to keep the focus on the important part.)
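Since the question's code is Kotlin, roughly the same configuration there would look like this (a sketch; the base URL is the placeholder from the question):
import okhttp3.OkHttpClient
import retrofit2.Retrofit

// An OkHttp client that does not follow redirects, so the 302 is surfaced to the caller.
val client = OkHttpClient.Builder()
    .followRedirects(false)
    .followSslRedirects(false)
    .build()

val retrofit = Retrofit.Builder()
    .baseUrl("https://<url>")
    .client(client)
    .build()
With this client, myApi.login(...).execute().code() should report the 302 instead of the 200 from the followed redirect.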
Related
I have a web application composed of frontend code that invokes the same API, implemented by two backend modules. This API returns a URL in a JSON object. The backend modules are both written with Spring MVC, but in different versions.
The URL-building code is the same in both and looks something like this:
@GetMapping(path = "/app1/menu", produces = MediaType.APPLICATION_JSON_UTF8_VALUE)
public JsonObject getMenu(HttpServletRequest req) throws IOException {
    JsonObject menu = new JsonObject();
    menu.addProperty("href", ServletUriComponentsBuilder.fromRequest(req)
            .replacePath(req.getContextPath())
            .path("/index.html")
            .toUriString());
    return menu;
}
As you can see, this code simply appends a constant path to the incoming request URL and returns it.
The first app uses Spring MVC 4 (4.3.5.RELEASE, to be precise).
The second module uses version 5.1.4.RELEASE.
The problem shows up when these apps are deployed behind a load balancer (two Tomcat instances with a load balancer up front) and accessed over HTTPS.
Say that the request URL for app1 is something like this:
https://example.com/context/app1/menu
App1 correctly returns
https://example.com/context/index.html
For app2, the request issued by the frontend is
https://example.com/context/app2/menu
And the response is
http://example.com/context/another_index.html
So it loses the HTTPS scheme.
Has the behaviour of ServletUriComponentsBuilder.fromRequest changed between these versions?
I have taken a (quick, I admit) look at the commits in the Git repo but haven't found anything.
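One suspect: as of Spring Framework 5.1, ServletUriComponentsBuilder.fromRequest no longer applies the Forwarded / X-Forwarded-* headers sent by the load balancer itself; that responsibility moved to ForwardedHeaderFilter. If that is the cause here, registering the filter in the 5.1 module should restore the HTTPS scheme. A sketch (the configuration class name is illustrative; how the filter gets registered depends on the setup, e.g. Spring Boot vs. web.xml, and it assumes the load balancer actually sends X-Forwarded-Proto):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.filter.ForwardedHeaderFilter;

@Configuration
public class ForwardedHeaderConfig {

    // Restores the original scheme/host/port from the Forwarded / X-Forwarded-* headers
    // before ServletUriComponentsBuilder sees the request.
    @Bean
    public ForwardedHeaderFilter forwardedHeaderFilter() {
        return new ForwardedHeaderFilter();
    }
}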
I have an ASP.NET Core backend with some Web APIs and an AngularJS client on separate subdomains (actually on localhost with different ports), and I can't seem to get antiforgery to work.
I get a 400 Bad Request response on APIs protected with [ValidateAntiForgeryToken]. This is my backend code:
services.AddAntiforgery(options => options.HeaderName = "X-XSRF-TOKEN");
...
app.Use(next => context =>
{
    // antiforgery is an IAntiforgery instance resolved earlier (omitted above)
    var tokens = antiforgery.GetAndStoreTokens(context);
    context.Response.Cookies.Append("XSRF-TOKEN", tokens.RequestToken, new CookieOptions { HttpOnly = false });
    return next(context);
});
The first problem I ran into is that the cookie is not set on my AngularJS app; I resolved it by setting withCredentials to true: $httpProvider.defaults.withCredentials = true;
The second one is that the X-XSRF-TOKEN header is not set automatically by AngularJS; I solved it with an interceptor that sets config.headers['X-XSRF-TOKEN'] = $cookies.get('XSRF-TOKEN');, a sketch of which is shown below.
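Roughly (a minimal sketch, assuming the ngCookies module is loaded; app stands for my Angular module):
app.config(['$httpProvider', function ($httpProvider) {
    $httpProvider.defaults.withCredentials = true;
    $httpProvider.interceptors.push(['$cookies', function ($cookies) {
        return {
            request: function (config) {
                // Copy the antiforgery cookie into the header the backend expects.
                var token = $cookies.get('XSRF-TOKEN');
                if (token) {
                    config.headers['X-XSRF-TOKEN'] = token;
                }
                return config;
            }
        };
    }]);
}]);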
Now I can receive the token and send it with my $http requests, but I don't understand why the API calls are still rejected. Is it related to the fact that the applications are not on the same port?
Thanks in advance :)
Try using this
config.headers['XSRF-TOKEN'] = $cookies.get('XSRF-TOKEN');
Use [AutoValidateAntiforgeryToken] instead of [ValidateAntiForgeryToken]
ValidateAntiForgeryToken: This filter validates the request token on every action it is placed on, regardless of the HTTP verb. So it even validates GET and HEAD requests, where there is no antiforgery token to validate, which causes a 400 Bad Request.
AutoValidateAntiforgeryTokenAttribute: AutoValidateAntiforgeryToken is similar to ValidateAntiforgeryToken, except that it doesn't validate tokens on GET, HEAD, OPTIONS, and TRACE requests.
Generally, a controller contains GET as well as POST action methods. The POST action methods require validating the antiforgery token, but the GET action methods do not. So if ValidateAntiforgeryToken is declared on the controller, HTTP GET requests become invalid and throw an error.
AutoValidateAntiforgeryToken can be used in such cases.
Sometimes you do not need to validate the token at all, for example for requests like:
GET
HEAD
OPTIONS
TRACE
AutoValidateAntiforgeryToken should be used in such cases.
It is advisable to use AutoValidateAntiforgeryTokenAttribute rather than applying ValidateAntiForgeryTokenAttribute globally, because applying ValidateAntiForgeryTokenAttribute globally produces errors for requests like GET, HEAD, TRACE, etc.: for those requests the server does not receive an antiforgery token, which causes validation errors.
Sometimes you may need to ignore antiforgery tokens altogether, or ignore them for specific controller actions. In such cases, you can use the IgnoreAntiforgeryToken filter.
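For example, a sketch of the usual patterns (the controller and action names here are illustrative):
// Apply antiforgery validation globally for unsafe verbs (POST, PUT, DELETE, ...).
services.AddMvc(options =>
{
    options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute());
});

// Or apply it per controller and opt out for a specific action.
[AutoValidateAntiforgeryToken]
public class AccountController : Controller
{
    [HttpGet]
    public IActionResult Profile() => View();          // GET: not validated

    [HttpPost]
    public IActionResult Update(string name) => Ok();  // POST: token validated

    [HttpPost]
    [IgnoreAntiforgeryToken]
    public IActionResult Webhook() => Ok();             // explicitly skipped
}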
I'm attempting to build a service in ServiceStack whose sole responsibility will be to interpret requests, and send a redirect response. Something like this:
[Route("/redirect/", "POST")
public class Redirect : IReturnVoid
{
public string Something { get; set; }
}
public class RedirectService : Service
{
public object Post(Redirect req)
{
// make some decisions about stuff
return new HttpResult(){ StatusCode = HttpStatusCode.Redirect, Headers = {{HttpHeaders.Location, "place"}}};
}
}
I did initial testing using Fiddler, setting a content type of application/json and creating an appropriate request body. This did exactly as expected: the service request gave a 302 response and redirected to the expected location.
I've also tested this using a basic HTML form post, with an action of http://myserviceuri/redirect/, which also works as expected and redirects appropriately.
However, I've hit an issue when attempting to use the ServiceStack C# client to call the same service. If I call the following code in an ASPX code-behind or an MVC controller:
var client = new JsonServiceClient("uri");
client.Post(new Redirect { Something = "something" });
I get a 500 and the error message:
The remote certificate is invalid according to the validation procedure.
Which makes sense, as it's a development server with a self-signed certificate. But since I can call the service successfully by other means, I get the feeling this is a red herring.
Should I be using a different type of C# client to make the request, or setting more custom headers, or something else? Am I fundamentally misunderstanding what I'm trying to do?
Please let me know if more info is needed. Thanks.
What's happening here is that the JsonServiceClient is happily following the redirect, doing more than you expected it to.
I'll reference a related question and answer for posterity (hopefully you've resolved this issue a long time ago...).
POST to ServiceStack Service and retrieve Location Header
Essentially you'd use .NET's WebRequest, or the ServiceStack extensions mentioned in that answer, to see the redirect and act on it as you see fit.
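For example, a rough sketch with HttpWebRequest (the URL and body are placeholders; with AllowAutoRedirect disabled, the 302 is returned to you instead of being followed, and GetResponse does not throw for it):
using System.IO;
using System.Net;

var request = (HttpWebRequest)WebRequest.Create("http://myserviceuri/redirect/");
request.Method = "POST";
request.ContentType = "application/json";
request.AllowAutoRedirect = false;   // keep the 302 instead of following it

using (var writer = new StreamWriter(request.GetRequestStream()))
{
    writer.Write("{\"Something\":\"something\"}");
}

using (var response = (HttpWebResponse)request.GetResponse())
{
    var status = response.StatusCode;                              // 302 / Redirect
    var location = response.Headers[HttpResponseHeader.Location];  // the redirect target
}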
I'm having a problem developing a SignalR client for a Hub hosted in an ASP.NET website with gzip compression enabled. Since we are using IIS compression, the response from SignalR also gets compressed, but the client does not understand the response and we get a JSON parsing error on the client side.
SignalR internally uses HttpWebRequest to make HTTP requests, and HttpWebRequest can be configured to automatically decompress the response via its AutomaticDecompression property. So, if I could somehow get hold of the HttpWebRequest object SignalR uses to make the request, I should be able to enable automatic decompression.
I thought I would be able to get access to the HttpWebRequest by providing HubConnection.Start with my own implementation of IHttpClient: IHttpClient.GetAsync takes a prepareRequest action, which I thought would give me access to the HttpWebRequest. However, HttpHelper.GetAsync wraps the HttpWebRequest in an HttpWebRequestWrapper before passing it to prepareRequest, and HttpWebRequestWrapper does not expose the HttpWebRequest.
The HttpHelper class is internal, so I can't use it either; I am not exactly sure how to enable automatic decompression with SignalR.
I could expose the HttpWebRequest in HttpWebRequestWrapper, but would prefer a simpler solution if one exists. Any thoughts?
I am using SignalR version 0.5.1.10822
My auto decompression HttpClient:
public class HttpClientWithAutoDecompression : IHttpClient
{
    readonly DefaultHttpClient _httpClient = new DefaultHttpClient();
    private readonly DecompressionMethods _decompressionMethods;

    public HttpClientWithAutoDecompression(DecompressionMethods decompressionMethods)
    {
        _decompressionMethods = decompressionMethods;
    }

    public Task<IResponse> GetAsync(string url, Action<IRequest> prepareRequest)
    {
        Task<IResponse> task = _httpClient.GetAsync(url,
            request =>
            {
                // ERROR: request is actually an HttpWebRequestWrapper and does not
                // expose the underlying HttpWebRequest, so this cast fails.
                var httpWebRequest = (HttpWebRequest)request;
                httpWebRequest.AutomaticDecompression = _decompressionMethods;
                prepareRequest(request);
            });
        return task.ContinueWith(response =>
        {
            Log.Debug(this, "Response: {0}", response.Result.ReadAsString());
            return response.Result;
        });
    }

    ....
}
To the best of my knowledge, GZip encoding and streaming do not mix. In the case of the forever-frame transport, the client wouldn't be able to decode any of the streamed content until the entire response, or at least a significant block of data, has been received (due to the way the data is decoded). In the case of WebSockets, there is no support for encoding of any kind at this time, although there is apparently an extension to the specification for per-message encoding being worked on.
That said, if you wanted to attempt to provide support for the LongPolling transport, the only way I can see this being possible is to provide your own SignalR IHttpClient implementation. You can see that right now the DefaultHttpClient class uses HttpHelper::GetAsync, which creates the HttpWebRequest internally, and you can never get your hands on it because you only have access to the IRequest, which is an HttpWebRequestWrapper at that point.
By creating your own IHttpClient you can take over the initial instantiation of the HttpWebRequest, set the AutomaticDecompression and then wrap that up yourself with the HttpWebRequestWrapper.
We have a Flex client and a server using the Spring/BlazeDS project.
After the user logs in and is authenticated, the Spring Security layer sends a redirect to a new URL, which is where our main application is located.
However, within the Flex client I'm currently using HTTPService for the initial request, and I get the redirected page sent back to me in its entirety.
How can I just get the URL so that I can use navigateToURL to send the app where it needs to go?
Any help would greatly be appreciated. Thanks!
One solution would be to include a token inside a comment block on the returned page, for instance:
<!-- redirectPage="http://localhost/new-location" -->
then check for its presence inside the HTTPService result handler. The token's value could then be used in your call to navigateToURL.
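A rough sketch of that handler (assuming the HTTPService uses resultFormat="text" so the raw page is available as a string; the handler name is a placeholder):
import flash.net.URLRequest;
import flash.net.navigateToURL;
import mx.rpc.events.ResultEvent;

private function onLoginResult(event:ResultEvent):void
{
    var body:String = String(event.result);
    // Look for the comment token embedded in the returned page.
    var match:Array = body.match(/redirectPage="([^"]+)"/);
    if (match && match.length > 1)
    {
        navigateToURL(new URLRequest(match[1]), "_self");
    }
}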
Another solution would be to examine the HTTP response headers and extract the value of the "Location" header using ActionScript. Consider using the AS3 HTTP Client lib.
From the examples page (http://code.google.com/p/as3httpclientlib/wiki/Examples), here is how to determine the 'Location' header from the response:
var client:HttpClient = new HttpClient();
var uri:URI = new URI("http://localhost/j_security_check");
client.listener.onStatus = function(event:HttpStatusEvent):void {
    var response:HttpResponse = event.response;
    // Headers are case insensitive
    var redirectLocation:String = response.header.getValue("Location");
    // call navigateToURL with redirectLocation
    // ...
};
// include username and password in the request
client.post(uri);
NOTE: AS3 HTTP Client depends on AS3 Core and AS3 Crypto libs.
You can also simply use the URLLoader class; no need for external code. One of the events it dispatches is HTTPStatusEvent.HTTP_RESPONSE_STATUS. Just plug into that and retrieve the redirected URL:
urlLoader.addEventListener(HTTPStatusEvent.HTTP_RESPONSE_STATUS, onHTTPResponseStatus);
private function onHTTPResponseStatus(event:HTTPStatusEvent):void
{
var responseURL:String = event.responseURL;
}
I am (successfully) using this code right now, so if it doesn't work for some reason, let me know.