I have been playing around with using Web API (Web Host) as a proxy server and have run into an issue with how my Web API proxy handles responses with the "Transfer-Encoding: chunked" header.
When bypassing the proxy, the remote resource sends the following response headers:
Cache-Control: no-cache
Content-Encoding: gzip
Content-Type: text/html
Date: Fri, 24 May 2013 12:42:27 GMT
Expires: -1
Pragma: no-cache
Server: Microsoft-IIS/8.0
Transfer-Encoding: chunked
Vary: Accept-Encoding
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
When going through my Web API based proxy, my request hangs unless I explicitly reset the TransferEncodingChunked property on the response header to false:
response.Headers.TransferEncodingChunked = false;
I admit I don't fully understand what impact setting the TransferEncodingChunked property has, but it seems strange to me that, in order to make the proxy work as expected, I need to set this property to false when the incoming response clearly has a "Transfer-Encoding: chunked" header. I am also concerned about side effects of explicitly setting this property. Can anyone help me understand what is going on and why setting this property is required?
UPDATE: So I did a little more digging into the difference in the response when going through the proxy vs. not. Whether or not I explicitly set the TransferEncodingChunked property to false, the response headers coming through the proxy are exactly the same as when not going through the proxy. However, the response content is different. Here are a few samples (I turned off gzip encoding):
// With TransferEncodingChunked = false
2d\r\n
This was sent with transfer-encoding: chunked\r\n
0\r\n
// Without explicitly setting TransferEncodingChunked
This was sent with transfer-encoding: chunked
Clearly, the content sent with TransferEncodingChunked set to false is in fact transfer encoded. This is actually the correct response, as it is what was received from the requested resource behind the proxy. What continues to be strange is the second scenario, in which I don't explicitly set TransferEncodingChunked on the response (but it is present in the response header received from the proxied service). Clearly, in this case, the response is NOT in fact transfer encoded by IIS, even though the actual upstream response is. Strange... this is starting to feel like either designed behavior (in which case, I'd love to know how/why) or a bug in IIS, ASP.NET, or Web API.
Here is a simplified version of the code I am running:
Proxy Web API application:
// WebApiConfig.cs
config.Routes.MapHttpRoute(
name: "Proxy",
routeTemplate: "{*path}",
handler: HttpClientFactory.CreatePipeline(
innerHandler: new HttpClientHandler(), // Routes the request to an external resource
handlers: new DelegatingHandler[] { new ProxyHandler() }
),
defaults: new { path = RouteParameter.Optional },
constraints: null
);
// ProxyHandler.cs
public class ProxyHandler : DelegatingHandler
{
protected override async System.Threading.Tasks.Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
{
// Route the request to my web application
var uri = new Uri("http://localhost:49591" + request.RequestUri.PathAndQuery);
request.RequestUri = uri;
// For GET requests, somewhere upstream, Web API creates an empty stream for the request.Content property
// HttpClientHandler doesn't like this for GET requests, so set it back to null before sending along the request
if (request.Method == HttpMethod.Get)
{
request.Content = null;
}
var response = await base.SendAsync(request, cancellationToken);
// If I comment this out, any response that already has the Transfer-Encoding: chunked header will hang in the browser
response.Headers.TransferEncodingChunked = false;
return response;
}
}
And my web application controller which creates a "chunked" response (also Web API):
public class ChunkedController : ApiController
{
public HttpResponseMessage Get()
{
var response = Request.CreateResponse(HttpStatusCode.OK);
var content = "This was sent with transfer-encoding: chunked";
var bytes = System.Text.Encoding.ASCII.GetBytes(content);
var stream = new MemoryStream(bytes);
response.Content = new ChunkedStreamContent(stream);
return response;
}
}
public class ChunkedStreamContent : StreamContent
{
public ChunkedStreamContent(Stream stream)
: base(stream) { }
protected override bool TryComputeLength(out long length)
{
        // Returning false reports the length as unknown, so no Content-Length
        // header is set and the response is sent with Transfer-Encoding: chunked.
        length = 0L;
return false;
}
}
From an HttpClient standpoint, content chunking is essentially a detail of the transport. The content provided by response.Content is always de-chunked for you by HttpClient.
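To illustrate (a hypothetical snippet of my own, not from the original post; it assumes the default /api/chunked route maps to the ChunkedController shown above), reading a chunked response through HttpClient never exposes the chunk framing:
// Hypothetical snippet, to be run inside an async method.
using (var client = new HttpClient())
{
    // Call the backend directly, the same way the proxy's inner HttpClientHandler does.
    var response = await client.GetAsync("http://localhost:49591/api/chunked");

    // True when the server replied with Transfer-Encoding: chunked.
    bool wasChunked = response.Headers.TransferEncodingChunked == true;

    // The body is already de-chunked: no "2d\r\n ... 0\r\n" framing appears here.
    string body = await response.Content.ReadAsStringAsync();
    Console.WriteLine("Chunked: " + wasChunked + ", Body: " + body);
}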
It looks like there's a bug in Web API: when running on IIS, it doesn't correctly (re-)chunk content when asked to via the response.Headers.TransferEncodingChunked property. So the problem is that the proxy is telling the client, via the headers, that the content is chunked when in fact it is not. I've filed the bug here:
https://aspnetwebstack.codeplex.com/workitem/1124
I think your workaround is the best option at the moment.
Also notice that you have multiple layers here that likely weren't designed/tested for proxying scenarios (and may not support it). On the HttpClient side, note that it will automatically decompress and follow redirects unless you turn that behavior off. At a minimum, you'll want to set these two properties:
http://msdn.microsoft.com/en-us/library/system.net.http.httpclienthandler.allowautoredirect.aspx
http://msdn.microsoft.com/en-us/library/system.net.http.httpclienthandler.automaticdecompression.aspx
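For illustration, here is a minimal sketch (my own, not from the original post) of how the WebApiConfig.cs registration above could configure the inner HttpClientHandler so that redirects and compressed bodies are passed through untouched; it assumes a using System.Net; directive for DecompressionMethods:
// Sketch: configure the inner handler so the proxy relays responses as-is.
var innerHandler = new HttpClientHandler
{
    // Let the original client follow redirects itself rather than the proxy.
    AllowAutoRedirect = false,
    // Pass compressed bodies through instead of transparently decompressing them.
    AutomaticDecompression = DecompressionMethods.None
};

config.Routes.MapHttpRoute(
    name: "Proxy",
    routeTemplate: "{*path}",
    defaults: new { path = RouteParameter.Optional },
    constraints: null,
    handler: HttpClientFactory.CreatePipeline(
        innerHandler: innerHandler,
        handlers: new DelegatingHandler[] { new ProxyHandler() }));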
On the Web API/IIS side, you've found at least one bug, and it wouldn't be surprising to find others as well. Just be forewarned that there may be other bugs like this when writing a proxy using these technologies outside their main design use cases.
This question is similar to the one below, but my issue is with the Android gRPC client:
How can I make a GRPC call for a service which is inside a subdirectory? (in .Net Framework)
I am getting a 404 error while accessing the gRPC streaming API:
UNIMPLEMENTED: HTTP status code 404
invalid content-type: text/html
headers: Metadata(:status=404,content-length=1245,content-type=text/html,server=Microsoft-IIS/10.0,request-id=5154500d-fb58-7903-65d6-3d3711129101,strict-transport-security=max-age=31536000; includeSubDomains; preload,alt-svc=h3=":443",h3-29=":443",x-preferredroutingkeydiagnostics=1,x-calculatedfetarget=PS2PR02CU003.internal.outlook.com,x-backendhttpstatus=404,x-calculatedbetarget=PUZP153MB0788.APCP153.PROD.OUTLOOK.COM,x-backendhttpstatus=404,x-rum-validated=1,x-proxy-routingcorrectness=1,x-proxy-backendserverstatus=404,x-feproxyinfo=MA0PR01CA0051.INDPRD01.PROD.OUTLOOK.COM,x-feefzinfo=MAA,ms-cv=DVBUUVj7A3ll1j03ERKRAQ.1.1,x-feserver=PS2PR02CA0054,x-firsthopcafeefz=MAA,x-powered-by=ASP.NET,x-feserver=MA0PR01CA0051,date=Tue, 11 Oct 2022 06:24:18 GMT)
The issue is that the /subdirectory_path is getting ignored by the service in the final outgoing call.
Here's the code I am using to create the gRPC channel in Android (gives 404):
val uri = Uri.parse("https://examplegrpcserver.com/subdirectory_path")
private val channel = let {
val builder = ManagedChannelBuilder.forTarget(uri.host+uri.path)
if (uri.scheme == "https") {
builder.useTransportSecurity()
} else {
builder.usePlaintext()
}
builder.executor(Dispatchers.IO.asExecutor()).build()
}
The URI is correct since it works with the web client.
For the web client, the channel is defined like this (working):
var handler = new SubdirectoryHandler(httpHandler, "/subdirectory_path");
var userToken = "<token string>";
var grpcWebHandler = new GrpcWebHandler(handler);
using var channel = GrpcChannel.ForAddress("https://examplegrpcserver.com", new GrpcChannelOptions { HttpHandler = grpcWebHandler,
Credentials = ChannelCredentials.Create(new SslCredentials(), CallCredentials.FromInterceptor((context, metadata) =>
{
metadata.Add("Authorization", $"Bearer {userToken}");
return Task.CompletedTask;
}))
});
I tried to inject the subdirectory_path into the URI for my Android client, but I was unable to find an appropriate API; grpc-kotlin doesn't expose the underlying HTTP client used in the channel.
Could someone please help me with this issue? How can I specify the subdirectory_path (before the service and method name)?
The path for an RPC is fixed by the .proto definition. Adding prefixes to the path is unsupported.
The URI passed to forTarget() points to the resource containing the addresses to connect to. So the fully-qualified form is normally of the form dns:///example.com. If you specified a host in the URI like dns://1.1.1.1/example.com, then that would mean "look up example.com at the DNS server 1.1.1.1." But there's no place to put a path prefix in the target string, as that path would only be used for address lookup, not actual RPCs.
If the web client supports path prefixes, that is a feature specific to it. It is also using a tweaked gRPC protocol (gRPC-Web, via the GrpcWebHandler above) that requires translation before it reaches normal gRPC backends.
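For reference, here is a minimal sketch of what a path-prefixing handler like the SubdirectoryHandler used by the web client above might look like; this is an assumption about that helper, not its actual source. It simply rewrites each outgoing request URI so the prefix sits in front of the /Service/Method path. As the answer notes, grpc-kotlin exposes no equivalent hook on Android.
// Hypothetical sketch of a path-prefixing DelegatingHandler for the .NET web client.
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class SubdirectoryHandler : DelegatingHandler
{
    private readonly string subdirectory;

    public SubdirectoryHandler(HttpMessageHandler innerHandler, string subdirectory)
        : base(innerHandler)
    {
        this.subdirectory = subdirectory;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var old = request.RequestUri;
        // e.g. /package.Service/Method becomes /subdirectory_path/package.Service/Method
        var url = $"{old.Scheme}://{old.Host}:{old.Port}{subdirectory}{old.AbsolutePath}";
        request.RequestUri = new Uri(url, UriKind.Absolute);
        return base.SendAsync(request, cancellationToken);
    }
}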
I am trying to play Widevine-encrypted content in an Android TV application using ExoPlayer. I have my video URL, which is served from a CDN and acquired with a ticket. I have my Widevine license URL, a ticket, and an auth token for the license server.
I am creating a drmSessionManager, putting the necessary headers needed by the license server as follows:
UUID drmSchemeUuid = C.WIDEVINE_UUID;
mediaDrm = FrameworkMediaDrm.newInstance(drmSchemeUuid);
static final String USER_AGENT = "user-agent";
HttpMediaDrmCallback drmCallback = new HttpMediaDrmCallback("my-license-server", new DefaultHttpDataSourceFactory(USER_AGENT));
keyRequestProperties.put("ticket-header", ticket);
keyRequestProperties.put("token-header", token);
drmCallback.setKeyRequestProperty("ticket-header", ticket);
drmCallback.setKeyRequestProperty("token-header", token);
new DefaultDrmSessionManager(drmSchemeUuid, mediaDrm, drmCallback, keyRequestProperties)
After this, ExoPlayer handles most of the work, and the following breakpoints are hit:
response = callback.executeKeyRequest(uuid, (KeyRequest) request);
in class DefaultDrmSession
return executePost(dataSourceFactory, url, request.getData(), requestProperties) in HttpMediaDrmCallback
I can observe that everything is fine up to this point: the URL is correct and the headers are set properly.
In the following piece of code, I can observe that the dataSpec is fine and a POST request is being made to the license server with the correct data, but when the connection is made the response code returned is 405.
In class: DefaultHttpDataSource
In method: public long open(DataSpec dataSpec)
this.dataSpec = dataSpec;
this.bytesRead = 0;
this.bytesSkipped = 0;
transferInitializing(dataSpec);
try {
connection = makeConnection(dataSpec);
} catch (IOException e) {
throw new HttpDataSourceException("Unable to connect to " + dataSpec.uri.toString(), e,
dataSpec, HttpDataSourceException.TYPE_OPEN);
}
try {
responseCode = connection.getResponseCode();
responseMessage = connection.getResponseMessage();
} catch (IOException e) {
closeConnectionQuietly();
throw new HttpDataSourceException("Unable to connect to " + dataSpec.uri.toString(), e,
dataSpec, HttpDataSourceException.TYPE_OPEN);
}
When using Postman to make a request to the URL, a GET request returns the following body with a response code of 405:
{
  "Message": "The requested resource does not support http method 'GET'."
}
A POST request also returns response code 405, but with an empty body.
In both cases the following header is also returned, which I take to mean the resource should accept both GET and POST requests:
Access-Control-Allow-Methods: GET, POST
I have no access to the configuration of the DRM server, and my contacts who are responsible for the DRM server tell me that POST requests must be working fine, since other clients have managed to get content from the same DRM server to play.
I am quite confused at the moment and think maybe I am missing some sort of configuration in ExoPlayer, since I am quite new to the concept of DRM.
Any help would be greatly appreciated.
We figured out the solution. The ticket supplied for the DRM license server was wrong. Everything works as it is supposed to now and the content plays. Just in case anyone runs into the same problem or needs a basic example of playing Widevine content, the code above works fine at the moment.
Best regards.
I have a simple JAX-RS resource and I'm using the Apache CXF WebClient as a client, with HTTP basic authentication. When authentication fails on the server, a typical 401 UNAUTHORIZED response is sent along with a WWW-Authenticate header.
The strange behavior happens in WebClient when this WWW-Authenticate header is received: the WebClient internally repeats the same request multiple times (20 times) and then fails.
WebClient webClient = WebClientFactory.newClient("http://myserver/auth");
try {
webClient.get(SimpleResponse.class);
// inside GET, 20 HTTP GET requests are invoked
} catch (ServerWebApplicationException ex) {
// data are present when WWW-authenticate header is not sent from server
// if header is present, unmarshalling fails
AuthError err = ex.toErrorObject(webClient, AuthError.class);
}
I found the same problem in CXF 3.1.
In my case, for every async HTTP REST request whose response came back as 401/407, the thread went into an infinite loop, printing "WWW-Authenticate is not set in response".
From analysing the code, I found the following:
In the case of an asynchronous call, control flows from HttpConduit.handleRetransmits -> processRetransmit -> AsyncHTTPConduit.authorizationRetransmit, which returns true, and in HttpConduit the code is:
int maxRetransmits = getMaxRetransmits();
updateCookiesBeforeRetransmit();
int nretransmits = 0;
while ((maxRetransmits < 0 || nretransmits < maxRetransmits) && processRetransmit()) {
nretransmits++;
}
If maxRetransmits = -1 and processRetransmit() returns true, the thread goes into an infinite loop.
So to overcome this issue, we set the max retransmits value to 0 via HttpConduit.getClient().
Hope it will help others.
This has been fixed in the latest versions of CXF:
https://issues.apache.org/jira/browse/CXF-4815
I'm trying to implement client-side caching of web service calls and, based on information from the web, I was able to do it with the SetCachingPolicy() function shown in Code 1 below.
I was able to successfully get it working with a web method, RetrieveX, but not with method RetrieveY. I noticed that RetrieveX has no parameters and RetrieveY has one string parameter, and on inspection under Fiddler, the difference seems to be that the HTTP GET request of the web service call from RetrieveY has a query string for the parameter.
All HTTP GET web service calls without a query string have been caching properly so far, but not this call that has a query string in it.
Examination under Fiddler indicates that RetrieveX has the caching information shown in Output 1, and RetrieveY the information shown in Output 2.
Is this a limitation of this caching method or can I do something to get the client side caching of RetrieveY working?
Code 1: SetCachingPolicy
private void SetCachingPolicy()
{
HttpCachePolicy cache = HttpContext.Current.Response.Cache;
cache.SetCacheability(HttpCacheability.Private);
cache.SetExpires(DateTime.Now.AddSeconds((double)30));
    // HttpCachePolicy has no public way to set max-age directly, so set the
    // private _maxAge field via reflection to emit "Cache-Control: max-age=30".
    FieldInfo maxAgeField = cache.GetType().GetField(
"_maxAge", BindingFlags.Instance | BindingFlags.NonPublic);
maxAgeField.SetValue(cache, new TimeSpan(0, 0, 30));
}
Code 2: RetrieveX
[System.Web.Services.WebMethod]
[System.Web.Script.Services.ScriptMethod(UseHttpGet = true)]
public string[] RetrieveX()
{
SetCachingPolicy();
// Implementation details here.
return array;
}
Code 3: RetrieveY
[System.Web.Services.WebMethod]
[System.Web.Script.Services.ScriptMethod(UseHttpGet = true)]
public string[] RetrieveY(string arg1)
{
SetCachingPolicy();
// Implementation details here.
return array;
}
Output 1: RetrieveX caching info
HTTP/200 responses are cacheable by default, unless Expires, Pragma, or Cache-Control headers are present and forbid caching.
HTTP/1.0 Expires Header is present: Wed, 12 Sep 2012 03:16:50 GMT
HTTP/1.1 Cache-Control Header is present: private, max-age=30
private: This response MUST NOT be cached by a shared cache.
max-age: This resource will expire in .5 minutes. [30 sec]
Output 2: RetrieveY caching info
HTTP/200 responses are cacheable by default, unless Expires, Pragma, or Cache-Control headers are present and forbid caching.
HTTP/1.0 Expires Header is present: -1
Legacy Pragma Header is present: no-cache
HTTP/1.1 Cache-Control Header is present: no-cache
no-cache: This response MUST NOT be reused without successful revalidation with the origin server.
I ran into this issue as well, so I thought I'd share what worked for me. The underlying issue is that VaryByParams is not being set on the response. If you add this to your SetCachingPolicy() method, RetrieveY should begin working as expected:
cache.VaryByParams["*"] = true;
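For clarity, here is a sketch of the question's Code 1 with that line added (same names and values as above):
private void SetCachingPolicy()
{
    HttpCachePolicy cache = HttpContext.Current.Response.Cache;
    cache.SetCacheability(HttpCacheability.Private);
    cache.SetExpires(DateTime.Now.AddSeconds((double)30));

    // Vary the cached entry by every query-string parameter so that
    // calls like RetrieveY?arg1=... are cached per argument value.
    cache.VaryByParams["*"] = true;

    FieldInfo maxAgeField = cache.GetType().GetField(
        "_maxAge", BindingFlags.Instance | BindingFlags.NonPublic);
    maxAgeField.SetValue(cache, new TimeSpan(0, 0, 30));
}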
I've been having some problems with missing POST data in ASP.NET MVC, which has led me to investigate how ASP.NET MVC deals with invalid content lengths. I had presumed that a POST with an invalid content length would be ignored by ASP.NET MVC, but this doesn't seem to be the case.
As an example, try creating a new ASP.NET MVC 2 web application and add this action to the HomeController:
public ActionResult Test(int userID, string text)
{
return Content("UserID = " + userID + " Text = " + text);
}
Try creating a simple form that posts to the above action, run Fiddler and (using "Request Builder") modify the raw data so that some of the form data is missing (e.g. remove the text parameter). Before executing the request, remember to un-tick the "Fix Content-Length header" checkbox under the Request Builder options, then set a breakpoint on the code above and execute the custom HTTP request.
I find that the request takes a lot longer than normal (30 seconds or so) but, to my amazement, is still processed by the controller's action. Does anyone know if this is expected behavior and, if so, what would you recommend to safeguard against invalid content lengths?
ASP.NET does not ignore the Content-Length request header. Consider the following controller action as an example which simply echoes back the foo parameter:
[HttpPost]
public ActionResult Index(string foo)
{
return Content(foo, "text/plain");
}
Now let's make a valid POST request to it:
using (var client = new TcpClient("127.0.0.1", 2555))
using (var stream = client.GetStream())
using (var writer = new StreamWriter(stream))
using (var reader = new StreamReader(stream))
{
writer.Write(
#"POST /home/index HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: localhost:2555
Content-Length: 10
Connection: close
foo=foobar");
writer.Flush();
Console.WriteLine(reader.ReadToEnd());
}
As expected this prints the response HTTP headers (which are not important) and in the body we have foobar. Now try reducing the Content-Length header of the request:
POST /home/index HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: localhost:2555
Content-Length: 5
Connection: close

foo=foobar
This returns a single f in the response body. So, as you can see, an invalid HTTP request can lead to incorrect parsing of the parameters.