I wanted to put a situation to you to get your thoughts on the use of Flurl. I have developed a RESTful API that supports authentication and sessions from multiple users, and part of the operation of the service is that it needs to make authenticated calls to another outside service. If I use the standard Flurl approach of calling async methods directly off a URL string, and I need to set different headers depending on the user that authenticated to my service, would this cause unpredictable behaviour because Flurl uses a single HttpClient (since the calls all go to the same host)?
Doing it the way you describe is completely safe. Setting headers fluently off a URL string or Url object will apply them to the request, not the client. Example:
await url.WithHeader(name, value).PostAsync(body);
This call can be made a zillion times from different threads with different header values and a single shared HttpClient instance with no conflicts. This works because under the hood it sets the header on the HttpRequestMessage, not the default headers on the HttpClient.
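For illustration, here's a rough sketch of two users' calls sharing the same host concurrently; the URL, header name, and token values are placeholders, not anything from your API:

using System.Threading.Tasks;
using Flurl.Http;

class FlurlPerRequestHeaders
{
    // Each call builds its own HttpRequestMessage, so per-user headers never collide
    // even though both requests go to the same host through the shared HttpClient.
    static Task CallForUserAsync(string userToken, object payload) =>
        "https://partner-api.example.com/orders"                 // placeholder URL
            .WithHeader("Authorization", $"Bearer {userToken}")  // per-request header
            .PostJsonAsync(payload);

    static Task Main() => Task.WhenAll(
        CallForUserAsync("token-for-alice", new { id = 1 }),
        CallForUserAsync("token-for-bob", new { id = 2 }));
}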
I'm just trying to get going with JMeter, and I'm trying to understand more about user-defined cookies.
What purpose do they fulfill when you add them with hard-coded values, for instance if you define a cookie called A, give it a value B, and scope it to a certain domain for your HTTP sampler?
I'd be very grateful for any information!
There could be several possible reasons:
Replay user session without having to re-login (i.e. session hijacking) so you would be able to debug your test by running a single request rather than the whole sequence (open login page, login, navigate somewhere, do something, etc.)
The value doesn't have to be hard-coded; it may come from a correlation or from a calculation in a JSR223 PreProcessor
Negative test scenarios (providing invalid cookie value to check for anticipated error)
You name it
Some web sites have JavaScript executed on page load which adds/removes/updates HTTP cookie(s), for example:
The set() method of the cookies API sets a cookie containing the specified cookie data. This method is equivalent to issuing an HTTP Set-Cookie header during a request to a given URL.
JMeter isn't executing Javascript
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages.
So in JMeter you can manually manipulate HTTP cookies to mimic what your site's JavaScript would otherwise do.
I have a backend which generates three JWT tokens: a reference token, an access token and a refresh token. The reference token stores a reference to the access token, the access token is used to access the API, and the refresh token is used to reissue the access token when it times out. The problem is that I do not want to pass the access token to the client; instead I want nginx to store it in memcached. So my whole task is to filter the response from the backend, which currently looks as simple as:
{"reference_token":"...","access_token":"...","refresh_token":"..."}
Nginx should filter this response, get the access token from it and store it in memcached. Finally, it should return to the client a new response:
{"reference_token":"...","refresh_token":"..."}
As you can see, there should be no access_token any more. The access token is something I am trying to secure and not show to, or even pass to, the client. What I do not know is the best approach to implement this, i.e. which Lua block I should use for this task. I know about body_filter_by_lua, but the documentation briefly says that:
Note that the following API functions are currently disabled within this context due to the limitations in NGINX output filter's current implementation
So it seems that body filtering is rather limited, and I'm not even sure it is possible to call the memcached API inside this block. How can I implement this in the real world? At least, which Lua (OpenResty) techniques should I use to approach this task?
You may issue a subrequest (e.g. with ngx.location.capture) to your backend within your content handler, for example.
Then you can filter the body as you like and use lua-resty-memcached, which uses the cosocket API, to store the access token.
The drawback of this approach is that you end up with a fully buffered proxy.
I have an ASP.NET application whose front end I'm attempting to convert to Angular. Getting header information is important to the view. I'm used to getting the header information like so in C#:
httpContext.Request.Headers["USERID"]
How can I do the same thing in an Angular controller?
In ASP.NET each request runs in its own independent context, and hence the header access you have shown in your code makes sense.
This does not hold true for Angular, or in fact any client-side framework. You can always get the headers for any request or response made through Angular's $http, but the question is: which request? Over the lifetime of the app you make many such requests.
Let's say you want to get the current user id. You can create a service that returns the logged-in user. There are two ways to implement such a service:
create a method on the server to return this data, invoke it from the service, and cache the result (see the sketch after this answer);
on the client side, assuming the login request is made through Angular, implement a success callback that updates the service with the logged-in user id.
You can look at the $http documentation to understand how to access headers.
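For the first option, here is a minimal sketch of the server-side method, assuming an ASP.NET Web API controller; the controller name and route are illustrative, and only the USERID header comes from your existing code:

using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class CurrentUserController : ApiController
{
    // GET api/currentuser - returns the id the Angular service can fetch once and cache.
    public HttpResponseMessage Get()
    {
        if (!Request.Headers.TryGetValues("USERID", out var values))
            return Request.CreateResponse(HttpStatusCode.Unauthorized);

        return Request.CreateResponse(HttpStatusCode.OK, new { userId = values.First() });
    }
}

The Angular service would call this endpoint after login and cache the result, instead of trying to read request headers on the client.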
My back-end server is built using the Microsoft WCF REST Starter Kit Preview 2. I want to add some request processing to all requests, except for those methods I explicitly disable by marking them with an attribute. I have over a hundred service methods and there are only a few I want to exclude from this extra processing. I'll tag them all if I have to, but I'm trying to avoid disrupting what's already written.
I haven't seen anything I can add to WebInvoke, and adding an interceptor won't let me examine the method that the request is routed to.
I am asking for an explanation of how to register HttpOperationHandler object(s) so I can do my extra request processing (i.e. authorization based on information in the request headers) before it is handed off to the method it was routed to. Can someone please explain how to do this, without rewriting my existing codebase to use Web API?
You can't use an HttpOperationHandler with the WCF REST Starter Kit. However, Web API is very compatible with ServiceContracts that were created for the WCF REST Starter Kit, so you should be able to re-host them in a Web API host relatively easily. You may have to change places where you access WebOperationContext, but it should not be a huge change.
I solved my problem by adopting a different mechanism. It authenticates all requests; I can't control which methods it applies to, but I was able to work around that.
I created a custom ServiceAuthorizationManager class to process the Authorization header. The CheckAccess() method returns true to allow the request through, or false if the user is not authenticated or not authorized to call the service. I hooked it up to the ServiceHost for my services by creating a custom WebServiceHostFactory class and assigning an instance to Authorization.ServiceAuthorizationManager in its CreateServiceHost() method.
Although I can't directly check method attributes for the service being executed, the Message.Headers member of the object passed to CheckAccess() has a To property that contains the URI of the service being called. If necessary, I could examine it to determine what method the request would be routed to.
The ServiceAuthorizationManager applies to all requests, so no web methods or classes must be marked with any special attributes to enable it.
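A rough sketch of that wiring, assuming the stock WebServiceHostFactory (the Starter Kit's own factory variant is wired the same way); the class names and the ValidateAuthorizationHeader helper are illustrative placeholders:

using System;
using System.ServiceModel;
using System.ServiceModel.Activation;
using System.ServiceModel.Channels;

public class HeaderAuthorizationManager : ServiceAuthorizationManager
{
    protected override bool CheckAccessCore(OperationContext operationContext)
    {
        // The To header carries the URI of the operation being called, so it could be
        // inspected here if per-method rules were ever needed.
        Uri to = operationContext.IncomingMessageHeaders.To;

        var http = operationContext.IncomingMessageProperties[HttpRequestMessageProperty.Name]
                   as HttpRequestMessageProperty;
        string authorization = http != null ? http.Headers["Authorization"] : null;

        // Placeholder: validate the Authorization header however the application requires.
        return ValidateAuthorizationHeader(authorization, to);
    }

    private bool ValidateAuthorizationHeader(string authorization, Uri to)
    {
        return !string.IsNullOrEmpty(authorization);
    }
}

public class AuthorizingWebServiceHostFactory : WebServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        ServiceHost host = base.CreateServiceHost(serviceType, baseAddresses);
        host.Authorization.ServiceAuthorizationManager = new HeaderAuthorizationManager();
        return host;
    }
}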
I have an HTTP Module to handle authentication from Facebook, which works fine in classic pipeline mode.
In integrated pipeline mode, however, I'm seeing an additional request pass through for the default document, which is causing the module to fail. We look at the request (from Facebook) to retrieve and validate the user accessing our app. The initial request authenticates fine, but then I see a second request, which lacks the posted form variables, and thus causes authentication to fail.
In integrated pipeline mode, an http request for "/" yields 2 AuthenticateRequests in a row:
A request where AppRelativeCurrentExecutionFilePath = "~/"
A request where AppRelativeCurrentExecutionFilePath = "~/default.aspx"
That second request loses all of the form values, so it fails to authenticate. In classic mode, that second request is the only one that happens, and it preserves the form values.
Any ideas what's going on here?
UPDATE: Here is an image of the trace from module notifications in IIS. Note that my module, FBAuth, is seeing AUTHENTICATE_REQUEST multiple times (I'd expect 2 - one for authenticate and one for postauthenticate, but I get 4).
I'm starting to believe this has something to do with module/filter configuration because I've found a (Vista) box running the same code that doesn't fire these events repeatedly - it behaves as expected. I'm working through trying to figure out what the difference could be...
Thanks!
Tom
My solution was to add the following code at the end of Application_BeginRequest:
if (Request.RawUrl.TrimEnd('/') == HostingEnvironment.ApplicationVirtualPath.TrimEnd('/'))
{
    // Root request: transfer straight to Default.aspx ourselves, keeping the posted
    // form values (second argument = true) so the module authenticates only once.
    // HostingEnvironment lives in System.Web.Hosting.
    Server.Transfer(Request.RawUrl + "Default.aspx", true);
}
DefaultHttpHandler is not supported, so applications relying on sub-classes of DefaultHttpHandler will not be able to serve requests. If your application uses DefaultHttpHandler or handlers that derive from DefaultHttpHandler, it will not function correctly. In Integrated mode, handlers derived from DefaultHttpHandler will not be able to pass the request back to IIS for processing, and instead serve the requested resource as a static file. Integrated mode allows ASP.NET modules to run for all requests without requiring the use of DefaultHttpHandler.
Workaround: Change your application to use modules to perform request processing for all requests, instead of using wildcard mapping to map ASP.NET to all requests and then using DefaultHttpHandler derived handlers to pass the request back to IIS.
Hmmm, or this could be the issue.
ASP.NET modules in early request processing stages will see requests that previously may have been rejected by IIS prior to entering ASP.NET, which includes modules running in BeginRequest seeing anonymous requests for resources that require authentication. ASP.NET modules can run in any pipeline stages that are available to native IIS modules. Because of this, requests that previously may have been rejected in the authentication stage (such as anonymous requests for resources that require authentication) or other stages prior to entering ASP.NET may run ASP.NET modules. This behavior is by design in order to enable ASP.NET modules to extend IIS in all request processing stages.
Workaround: Change application code to avoid any application-specific problems that arise from seeing requests that may be rejected later on during request processing. This may involve changing modules to subscribe to pipeline events that are raised later during request processing.
http://learn.iis.net/page.aspx/381/aspnet-20-breaking-changes-on-iis-70/
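As a loose illustration of that last workaround, a module could subscribe to a later pipeline event and skip the duplicate default-document request that arrives without form data. The class and method names below are placeholders, not the asker's actual FBAuth module:

using System;
using System.Web;

public class FacebookAuthModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Hook a stage that runs after IIS/ASP.NET authentication rather than BeginRequest.
        app.PostAuthenticateRequest += OnPostAuthenticateRequest;
    }

    private void OnPostAuthenticateRequest(object sender, EventArgs e)
    {
        HttpContext ctx = ((HttpApplication)sender).Context;

        // The extra "/default.aspx" request seen in integrated mode has no posted form,
        // so skip it instead of treating it as a failed authentication.
        if (ctx.Request.HttpMethod != "POST" || ctx.Request.Form.Count == 0)
            return;

        // ... validate the Facebook-posted auth parameters here ...
    }

    public void Dispose() { }
}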