I'm using a Grizzly HttpServer which has two HttpHandler instances registered:
under /api/* there is a Jersey REST-style application offering the API of the product, and
under /* there is a StaticHttpHandler which serves static HTML/JavaScript content (which, among other things, talks to the API under /api/).
For authentication I'm currently securing only the API using a Jersey ContainerRequestFilter implementing HTTP Basic Auth, which looks quite similar to what is presented in another SO question.
But as requirements have changed, I'd now like to require authentication for all requests hitting the server. So I'd like to move the authentication one level up, from Jersey to Grizzly. Unfortunately, I'm completely lost figuring out where I can hook up a "request filter" (or whatever it is called) in Grizzly. Can someone point me to the relevant API to accomplish this?
The easiest solution would leverage the Grizzly embedded Servlet support.
This of course would mean you'd need to do a little work to migrate your current HttpHandler logic over to Servlets - but that really shouldn't be too difficult as the HttpHandler API is very similar.
I'll give some high level points on doing this.
HttpServer server = HttpServer.createSimpleServer(<docroot>, <host>, <port>);
// use "" for <context path> if you want the context path to be /
WebappContext ctx = new WebappContext(<logical name>, <context path>);
// do some Jersey initialization here
// Register the Servlets that were converted from HttpHandlers
ServletRegistration s1 = ctx.addServlet(<servlet name>, <Servlet instance or class name>);
s1.addMapping(<url pattern for s1>);
// Repeat for other Servlets ...
// Now for the authentication Filter ...
FilterRegistration reg = ctx.addFilter(<filter name>, <filter instance or class name>);
// Apply this filter to all requests
reg.addMappingForUrlPatterns(null, "/*");
// do any other additional initialization work ...
// "Deploy" ctx to the server.
ctx.deploy(server);
// start the server and test ...
NOTE: The dynamic registration of Servlets and Filters is based on the Servlet 3.0 API, so if you want information on how to deal with Servlet listeners, init parameters, etc., I would recommend reviewing the Servlet 3.0 javadocs.
NOTE2: The Grizzly Servlet implementation is not 100% compatible with the Servlet specification. It doesn't support standard Servlet annotations or deployment of traditional Servlet web application archives.
Lastly, there are examples of using the embedded Servlet API here.
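For reference, here is a minimal sketch of what the authentication Filter itself might look like, assuming plain HTTP Basic Auth; the class name and the credential check are placeholders rather than anything from the original setup:

import java.io.IOException;
import java.util.Base64; // Java 8+
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BasicAuthFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String auth = request.getHeader("Authorization");
        if (auth != null && auth.startsWith("Basic ")) {
            String decoded = new String(Base64.getDecoder().decode(auth.substring("Basic ".length())));
            String[] parts = decoded.split(":", 2);
            if (parts.length == 2 && isValidUser(parts[0], parts[1])) {
                // Credentials accepted; let the request through to the Servlets
                chain.doFilter(req, res);
                return;
            }
        }
        // Missing or invalid credentials; challenge the client
        response.setHeader("WWW-Authenticate", "Basic realm=\"myrealm\"");
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
    }

    // Placeholder: reuse the same credential check as the existing Jersey ContainerRequestFilter
    private boolean isValidUser(String user, String password) {
        return false;
    }

    @Override
    public void destroy() { }
}

Since the Filter is mapped to /*, it covers both the API Servlet and the static content.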
The "hookup" part can be done using a HttpServerProbe (tested with Grizzly 2.3.5):
srv.getServerConfiguration().getMonitoringConfig().getWebServerConfig()
        .addProbes(new HttpServerProbe.Adapter() {
            @Override
            public void onRequestReceiveEvent(HttpServerFilter filter,
                                              Connection connection, Request request) {
                ...
            }

            @Override
            public void onRequestCompleteEvent(HttpServerFilter filter,
                                               Connection connection, Response response) {
            }
        });
For the "linking" to the ContainerRequestFilter you might want to have a look at my question:
UnsupportedOperationException getUserPrincipal
In a raw Spring WebSocket application (not using sockjs/STOMP or any other middleware), how can I have Spring inject beans that have been registered in the HTTP session scope so that they can be used by code in my WebSocketHandler bean?
Note that what I am not asking is any of these questions:
How do I create beans in a scope that is accessible to all handler invocations for the same WebSocket session (e.g. as described in the answer to Request or Session scope in Spring Websocket)? The beans I need to access already exist in the scope for the HTTP session.
How do I (programmatically) access objects in the servlet container's HTTP session storage? (I haven't tried to do this, but I'm pretty sure the answer involves using an HttpSessionHandshakeInterceptor.) That doesn't get me injection of Spring-scoped dependencies, though.
How to use a ScopedProxy to pass beans between code in different scopes (e.g. as described here); I'm already familiar with how to do this, but attempting to do so for a WebSocketHandler causes an error because the session scope hasn't been bound to the thread at the point the object is accessed.
How to access the current security principal -- again, very useful, but not what I'm currently trying to achieve.
What I'm hoping to do is provide a simple framework that allows for the traditional HTTP-request initiated parts of an MVC application to communicate directly with a WebSocket protocol (for sending simple push updates to the client). What I want to be able to do is push data into a session scoped object from the MVC controller and pull it out in the websocket handler. I would like the simplest possible API for this from the MVC controller's perspective, which if it is possible to just use a session-scoped bean for this would be ideal. If you have any other ideas about very simple ways of sharing this data, I'd also like to hear those in case this approach isn't possible.
You can also use the Java API for WebSocket. This link https://spring.io/blog/2013/05/23/spring-framework-4-0-m1-websocket-support explains how to do this with Spring.
Unfortunately, something like this
@ServerEndpoint(value = "/sample", configurator = SpringConfigurator.class)
public class SampleEndpoint {

    private SessionScopedBean sessionScopedBean;

    @Autowired
    public SampleEndpoint(SessionScopedBean sessionScopedBean) {
        this.sessionScopedBean = sessionScopedBean;
    }
}
causes an exception (because we're trying to access a bean outside its scope), but for singleton and prototype beans it works well.
To work with session attributes, you can modify the handshake and pass the required attributes:
public class CustomWebSocketConfigurator extends SpringConfigurator {

    @Override
    public void modifyHandshake(ServerEndpointConfig config,
                                HandshakeRequest request,
                                HandshakeResponse response) {
        // copy attributes from the HTTP session to the WebSocket session
        HttpSession httpSession = (HttpSession) request.getHttpSession();
        config.getUserProperties().put("some_attribute",
                httpSession.getAttribute("some_attribute_in_http_session"));
    }
}
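For completeness, here is a minimal sketch (not part of the original answer; the class and attribute names simply mirror the snippets above) of how the endpoint can then read the copied value through the EndpointConfig user properties:

import javax.websocket.EndpointConfig;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint(value = "/sample", configurator = CustomWebSocketConfigurator.class)
public class SampleEndpoint {

    private Object someAttribute;

    @OnOpen
    public void onOpen(Session session, EndpointConfig config) {
        // Value placed into the user properties during modifyHandshake()
        this.someAttribute = config.getUserProperties().get("some_attribute");
    }
}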
P.S. This is more of a comment than an answer; I just wanted to add another way of handling session attributes in WebSocket to your question and answer. I have been searching the web for exactly the same issue, and the approach shown above seems to me the most systematic way of handling session data in WebSocket.
OK, my situation is much more complicated, but there is an easy way to reproduce it. Starting with a fresh ASP.NET MVC 4 Web Application project and selecting Web API as the template, I just add a second MVC action to the HomeController in which I need to call Web API internally.
public async Task<string> TestAPI()
{
    HttpServer server = new HttpServer(GlobalConfiguration.Configuration);
    using (HttpMessageInvoker messageInvoker = new HttpMessageInvoker(server, false))
    {
        HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, "http://localhost:58233/api/values");
        request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        var response = messageInvoker.SendAsync(request, new CancellationToken()).Result;
        return await response.Content.ReadAsStringAsync();
    }
    //server.Dispose(); - if I do that, on the second request I get a "Cannot access a disposed object." exception
}
That only works on the first request. On subsequent requests it throws:
The 'DelegatingHandler' list is invalid because the property 'InnerHandler' of 'RequestMessageHandlerTracer' is not null.
Parameter name: handlers
I really need to use the GlobalConfiguration.Configuration here since my system is very modular/plugin-based, which makes it really hard to reconstruct that configuration within the action method (or anywhere else).
I would suggest trying to re-use the HttpServer instance on secondary requests. Creating and configuring a new server on every request is not an expected usage, and you are likely hitting some edge case. Either set up a DI mechanism and inject a singleton HttpServer into your controller, or try accessing it from some static property.
I would also suggest using new HttpClient(httpServer) instead of HttpMessageInvoker.
The same issue can occur in Web API, if you have multiple HttpServers using the same configuration object, and the configuration contains a non-empty list of delegating handlers.
The error occurs because MVC/Web API builds a pipeline of handlers on the first request, containing all the delegating handlers (e.g. RequestMessageHandlerTracer if request tracing is enabled) linked to each other, followed by the MVC server handler.
If you have multiple HttpServers using the same configuration object, and the config object contains delegating handlers, the first HttpServer will be successfully connected into a pipeline; but the second one won't, because the delegating handlers are already connected - instead it will throw this exception on first request/initialization.
More detail on the Web API case here (which is conceptually identical, but uses different classes and would have a slightly different fix):
webapi batching and delegating handlers
In my opinion, the MVC configuration classes should be pure config, and not contain actual delegating handlers. Instead, the configuration classes should create new delegating handlers upon initialization. Then this bug wouldn't exist.
I've spent a few days researching this, but haven't found a suitable answer for my situation. I have a Spring 3.1 MVC application. Currently, some vendors log into the application via a web client, in which case the user information is stored in the session. I want to expose some services to other vendors via RESTful web services, but have the vendor pass their vendor id as part of the URI or via params. Is there a way to handle the vendor id in a single place that then forwards to the respective controller for request processing? Should the vendor id be part of the URI, or should it be passed in the request body? I've looked into interceptors, but how would I do this with multiple URIs or for every controller of the RESTful web service? Any suggestion would be greatly appreciated.
Having a custom header is the cleanest option, but parameters work equally well.
In the interceptor's preHandle method you could look up the vendor by either a header or a parameter and attach it to the request by adding the object to its attributes:
request.setAttribute("vendor", myVendorInstance);
From that point on the vendor can be retrieved from the request like:
Vendor vendor = (Vendor) request.getAttribute("vendor");
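A minimal sketch of such an interceptor might look like this (the header and parameter names and the lookup helper are hypothetical, just to illustrate the idea; Vendor is the application's own domain type):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

public class VendorLookupInterceptor extends HandlerInterceptorAdapter {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        // Prefer a custom header, fall back to a request parameter
        String vendorId = request.getHeader("X-Vendor-Id");
        if (vendorId == null) {
            vendorId = request.getParameter("vendorId");
        }
        // Attach the resolved vendor so controllers can pick it up from the request
        request.setAttribute("vendor", lookupVendor(vendorId));
        return true;
    }

    // Placeholder for whatever repository/service resolves the id to a Vendor
    private Vendor lookupVendor(String vendorId) {
        return null;
    }
}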
Interceptors can be mapped to any URL you like using a mapping, e.g.
<mvc:interceptors>
    <mvc:interceptor>
        <mvc:mapping path="/vendors/**" />
        <bean class="my.package.VendorLookupInterceptor" />
    </mvc:interceptor>
</mvc:interceptors>
Another way of making the vendor object available to controllers is to inject it. For instance, say that controllers interested in the object should implement this interface.
public interface VendorAware {
    public void setVendor(Vendor vendor);
}
Controllers implementing this interface could be handled by the interceptor (for example in its preHandle method) and get the vendor injected:
if (handler instanceof HandlerMethod) {
    Object bean = ((HandlerMethod) handler).getBean();
    if (bean instanceof VendorAware) {
        Vendor vendor = getVendor();
        ((VendorAware) bean).setVendor(vendor);
    }
}
Obviously the problem with adding the vendor id to the URI is that it affects all your URLs, so you cannot easily make the controllers generic.
Another way is to have the vendor id passed as a header to the controllers. You could use the X-User header.
Then you can write some kind of handler to check for this header; possibilities include (a servlet filter sketch follows the list):
spring interceptor
servlet filter
spring security
aspectj
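As a rough illustration of the servlet filter option, assuming the X-User header suggested above (the class name and attribute key are hypothetical):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

public class VendorHeaderFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        String vendorId = ((HttpServletRequest) request).getHeader("X-User");
        // Make the id available to downstream handlers; a real filter might
        // instead reject the request when the header is missing
        request.setAttribute("vendorId", vendorId);
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() { }
}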
I tried to access my application on CloudFoundry with the following configuration in the Spring Security XML:
<intercept-url pattern="/signup*" access="permitAll" requires-channel="https" />
but it gives me the error "This webpage has a redirect loop".
However, when I changed it to requires-channel="http" I can see my page normally. In both cases I used HTTPS to access my application. Is this the expected behavior?
First of all, taking a step back, this (https://johnpfield.wordpress.com/2014/09/10/configuring-ssltls-for-cloud-foundry/) provides excellent context for the nature of the problem.
The key paragraph being
“The threat model here is that the entry point to the cloud is a high availability, secured proxy server. Once the traffic traverses that perimeter, it is on a trusted subnet. In fact, the actual IP address and port number where the Tomcat application server are running are not visible from outside of the cloud. The only way to get an HTTP request to that port is to go via the secure proxy. This pattern is a well established best practice amongst security architecture practitioners.”
Therefore, we may not want or need SSL all the way down, but read on to see how to avoid the https redirect issue when using Spring Security deployed on Cloud Foundry.
You will have a load balancer, HAProxy or some kind of proxy terminating SSL at the boundary of your Cloud Foundry installation. As a convention, whatever you are using should be configured to set the X-Forwarded-For and X-Forwarded-Proto headers. The request header "X-Forwarded-Proto" contains the value http or https depending on the original request, and you need to use this header parameter for your security decisions further down the stack.
The cleanest way to do this is at the container level, so that Spring Security behaves the same independent of deployment container. Some typical options to configure this are as follows
1) Tomcat
Tomcat should be configured with a RemoteIpValve as described nicely here
The good news is that the Java buildpack for Cloud Foundry already does this for you as seen here
2) Spring Boot (Embedded Tomcat)
Because Tomcat is embedded, the Tomcat config in the Java buildpack will not be activated (see the buildpack Detection Criterion), and therefore some internal Spring Boot configuration is required. Luckily, it's pretty trivial to configure, as you would expect with Spring Boot, and you can switch on Tomcat's RemoteIpValve as explained here by simply defining
server.tomcat.remote_ip_header=x-forwarded-for
server.tomcat.protocol_header=x-forwarded-proto
Both approaches lead to the same outcome of the Tomcat valve overriding the ServletRequest.isSecure() behaviour so that the application has no knowledge of the usage of any proxying. Note that the valve will only be used when the "X-Forwarded-Proto" header is set.
Alternatively, if you really want to go low-level you can dig into the guts of Spring Security, as demonstrated here. As part of that effort, there are some useful findings on how to make the "X-Forwarded-Proto" header available via the Servlet API for other containers (WebLogic, JBoss, Jetty, Glassfish), shared in the comments of https://github.com/BroadleafCommerce/BroadleafCommerce/issues/424
As an additional note, CloudFlare can also act as the SSL-terminating reverse proxy (this is the recommended approach via PWS as discussed here) and it does indeed forward the relevant headers.
References
https://stackoverflow.com/a/28300485/752167
http://experts.hybris.com/answers/33612/view.html
https://github.com/cloudfoundry/java-buildpack/commit/540633bc932299ef4335fde16d4069349c66062e
https://support.run.pivotal.io/entries/24898367-Spring-security-with-https
http://docs.spring.io/spring-boot/docs/current/reference/html/howto-embedded-servlet-containers.html#howto-use-tomcat-behind-a-proxy-server
I had the same issue when I tried to secure my pages with HTTPS using Spring Security.
From the discussion on CloudFoundry Support, it seems they "terminate SSL connections at the router". See "Is it possible to visit my application via SSL (HTTPS)?".
And after more than a year, I can find no further information regarding this issue.
In case it's still useful ... I found this post gave the clue to solve something similar to this.
The problem was the org.springframework.security.web.access.channel.SecureChannelProcessor bean was using ServletRequest.isSecure() to decide whether to accept the connection or redirect, which was getting confused inside the cloud.
The following override to that bean seemed to do the job under BlueMix - not sure if the $WSSC request header will apply to all environments.
@Component
public class ChannelProcessorsPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(final Object bean, final String beanName) throws BeansException {
        if (bean instanceof SecureChannelProcessor) {
            final SecureChannelProcessor scp = (SecureChannelProcessor) bean;
            return new ChannelProcessor() {

                @Override
                public void decide(FilterInvocation invocation,
                        Collection<ConfigAttribute> config) throws IOException,
                        ServletException {
                    HttpServletRequest httpRequest = invocation.getHttpRequest();
                    // Running under BlueMix (CloudFoundry in general?), the
                    // invocation.getHttpRequest().isSecure() in SecureChannelProcessor
                    // was always returning false
                    if ("https".equals(httpRequest.getHeader("$WSSC"))) {
                        return;
                    }
                    scp.decide(invocation, config);
                }

                @Override
                public boolean supports(ConfigAttribute attribute) {
                    return scp.supports(attribute);
                }
            };
        }
        return bean;
    }

    @Override
    public Object postProcessBeforeInitialization(final Object bean, final String beanName) throws BeansException {
        return bean;
    }
}
At my workplace we are in the process of upgrading our Time and Attendance (T&A) setup. Currently, we have physical terminals that employees use to check in and check out. These terminals communicate with a 3rd-party T&A system via web service calls.
About the T&A web service:
Hosted on IIS 6
Communication is with WCF over HTTP
We're only interested in one of the exposed methods (let's call it Beep())
What I need to do:
Leave the original T&A system in place, untouched
Write a custom service that also reacts to calls to Beep()
So, essentially, I need to piggy-back on all the calls to Beep(), but I'm not sure what the best approach is.
What has been considered already:
Write a custom web service that implements the exact same contract as the T&A service and direct all the terminals to that custom service. The idea being that I can then invoke the original T&A service from my custom service, as well as apply any other logic required.
This seems overly invasive to me, and seems needlessly risky. We want to leave the original system as unmodified as possible.
Write a custom HTTP Handler to intercept calls to the original T&A service.
We've actually already done something like this in house, but our implementation takes the original HttpRequest, extracts the contents, invokes a custom service, and finally creates a new HttpRequest based on the original request so that the original web service call to Beep() is made.
What I don't like about this approach is that the original HttpRequest is lost. Yes, a second, supposedly identical, request is created, but I don't know enough about HttpRequests to guarantee this is safe.
I prefer option 2, but it's still not perfect. Ideally we wouldn't need to destroy the original HttpRequest. Does anyone know if this is possible?
If not, can anyone suggest another way of doing this? Can IIS be configured to fork requests to two destinations?
Thanks
UPDATE #1
I have found a solution (documented here), but I'm still open to other options.
UPDATE #2
I like flup's solution (and justification). He gets the bounty :) Thanks flup!
You can configure the web service to use a custom operation invoker, an IOperationInvoker.
WCF deserializes the original HTTP request as always, but instead of calling Beep() on the existing web service class, it will call your invoker instead. The invoker does its special thing, and then calls Beep() on the original service.
Advantage over implementing an IHTTPModule would be that all things HTTP are still handled by the original web service's configuration, unchanged. You fork off at a higher level of abstraction, namely on the web service's interface, at the Beep() method.
The nitty gritty of setting up a custom operation invoker without changing the existing service class (which makes it harder):
Implement a custom IOperationBehavior which sets the custom IOperationInvoker on the service's Beep() method in its ApplyDispatchBehavior method.
Implement a custom IEndpointBehavior which sets the custom IOperationBehavior in its ApplyDispatchBehavior method.
Put these two behaviors, with your IOperationInvoker, in a class library and add it to the existing service
Then configure the service to use the IEndpointBehavior.
See When and where to set a custom IOperationInvoker? and http://blogs.msdn.com/b/carlosfigueira/archive/2011/05/17/wcf-extensibility-ioperationinvoker.aspx for the invoker bit.
See Custom Endpoint Behavior using Standard webHttpEndpoint on how to configure a custom endpoint.
It actually sounds like you want to integrate your system into an ESB pattern. Now, the MS solution to the ESB problem is BizTalk. BizTalk is the thermonuclear-warhead nutcracker in this case. You don't want BizTalk.
Check out the results here for lightweight alternatives.
I have found a solution using a custom IHttpModule. See sample below:
using System;
using System.Text;
using System.Web;

namespace ForkHandles
{
    public class ForkHandler : IHttpModule
    {
        public void Init(HttpApplication application)
        {
            application.BeginRequest += new EventHandler(application_BeginRequest);
        }

        void application_BeginRequest(object sender, EventArgs e)
        {
            var request = ((HttpApplication)sender).Request;

            var bytes = new byte[request.InputStream.Length];
            request.InputStream.Read(bytes, 0, bytes.Length);
            request.InputStream.Position = 0;
            var requestContent = Encoding.ASCII.GetString(bytes);

            // vvv
            // Apply my custom logic here, using the requestContent as input.
            // ^^^
        }

        public void Dispose()
        {
        }
    }
}
This will allow me to inspect the contents of a webservice request and react to it accordingly.
I'm open to other solutions that may be less invasive, as this one will require changing the deployed 3rd-party web service's configuration.
If you want to intercept the message to the T&A WCF service, I would suggest using a custom listener, which can be plugged into the service call by making changes in the web.config.
This will be transparent.
Please look for WCF Extensibility – Message Inspectors.
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel.MessageLogging">
      <listeners>
        <add name="ServiceModelMessageLoggingListener">
          <filter type="" />
        </add>
      </listeners>
    </source>
  </sources>
</system.diagnostics>