How can a native Servlet Filter be used when using Spark web framework?

I'm playing around with Spark (the Java web framework, not Apache Spark).
I find it really nice and easy to define routes and filters; however, I'm looking to apply a native servlet filter to my routes and can't seem to find a way to do that.
More specifically, I would like to use Jetty's DoSFilter, which is a servlet filter (in contrast to the Spark Filter definition). Since Spark uses embedded Jetty, I don't have a web.xml in which to register the DoSFilter. However, Spark doesn't expose the server instance, so I can't find an elegant way of registering the filter programmatically either.
Is there a way to apply a native servlet filter to my routes?
I thought of wrapping the DoSFilter in my own Spark Filter, but it seemed like a weird idea.

You can do it like this:
public class App {
private static Logger LOG = LoggerFactory.getLogger(App.class);
public static void main(String[] args) throws Exception {
ServletContextHandler mainHandler = new ServletContextHandler();
mainHandler.setContextPath("/base/path");
Stream.of(
new FilterHolder(new MyServletFilter()),
new FilterHolder(new SparkFilter()) {{
this.setInitParameter("applicationClass", SparkApp.class.getName());
}}
).forEach(h -> mainHandler.addFilter(h, "*", null));
GzipHandler compression = new GzipHandler();
compression.setIncludedMethods("GET");
compression.setMinGzipSize(512);
compression.setHandler(mainHandler);
Server server = new Server(new ExecutorThreadPool(new ThreadPoolExecutor(10,200,60000,TimeUnit.MILLISECONDS,
new ArrayBlockingQueue<>(200),
new CustomizableThreadFactory("jetty-pool-"))));
final ServerConnector serverConnector = new ServerConnector(server);
serverConnector.setPort(9290);
server.setConnectors(new Connector[] { serverConnector });
server.setHandler(compression);
server.start();
hookToShutdownEvents(server);
server.join();
}
private static void hookToShutdownEvents(final Server server) {
LOG.debug("Hooking to JVM shutdown events");
server.addLifeCycleListener(new AbstractLifeCycle.AbstractLifeCycleListener() {
@Override
public void lifeCycleStopped(LifeCycle event) {
LOG.info("Jetty Server has been stopped");
super.lifeCycleStopped(event);
}
});
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
LOG.info("About to stop Jetty Server due to JVM shutdown");
try {
server.stop();
} catch (Exception e) {
LOG.error("Could not stop Jetty Server properly", e);
}
}
});
}
/**
* @implNote {@link SparkFilter} needs to access a public class
*/
#SuppressWarnings("WeakerAccess")
public static class SparkApp implements SparkApplication {
@Override
public void init() {
System.setProperty("spring.profiles.active", ApplicationProfile.readProfilesOrDefault("dev").stream().collect(Collectors.joining()));
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(ModocContext.class);
ctx.registerShutdownHook();
}
}}
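To address the original DoSFilter question with this setup: the MyServletFilter placeholder above is where a DoSFilter holder could go. A minimal, hedged sketch (the parameter values are illustrative, not from the original answer; it assumes java.util.EnumSet, javax.servlet.DispatcherType and org.eclipse.jetty.servlets.DoSFilter are on the classpath and imported):
FilterHolder dosHolder = new FilterHolder(new org.eclipse.jetty.servlets.DoSFilter());
//Throttle to roughly 30 requests per second per connection; both init params are standard DoSFilter settings.
dosHolder.setInitParameter("maxRequestsPerSec", "30");
dosHolder.setInitParameter("delayMs", "100");
mainHandler.addFilter(dosHolder, "/*", EnumSet.of(DispatcherType.REQUEST));
Register it before the SparkFilter holder (i.e., earlier in the Stream.of(...) above) so requests are rate-limited before they reach your routes.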

Related

Is it possible to run Spring WebFlux and MVC (CXF, Shiro, etc.) services together in Undertow?

We are looking at implementing a few services using the new Spring 5 "Reactive" API.
We currently use Apache CXF and Apache Shiro (which are somewhat dependent on MVC) for our REST services and security. All of this runs in Undertow now.
We can get one or the other to work, but not both together. It appears that when we switch over to the reactive application it knocks out the servlets, filters, etc. Conversely, when we use the MVC-style application it does not see the reactive handlers.
Is it possible to run the Spring 5 Reactive services alongside REST/servlet/filter components or customize the SpringBoot startup to run REST and Reactive services on different ports?
Update:
I "seem" to be able to get the reactive handlers working doing this but I don't know if this is the right approach.
@Bean
RouterFunction<ServerResponse> routeGoodbye(TrackingHandler endpoint)
{
RouterFunction<ServerResponse> route = RouterFunctions
.route(GET("/api/rx/goodbye")
.and(accept(MediaType.TEXT_PLAIN)), endpoint::trackRedirect2);
return route;
}
@Bean
RouterFunction<ServerResponse> routeHello(TrackingHandler endpoint)
{
RouterFunction<ServerResponse> route = RouterFunctions
.route(GET("/api/rx/hello")
.and(accept(MediaType.TEXT_PLAIN)), endpoint::trackRedirect);
return route;
}
@Bean
ContextPathCompositeHandler servletReactiveRouteHandler(TrackingHandler handler)
{
final Map<String, HttpHandler> handlers = new HashMap<>();
handlers.put("/hello", toHttpHandler((this.routeHello(handler))));
handlers.put("/goodbye", toHttpHandler(this.routeGoodbye(handler)));
return new ContextPathCompositeHandler(handlers);
}
@Bean
public ServletRegistrationBean servletRegistrationBean(final ContextPathCompositeHandler handlers)
{
ServletRegistrationBean registrationBean = new ServletRegistrationBean<>(
new ReactiveServlet(handlers),
"/api/rx/*");
registrationBean.setLoadOnStartup(1);
registrationBean.setAsyncSupported(true);
return registrationBean;
}
@Bean
TrackingHandler trackingEndpoint(final TrackingService trackingService)
{
return new TrackingHandler(trackingService,
null,
false);
}
public class ReactiveServlet extends ServletHttpHandlerAdapter
{
ReactiveServlet(final HttpHandler httpHandler)
{
super(httpHandler);
}
}
OK, after playing around with this for far too long, I finally managed to cobble together a solution that works for me. Hopefully this is the right way to do what I need to do.
Now, executing normal CXF RESTful routes shows Undertow using a blocking task, and executing my Reactive routes shows Undertow using NIO directly. When I tried using the ServletHttpHandler approach, it looked like it was just invoking the service as a Servlet 3 async call.
The handlers run completely separately from each other, which lets me run my REST services alongside my reactive services.
1) Create an annotation that will be used to map the RouterFunction to an Undertow Handler
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Target({ElementType.METHOD, ElementType.TYPE})
public @interface ReactiveHandler
{
String value();
}
2) Create an UndertowReactiveHandler "Provider" so that I can lazily get the injected RouterFunction and return the UndertowHttpHandler when I configure Undertow.
final class UndertowReactiveHandlerProvider implements Provider<UndertowHttpHandlerAdapter>
{
@Inject
private ApplicationContext context;
private String path;
private String beanName;
@Override
public UndertowHttpHandlerAdapter get()
{
final RouterFunction router = context.getBean(beanName, RouterFunction.class);
return new UndertowHttpHandlerAdapter(toHttpHandler(router));
}
public String getPath()
{
return path;
}
public void setPath(final String path)
{
this.path = path;
}
public void setBeanName(final String beanName)
{
this.beanName = beanName;
}
}
3) Create the NonBlockingHandlerFactoryPostProcessor (implements BeanFactoryPostProcessor). This looks for any of my @Bean methods that have been annotated with @ReactiveHandler and then dynamically creates an UndertowReactiveHandlerProvider bean for each annotated router function, which is used later to provide the handlers to Undertow.
public class NonBlockingHandlerFactoryPostProcessor implements BeanFactoryPostProcessor
{
@Override
public void postProcessBeanFactory(final ConfigurableListableBeanFactory configurableListableBeanFactory) throws BeansException
{
final BeanDefinitionRegistry registry = (BeanDefinitionRegistry)configurableListableBeanFactory;
final String[] beanDefinitions = registry.getBeanDefinitionNames();
for (String name : beanDefinitions)
{
final BeanDefinition beanDefinition = registry.getBeanDefinition(name);
if (beanDefinition instanceof AnnotatedBeanDefinition
&& beanDefinition.getSource() instanceof MethodMetadata)
{
final MethodMetadata beanMethod = (MethodMetadata)beanDefinition.getSource();
final String annotationType = ReactiveHandler.class.getName();
if (beanMethod.isAnnotated(annotationType))
{
//Get the current bean details
final String beanName = beanMethod.getMethodName();
final Map<String, Object> attributes = beanMethod.getAnnotationAttributes(annotationType);
//Create the new bean definition
final GenericBeanDefinition rxHandler = new GenericBeanDefinition();
rxHandler.setBeanClass(UndertowReactiveHandlerProvider.class);
//Set the new bean properties
MutablePropertyValues mpv = new MutablePropertyValues();
mpv.add("beanName", beanName);
mpv.add("path", attributes.get("value"));
rxHandler.setPropertyValues(mpv);
//Register the new bean (Undertow handler) with a matching route suffix
registry.registerBeanDefinition(beanName + "RxHandler", rxHandler);
}
}
}
}
}
4) Create the Undertow ServletExtension. This looks for any UndertowReactiveHandlerProviders and adds each one as an UndertowHttpHandler.
public class NonBlockingHandlerExtension implements ServletExtension
{
@Override
public void handleDeployment(DeploymentInfo deploymentInfo, final ServletContext servletContext)
{
deploymentInfo.addInitialHandlerChainWrapper(handler -> {
final WebApplicationContext ctx = getWebApplicationContext(servletContext);
//Get all of the reactive handler providers
final Map<String, UndertowReactiveHandlerProvider> providers =
ctx.getBeansOfType(UndertowReactiveHandlerProvider.class);
//Create the root handler
final PathHandler rootHandler = new PathHandler();
rootHandler.addPrefixPath("/", handler);
//Iterate the providers and add to the root handler
for (Map.Entry<String, UndertowReactiveHandlerProvider> p : providers.entrySet())
{
final UndertowReactiveHandlerProvider provider = p.getValue();
//Append the HttpHandler to the root
rootHandler.addPrefixPath(
provider.getPath(),
provider.get());
}
//Return the root handler
return rootHandler;
});
}
}
5) Under META-INF/services, create an "io.undertow.servlet.ServletExtension" file containing:
com.mypackage.NonBlockingHandlerExtension
6) Create a SpringBoot AutoConfiguration that loads the post processor if Undertow is on the classpath.
@Configuration
@ConditionalOnClass(Undertow.class)
public class UndertowAutoConfiguration
{
@Bean
BeanFactoryPostProcessor nonBlockingHandlerFactoryPostProcessor()
{
return new NonBlockingHandlerFactoryPostProcessor();
}
}
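If this class is meant to be picked up as a true Spring Boot auto-configuration (rather than via component scanning), it would also need an entry in META-INF/spring.factories; this is an assumption about the project setup, with the package name taken from the ServletExtension entry above:
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.mypackage.UndertowAutoConfiguration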
7) Annotate any RouterFunctions that I want to map to an UndertowHandler.
@Bean
@ReactiveHandler("/api/rx/service")
RouterFunction<ServerResponse> routeTracking(TrackingHandler handler)
{
RouterFunction<ServerResponse> route = RouterFunctions
.nest(path("/api/rx/service"), route(
GET("/{cid}.gif"), handler::trackGif).andRoute(
GET("/{cid}"), handler::trackAll));
return route;
}
With this I can call my REST services (and Shiro works with them), use Swagger2 with my REST services, and call my Reactive services (and they do not use Shiro) in the same SpringBoot application.
In my logs, the REST call shows Undertow using the blocking (task-#) handler. The Reactive call shows Undertow using the non-blocking (I/O-# and nioEventLoopGroup) handlers.

Netty: What is the right way to share NioClientSocketChannelFactory among multiple Netty Clients

I am new to Netty. I am using “Netty 3.6.2.Final”. I have created a Netty Client (MyClient) that talks to a remote server (The server implements a custom protocol based on TCP). I create a new ClientBootstrap instance for each MyClient instance (within the constructor).
My question is if I share “NioClientSocketChannelFactory” factory object among all the instances of MyClient then when/how do I release all the resources associated with the “NioClientSocketChannelFactory”?
In other words, since my Netty Client runs inside a JBoss container running 24x7, should I release all resources by calling “bootstrap.releaseExternalResources();”, and when/where should I do so?
More info: My Netty Client is called in two scenarios inside the JBoss container. First, in an infinite for loop, each time passing the string that needs to be sent to the remote server (in effect, similar to the code below):
for( ; ; ){
//Prepare the stringToSend
//Send a string and receive a string
String returnedString=new MyClient().handle(stringToSend);
}
The other scenario is that my Netty Client is called from concurrent threads, with each thread calling “new MyClient().handle(stringToSend);”.
I have given the skeleton code below. It is very similar to the TelnetClient example at Netty website.
MyClient
import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
public class MyClient {
//Instantiate this only once per application
private final static Timer timer = new HashedWheelTimer();
//All below must come from configuration
private final String host ="127.0.0.1";
private final int port =9699;
private final InetSocketAddress address = new InetSocketAddress(host, port);
private ClientBootstrap bootstrap;
//The channel for the current connection; assigned in handle()
private Channel channel;
//Timeout when the server sends nothing for n seconds.
static final int READ_TIMEOUT = 5;
public MyClient(){
bootstrap = new ClientBootstrap(NioClientSocketFactorySingleton.getInstance());
}
public String handle(String messageToSend){
bootstrap.setOption("connectTimeoutMillis", 20000);
bootstrap.setOption("tcpNoDelay", true);
bootstrap.setOption("keepAlive", true);
bootstrap.setOption("remoteAddress", address);
bootstrap.setPipelineFactory(new MyClientPipelineFactory(messageToSend,bootstrap,timer));
// Start the connection attempt.
ChannelFuture future = bootstrap.connect();
// Wait until the connection attempt succeeds or fails.
channel = future.awaitUninterruptibly().getChannel();
if (!future.isSuccess()) {
return null;
}
// Wait until the connection is closed or the connection attempt fails.
channel.getCloseFuture().awaitUninterruptibly();
MyClientHandler myClientHandler=(MyClientHandler)channel.getPipeline().getLast();
String messageReceived=myClientHandler.getMessageReceived();
return messageReceived;
}
}
Singleton NioClientSocketChannelFactory
public class NioClientSocketFactorySingleton {
private static NioClientSocketChannelFactory nioClientSocketChannelFactory;
private NioClientSocketFactorySingleton() {
}
public static synchronized NioClientSocketChannelFactory getInstance() {
if ( nioClientSocketChannelFactory == null) {
nioClientSocketChannelFactory=new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(),
Executors.newCachedThreadPool());
}
return nioClientSocketChannelFactory;
}
protected void finalize() throws Throwable {
try{
if(nioClientSocketChannelFactory!=null){
// Shut down thread pools to exit.
nioClientSocketChannelFactory.releaseExternalResources();
}
}catch(Exception e){
//Can't do anything much
}
}
}
MyClientPipelineFactory
public class MyClientPipelineFactory implements ChannelPipelineFactory {
private String messageToSend;
private ClientBootstrap bootstrap;
private Timer timer;
public MyClientPipelineFactory(){
}
public MyClientPipelineFactory(String messageToSend){
this.messageToSend=messageToSend;
}
public MyClientPipelineFactory(String messageToSend,ClientBootstrap bootstrap, Timer timer){
this.messageToSend=messageToSend;
this.bootstrap=bootstrap;
this.timer=timer;
}
public ChannelPipeline getPipeline() throws Exception {
// Create a default pipeline implementation.
ChannelPipeline pipeline = pipeline();
// Add the text line codec combination first,
//pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
pipeline.addLast("decoder", new StringDecoder());
pipeline.addLast("encoder", new StringEncoder());
//Add readtimeout
pipeline.addLast("timeout", new ReadTimeoutHandler(timer, MyClient.READ_TIMEOUT));
// and then business logic.
pipeline.addLast("handler", new MyClientHandler(messageToSend,bootstrap));
return pipeline;
}
}
MyClientHandler
public class MyClientHandler extends SimpleChannelUpstreamHandler {
private String messageToSend="";
private String messageReceived="";
public MyClientHandler(String messageToSend,ClientBootstrap bootstrap) {
this.messageToSend=messageToSend;
this.bootstrap=bootstrap;
}
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e){
e.getChannel().write(messageToSend);
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e){
messageReceived=e.getMessage().toString();
//This take the control back to the MyClient
e.getChannel().close();
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
// Close the connection when an exception is raised.
e.getChannel().close();
}
}
You should only call releaseExternalResources() once you are sure you no longer need it. That may be, for example, when the application is stopped or undeployed.
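Since the client runs inside a JBoss container, one hedged way to do that (a sketch, not part of the original answer, assuming the client lives in a web deployment) is a ServletContextListener that releases the shared factory when the application is undeployed, reusing the NioClientSocketFactorySingleton shown in the question:
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
public class NettyResourceListener implements ServletContextListener {
@Override
public void contextInitialized(ServletContextEvent sce) {
//Nothing to do here; the factory is created lazily on first use.
}
@Override
public void contextDestroyed(ServletContextEvent sce) {
//Shut down the shared boss/worker thread pools exactly once, on undeploy.
NioClientSocketFactorySingleton.getInstance().releaseExternalResources();
}
}
The listener would be registered in web.xml (or via @WebListener on Servlet 3.0+), so the pools are released only when the deployment actually goes away, not after each request.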

how to load class from jar inside equinox server side application in jboss 7

I've been facing a problem for a few days and can't find a solution. Below is my app structure:
I have ejbapp.jar inside MyearDeployedOnJboss7.ear, at the same level as equinox-server-side-app.war (built using a WAR product), and I want to load classes from MyJarToLaoadForEjbapp.jar, which is inside iModernizeWebClient_1.0.0.jar in the plugins folder of equinox-server-side-app.war. (I wanted to include an image of the app structure, but I cannot post images because the forum rules require 10 reputation for that.)
My question is how to allow ejbapp.jar to load classes from "MyJarToLaoadForEjbapp.jar" inside MyWebClient_1.0.0.jar's plugins folder, which is in equinox-server-side-app.war.
I think the servletbridge classloader could be used, but I have no idea how.
In my launch.ini I have:
osgi.*=@null
org.osgi.*=@null
eclipse.*=@null
osgi.parentClassloader=app
osgi.contextClassLoaderParent=app
I resolved my problem using a servlet registered through an HttpServiceTracker, per the OSGi spec. Here is how to do it. Write an HttpServiceTracker like this:
public class HttpServiceTracker extends ServiceTracker {
private static final Logger logger = Logger
.getLogger(HttpServiceTracker.class.getName());
public HttpServiceTracker(BundleContext context) {
super(context, HttpService.class.getName(), null);
}
public Object addingService(ServiceReference reference) {
HttpService httpService = (HttpService) context.getService(reference);
logger.info("default context path : "
+ org.eclipse.rap.ui.internal.servlet.HttpServiceTracker.ID_HTTP_CONTEXT);
try {
logger.info("will register servlet ");
httpService.registerServlet("/programLauncherServlet",
new ProgramLauncherServlet(), null, null);
logger.info("servlet has been registred with http context ");
// httpService.registerResources( "/", "/html", null );
} catch (Exception e) {
//e.printStackTrace();
logger.info("The alias '/programLauncherServlet' is already in use");
}
return httpService;
}
public void removedService(ServiceReference reference, Object service) {
logger.info("will unregister servlet ");
HttpService httpService = (HttpService) service;
httpService.unregister("/programLauncher");
super.removedService(reference, service);
logger.info("servlet has been unregistred");
}
In your plugin Activator class, in the start method:
@Override
public void start(BundleContext context) throws Exception {
super.start(context);
Activator.plugin = this;
BundleContext osgiContext = BundleReference.class
.cast(AnyClassOfYourProject.class.getClassLoader()).getBundle()
.getBundleContext();
serviceTracker = new HttpServiceTracker(osgiContext);
serviceTracker.open();
LOGGER.info("servlet published !!");
LOGGER.info("Bundle started.");
}
And to unregister the servlet, in the stop method:
public void stop(BundleContext context) throws Exception {
Activator.plugin = null;
serviceTracker.close();
serviceTracker = null;
LOGGER.info("servlet unregistered from context !!");
super.stop(context);
}
That's all. Your servlet is accessible outside your Eclipse bundle, and through it you can call methods inside the bundle.
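For completeness, a hedged sketch (not from the original answer) of how code in ejbapp.jar could then invoke the registered servlet over HTTP instead of loading the bundle's classes directly; the host, port, and context path are assumptions that depend on your deployment:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
public class ProgramLauncherClient {
public static String callBundleServlet() throws Exception {
//Context path and port are assumptions; adjust to your deployment.
URL url = new URL("http://localhost:8080/equinox-server-side-app/programLauncherServlet");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("GET");
StringBuilder body = new StringBuilder();
try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
String line;
while ((line = in.readLine()) != null) {
body.append(line);
}
}
return body.toString(); //whatever the servlet produced from inside the bundle
}
}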

How to: Async Callbacks using Netty with Avro

I'm trying to implement asynchronous Avro calls by using its NettyServer implementation. After digging through the source code, I found an example of how to use NettyServer in TestNettyServerWithCallbacks.java.
When running a few tests, I realized that NettyServer never calls the hello(Callback) method; instead it keeps calling the synchronous hello() method. The client program prints out "Hello" but I'm expecting "Hello-ASYNC" as the result. I really have no clue what's going on.
I hope someone can shed some light on this and perhaps point out my mistake. Below is the code I use to perform a simple asynchronous Avro test.
AvroClient.java - Client code.
public class AvroClient {
public static void main(String[] args) throws InterruptedException, ExecutionException, TimeoutException {
try {
NettyTransceiver transceiver = new NettyTransceiver(new InetSocketAddress(6666));
Chat.Callback client = SpecificRequestor.getClient(Chat.Callback.class, transceiver);
final CallFuture<CharSequence> future1 = new CallFuture<CharSequence>();
client.hello(future1);
System.out.println(future1.get());
transceiver.close();
} catch (IOException ex) {
System.err.println(ex);
}
}
}
AvroNetty.java - The Server Code
public class AvroNetty {
public static void main(String[] args) {
Index indexImpl = new AsyncIndexImpl();
Chat chatImpl = new ChatImpl();
Server server = new NettyServer(new SpecificResponder(Chat.class, chatImpl), new InetSocketAddress(6666));
server.start();
System.out.println("Server is listening at port " + server.getPort());
}
}
ChatImpl.java
public class ChatImpl implements Chat.Callback {
@Override
public void hello(org.apache.avro.ipc.Callback<CharSequence> callback) throws IOException {
callback.handleResult("Hello-ASYNC");
}
@Override
public CharSequence hello() throws AvroRemoteException {
return new Utf8("Hello");
}
}
This interface is auto-generated by avro-tool
Chat.java
#SuppressWarnings("all")
public interface Chat {
public static final org.apache.avro.Protocol PROTOCOL = org.apache.avro.Protocol.parse("{\"protocol\":\"Chat\",\"namespace\":\"avro.test\",\"types\":[],\"messages\":{\"hello\":{\"request\":[],\"response\":\"string\"}}}");
java.lang.CharSequence hello() throws org.apache.avro.AvroRemoteException;
#SuppressWarnings("all")
public interface Callback extends Chat {
public static final org.apache.avro.Protocol PROTOCOL = avro.test.Chat.PROTOCOL;
void hello(org.apache.avro.ipc.Callback<java.lang.CharSequence> callback) throws java.io.IOException;
}
}
Here is the Avro Schema
{
"namespace": "avro.test",
"protocol": "Chat",
"types" : [],
"messages": {
"hello": {
"request": [],
"response": "string"
}
}
}
The NettyServer implementation actually doesn't implement the async style at all. It is a deficiency in the library. Instead, you need to specify an asynchronous execution handler rather than trying to chain services together through callbacks. Here is what I use to set up my NettyServer to allow for this:
ExecutorService es = Executors.newCachedThreadPool();
OrderedMemoryAwareThreadPoolExecutor executor = new OrderedMemoryAwareThreadPoolExecutor(Runtime.getRuntime().availableProcessors(), 0, 0);
ExecutionHandler executionHandler = new ExecutionHandler(executor);
final NettyServer server = new NettyServer(responder, addr, new NioServerSocketChannelFactory(es, es), executionHandler);
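For reference, responder and addr are not shown above; a minimal sketch of how they could be wired up, reusing the Chat classes and port 6666 from the question (this wiring is an assumption, not part of the original answer):
Responder responder = new SpecificResponder(Chat.class, new ChatImpl());
InetSocketAddress addr = new InetSocketAddress(6666);
With the ExecutionHandler in place, the responder logic runs on the OrderedMemoryAwareThreadPoolExecutor's threads instead of the Netty I/O threads, which is what makes long-running handling safe here.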

Wicket and responding with "not HTML" to requests

I'm sure this has been answered somewhere else, but I don't know where.
I need to respond to HTTP requests from a partner in our Wicket website. The partner expects the response body to say "OK", or anything else in the case of an error.
Is there a "nice" way to do this? ... or am I going to be stuck adding a servlet to my (previously) pretty Wicket application?
You can use resources for that:
class OkResource implements IResource {
@Override
public void respond(Attributes attributes) {
WebResponse resp = (WebResponse) attributes.getResponse();
resp.setContentType("text/plain");
resp.write("OK");
}
}
And register it in your Application class:
@Override
protected void init() {
super.init();
getSharedResources().add("confirm", new OkResource());
mountResource("confirm", new SharedResourceReference("confirm"));
}
so that it can be accessed through something like http://host/app/confirm.
Just note that you are registering a single instance of the resource here, so it must be thread-safe, since multiple requests can call it simultaneously.
[EDIT]
In Wicket 1.4:
class OkResource extends Resource {
@Override
public IResourceStream getResourceStream() {
return new StringResourceStream("ok", "text/plain");
}
}
@Override
protected void init() {
super.init();
getSharedResources().add("confirm", new OkResource());
mountSharedResource("confirm", "confirm");
}
