Wicket and responding with "not HTML" to requests - servlets

I'm sure this has been answered somewhere else, but I don't know where.
I need to respond to HTTP requests from a partner in our Wicket website. The partner expects the response body to say "OK", or anything else in the case of an error.
Is there a "nice" way to do this? ... or am I going to be stuck adding a servlet to my (previously) pretty Wicket application?

You can use resources for that:
class OkResource implements IResource {

    @Override
    public void respond(Attributes attributes) {
        WebResponse resp = (WebResponse) attributes.getResponse();
        resp.setContentType("text/plain");
        resp.write("OK");
    }
}
And register it in your Application class:
@Override
protected void init() {
    super.init();
    getSharedResources().add("confirm", new OkResource());
    mountResource("confirm", new SharedResourceReference("confirm"));
}
so that it can be accessed through something like http://host/app/confirm.
Just observe that here you are registering a single instance of the resource, so it must be thread-safe, since multiple requests can call it simultaneously.
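If the partner should also see failures, a minimal variant of the same resource could look like the sketch below; processPartnerRequest is a hypothetical placeholder for whatever processing the callback triggers, and any exception is reported as a plain-text error with an HTTP 500 status:

class PartnerConfirmResource implements IResource {

    @Override
    public void respond(Attributes attributes) {
        WebResponse resp = (WebResponse) attributes.getResponse();
        resp.setContentType("text/plain");
        try {
            processPartnerRequest(attributes.getParameters()); // hypothetical business call
            resp.write("OK");
        } catch (Exception e) {
            // anything other than "OK" signals an error to the partner
            resp.setStatus(500);
            resp.write("ERROR: " + e.getMessage());
        }
    }

    private void processPartnerRequest(PageParameters parameters) {
        // placeholder for validating/handling the partner's request parameters
    }
}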
[EDIT]
In Wicket 1.4:
class OkResource extends Resource {

    @Override
    public IResourceStream getResourceStream() {
        return new StringResourceStream("ok", "text/plain");
    }
}
@Override
protected void init() {
    super.init();
    getSharedResources().add("confirm", new OkResource());
    mountSharedResource("confirm", "confirm");
}

Related

How can a native Servlet Filter be used when using Spark web framework?

I'm playing around with Spark (the Java web framework, not Apache Spark).
I find it really nice and easy to define routes and filters; however, I'm looking to apply a native servlet filter to my routes and can't seem to find a way to do that.
More specifically, I would like to use Jetty's DoSFilter, which is a servlet filter (in contrast with the Spark Filter definition). Since Spark uses embedded Jetty, I don't have a web.xml in which to register the DoSFilter. However, Spark doesn't expose the server instance, so I can't find an elegant way of registering the filter programmatically either.
Is there a way to apply a native servlet filter to my routes?
I thought of wrapping the DoSFilter in my own Spark Filter, but it seemed like a weird idea.
You can do it like this:
public class App {

    private static Logger LOG = LoggerFactory.getLogger(App.class);

    public static void main(String[] args) throws Exception {
        ServletContextHandler mainHandler = new ServletContextHandler();
        mainHandler.setContextPath("/base/path");

        Stream.of(
            new FilterHolder(new MyServletFilter()),
            new FilterHolder(new SparkFilter()) {{
                this.setInitParameter("applicationClass", SparkApp.class.getName());
            }}
        ).forEach(h -> mainHandler.addFilter(h, "*", null));

        GzipHandler compression = new GzipHandler();
        compression.setIncludedMethods("GET");
        compression.setMinGzipSize(512);
        compression.setHandler(mainHandler);

        Server server = new Server(new ExecutorThreadPool(new ThreadPoolExecutor(10, 200, 60000, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(200),
                new CustomizableThreadFactory("jetty-pool-"))));

        final ServerConnector serverConnector = new ServerConnector(server);
        serverConnector.setPort(9290);
        server.setConnectors(new Connector[] { serverConnector });
        server.setHandler(compression);

        server.start();
        hookToShutdownEvents(server);
        server.join();
    }

    private static void hookToShutdownEvents(final Server server) {
        LOG.debug("Hooking to JVM shutdown events");

        server.addLifeCycleListener(new AbstractLifeCycle.AbstractLifeCycleListener() {
            @Override
            public void lifeCycleStopped(LifeCycle event) {
                LOG.info("Jetty Server has been stopped");
                super.lifeCycleStopped(event);
            }
        });

        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                LOG.info("About to stop Jetty Server due to JVM shutdown");
                try {
                    server.stop();
                } catch (Exception e) {
                    LOG.error("Could not stop Jetty Server properly", e);
                }
            }
        });
    }

    /**
     * @implNote {@link SparkFilter} needs to access a public class
     */
    @SuppressWarnings("WeakerAccess")
    public static class SparkApp implements SparkApplication {
        @Override
        public void init() {
            System.setProperty("spring.profiles.active", ApplicationProfile.readProfilesOrDefault("dev").stream().collect(Collectors.joining()));
            AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(ModocContext.class);
            ctx.registerShutdownHook();
        }
    }
}
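To tie this back to the original question about DoSFilter, a small helper like the one below could be called from main() right after mainHandler is created. This is only a sketch: DoSFilter comes from the jetty-servlets artifact, "maxRequestsPerSec" and "delayMs" are documented DoSFilter init parameters, and the values shown are placeholders rather than recommendations.

private static void addDoSFilter(ServletContextHandler mainHandler) {
    // import org.eclipse.jetty.servlets.DoSFilter; (jetty-servlets artifact)
    FilterHolder dosFilterHolder = new FilterHolder(new DoSFilter());
    dosFilterHolder.setInitParameter("maxRequestsPerSec", "25"); // placeholder limit
    dosFilterHolder.setInitParameter("delayMs", "100");          // placeholder delay
    mainHandler.addFilter(dosFilterHolder, "*", null);
}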

How to disable Redis Caching at run time if redis connection failed

We have a REST API application. We use Redis for API response caching and for internal method caching. If the Redis connection fails, it takes our API down. We want to bypass Redis caching when the connection fails (or on any other Redis exception) instead of letting it take the API down.
There is an interface, CacheErrorHandler, but it handles Redis get/set operation failures, not Redis connection problems. We are using Spring 4.1.2.
Let's boil this down a bit. Your application uses caching (implemented with Redis). If the Redis connection is stale/closed or otherwise unavailable, then you want the application to bypass caching and (presumably) go directly to the underlying data store (e.g. an RDBMS). The application Service logic might look similar to...
@Service
class CustomerService ... {

    @Autowired
    private CustomerRepository customerRepo;

    protected CustomerRepository getCustomerRepo() {
        Assert.notNull(customerRepo, "The CustomerRepository was not initialized!");
        return customerRepo;
    }

    @Cacheable(value = "Customers")
    public Customer getCustomer(Long customerId) {
        return getCustomerRepo().load(customerId);
    }

    ...
}
All that matters to Spring core's Caching Abstraction in order to ascertain a cache "miss" is that the value returned is null. Spring's Caching Infrastructure will then proceed to call the actual Service method (i.e. getCustomer). Keep in mind that on the return of the getCustomerRepo().load(customerId) call, you also need to handle the case where Spring's Caching Infrastructure attempts to cache the value.
In the spirit of keeping it simple, we will do without AOP, but you should be able to achieve this using AOP as well (your choice).
All you (should) need is a "custom" RedisCacheManager extending the SDR CacheManager implementation, something like...
package example;

import org.springframework.cache.Cache;
import org.springframework.data.redis.cache.RedisCacheManager;
...

class MyCustomRedisCacheManager extends RedisCacheManager {

    public MyCustomRedisCacheManager(RedisTemplate redisTemplate) {
        super(redisTemplate);
    }

    @Override
    public Cache getCache(String name) {
        return new RedisCacheWrapper(super.getCache(name));
    }

    protected static class RedisCacheWrapper implements Cache {

        private final Cache delegate;

        public RedisCacheWrapper(Cache redisCache) {
            Assert.notNull(redisCache, "'delegate' must not be null");
            this.delegate = redisCache;
        }

        @Override
        public Cache.ValueWrapper get(Object key) {
            try {
                return delegate.get(key);
            }
            catch (Exception e) {
                return handleErrors(e);
            }
        }

        @Override
        public void put(Object key, Object value) {
            try {
                delegate.put(key, value);
            }
            catch (Exception e) {
                handleErrors(e);
            }
        }

        // implement clear(), evict(key), get(key, type), getName(), getNativeCache(), putIfAbsent(key, value) accordingly (delegating to the delegate).

        protected <T> T handleErrors(Exception e) throws Exception {
            if (e instanceof <some RedisConnection Exception type>) {
                // log the connection problem
                return null;
            }
            else if (<something different>) { // act appropriately }
            ...
            else {
                throw e;
            }
        }
    }
}
So, if Redis is unavailable, perhaps the best you can do is log the problem and proceed to let the Service invocation happen. Clearly, this will hamper performance but at least it will raise awareness that a problem exists. Clearly, this could be tied into a more robust notification system, but it is a crude example of the possibilities. The important thing is, your Service remains available while the other services (e.g. Redis) that the application service depends on, may have failed.
In this implementation (vs. my previous explanation) I chose to delegate to the underlying, actual RedisCache implementation and let the Exception occur, knowing full well that a problem with Redis exists, so that you can deal with the Exception appropriately. However, if upon inspection you are certain that the Exception is related to a connection problem, you can return "null" to let Spring's Caching Infrastructure proceed as if it were a cache "miss" (i.e. a bad Redis connection == cache miss, in this case).
I know something like this should help your problem, as I built a similar prototype of a "custom" CacheManager implementation for GemFire for one of Pivotal's customers. In that particular use case, the cache "miss" had to be triggered by an "out-of-date version" of the application domain object, where production had a mix of newer and older application clients connecting to GemFire through Spring's Caching Abstraction. The application domain object fields would change in newer versions of the app, for instance.
Anyway, hope this helps or gives you more ideas.
Cheers!
So, I was digging through the core Spring Framework Caching Abstraction source today while addressing another question, and it seems that if a CacheErrorHandler is implemented properly, then a problematic Redis connection could still result in the desired behavior, i.e. a cache "miss" (triggered by the return of a null value).
See the AbstractCacheInvoker source for more details.
The cache.get(key) should result in an exception due to a faulty Redis connection, and thus the Exception handler would be invoked...
catch (RuntimeException e) {
    getErrorHandler().handleCacheGetError(e, cache, key);
    return null; // If the exception is handled, return a cache miss
}
If the CacheErrorHandler properly handles the Cache "get" error (and does not re-throw the/an Exception), then a null value will be returned indicating a cache "miss".
Thank you @John Blum. My solution in Spring Boot is as follows.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cache.Cache;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.core.RedisOperations;
import org.springframework.util.Assert;
import java.util.concurrent.Callable;
class CustomRedisCacheManager extends RedisCacheManager {

    private static Logger logger = LoggerFactory.getLogger(CustomRedisCacheManager.class);

    public CustomRedisCacheManager(RedisOperations redisOperations) {
        super(redisOperations);
    }

    @Override
    public Cache getCache(String name) {
        return new RedisCacheWrapper(super.getCache(name));
    }

    protected static class RedisCacheWrapper implements Cache {

        private final Cache delegate;

        public RedisCacheWrapper(Cache redisCache) {
            Assert.notNull(redisCache, "delegate cache must not be null");
            this.delegate = redisCache;
        }

        @Override
        public String getName() {
            try {
                return delegate.getName();
            } catch (Exception e) {
                return handleException(e);
            }
        }

        @Override
        public Object getNativeCache() {
            try {
                return delegate.getNativeCache();
            } catch (Exception e) {
                return handleException(e);
            }
        }

        @Override
        public Cache.ValueWrapper get(Object key) {
            try {
                return delegate.get(key);
            } catch (Exception e) {
                return handleException(e);
            }
        }

        @Override
        public <T> T get(Object o, Class<T> aClass) {
            try {
                return delegate.get(o, aClass);
            } catch (Exception e) {
                return handleException(e);
            }
        }

        @Override
        public <T> T get(Object o, Callable<T> callable) {
            try {
                return delegate.get(o, callable);
            } catch (Exception e) {
                return handleException(e);
            }
        }

        @Override
        public void put(Object key, Object value) {
            try {
                delegate.put(key, value);
            } catch (Exception e) {
                handleException(e);
            }
        }

        @Override
        public ValueWrapper putIfAbsent(Object o, Object o1) {
            try {
                return delegate.putIfAbsent(o, o1);
            } catch (Exception e) {
                return handleException(e);
            }
        }

        @Override
        public void evict(Object o) {
            try {
                delegate.evict(o);
            } catch (Exception e) {
                handleException(e);
            }
        }

        @Override
        public void clear() {
            try {
                delegate.clear();
            } catch (Exception e) {
                handleException(e);
            }
        }

        private <T> T handleException(Exception e) {
            logger.error("handleException", e);
            return null;
        }
    }
}
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.core.RedisTemplate;
@Configuration
public class RedisConfig {

    @Bean
    public RedisCacheManager redisCacheManager(RedisTemplate redisTemplate) {
        CustomRedisCacheManager redisCacheManager = new CustomRedisCacheManager(redisTemplate);
        redisCacheManager.setUsePrefix(true);
        return redisCacheManager;
    }
}
Actually, my response is directed to Mr. @Vivek Aditya. I faced the same problem: with the new spring-data-redis API, the RedisCacheManager is no longer constructed from a RedisTemplate. The only option, based on @John Blum's suggestions, was to use aspects. Below is my code.
@Aspect
@Component
public class FailoverRedisCacheAspect {

    private static class FailoverRedisCache extends RedisCache {

        protected FailoverRedisCache(RedisCache redisCache) {
            super(redisCache.getName(), redisCache.getNativeCache(), redisCache.getCacheConfiguration());
        }

        @Override
        public <T> T get(Object key, Callable<T> valueLoader) {
            try {
                return super.get(key, valueLoader);
            } catch (RuntimeException ex) {
                return valueFromLoader(key, valueLoader);
            }
        }

        private <T> T valueFromLoader(Object key, Callable<T> valueLoader) {
            try {
                return valueLoader.call();
            } catch (Exception e) {
                throw new ValueRetrievalException(key, valueLoader, e);
            }
        }
    }

    @Around("execution(* org.springframework.cache.support.AbstractCacheManager.getCache(..))")
    public Cache beforeSampleCreation(ProceedingJoinPoint proceedingJoinPoint) {
        try {
            Cache cache = (Cache) proceedingJoinPoint.proceed(proceedingJoinPoint.getArgs());
            if (cache instanceof RedisCache) {
                return new FailoverRedisCache((RedisCache) cache);
            } else {
                return cache;
            }
        } catch (Throwable ex) {
            return null;
        }
    }
}
works fine for all reasonable scenarios:
app starts fine with redis down
app (still) works during (sudden) redis outage
when redis starts working again, app sees it
Edit: the code is more of a proof of concept - it only covers "get", and I don't like re-instantiating FailoverRedisCache on every single cache hit - there should be a map; a sketch of that follows below.
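A minimal sketch of that map, assuming the same FailoverRedisCacheAspect as above: keep the wrappers in a ConcurrentHashMap (java.util.concurrent) keyed by cache name, so each underlying RedisCache is wrapped only once and the wrapper is reused on subsequent cache hits.

private final ConcurrentMap<String, FailoverRedisCache> wrappers = new ConcurrentHashMap<>();

@Around("execution(* org.springframework.cache.support.AbstractCacheManager.getCache(..))")
public Cache aroundGetCache(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
    Cache cache = (Cache) proceedingJoinPoint.proceed(proceedingJoinPoint.getArgs());
    if (cache instanceof RedisCache) {
        // computeIfAbsent wraps each RedisCache exactly once and reuses the wrapper afterwards
        return wrappers.computeIfAbsent(cache.getName(),
                name -> new FailoverRedisCache((RedisCache) cache));
    }
    return cache;
}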
None of the above worked for us when using Spring Boot 2.3.9.RELEASE with Redis. We ended up creating and registering our own customized CacheErrorHandler, named CustomCacheErrorHandler, to override the default SimpleCacheErrorHandler provided by the Spring Framework. This works perfectly.
@Configuration
public class CachingConfiguration extends CachingConfigurerSupport {
    @Override
    public CacheErrorHandler errorHandler() {
        return new CustomCacheErrorHandler();
    }
}

class CustomCacheErrorHandler implements CacheErrorHandler {

    Logger log = Logger.get(CustomCacheErrorHandler.class);

    @Override
    public void handleCacheGetError(RuntimeException e, Cache cache, Object o) {
        log.error(e.getMessage(), e);
    }

    @Override
    public void handleCachePutError(RuntimeException e, Cache cache, Object o, Object o1) {
        log.error(e.getMessage(), e);
    }

    @Override
    public void handleCacheEvictError(RuntimeException e, Cache cache, Object o) {
        log.error(e.getMessage(), e);
    }

    @Override
    public void handleCacheClearError(RuntimeException e, Cache cache) {
        log.error(e.getMessage(), e);
    }
}
I had the same problem but, unfortunately, none of the above solutions worked for me. I investigated and found that the executed command never timed out when there was no connection to Redis, so I started studying the Lettuce library for a solution. I solved the problem by rejecting commands when there is no connection:
@Bean
public LettuceConnectionFactory lettuceConnectionFactory()
{
    final SocketOptions socketOptions = SocketOptions.builder().connectTimeout(Duration.ofSeconds(10)).build();

    ClientOptions clientOptions = ClientOptions.builder()
            .socketOptions(socketOptions)
            .autoReconnect(true)
            .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
            .build();

    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .commandTimeout(Duration.ofSeconds(10))
            .clientOptions(clientOptions).build();

    RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration(this.host, this.port);

    return new LettuceConnectionFactory(redisStandaloneConfiguration, clientConfig);
}
All the core Spring Framework Cache abstraction annotations (e.g. @Cacheable), along with the JSR-107 JCache annotations supported by the core Spring Framework, delegate to the underlying CacheManager under the hood, and for Redis that is the RedisCacheManager.
You would configure the RedisCacheManager in Spring XML configuration meta-data similar to here.
One approach would be to write an AOP Proxy for the (Redis)CacheManager that uses the RedisConnection (indirectly from the RedisTemplate) to ascertain the state of the connection on each (Redis)CacheManger operation.
If the connection has failed, or is closed, for standard cache ops, the (Redis)CacheManager could return an instance of RedisCache for getCache(String name) that always returns null (indicating a Cache miss on an entry), thus passing through to the underlying data store.
There may be better ways to handle this, as I am not an expert on all things Redis (or SDR), but this should work and perhaps give you a few ideas of your own.
Cheers.
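For what it's worth, a minimal sketch of the pass-through idea described above, written as a delegating CacheManager rather than a RedisCache subclass. It is only a sketch under stated assumptions: it assumes the Spring 4.3+ Cache interface (which includes get(key, valueLoader)), and it pings Redis on every getCache(..) call, which costs a round trip you would probably want to throttle or cache in a real application.

import java.util.Collection;
import java.util.concurrent.Callable;

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.connection.RedisConnectionFactory;

// Decorates any CacheManager (e.g. RedisCacheManager); when the Redis PING fails,
// getCache(..) hands back a cache that always misses, so calls fall through to the data store.
class ConnectionAwareCacheManager implements CacheManager {

    private final CacheManager delegate;
    private final RedisConnectionFactory connectionFactory;

    ConnectionAwareCacheManager(CacheManager delegate, RedisConnectionFactory connectionFactory) {
        this.delegate = delegate;
        this.connectionFactory = connectionFactory;
    }

    @Override
    public Cache getCache(String name) {
        return redisIsAvailable() ? delegate.getCache(name) : new AlwaysMissCache(name);
    }

    @Override
    public Collection<String> getCacheNames() {
        return delegate.getCacheNames();
    }

    private boolean redisIsAvailable() {
        RedisConnection connection = null;
        try {
            connection = connectionFactory.getConnection();
            connection.ping();
            return true;
        } catch (Exception e) {
            return false; // connection problem: behave as if there is no cache
        } finally {
            if (connection != null) {
                try {
                    connection.close();
                } catch (Exception ignored) {
                    // only the PING result matters here
                }
            }
        }
    }

    // A Cache that never stores anything: every read is a miss, every write is a no-op.
    private static class AlwaysMissCache implements Cache {

        private final String name;

        AlwaysMissCache(String name) {
            this.name = name;
        }

        @Override public String getName() { return name; }
        @Override public Object getNativeCache() { return this; }
        @Override public ValueWrapper get(Object key) { return null; }
        @Override public <T> T get(Object key, Class<T> type) { return null; }

        @Override
        public <T> T get(Object key, Callable<T> valueLoader) {
            try {
                return valueLoader.call();
            } catch (Exception e) {
                throw new ValueRetrievalException(key, valueLoader, e);
            }
        }

        @Override public void put(Object key, Object value) { }
        @Override public ValueWrapper putIfAbsent(Object key, Object value) { return null; }
        @Override public void evict(Object key) { }
        @Override public void clear() { }
    }
}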
You can use CacheErrorHandler, but you should make sure to set transactionAware to false on the RedisCacheManager in your Redis cache config (so that the caching part is executed, and any error is caught by the CacheErrorHandler, immediately rather than at the end of the transaction, which would skip the CacheErrorHandler). The configuration that sets transactionAware to false looks like this:
@Bean
public RedisCacheManager redisCacheManager(LettuceConnectionFactory lettuceConnectionFactory) {
    JdkSerializationRedisSerializer redisSerializer = new JdkSerializationRedisSerializer(getClass().getClassLoader());

    RedisCacheConfiguration redisCacheConfiguration = RedisCacheConfiguration.defaultCacheConfig()
            .entryTtl(Duration.ofHours(redisDataTTL))
            .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(redisSerializer));
    redisCacheConfiguration.usePrefix();

    RedisCacheManager redisCacheManager = RedisCacheManager.RedisCacheManagerBuilder.fromConnectionFactory(lettuceConnectionFactory)
            .cacheDefaults(redisCacheConfiguration)
            .build();
    redisCacheManager.setTransactionAware(false);
    return redisCacheManager;
}

Capture ALL WebAPI requests

I would like to capture and save to a log file all the requests that my Web API handles.
I just tried to save Request.Content from the controller constructor but, unfortunately,
the request object is null in the controller constructor's scope.
I hope to learn an efficient way to do this.
I would just hook into web api tracing...
http://www.asp.net/web-api/overview/testing-and-debugging/tracing-in-aspnet-web-api
From the above article, you can implement ITraceWriter like so. This example uses System.Diagnostics.Trace.WriteLine, but you could plug in writing to a file here as well.
public class SimpleTracer : ITraceWriter
{
    public void Trace(HttpRequestMessage request, string category, TraceLevel level,
        Action<TraceRecord> traceAction)
    {
        TraceRecord rec = new TraceRecord(request, category, level);
        traceAction(rec);
        WriteTrace(rec);
    }

    protected void WriteTrace(TraceRecord rec)
    {
        var message = string.Format("{0};{1};{2}",
            rec.Operator, rec.Operation, rec.Message);
        System.Diagnostics.Trace.WriteLine(message, rec.Category);
    }
}
As you can see from the Trace method, you get access to the HttpRequestMessage here.
I ended up implementing middleware to deal with it.
public class GlobalRequestLogger : OwinMiddleware
{
    public GlobalRequestLogger(OwinMiddleware next) : base(next) { }

    public override async Task Invoke(IOwinContext context)
    {
        // Implement logging code here, e.g. log context.Request.Method and context.Request.Uri
        await Next.Invoke(context);
    }
}
Then in your Startup.cs:
app.Use<GlobalRequestLogger>();

How to load a class from a jar inside an Equinox server-side application in JBoss 7

I've been facing a problem for a few days and can't find a solution. Below is my app structure:
I have ejbapp.jar inside MyearDeployedOnJboss7.ear, at the same level as equinox-server-side-app.war (built using a WAR product), and I want to load a class from MyJarToLaoadForEjbapp.jar, which is in iModernizeWebClient_1.0.0.jar, which is in the plugins folder of equinox-server-side-app.war. (I would show an image of the app structure, but I cannot post images because the forum rules require a score of 10 to do that.)
My question is how to allow ejbapp.jar to load classes from "MyJarToLaoadForEjbapp.jar" inside MyWebClient_1.0.0.jar's plugins folder, which is in equinox-server-side-app.war.
I think I should use the servletbridge classloader, but I have no idea how to use it.
In my launch.ini I have:
osgi.*=#null org.osgi.*=#null eclipse.*=#null osgi.parentClassloader=app osgi.contextClassLoaderParent=app
I resolved my problem using a servlet HttpServiceTracker from the OSGi spec. How to do it: write an HttpServiceTracker like this:
public class HttpServiceTracker extends ServiceTracker {

    private static final Logger logger = Logger
            .getLogger(HttpServiceTracker.class.getName());

    public HttpServiceTracker(BundleContext context) {
        super(context, HttpService.class.getName(), null);
    }

    public Object addingService(ServiceReference reference) {
        HttpService httpService = (HttpService) context.getService(reference);
        logger.info("default context path : "
                + org.eclipse.rap.ui.internal.servlet.HttpServiceTracker.ID_HTTP_CONTEXT);
        try {
            logger.info("will register servlet ");
            httpService.registerServlet("/programLauncherServlet",
                    new ProgramLauncherServlet(), null, null);
            logger.info("servlet has been registered with http context ");
            // httpService.registerResources( "/", "/html", null );
        } catch (Exception e) {
            // e.printStackTrace();
            logger.info("The alias '/programLauncherServlet' is already in use");
        }
        return httpService;
    }

    public void removedService(ServiceReference reference, Object service) {
        logger.info("will unregister servlet ");
        HttpService httpService = (HttpService) service;
        httpService.unregister("/programLauncherServlet");
        super.removedService(reference, service);
        logger.info("servlet has been unregistered");
    }
}
In your plugin Activator class, in the start method:
@Override
public void start(BundleContext context) throws Exception {
    super.start(context);
    Activator.plugin = this;

    BundleContext osgiContext = BundleReference.class
            .cast(AnyClassOfYourProject.class.getClassLoader()).getBundle()
            .getBundleContext();

    serviceTracker = new HttpServiceTracker(osgiContext);
    serviceTracker.open();

    LOGGER.info("servlet published !!");
    LOGGER.info("Bundle started.");
}
And to unregister the servlet, in the stop method:
public void stop(BundleContext context) throws Exception {
    Activator.plugin = null;

    serviceTracker.close();
    serviceTracker = null;

    LOGGER.info("servlet unregistered from context !!");
    super.stop(context);
}
That's all. Your servlet is accessible outside your Eclipse bundle, and you can call methods inside the bundle; a sketch of calling it from the EJB side follows below.
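For completeness, a minimal sketch of how code in ejbapp.jar could reach the bundle through that servlet over plain HTTP. The host, port and context path below are assumptions and depend on how the WAR is actually deployed on JBoss.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ProgramLauncherClient {

    // Assumed URL; adjust host, port and context path to your JBoss/WAR setup.
    private static final String SERVLET_URL =
            "http://localhost:8080/equinox-server-side-app/programLauncherServlet";

    public String callBundle() throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(SERVLET_URL).openConnection();
        connection.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            return body.toString(); // whatever the servlet inside the bundle wrote back
        } finally {
            connection.disconnect();
        }
    }
}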

Using HttpModules to modify the response sent to the client

I have two production websites that have similar content. One of these websites needs to be indexed by search engines and the other shouldn't. Is there a way of adding content to the response given to the client using the HttpModule?
In my case, I need the HttpModule to add content to the response sent to the client when the module is active on that particular web.
You'd probably want to handle the PreRequestHandlerExecute event of the application as it is run just before the IHttpHandler processes the page itself:
public class NoIndexHttpModule : IHttpModule
{
    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.PreRequestHandlerExecute += AttachNoIndexMeta;
    }

    private void AttachNoIndexMeta(object sender, EventArgs e)
    {
        var page = HttpContext.Current.CurrentHandler as Page;
        if (page != null && page.Header != null)
        {
            page.Header.Controls.Add(new LiteralControl("<meta name=\"robots\" content=\"noindex, follow\" />"));
        }
    }
}
The other way of doing it is to create your own Stream implementation and apply it through Response.Filter, but that's certainly trickier.
