Reset Redis cache expiry using Spring Data Redis

I have a requirement to reset the expiry time if a record is accessed before its initial expiry time. I am using the Spring Data Redis API to use Redis as a cache, with RedisCacheManager's setDefaultExpiration(5000) to set the default expiration. I am unable to find any solutions or documentation about resetting the expiry time. Any guidance is appreciated.
Also, I am wondering why this couldn't be a natural feature of a Redis cache; after all, it should keep the most-used records in the cache.

I wrote this method and called it from the appropriate places. It worked like a charm for me.
public void resetExpire(String keyPattern) {
    LOG.debug("Getting multiple keys from cache with pattern: " + keyPattern);
    Set<String> keys = redisTemplate.keys(keyPattern);
    RedisSerializer<String> keySerializer = redisTemplate.getStringSerializer();
    redisTemplate.executePipelined(new RedisCallback<Object>() {
        @Override
        public Object doInRedis(RedisConnection connection) throws DataAccessException {
            // Issue EXPIRE on the pipelined connection itself; calling
            // redisTemplate.expire() here would check out a separate connection
            // and bypass the pipeline.
            keys.forEach(key -> connection.expire(keySerializer.serialize(key), 5000));
            return null;
        }
    });
}

Related

ASP.Net Core HTTP Request Connections getting stuck

We have a simple application in ASP.NET Core which calls a website and returns the content. The Controller method looks like this:
[HttpGet("test/get")]
public ActionResult<string> TestGet()
{
    var client = new WebClient
    {
        BaseAddress = "http://v-dev-a"
    };
    return client.DownloadString("/");
}
The URL which we call is just the default page of an IIS. I am using Apache JMeter to test 1000 requests in 10 seconds. I always get the same issue: after about 300-400 requests it gets stuck for a few minutes and nothing works. The application which holds the controller is completely frozen.
In the performance monitor (MMC) I see that the connections are at 100%.
I tried the same code with ASP.NET 4.7.2 and it runs without any issues.
I also read that disposing the WebClient does not work and that I should make it static. See here
Deactivating KeepAlive did not help either:
public class QPWebClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        if (request is HttpWebRequest)
        {
            ((HttpWebRequest)request).KeepAlive = false;
        }
        return request;
    }
}
HttpClient has the same issue; it does not change anything.
With dependency injection as recommended here, an exception is thrown saying the web client can't handle more requests at the same time.
Another unsuccessful attempt was changing ConnectionLimit and SetTcpKeepAlive on the ServicePoint.
So I am out of ideas for how to solve this issue. My last idea is to move the application back to ASP.NET. Does anyone have an idea, or has anyone faced the same issue?
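For reference, a minimal sketch of the IHttpClientFactory pattern that is commonly suggested for this symptom, so that connections are pooled and reused instead of exhausted. The client name and registration details here are illustrative, not from the original app:
// In Startup.ConfigureServices: register a named client.
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddHttpClient("test", client =>
    {
        client.BaseAddress = new Uri("http://v-dev-a");
    });
}

// In the controller: resolve the factory instead of creating a WebClient per request.
[HttpGet("test/get")]
public async Task<ActionResult<string>> TestGet(
    [FromServices] IHttpClientFactory clientFactory)
{
    var client = clientFactory.CreateClient("test");
    return await client.GetStringAsync("/");
}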

ASP.Net Session State Provider Failover Scenario

We implemented the Redis session state provider for our web application and it works like a charm, but I wonder what happens if the Redis server fails or the web server can't connect to it.
Is there any way to use InProc session state management as a failover for Redis?
I cannot find any documentation about declaring multiple session state providers, so that if Redis fails the system can continue to work using InProc. (I accept losing the session state in Redis and starting from scratch in case of a failure, and losing the InProc session state and starting from scratch again if Redis becomes available.)
You need to define a slave for your Redis server and use Redis Sentinel to monitor it.
I have been having a similar issue with Redis failing as a backing for our session store, and I cannot find anything that allows for failover/failback to another SessionStateProvider.
I was hoping there was something out there that would write to both Redis and a SQL Server in-memory table (or similar), then read from the first and, if that fails, read from the second. But this does not seem to exist yet; a sketch of the idea follows below.
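To make that idea concrete, here is a minimal sketch of the write-to-both/read-with-fallback pattern. The ISessionStore interface and all names are hypothetical, not an existing provider API:
// Hypothetical store abstraction; not an existing SessionStateProvider.
public interface ISessionStore
{
    void Set(string key, byte[] value);
    byte[] Get(string key);
}

// Writes go to both stores; reads try the primary (e.g. Redis) and fall
// back to the secondary (e.g. a SQL Server in-memory table) if it fails.
public class FallbackSessionStore : ISessionStore
{
    private readonly ISessionStore _primary;
    private readonly ISessionStore _secondary;

    public FallbackSessionStore(ISessionStore primary, ISessionStore secondary)
    {
        _primary = primary;
        _secondary = secondary;
    }

    public void Set(string key, byte[] value)
    {
        try { _primary.Set(key, value); }
        catch { /* primary down; keep going so the secondary stays current */ }
        _secondary.Set(key, value);
    }

    public byte[] Get(string key)
    {
        try { return _primary.Get(key); }
        catch { return _secondary.Get(key); }
    }
}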
I'm using the StackExchange.Redis library to connect to the Redis server. This is just simple code that shows how to subscribe to the event; don't take it as a final solution. Whenever Sentinel chooses a new server you will receive an event for it, so you can select the new server.
ConnectionMultiplexer multiplexer =
    ConnectionMultiplexer.Connect(new ConfigurationOptions
    {
        CommandMap = CommandMap.Sentinel,
        EndPoints = { { "127.0.0.1", 26379 }, { "127.0.0.1", 26380 } },
        AllowAdmin = true,
        TieBreaker = "",
        ServiceName = "mymaster",
        SyncTimeout = 5000
    });

multiplexer.GetSubscriber().Subscribe("*", (c, m) =>
{
    Debug.WriteLine("the message=" + m);
    Debug.WriteLine("channel=" + c);
    try
    {
        var sentinelServer = multiplexer.GetServer("127.0.0.1", 26379).SentinelGetMasterAddressByName("mymaster");
        Debug.WriteLine("Current server=" + sentinelServer);
        Debug.Flush();
    }
    catch (Exception)
    {
        var sentinelServer = multiplexer.GetServer("127.0.0.1", 26380).SentinelGetMasterAddressByName("mymaster");
        Debug.WriteLine("Current server=" + sentinelServer);
        Debug.Flush();
    }
});

Keep application users logged in for 20 Hours

We have an ASP.NET + MS SQL Server web-based application with approximately 100 users. It's hosted on our intranet on IIS 7.0, and we are using Forms Authentication.
We need to keep users (anyone who is logged in) logged in for exactly 20 hours, meaning no one should be kicked out of the application (session timeout) before 20 hours, even if idle.
We tried many of the suggested approaches, like web.config changes, but nothing is working.
Our main question is: will we have to make code changes to keep user sessions alive for this (or any) duration? Can someone suggest or point us to a solution?
Whenever you make a request to the server, the session timeout resets. So you can just make an AJAX call to an empty HTTP handler on the server, but make sure the handler's caching is disabled, otherwise the browser will cache the response and won't make a new request.
KeepSessionAlive.ashx.cs
public class KeepSessionAlive : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // Disable all caching so the browser actually hits the server each time.
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
        context.Response.Cache.SetNoStore();
        context.Response.Cache.SetNoServerCaching();
    }

    // Required by IHttpHandler.
    public bool IsReusable
    {
        get { return false; }
    }
}
JS:
window.onload = function () {
    setInterval(KeepSessionAlive, 60000); // ping every minute
};

function KeepSessionAlive() {
    // Timestamp query string as a cache-buster, in addition to the
    // no-cache headers set by the handler.
    var url = "/KeepSessionAlive.ashx?" + new Date().getTime();
    var xmlHttp = new XMLHttpRequest();
    xmlHttp.open("GET", url, true);
    xmlHttp.send();
}

WCF Client Proxies, Client/Channel Caching in ASP.Net - Code Review

Long-time ASP.NET interface developer here, being asked to learn WCF and looking for some education on more architecture-related fronts; it's not my strong suit, but I'm having to deal with it.
In our current ASMX world we adopted a model of creating ServiceManager static classes for our interaction with web services. We're starting to migrate to WCF, attempting to follow the same model. At first I was dealing with performance problems, but I've tweaked things a bit and we're running smoothly now; still, I'm questioning my tactics. Here's a simplified version (error handling, caching, and object manipulation removed) of what we're doing:
public static class ContentManager
{
    private static StoryManagerClient _clientProxy = null;
    const string _contentServiceResourceCode = "StorySvc";

    // FOR CACHING
    const int _getStoriesTTL = 300;
    private static Dictionary<string, GetStoriesCacheItem> _getStoriesCache = new Dictionary<string, GetStoriesCacheItem>();
    private static ReaderWriterLockSlim _cacheLockStories = new ReaderWriterLockSlim();

    public static Story[] GetStories(string categoryGuid)
    {
        // OMITTED - if category is cached and not expired, return from cache

        // get endpoint address from FinderClient (ResourceManagement SVC)
        UrlResource ur = FinderClient.GetUrlResource(_contentServiceResourceCode);

        // Get proxy
        StoryManagerClient svc = GetStoryServiceClient(ur.Url);

        // create request params
        GetStoriesRequest request = new GetStoriesRequest{}; // SIMPLIFIED
        Manifest manifest = new Manifest{}; // SIMPLIFIED

        // execute GetStories at WCF service; declare the response outside the
        // try block so it is still in scope afterwards
        GetStoriesResponse response;
        try
        {
            response = svc.GetStories(manifest, request);
        }
        catch (Exception)
        {
            if (svc.State == CommunicationState.Faulted)
            {
                svc.Abort();
            }
            throw;
        }

        // OMITTED - do stuff with response, cache if needed
        // return....
    }

    internal static StoryManagerClient GetStoryServiceClient(string endpointAddress)
    {
        if (_clientProxy == null)
            _clientProxy = new StoryManagerClient(GetServiceBinding(_contentServiceResourceCode), new EndpointAddress(endpointAddress));
        return _clientProxy;
    }

    public static Binding GetServiceBinding(string bindingSettingName)
    {
        // uses Finder service to load a binding object - our alternative to definition in web.config
    }

    public static void PreloadContentServiceClient()
    {
        // get finder location
        UrlResource ur = FinderClient.GetUrlResource(_contentServiceResourceCode);
        // preload proxy
        GetStoryServiceClient(ur.Url);
    }
}
We're running smoothly now, with round-trip calls completing in the 100ms range. Creating the PreloadContentServiceClient() method and adding it to our global.asax got that "first call" performance down to the same level. You might also want to know that we're using the DataContractSerializer and the "Add Service Reference" method.
I've done a lot of reading on static classes, singletons, shared data contract assemblies, how to use the ChannelFactory pattern, and a whole bunch of other things I could do to our usage model... admittedly, some of it has gone over my head. And, like I said, we seem to be running smoothly. I know I'm not seeing the big picture, though. Can someone tell me what I've ended up with here with regard to channel pooling, proxy failures, etc., and why I should head down the ChannelFactory path? My gut says to just do it, but my head can't comprehend why...
Thanks!
ChannelFactory is typically used when you aren't using Add Service Reference - you have the contract via a shared assembly, not generated via a WSDL. Add Service Reference uses ClientBase, which essentially creates the WCF channel for you behind the scenes.
When you are dealing with REST-ful services, WebChannelFactory provides a service-client like interface based off the shared assembly contract. You can't use Add Service Reference if your service only supports a REST-ful endpoint binding.
The only difference to you is preference - do you need full access to the channel for custom behaviors, bindings, etc., or does Add Service Reference + SOAP supply you with enough of an interface for your needs?
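For illustration, a minimal sketch of the ChannelFactory pattern with a shared contract assembly; the IStoryManager contract name, binding, and address here are placeholders, not from the question's codebase:
// The shared assembly would define the [ServiceContract] interface, e.g. IStoryManager
// with an [OperationContract] GetStories method matching the service.
var factory = new ChannelFactory<IStoryManager>(
    new BasicHttpBinding(),
    new EndpointAddress("http://example.com/StorySvc")); // illustrative address

IStoryManager channel = factory.CreateChannel();
try
{
    GetStoriesResponse response = channel.GetStories(manifest, request); // manifest/request as in the question
    ((IClientChannel)channel).Close();
}
catch
{
    // Faulted channels cannot be closed; abort instead.
    ((IClientChannel)channel).Abort();
    throw;
}
One note on the design choice: the factory is the expensive part, so it is usually cached and reused, while channels are cheap to create per call and safe to abort on failure.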

Advantages/Disadvantages of increasing AppPool Timeout on Azure

I am just about to launch my ASP.NET MVC3 web app to production; however, as a complex app, it takes a LONG time to start up. Obviously, I don't want my users waiting over a minute for their first request to go through after the app pool has timed out.
From my research, I've found that there are two ways to combat this:
Run a worker role or other process which polls the website every 19 minutes, preventing the app pool from idling out.
Change the timeout from the default 20 minutes to something much larger.
As solution 2 seems like the better idea, I just wondered what the disadvantages of this would be. Will I run out of memory, etc.?
Thanks.
Could you use the auto-start feature of IIS? There is a post here that presents this idea.
You'd have IIS 7.5 and Win2k8 R2 with Azure OS family 2. You'd just need to be able to script/automate any setup steps and configuration.
I do this with a background thread that requests a keepalive URL every 15 minutes. Not only does this keep the app from going idle, but it also warms up the app right away anytime the web role or virtual machine restarts or is rebuilt.
This is all possible because Web Roles really are just Worker Roles that also do IIS stuff. So you can still use all the standard Worker Role startup hooks in a Web Role.
I got the idea from this blog post but tweaked the code to do a few extra warmup tasks.
First, I have a class that inherits from RoleEntryPoint (it does some other things besides this warm up task and I removed them for simplicity):
public class WebRole : RoleEntryPoint
{
    // other unrelated member variables appear here...
    private WarmUp _warmUp;

    public override bool OnStart()
    {
        // other startup stuff appears here...
        _warmUp = new WarmUp();
        _warmUp.Start();
        return base.OnStart();
    }
}
All the actual warm up logic is in this WarmUp class. When it first runs it hits a handful of URLs on the local instance IP address (vs the public, load balanced hostname) to get things in memory so that the first people to use it get the fastest possible response time. Then, it loops and hits a single keepalive URL (again on the local role instance) that doesn't do any work and just serves to make sure that IIS doesn't shut down the application pool as idle.
public class WarmUp
{
    private Thread worker;

    public void Start()
    {
        worker = new Thread(new ThreadStart(Run));
        worker.IsBackground = true;
        worker.Start();
    }

    private void Run()
    {
        var endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["http"]; // "http" has to match the endpointName in your ServiceDefinition.csdef file.
        var pages = new string[]
        {
            "/",
            "/help",
            "/signin",
            "/register",
            "/faqs"
        };

        foreach (var page in pages)
        {
            try
            {
                var address = String.Format("{0}://{1}:{2}{3}",
                    endpoint.Protocol,
                    endpoint.IPEndpoint.Address,
                    endpoint.IPEndpoint.Port,
                    page);
                var webClient = new WebClient();
                webClient.DownloadString(address);
                Debug.WriteLine(string.Format("Warmed {0}", address));
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex.ToString());
            }
        }

        var keepalive = String.Format("{0}://{1}:{2}{3}",
            endpoint.Protocol,
            endpoint.IPEndpoint.Address,
            endpoint.IPEndpoint.Port,
            "/keepalive");

        while (true)
        {
            try
            {
                var webClient = new WebClient();
                webClient.DownloadString(keepalive);
                Debug.WriteLine(string.Format("Pinged {0}", keepalive));
            }
            catch (Exception ex)
            {
                // absorb
            }
            Thread.Sleep(900000); // 15 minutes
        }
    }
}
Personally I'd change the timeout, but both approaches should work; they have the same effect of preventing the worker process from shutting down.
I believe the timeout is there to avoid IIS retaining resources that aren't needed, for servers with lots of lightly used websites. Given that heavily used sites (like this one!) don't shut down their worker processes, I don't think you'll see any memory issues.
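If you do go the timeout route, one way to script it (for example in a role startup task) is via Microsoft.Web.Administration. A minimal sketch, assuming the default pool name; this is one option, not the only way to set it:
using Microsoft.Web.Administration;

// Disable the idle timeout so IIS never shuts the worker process down
// for inactivity. The pool name here is an assumption.
using (var serverManager = new ServerManager())
{
    var pool = serverManager.ApplicationPools["DefaultAppPool"];
    pool.ProcessModel.IdleTimeout = TimeSpan.Zero; // zero = never idle out
    serverManager.CommitChanges();
}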
