I am using JBoss Data Grid (the Red Hat xPaaS Data Grid image) for distributed caching, and I'm running into issues getting cache expiration to work. Based on the documentation (https://access.redhat.com/documentation/en/red-hat-xpaas/0/single/red-hat-xpaas-data-grid-image/), it looks like all I have to do is set the <CACHE_NAME>_CACHE_EXPIRATION_LIFESPAN environment variable to the lifespan in milliseconds.
This doesn't seem to work: the cache never expires, which is the default behavior. I'm wondering if anyone has run into a similar issue or knows what I'm missing here.
What I realized is that regardless of how we define the cache name, the environment variable that manages the cache settings must be defined in ALL CAPS. All I had to do to get cache expiration to work was rename the environment variable to all upper case. This isn't clear from the documentation. Hope this helps someone else.
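For example, assuming a cache named "mycache" (the name and lifespan here are placeholders, not values from my setup), the variable has to be spelled with the cache-name prefix upper-cased:

    # cache named "mycache" -> the prefix must be upper case
    MYCACHE_CACHE_EXPIRATION_LIFESPAN=900000   # lifespan in milliseconds (15 minutes)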
I've got a performance problem in my theme. I'd like to add MANY options to change colors, fonts and so on, which means I need a setting in the $wp_customize object for every option. But adding one setting takes 0.004-0.005 seconds (measured locally, not on a web server). Multiplied by 50-100+ add_setting calls, this costs performance and page-loading time; I even got a browser warning about unresponsive scripts. Yes, I could extend the time scripts are allowed to run, but that won't solve my underlying performance problem.
Is there any possibility of storing these settings in a file or database, so that not every setting has to be recreated when the page reloads? I didn't find a way to get all settings or to set them all at once, so I couldn't save them like the defaults for my theme_mods.
So far I've only found the possibility of adding one setting at a time via add_setting( $id, $args ). Are there any possibilities for caching or something like that?
Do you have any other suggestions for getting better performance when using many settings in my theme options?
Thank you
With "allow_versions" set to FALSE or TRUE, how does Swift respond in each case when a file is being overwritten and a delete request comes in simultaneously (overwrite first, then deletion)?
Please share your thoughts.
Many thanks!
The timestamp assigned to the request coming in to the proxy is what will ultimately decide which "last" write wins.
If you have a long-running upload and issue the delete during it, the tombstone will have a later timestamp and will eventually take precedence, even if the upload finishes after it.
When the container versioning feature is in use, overwriting an object in a versioned container causes the current data to be COPY'd off to the versions location before the PUT data is sent to the storage node with the assigned timestamp. For deletes in a versioned container, the "previous version" is discovered at the time the delete request is made and is subject to eventual consistency in the container listing; it is only removed once it has been copied back into the current location for the object.
More information about object versioning is available here:
http://docs.openstack.org/developer/swift/overview_object_versioning.html
Here is a quick summary. It's still a very high-level view, but I hope it helps in understanding how this works under the hood.
The diagram (link below) sets two simultaneous scenarios (A and B) against enabling/disabling the Swift object versioning feature. The outcome for each scenario is shown in the diagram.
Download the diagram.
Please share your thoughts, if any.
On a new website, I have a huge form (meaning really big: it needs at least 15-20 minutes to complete) that configures the whole website for one client for the next year.
It's split across several tabs (it's a wizard). Every time we go to the next tab, it makes a regular (non-AJAX) call to the server that generates the next "page". The information entered so far is stored in the session (an object with a custom binder).
Everything was working fine until we tested it today with real data. Real data requires reflection and work to find the correct elements, and that takes time.
The problem we have is that the view receives a partially empty model. The session duration is set to 1440 minutes (and in IIS too). What I know so far is that I get a NullReferenceException the first time I try to access the model in my view.
I've been checking the controller for about an hour, but it's just impossible for it to return a null model. If I enter all the data very fast, I don't have any problem (but that's with random data).
So far I've only managed to reproduce this problem on the IIS server, and I'm checking the ELMAH logs to debug it, so it's not easy to reproduce.
Do you have any ideas about how I should debug this? I'm a little lost here.
I think you should assume the session does not offer reliable persistence. I'm not sure about the details, but I suspect it starts freeing some elements when it exceeds its memory limit.
You will be safer if you use a database to store that information, or you could introduce your own implementation for persisting state.
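To illustrate the second suggestion, here is a minimal sketch of hand-rolled persistence for the wizard state; the WizardState table, its columns, and the class names are hypothetical, not from the question:

    using System;
    using System.Data.SqlClient;

    // Stores the serialized wizard model in a database row keyed by a
    // wizard id (e.g. kept in a cookie or hidden field), so the state
    // survives even if the in-process session is recycled.
    public class WizardStateStore
    {
        private readonly string _connectionString;

        public WizardStateStore(string connectionString)
        {
            _connectionString = connectionString;
        }

        public void Save(Guid wizardId, string serializedModel)
        {
            using (var conn = new SqlConnection(_connectionString))
            using (var cmd = new SqlCommand(
                @"MERGE WizardState AS t
                  USING (SELECT @id AS Id) AS s ON t.Id = s.Id
                  WHEN MATCHED THEN UPDATE SET Data = @data
                  WHEN NOT MATCHED THEN INSERT (Id, Data) VALUES (@id, @data);",
                conn))
            {
                cmd.Parameters.AddWithValue("@id", wizardId);
                cmd.Parameters.AddWithValue("@data", serializedModel);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

        public string Load(Guid wizardId)
        {
            using (var conn = new SqlConnection(_connectionString))
            using (var cmd = new SqlCommand(
                "SELECT Data FROM WizardState WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", wizardId);
                conn.Open();
                return (string)cmd.ExecuteScalar();
            }
        }
    }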
In addition to the answer provided by @Ufuk:
You can easily send an AJAX request every minute that does nothing on its own, but keeps the session from expiring, so the site will continue to work over extended periods.
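A minimal sketch of the server side of such a keep-alive (the controller name and route are hypothetical; the page would call it on a timer, e.g. with setInterval):

    using System;
    using System.Web.Mvc;

    // Any request that touches the session resets its expiration timer,
    // so a periodic ping keeps the wizard's session alive.
    public class KeepAliveController : Controller
    {
        [HttpPost]
        public ActionResult Ping()
        {
            Session["LastKeepAlive"] = DateTime.UtcNow; // touch the session
            return new HttpStatusCodeResult(204);       // no content
        }
    }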
The problem was that the session didn't have enough space, I think. I temporarily resolved the problem by restarting the application pool. I'm still searching for a solution that doesn't imply changing all this code. Maybe another session-state mode, but then I need to make my models serializable.
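For what it's worth, switching the session-state mode in web.config is the usual way to make the session survive application-pool recycles; a sketch, with a placeholder connection string (everything put into the session must then be marked [Serializable]):

    <system.web>
      <!-- Out-of-process session state survives app-pool restarts. -->
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=.;Integrated Security=SSPI"
                    timeout="1440" />
    </system.web>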
I have a very strange problem with a website where several objects are cached.
We have a lot of DataTables, strings, booleans and other stuff that are cached for quick fetching in later requests.
Sometimes we get sporadic errors where it looks like some of the cache items have been mixed up.
An example of how this shows itself is when a piece of code fetches a DataTable from the cache and then tries to access a certain column of that DataTable.
We then see a yellow screen of death with the exception "Cannot find column [ColumnName]", where "ColumnName" of course is some column name that was supposed to be in the DataTable.
When I inspect the cache item with a little home-made tool, I see that a completely different DataTable is in the cache item. It is almost as if some of the cache items have been swapped.
Does anybody have an idea how this happens?
We are not able to reproduce the error. It occurs at apparently random intervals.
What's the issue
When you add items to the cache, you need to lock the code that creates them and adds them to the cache.
First, let's clarify that the cache keeps a reference to your data; it does not clone it, nor does it know anything about what that data is. Reference: http://msdn.microsoft.com/en-us/library/6hbbsfk6(VS.71).aspx
Second, note that by default a page locks the session, and this makes most requests safe, because all requests from the same user are serialized until a page has fully loaded and been sent.
When it appears
So the locking issue may appear when you populate the cache from a thread, from a handler, or from a page that has the session turned off.
How to lock
If you use only one pool, then a simple lock(object) { } will work; if you use many pools, then you need to use a Mutex for the lock.
You need to lock the full process of creating your data if you change it later while it still exists in the cache, or only the cache access if you store a clone of it.
For example, if you read some data that you got from the cache and then edit it, anyone else reading the same cache entry at that moment gets corrupted data, because the cache gave you a reference to it.
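To make that concrete, here is a minimal sketch of the lock(object) { } pattern for the single-pool case; "ReportTable" and LoadReportTableFromDatabase() are hypothetical names, not from the question:

    using System.Data;
    using System.Web;

    public static class ReportCache
    {
        private static readonly object CacheLock = new object();

        public static DataTable GetReportTable()
        {
            var table = (DataTable)HttpRuntime.Cache["ReportTable"];
            if (table == null)
            {
                lock (CacheLock)
                {
                    // Re-check inside the lock: another thread may have
                    // populated the entry while we were waiting.
                    table = (DataTable)HttpRuntime.Cache["ReportTable"];
                    if (table == null)
                    {
                        table = LoadReportTableFromDatabase();
                        HttpRuntime.Cache.Insert("ReportTable", table);
                    }
                }
            }
            // Return a copy so later edits cannot corrupt the cached
            // instance: the cache hands out references, not clones.
            return table.Copy();
        }

        private static DataTable LoadReportTableFromDatabase()
        {
            // Placeholder for the real (expensive) data load.
            return new DataTable("ReportTable");
        }
    }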
Hope that all this helps.
We use ViewState to store data that is being edited by users. However, the ViewState has just become too large, so we really want something faster.
We considered session state, but it has to be cleaned up manually when users move between pages.
Does anybody have suggestions?
Update:
Actually I am using ASP.NET. The reasons we do not want to use session state are:
1. We don't need to carry our data between pages.
2. When a developer puts something into the session, he has to remember to delete it when it is no longer useful; otherwise, the session gets bigger and bigger. This is kind of error-prone.
You say you want the data stored server-side, and it should be automatically available?
You could trick the ViewState into storing its data in the session rather than in a hidden field by using this technique:
http://aspadvice.com/blogs/robertb/archive/2005/11/16/13835.aspx
You might find this article interesting as well; it shows another technique for storing your ViewState server-side:
http://msdn.microsoft.com/en-us/magazine/cc188774.aspx#S6
Despite the initial complexity of getting this set up, I think it would be the best solution, because you don't have to change your code throughout; it can still use ViewState as normal without realising it is now saved on the server.
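ASP.NET actually ships a built-in persister for exactly this; a minimal sketch of wiring it up in a base page (ServerViewStatePage is a hypothetical name, SessionPageStatePersister is the framework class):

    using System.Web.UI;

    // Pages that inherit from this keep their view state in session
    // instead of the __VIEWSTATE hidden field; only a small token is
    // left in the rendered page.
    public class ServerViewStatePage : Page
    {
        private PageStatePersister _persister;

        protected override PageStatePersister PageStatePersister
        {
            get
            {
                return _persister
                    ?? (_persister = new SessionPageStatePersister(this));
            }
        }
    }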
You can look into Conversations.
http://evolutionarygoo.com/blog/?p=68
http://code.google.com/p/google-guice/issues/detail?id=5
You can use the database as an alternative to the session store. This scales in terms of the size of the data stored, and with an appropriate caching strategy you can reduce the overhead of retrieving the data a great deal.