I am editing a class that is meant to be placed into the session of a servlet and used as a key for a hashtable of other objects. I do not know the minimal requirements for an object that can be placed into the HttpSession. What are they?
It should be thread-safe (or at least you should be aware that it can be used by several threads concurrently).
If you plan to save the session to disk or to share it among a cluster of servers, then it should also be Serializable.
And if that object is supposed to be used as a key of a HashMap, then it should override hashCode() and equals() properly, and it would be a good idea to make it immutable.
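As a sketch (the class name and fields are just placeholders), here is a minimal key class that satisfies all of the above: immutable, Serializable, with consistent equals() and hashCode():

import java.io.Serializable;
import java.util.Objects;

// Immutable, Serializable value object; safe to store in the session
// and to use as a HashMap key. Field names are illustrative only.
public final class SessionKey implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String userId;
    private final String scope;

    public SessionKey(String userId, String scope) {
        this.userId = userId;
        this.scope = scope;
    }

    public String getUserId() { return userId; }
    public String getScope()  { return scope; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof SessionKey)) return false;
        SessionKey other = (SessionKey) o;
        return Objects.equals(userId, other.userId)
                && Objects.equals(scope, other.scope);
    }

    @Override
    public int hashCode() {
        return Objects.hash(userId, scope);
    }
}

Because the fields are final and there are no setters, concurrent reads from multiple request threads are safe without any extra locking.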
All objects that are placed in a HttpSession should implement java.io.Serializable.
That's really the only "minimal" requirement.
For scalability you generally want to minimise the overall size of objects that you place in the session as well.
I am creating a JavaFX project that includes a few controllers and different windows, and I want to pass data from one controller to another. When I open another window, the data previously entered should still be available in its fields. Would using a database and constantly uploading and downloading data from it be a good solution? Or is creating a JSON representation and building the object from it in each controller the better option? Can someone comment on this, or propose a better solution?
You have some options:
Using a database as a middle man: a very bad idea, in my opinion. A database should hold data that must be persisted and whose persistence actually represents a concept from the problem domain. Temporary data that can be held in RAM is not a good fit for that (even if it will become significant and worth persisting in the near future). In addition, it introduces problems such as decreased performance and a constant need to check data integrity everywhere (i.e. you always have to make sure that every time the data in RAM changes, the database gets updated too).
Singleton pattern for storing state: you can have a singleton class that holds all your temporary data (a minimal sketch appears after this list). This approach is a lot like the database approach, in that you have a data source (a middle man) that can be accessed from multiple points in your program, but it lives in RAM instead of a database. So you will have similar problems, but it is more efficient than a database, and by holding references to the singleton's data objects you can handle the data-integrity problem much more easily (when you alter a data object through a reference, you know it is the original object that actually gets altered, so there is nothing extra to keep in sync).
BUT this is a bad idea as well! Using the singleton pattern to store state is an anti-pattern; it is not how the pattern is intended to be used. Read more here: Why is Singleton considered an anti-pattern?
Using dependency injection frameworks like Spring: you can hold your data in the Spring ApplicationContext (with singleton scope) and inject it wherever you want. Again, this approach is essentially similar to the previous ones, but it is a little cleaner because you are not using a static singleton class, which may improve the testability of your application.
Using third-party JavaFX frameworks: there are JavaFX frameworks that can handle the problem of sharing data among many controllers. You can see some examples in my answer to a similar question here. Here is an example from the DataFX samples that demonstrates data sharing between separate sender and receiver views with distinct controllers:
By pressing send, the sender sends the data and the receiver receives it. You can see the details in the jar or in my answer.
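To make the singleton / shared-model options above concrete, here is a minimal sketch (class names and fields are invented, not part of JavaFX or any framework): one shared model object that both controllers talk to, so data entered in one window is still there when the other window reads it. In the Spring variant, SharedFormData would simply be a singleton-scoped bean injected into each controller instead of a hand-rolled singleton.

// Shared model holding the temporary data; names are illustrative.
public final class SharedFormData {
    private static final SharedFormData INSTANCE = new SharedFormData();

    private String enteredText;

    private SharedFormData() { }

    public static SharedFormData getInstance() { return INSTANCE; }

    public synchronized String getEnteredText() { return enteredText; }
    public synchronized void setEnteredText(String text) { this.enteredText = text; }
}

// Controller behind the "sender" window (hypothetical).
class SenderController {
    void onSend(String text) {
        SharedFormData.getInstance().setEnteredText(text);
    }
}

// Controller behind the "receiver" window (hypothetical).
class ReceiverController {
    String loadPreviousInput() {
        return SharedFormData.getInstance().getEnteredText();
    }
}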
Further reading:
Passing Parameters JavaFX FXML
I'm working on an app that uses XML serialization and SQLite. Both require public accessors. However, there are many instances where I want accessors to return conditional data or to be read-only. With SQLite, both the getter and the setter must be public, so I can't even make the setter protected.
What's the best way to handle this? Do I really need a secondary class that is basically a copy of the serializable class? With XML serialization I could possibly construct my own serialization process, but this is painful and probably worse than a shadow class.
Ideas?
After a lot of exploration, it seems the answer is an unfortunate yes. The objects filled by SQLite queries and XML serialization really belong in the data access layer; the business layer should then convert those objects into whatever types the app layer uses.
Hopefully this makes sense to others searching for the same.
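The question is about .NET's XML serialization and SQLite, but the layering itself is language-agnostic; here is a rough sketch of the idea in Java, with all names invented: a dumb persistence object with fully public accessors for the serializer/ORM, a business object with the restricted accessors you actually want, and a converter at the boundary.

// Data-access-layer object: public getters and setters, shaped for
// the serializer / database mapper. Names are illustrative.
class ProductRecord {
    private String name;
    private int priceCents;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getPriceCents() { return priceCents; }
    public void setPriceCents(int priceCents) { this.priceCents = priceCents; }
}

// Business-layer object: read-only, with conditional accessors.
final class Product {
    private final String name;
    private final int priceCents;

    Product(String name, int priceCents) {
        this.name = name;
        this.priceCents = priceCents;
    }

    public String getName() { return name; }
    public int getPriceCents() { return priceCents; }
    public boolean isDiscountEligible() { return priceCents >= 10_000; } // conditional view of the data
}

// Conversion lives at the layer boundary, in one place.
final class ProductMapper {
    static Product toDomain(ProductRecord r) {
        return new Product(r.getName(), r.getPriceCents());
    }

    static ProductRecord toRecord(Product p) {
        ProductRecord r = new ProductRecord();
        r.setName(p.getName());
        r.setPriceCents(p.getPriceCents());
        return r;
    }
}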
When using the HttpRuntime.Cache in an ASP.NET application, any item retrieved from the cache that is then updated will result in the cached object being updated too (by reference). Subsequent reads from the cache will get the updated value, which may not be desirable.
There are multiple posts on this subject, for example:
Read HttpRuntime.Cache item as read-only
And the suggested solution is to create a deep-copy clone using binary serialization.
The problem with binary serialization is that it's slow (incredibly slow), and I can't afford any potential performance bottlenecks. I have looked at deep-copying using reflection and whilst this appears to be better performing, it's not trivial to implement with our complex DTOs. Anyone interested in this may want to have a look at the following brief article:
Fast Deep Cloning
Does anyone have any experience with caching solutions such as AppFabric / NCache etc. and know whether they would solve this problem directly?
Thanks in advance
Griff
Products like NCache and AppFabric also perform serialization before storing the object in an out-of-process caching service. So you'd still take that serialization hit, plus you'd get slowed down even further by going out-of-process (or maybe even over the network) to access the serialized object in the caching service.
Implementing ICloneable on your classes to perform hand-tuned deep copies will avoid reflection and will outperform binary serialization, but this may not be practical if your DTOs are very complex.
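As an illustration of what a hand-tuned deep copy looks like (sketched in Java only because the pattern itself is language-agnostic; the DTO shape is invented): each class copies its own fields and explicitly copies its nested objects, so no reflection or serialization is involved.

// Hand-written deep copy via copy constructors. Names are invented.
final class AddressDto {
    String street;
    String city;

    AddressDto(String street, String city) {
        this.street = street;
        this.city = city;
    }

    // copy constructor
    AddressDto(AddressDto other) {
        this(other.street, other.city);
    }
}

final class CustomerDto {
    String name;
    AddressDto address;

    CustomerDto(String name, AddressDto address) {
        this.name = name;
        this.address = address;
    }

    // deep copy: the nested object is copied too, so mutating the copy
    // never touches the instance held by the cache
    CustomerDto(CustomerDto other) {
        this.name = other.name;
        this.address = (other.address == null) ? null : new AddressDto(other.address);
    }
}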
Updated to provide specifics:
AppFabric uses the NetDataContractSerializer for serialization (as described here). The NetDataContractSerializer can be a little faster than the BinaryFormatter, but its performance is usually in the same ballpark: http://blogs.msdn.com/b/youssefm/archive/2009/07/10/comparing-the-performance-of-net-serializers.aspx
NCache rolled their own serializer, called "Compact Serialization". You need to either implement their ICompactSerializable interface on your DTO classes and read/write all members by hand, or else let their client libraries examine your class and then emit its own serialization code at runtime to do that work for you (it's a one-time hit when your app starts up, where they have to reflect over your class and emit their own MSIL). I don't have data on their performance, but it's safe to assume that it's faster than serializers that perform reflection (BinaryFormatter/DataContractSerializer) and probably somewhere in the same performance realm as protobuf, MessagePack, and other serializers that avoid excessive reflection. More detail is here.
(I work for a company (ScaleOut Software) that's in the same space as NCache, so I should probably know more about how they do things. ScaleOut lets you plug in whatever serializer you want--we usually recommend Protobuf-net or MessagePack, since they're generally considered to be the reigning champions for .NET serialization performance--definitely take a close look at those two if you decide to use a serializer to make your deep copies.)
Most cache frameworks rely on serialization.
You should consider invalidating the cache each time you change an object.
For example:
object Get(string key)
{
    // Cache here is HttpRuntime.Cache (System.Web.Caching.Cache); its
    // indexer returns null for a missing key, so no Contains() call is needed.
    var item = Cache[key];
    if (item == null)
    {
        item = GetDataFromDatabase(key);
        Cache[key] = item;
    }
    return item;
}

void Invalidate(string key)
{
    Cache.Remove(key);
}
So you can do:
var myDto = (MyDto)Get(id);   // cast to your DTO type
myDto.SomeProperty = "changed";
myDto.Save();
Invalidate(id);
For my global variables and data, I find myself in a dilemma as to whether to use HttpApplicationState or static variables. What's the best approach?
This document states that one should use static variables over httpapplicationstate:
http://support.microsoft.com/default.aspx?scid=kb;en-us;Q312607
However, one thing I like about HttpApplicationState (and System.Web.Caching.Cache) is that one can easily enumerate the entries and select which items to remove (I've created a global CacheManager.axd for this purpose). I don't believe there's an easy way to do that with static variables (and it's not clear how to "re-initialise" them) without recycling the app pool.
Any suggestions on a neat general-purpose way to handle and manage global objects?
Thanks, Mark.
Your instincts are correct. Use System.Web.Caching. The built-in cache management takes care of all the heavy lifting with respect to memory allocation and expiring stale or low priority objects.
Make sure to use a naming convention for your cache keys that makes sense down the road. If you start relying heavily on caching, you'll need to be able to target/filter different cache keys by name.
As a general practice, it's good to try to avoid global state in web applications when possible. ASP.NET is a multithreaded environment where multiple requests can get serviced in parallel. Unless your global state is immutable (readonly), you will have to deal with the challenges managing shared mutable state.
If your shared state is immutable, and you don't need to enumerate it, then I see no problem with static variables.
If your shared state is volatile/mutable, then you probably want to create an abstraction on top of whichever underlying mechanism you choose to store the data, to ensure that access to and modification of that shared state is consistent and complies with the expectations of the code that consumes it. I would probably use the system cache in such a design as well, just to be able to leverage the expiration and dependency features built into the caching service (if necessary).
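One rough, language-agnostic sketch of such an abstraction (shown here in Java; the names are invented): callers only see a get/invalidate API, and the locking and read-through policy live in one place, so the underlying store (static map, application state, or a cache service) can be swapped without touching consumers.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// Thin abstraction over shared mutable state: read-through loading,
// thread-safe access, explicit invalidation. Names are illustrative.
final class SharedStateStore<K, V> {
    private final ConcurrentMap<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // e.g. a database lookup

    SharedStateStore(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        // load on first access, then serve from memory
        return store.computeIfAbsent(key, loader);
    }

    void invalidate(K key) {
        store.remove(key);
    }
}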
What is the best way to implement DTOs?
My understanding is that they are one way to transfer data between objects. For example, in an ASP.Net app, you might use a DTO to send data from the code-behind to the business logic layer component.
What about other options, like just sending the data as method parameters? (Would this be easiest in cases where there is less data to send?)
What about a static class that just holds data and can be referenced by other objects (a kind of global assembly data-storage class)? (Does this break encapsulation too much?)
What about a single generic DTO used for every transfer? It may be a bit more trouble to use, but reduces the number of classes needed to work with (reduces object clutter).
Thanks for sharing your thoughts.
I've used DTOs to:
Pass data between the UI and service tiers of a standard 3-tier app.
Pass data as method parameters to encapsulate a large number (5+) of parameters.
The 'one DTO to rule them all' approach can get messy; your best bet is to create specific DTOs for each feature or feature group, taking care to name them so they're easy to match with the features they're used in.
I've never seen static DTOs in the way you mention, and I would hesitate to create DTO singletons as you describe.
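For what it's worth, the kind of DTO meant here is just a small, behavior-free carrier; a rough sketch with invented names:

// Plain data carrier with no behavior; used to move values between the
// UI and service tiers, or to bundle a long parameter list into one
// object. Names are illustrative.
public final class OrderSummaryDto implements java.io.Serializable {
    private static final long serialVersionUID = 1L;

    private final String orderId;
    private final String customerName;
    private final int itemCount;
    private final long totalCents;

    public OrderSummaryDto(String orderId, String customerName,
                           int itemCount, long totalCents) {
        this.orderId = orderId;
        this.customerName = customerName;
        this.itemCount = itemCount;
        this.totalCents = totalCents;
    }

    public String getOrderId() { return orderId; }
    public String getCustomerName() { return customerName; }
    public int getItemCount() { return itemCount; }
    public long getTotalCents() { return totalCents; }
}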
I keep it simple and map one DTO class to one db table. They are lightweight so I can send them everywhere, including over the wire.
I wish it could be that simple. Although DTOs originated with the network-distributed tiers of a system, there can be a whole load of issues if domain objects are returned to the view layer. Here are some of them:
1. By exposing domain objects to the view layer, views become aware of the structure of the domain objects, which lets them make assumptions about how related objects are available. For example, if a domain object "Person" was returned to a view to which it is bound, and on some other view the "Address" of the Person is to be bound, there would be a tendency for the application layer to use something like person.getAddress(), which would fail because at that point the Address domain object might not have been loaded yet. In essence, once domain objects are available to the view layer, views can always make assumptions about how the data is made available.
2. When domain objects are bound to views (more so in thick clients), there will always be a tendency for view-centric logic to creep into those objects, making them logically corrupt.
Basically, from my experience, making domain objects available to views creates architectural issues, but there are issues with DTOs as well: they create additional work in the form of assemblers (DTO to domain object and back) and a proliferation of analogous objects, such as a Patient domain object, a Patient DTO, and perhaps a Patient bean bound to the view (see the assembler sketch below).
Clearly there is no right answer for this, especially in a thick-client system.
I borrowed this short and incomplete but true answer to the DTO cliché from:
http://www.theserverside.com/discussions/thread.tss?thread_id=32389#160505
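A rough sketch of the assembler idea mentioned above (Person and Address follow the answer's example; everything else is invented): the assembler is the only code that knows both shapes, and it flattens exactly what the view needs into the DTO, so the view never walks the domain object graph.

// Simplified domain objects; in a real system Address might be lazily loaded.
class Address {
    String city;
    Address(String city) { this.city = city; }
}

class Person {
    String name;
    Address address;
    Person(String name, Address address) { this.name = name; this.address = address; }
}

// DTO shaped for what the view actually binds to.
class PersonDto {
    String name;
    String city;
    PersonDto(String name, String city) { this.name = name; this.city = city; }
}

// Assembler: converts in both directions at the layer boundary.
final class PersonAssembler {
    static PersonDto toDto(Person p) {
        String city = (p.address != null) ? p.address.city : null;
        return new PersonDto(p.name, city);
    }

    static Person toDomain(PersonDto dto) {
        return new Person(dto.name, new Address(dto.city));
    }
}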
I think it's pretty common to use DataSet/DataTable as the "one DTO to rule them all". It's easy to load them from the database, and persist the values back, and they can be easily serialized.
I would definitely say they are more trouble to use. They do provide all of the plumbing, but programming against them is a pain (lots of casting, null checks, magic strings, etc). It would be interesting to see a good set of extension methods to make working with them a little more "natural".
DTOs are used to send data over the wire, not between objects. Check out this post:
POCO vs DTO
Thanks for all the helpful ideas...
A summary + my take on this:
--If there is a small amount of data to move and not too many places to move it, regular parameters may suffice
--If there is a lot of data and/or many objects to move it to, a specially created object may be easiest (DTO object).
--A global data object that can be referenced (rather than passed) by various objects would seem to be frowned upon... however, I wonder whether there isn't sometimes a place for it within a particular sub-system. It is one way to reduce the amount of data passing. It does push the limits of "good encapsulation", but in specific instances within specific layers it could perhaps add simplicity to a particular assembly of classes. One would lose class-level encapsulation, but could still have assembly-level encapsulation.