Code design - Should serializable classes have a duplicate class to protect access? - sqlite

I'm working on an app that uses XML serialization and SQLite. Both require public accessors. However, there are many cases where I want an accessor to return conditional data or to be read-only. With SQLite both the getter and the setter must be public, so I can't even use a protected setter.
What's the best way to handle this? Do I really need a secondary class that is basically a copy of the serializable class? With XML serialization I could possibly construct my own serialization process, but this is painful and probably worse than a shadow class.
Ideas?

After a lot of exploration, the answer unfortunately seems to be YES. The objects filled by SQLite queries and XML serialization really belong in the Data Access layer; the Business layer should then convert those objects into the ones used by the app layer.
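For example, here is a minimal sketch of that split (class and property names are hypothetical): the data-access class keeps the fully public read/write properties that SQLite and the XmlSerializer need, while the business-layer class wraps it and exposes only what the app should see.

```csharp
// Data access layer: shape required by SQLite and XML serialization (everything public).
public class PersonRecord
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Notes { get; set; }
}

// Business layer: the object the app layer actually uses, with controlled access.
public class Person
{
    private readonly PersonRecord _record;

    public Person(PersonRecord record)
    {
        _record = record;
    }

    public int Id
    {
        get { return _record.Id; }                                    // read-only to the app layer
    }

    public string DisplayName
    {
        get { return _record.FirstName + " " + _record.LastName; }   // conditional/derived data
    }
}
```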
Hopefully this makes sense to others searching for the same.

Related

Transmission of data by JSON or database?

I'm creating a JavaFX project that includes a few controllers and different windows. I want to transfer data from one controller to another, so that when I open another window the previously entered data remains in its fields. Is using a database and continuously uploading and downloading data from it a good solution? Or is creating a JSON object and building the object from it in each controller the better option? Can someone say something about this, or propose a better solution?
You have some options:
Using a database as a middle man: a very bad idea in my opinion. Databases should hold the data that must be persisted, where persistence actually represents a concept from the problem domain. Temporary data that can be held in RAM is not a good fit for that (even if it is going to become significant and worth persisting in the near future). In addition, it introduces problems like a performance decrease and a constant need to check data integrity everywhere (i.e. every time the data in RAM changes, you have to make sure the database gets updated too).
Singleton pattern for storing state: you can have a singleton class that holds all your temporary data. This approach is a lot like the database approach, in that you have one data source (a middle man) that can be accessed from multiple points in your program, but it is stored in RAM instead of a database. So you will face similar problems, but it is more efficient than a database, and by holding references to the singleton's data objects you can handle the data-integrity problem much more easily (when you alter a data object through its reference, you know it is the original object that actually gets altered, so there is nothing extra to keep in sync).
BUT this is a very bad idea too! Using the singleton pattern for storing state is an anti-pattern; it is not how the pattern is intended to be used. Read more here: Why is Singleton considered an anti-pattern?
Using dependency injection frameworks like Spring: you can hold your data in the Spring ApplicationContext (with singleton scope) and inject it wherever you want. Again, this approach is essentially similar to the previous ones, but it is a little cleaner because you are not using a static singleton class, which may improve the testability of your application.
Using third-party JavaFX frameworks: there are JavaFX frameworks that can handle the problem of sharing data among many controllers. You can see some examples by reading my answer to a similar question here. Here is an example from the DataFX samples which demonstrates data sharing between two separate sender and receiver views with distinct controllers: by pressing send, the sender sends the data and the receiver receives it. You can see the details in the jar or in my answer.
Further reading:
Passing Parameters JavaFX FXML

Passing dataset to different layers(design related)

I read in an article that it's not good practice to pass a DataSet between the different layers of a .NET web application (DAL -> BAL -> Pages, and vice versa). Is that correct?
Please give your suggestions.
Thanks,
SNA
On the one hand, the problem with datasets and datatables is that they expose database implementation details like column names and types outside of your data access layer. Change a column name in your database or query and odds are that change is propagated to your dataset as well, forcing a re-compile of any tier that uses the dataset. So if you retrieve data into a dataset, you should convert it to strongly-typed business objects before passing it on.
On the other hand, a dataset doesn't care what kind of database it came from. You can use them with Access, Oracle, SQL Server, MySQL, anything. So there is some generic-ness there that can make them useful when passing data between tiers. And just as the business layer shouldn't care about database details, the data layer shouldn't really need to know what the business objects are, so there's a good argument that you should use them for data interchange at that level.
My normal procedure is to have a sort of one-way "translation" tier between the business and data access layers, so that the business layer only deals with business objects and the data layer only returns generic data. This currently takes one of two forms:
I'll write my data access methods to return datatables or datareaders, and the translation tier will use a factory pattern to convert those rows into the desired strongly-typed business objects.
or
I'll use C# iterator blocks to convert a datareader into an IEnumerable<IDataRecord> in the data access layer, and the translation tier will use them to change that IEnumerable<IDataRecord> into an IEnumerable<MyBusinessObject>, such that the code only ever iterates over the result set one time.
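To make that concrete, here is a hedged sketch of the second variant (the connection string, SQL, and the Customer type are all invented for illustration): the data layer yields generic records from an iterator block, and the translation tier turns them into business objects, so the result set is only iterated once.

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class CustomerData
{
    // Data access layer: yields one generic record per row, straight from the reader.
    public static IEnumerable<IDataRecord> GetCustomerRecords(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM Customers", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    yield return reader;   // the caller sees each row exactly once
            }
        }
    }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerTranslator
{
    // Translation tier: turns generic records into strongly-typed business objects.
    public static IEnumerable<Customer> ToCustomers(IEnumerable<IDataRecord> records)
    {
        foreach (var record in records)
        {
            yield return new Customer
            {
                Id = record.GetInt32(record.GetOrdinal("Id")),
                Name = record.GetString(record.GetOrdinal("Name"))
            };
        }
    }
}
```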
There is nothing wrong with passing around datasets but it's not a great practice.
Pros:
Easy to pass around and use in .NET apps
No having to code wrapper classes
Lots of functionality built into DataSets
Cons:
Data type that is not really type safe.
Your data field names can change and every part of your app will still compile fine, only to blow up at runtime.
Heavy object. Dataset does a ton of stuff and you probably don't need 90% of it.
Having non-.NET apps talk to your DAL or BAL is not going to be very clean.
There's nothing wrong about passing DataSets from your DAL to your BAL.
I think this stackoverflow question on DAL best practices sums up the two schools of thought pretty well.
I am in the middle of a "discussion" with a colleague about the best way to implement the data layer in a new application.
One viewpoint is that the data layer should be aware of business objects (our own classes that represent an entity), and be able to work with that object natively.
The opposing viewpoint is that the data layer should be object-agnostic, and purely handle simple data types (strings, bools, dates, etc.)
There is no problem with passing a DataSet across layers. If you look closely, you will notice that a DataSet is passed by reference, not by value, so there is no performance issue here.
Now, what you read is also right, but you have to understand the context: passing the DataSet across remote boundaries is not a recommended practice.
There's nothing fundamentally wrong with doing that. Although the basic idea of having a DAL, BLL and UI layer is that each layer abstracts what's beneath it. E.g. the BLL shouldn't have any knowledge of how the database is structured because the DAL abstracts that away. If a dataset is loaded in the DAL and then passed straight through the BLL to the pages, the BLL sounds pretty pointless.
The strongest statements often seen about DataSet is not to pass it into or out of a web service. That goes beyond exposing implementation details, and includes exposing details of the platform (.NET).
Although it's possible to change "table" and "column" names in a DataSet from those in the underlying database, you're still largely stuck with the underlying structure of the database. To abstract that, I would use Entity Framework. It allows you, for instance, to define a "Customer" entity which takes data from multiple tables and puts it into a single entity. Code using the entity doesn't need to know whether it is implemented as one table, two tables, or whatever.
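As a rough illustration of that kind of mapping (a sketch only, with invented table and column names), Entity Framework 6 Code First can split a single Customer entity across two underlying tables, so callers never see the physical layout:

```csharp
using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Entity splitting: code works with one Customer entity; whether it is backed
        // by one table or two is an implementation detail hidden in the data layer.
        modelBuilder.Entity<Customer>()
            .Map(m =>
            {
                m.Properties(c => new { c.Id, c.Name });
                m.ToTable("CustomerNames");
            })
            .Map(m =>
            {
                m.Properties(c => new { c.Id, c.Street, c.City });
                m.ToTable("CustomerAddresses");
            });
    }
}
```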
Even there, you should not pass these entities outside of a web service boundary. They still pass implementation details outside of the implementation. For instance, properties of the base classes get serialized, even though these are just implementation details.
As far as I've understood, the DataSet requires the db connection to be open, for as long as it is used, which will reduce performance in your application as it keeps the connection open until the content is rendered.
Instead, I recommend using generic collections, such as IEnumerable<myType> or IQueryable<myType>, where myType is a custom type which you fill with your data.

In Memory DataContext

Is it possible to get a LINQ to SQL DataContext to run completely in-memory? Without it touching the database?
I am doing some very rapid prototyping, and want to minimize the surface area for major changes since the UI is changing so fast. However, the data model already exists.
Data access is handled through the use of I[Model]Repository classes that return the actual LINQ to SQL data classes, so I currently have some concrete InMemory[Model]Repository classes that shove stuff in cache. The implementation is a little cumbersome however.
So... is it possible to simply override enough of the DataContext behavior to have it run in-memory and never touch the database? My assumption is that it is not possible, but I thought I would go fishing anyway.
You can only do this if you are prepared to wrap access to the datacontext with your own interface. Then for rapid prototyping you can write your own datacontext alternative that implements this interface and instead uses lists and LINQ to Objects to perform in-memory queries.
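A sketch of what that wrapping might look like (IDataContext and its members are hypothetical, not the real LINQ to SQL API): the in-memory implementation keeps everything in lists and answers queries with LINQ to Objects, while a production implementation of the same interface would simply forward queries to DataContext.GetTable<T>().

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical abstraction that the repositories depend on instead of DataContext.
public interface IDataContext
{
    IQueryable<T> Query<T>() where T : class;
    void Add<T>(T entity) where T : class;
    void SubmitChanges();
}

// In-memory stand-in for rapid prototyping: never touches the database.
public class InMemoryDataContext : IDataContext
{
    private readonly Dictionary<Type, List<object>> _store = new Dictionary<Type, List<object>>();

    public IQueryable<T> Query<T>() where T : class
    {
        return Bucket(typeof(T)).Cast<T>().AsQueryable();   // LINQ to Objects
    }

    public void Add<T>(T entity) where T : class
    {
        Bucket(typeof(T)).Add(entity);
    }

    public void SubmitChanges()
    {
        // Nothing to persist; the data already lives in the lists.
    }

    private List<object> Bucket(Type type)
    {
        List<object> list;
        if (!_store.TryGetValue(type, out list))
        {
            list = new List<object>();
            _store[type] = list;
        }
        return list;
    }
}
```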

Data mapping code or reflection code?

Getting data from a database table to an object in code has always seemed like mundane code. There are two ways I have found to do it:
have a code generator that reads a database table and creates the class and controller to map the data fields to the class fields, or
use reflection to take the database field and find it on the class.
The problems I see with these two methods are:
Method 1 seems to me like I'm missing something, because I have to create a controller for every table.
Method 2 seems to be too labor intensive once you get into heavy data access code.
Is there a third route that I should try to get data from a database onto my objects?
You normally use OR (Object-Relational) mappers in such situations. A good framework providing OR functionality is Hibernate. Does this answer your question?
I think the answer to this depends on the available technologies for the language you are going to use.
I for one am very successful with the use of an ORM (NHibernate) so naturally I may recommend option one.
There are other options that you may wish to take though:
If you are using .NET, you may opt to use attributes for your class properties to serve either as a mapping within a class, or as data that can be reflected
If you are using .NET, Fluent NHibernate will make it quite easy to make type-safe mappings within your code.
You can use generics so that you will not need to make a controller for every table, although I admit you will likely end up making per-table classes anyway. However, the generic base can contain most of the general CRUD methods that are common to all tables, so you only need to code the table-specific quirks (a sketch follows below).
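Here is a hedged sketch of that generics idea (the entity and repository names are made up): a base class carries the common CRUD operations via an NHibernate ISession, and per-table repositories only add whatever is unique to them.

```csharp
using NHibernate;

public class Customer
{
    public virtual int Id { get; set; }      // NHibernate entities typically need virtual members
    public virtual string Name { get; set; }
}

// Generic base repository: the CRUD operations shared by every table.
public class Repository<T> where T : class
{
    protected readonly ISession Session;

    public Repository(ISession session)
    {
        Session = session;
    }

    public T GetById(object id)  { return Session.Get<T>(id); }
    public void Save(T entity)   { Session.Save(entity); }
    public void Update(T entity) { Session.Update(entity); }
    public void Delete(T entity) { Session.Delete(entity); }
}

// Table-specific repository: only the quirks unique to Customer go here.
public class CustomerRepository : Repository<Customer>
{
    public CustomerRepository(ISession session) : base(session) { }
}
```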
I use reflection to map data back and forth and it works well even under heavy data access. The "third route" is to do everything by hand, which may be faster to run but really slow to write.
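As a rough illustration of the reflection approach (a generic helper I'm sketching, not the answerer's actual code), columns can be matched to public properties by name:

```csharp
using System.Data;

public static class RecordMapper
{
    // Copies each column of a data record onto the property of the same name, skipping nulls.
    public static T Map<T>(IDataRecord record) where T : new()
    {
        var item = new T();
        for (int i = 0; i < record.FieldCount; i++)
        {
            var property = typeof(T).GetProperty(record.GetName(i));
            if (property != null && property.CanWrite && !record.IsDBNull(i))
            {
                property.SetValue(item, record.GetValue(i), null);
            }
        }
        return item;
    }
}
```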
I agree with lewap, an ORM (object-relational mapper) really helps in these situations. You may also want to consider the Active Record pattern (discussed in Fowler's Patterns of Enterprise Architecture book). It can really speed up creation of the DAL in simple apps.

How to effectively use DTO objects (Data Transfer Objects)?

What is the best way to implement DTOs?
My understanding is that they are one way to transfer data between objects. For example, in an ASP.Net app, you might use a DTO to send data from the code-behind to the business logic layer component.
What about other options, like just sending the data as method parameters? (Would this be easiest in cases where there is less data to send?)
What about a static class that just holds data, that can be referenced by other objects (a kind of global assembly data storage class)? (Does this break encapsulation too much?)
What about a single generic DTO used for every transfer? It may be a bit more trouble to use, but reduces the number of classes needed to work with (reduces object clutter).
Thanks for sharing your thoughts.
I've used DTO's to:
Pass data between the UI and service tier's of a standard 3-tier app.
Pass data as method parameters to encapsulate a large number (5+) of parameters.
The 'one DTO to rule them all' approach could get messy, best bet is to go with specific DTO's for each feature/feature group, taking care to name them so they're easy to match between the features they're used in.
I've never seen static DTO's in the way you mention and would hesitate at creating DTO singletons like you describe.
I keep it simple and map one DTO class to one db table. They are lightweight so I can send them everywhere, including over the wire.
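For instance, a minimal sketch of such a table-shaped DTO (the names are hypothetical): plain properties, no behaviour, and it doubles as a way to avoid long parameter lists on service methods.

```csharp
using System;

// DTO mirroring a single Customers table: just data, trivially serializable.
[Serializable]
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
    public string City { get; set; }
}

// Instead of RegisterCustomer(int id, string name, string email, string city, ...),
// the service tier accepts the DTO.
public interface ICustomerService
{
    void RegisterCustomer(CustomerDto customer);
}
```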
I wish it were that simple. Although DTOs originated because of the network-distributed tiers of a system, there can be a whole load of issues if domain objects are returned to view layers. Here are some of them:
1. By exposing domain objects to the view layer, views become aware of the structure of the domain objects, which lets a view make assumptions about how related objects are available. For example, if a domain object "Person" was returned to a view to which it is "bound", and on some other view the "Address" of that Person is to be bound, there would be a tendency for the application layer to use something like person.getAddress(), which would fail since at that point the Address domain object might not have been loaded. In essence, once domain objects are available to view layers, views will always make assumptions about how data is made available.
2. When domain objects are bound to views (more so in thick clients), there will always be a tendency for view-centric logic to creep inside these objects, making them logically corrupt.
Basically, from my experience, making domain objects available to views creates architectural issues, but there are issues with DTOs as well, since using DTOs creates additional work: assemblers (DTO to domain objects and back) and a proliferation of analogous objects, like a Patient domain object, a Patient DTO, and perhaps a Patient bean bound to the view.
Clearly there are no right answers for this, especially in a thick-client system.
I borrowed this short, incomplete but true answer to the DTO cliché from:
http://www.theserverside.com/discussions/thread.tss?thread_id=32389#160505
I think it's pretty common to use DataSet/DataTable as the "one DTO to rule them all". It's easy to load them from the database, and persist the values back, and they can be easily serialized.
I would definitely say they are more trouble to use. They do provide all of the plumbing, but programming against them is a pain (lots of casting, null checks, magic strings, etc). It would be interesting to see a good set of extension methods to make working with them a little more "natural".
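For example, one such extension method might look like the sketch below (hypothetical, not an existing framework API); it hides the casting, DBNull checks and magic values behind a single typed accessor.

```csharp
using System;
using System.Data;

public static class DataRowValueExtensions
{
    // Typed accessor with a fallback: row.Value<int>("Age") or row.Value("Name", "unknown").
    public static T Value<T>(this DataRow row, string columnName, T fallback = default(T))
    {
        object raw = row[columnName];
        if (raw == null || raw == DBNull.Value)
        {
            return fallback;
        }
        return (T)Convert.ChangeType(raw, typeof(T));
    }
}
```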
DTOs are used to send data over the wire, not between objects. Check out this post:
POCO vs DTO
Thanks for all the helpful ideas...
A summary + my take on this:
--If there is a small amount of data to move and not too many places to move it, regular parameters may suffice
--If there is a lot of data and/or many objects to move it to, a specially created object may be easiest (DTO object).
--A global data object that can be referenced (rather than passed) by various objects would seem to be frowned upon... however, I wonder if there isn't sometimes a place for it within a particular sub-system. It is one way to reduce the amount of data passing. It does push the limits of "good encapsulation", but in specific instances within specific layers it could add simplicity to a particular assembly of classes. One would lose class-level encapsulation, but could still have assembly-level encapsulation.
