Many answers (and comments) to questions relating to PropertyValueFactory recommend avoiding that class and others like it. What is wrong with using this class?
TL;DR:
You should avoid PropertyValueFactory and similar classes because they rely on reflection and, more importantly, cause you to lose helpful compile-time validations (such as if the property actually exists).
Replace uses of PropertyValueFactory with lambda expressions. For example, replace:
nameColumn.setCellValueFactory(new PropertyValueFactory<>("name"));
With:
nameColumn.setCellValueFactory(data -> data.getValue().nameProperty());
(assumes you're using Java 8+ and you've defined the model class to expose JavaFX properties)
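For reference, here is a minimal sketch of such a model class (a hypothetical Person exposing a single name property, following the JavaFX property naming conventions):
import javafx.beans.property.SimpleStringProperty;
import javafx.beans.property.StringProperty;

public class Person {

    // StringProperty holding the person's name
    private final StringProperty name = new SimpleStringProperty(this, "name");

    public final String getName() {
        return name.get();
    }

    public final void setName(String name) {
        this.name.set(name);
    }

    public final StringProperty nameProperty() {
        return name;
    }
}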
PropertyValueFactory
This class, and others like it, is a convenience class. JavaFX was released during the era of Java 7 (if not earlier). At that time, lambda expressions were not part of the language. This meant JavaFX application developers had to create an anonymous class whenever they wanted to set the cellValueFactory of a TableColumn. It would look something like this:
// Where 'nameColumn' is a TableColumn<Person, String> and Person has a "name" property
nameColumn.setCellValueFactory(new Callback<TableColumn.CellDataFeatures<Person, String>, ObservableValue<String>>() {
    @Override
    public ObservableValue<String> call(TableColumn.CellDataFeatures<Person, String> data) {
        return data.getValue().nameProperty();
    }
});
As you can see, this is pretty verbose. Imagine doing the same thing for 5 columns, 10 columns, or more. So, the developers of JavaFX added convenience classes such as PropertyValueFactory, allowing the above to be replaced with:
nameColumn.setCellValueFactory(new PropertyValueFactory<>("name"));
Disadvantages of PropertyValueFactory
However, using PropertyValueFactory and similar classes comes with its own disadvantages:
Relying on reflection, and
Losing compile-time validations.
Reflection
This is the lesser of the two disadvantages, though it leads directly to the second one.
The PropertyValueFactory takes the name of the property as a String. The only way it can then invoke the methods of the model class is via reflection. You should avoid relying on reflection when you can, as it adds a layer of indirection and slows things down (though in this case, the performance hit is likely negligible).
The use of reflection also means you have to rely on conventions not enforceable by the compiler. In this case, if you do not follow the naming conventions for JavaFX properties exactly, then the implementation will fail to find the needed methods, even when you think they exist.
In a modular application, reflection also requires you to open the package containing the model class to javafx.base; otherwise you will get errors such as this:
java.lang.RuntimeException: java.lang.IllegalAccessException: module javafx.base cannot access class application.Item (in module ProjectReviewerCollection) because module ProjectReviewerCollection does not open application to javafx.base
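If you do want to keep using reflection-based factories in a modular application, the fix is to open the package containing the model class to javafx.base. A sketch of the module descriptor, using the module and package names from the error message above:
// module-info.java
module ProjectReviewerCollection {
    requires javafx.controls;

    // Allow javafx.base to access application.Item reflectively
    opens application to javafx.base;
}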
No Compile-time Validations
Since PropertyValueFactory relies on reflection, Java can only validate certain things at run-time. More specifically, the compiler cannot check during compilation that the property exists or that it has the right type. This makes developing the code harder.
Say you had the following model class:
/*
* NOTE: This class is *structurally* correct, but the method names
* are purposefully incorrect in order to demonstrate the
* disadvantages of PropertyValueFactory. For the correct
* method names, see the code comments above the methods.
*/
public class Person {

    private final StringProperty name = new SimpleStringProperty(this, "name");

    // Should be named "setName" to follow JavaFX property naming conventions
    public final void setname(String name) {
        this.name.set(name);
    }

    // Should be named "getName" to follow JavaFX property naming conventions
    public final String getname() {
        return name.get();
    }

    // Should be named "nameProperty" to follow JavaFX property naming conventions
    public final StringProperty nameproperty() {
        return name;
    }
}
Having something like this would compile just fine:
TableColumn<Person, Integer> nameColumn = new TableColumn<>("Name");
nameColumn.setCellValueFactory(new PropertyValueFactory<>("name"));
nameColumn.setCellFactory(tc -> new TableCell<Person, Integer>() {
    @Override
    protected void updateItem(Integer item, boolean empty) {
        super.updateItem(item, empty);
        if (empty || item == null) {
            setText(null);
        } else {
            setText(item.toString());
        }
    }
});
But there will be two issues at run-time.
The PropertyValueFactory won't be able to find the "name" property, so the column will fail at run-time (in practice it logs a warning and the cells stay empty). This is because the methods of Person do not follow the property naming conventions; in this case, they fail to follow the camelCase pattern. The methods should be:
getname → getName
setname → setName
nameproperty → nameProperty
Fixing this problem will fix this error, but then you run into the second issue.
The call to updateItem(Integer item, boolean empty) will throw a ClassCastException, saying a String cannot be cast to an Integer. We've "accidentally" (in this contrived example) created a TableColumn<Person, Integer> when we should have created a TableColumn<Person, String>.
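For completeness, once the method names follow the conventions (getName, setName, nameProperty) and the column is declared with the correct value type, the example above would look roughly like this:
TableColumn<Person, String> nameColumn = new TableColumn<>("Name");
nameColumn.setCellValueFactory(new PropertyValueFactory<>("name"));
// The default cell factory already renders Strings, so the custom
// TableCell from the example is no longer needed.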
What Should You Use Instead?
You should replace uses of PropertyValueFactory with lambda expressions, which were added to the Java language in version 8.
Since Callback is a functional interface, it can be used as the target of a lambda expression. This allows you to rewrite this:
// Where 'nameColumn' is a TableColumn<Person, String> and Person has a "name" property
nameColumn.setCellValueFactory(new Callback<TableColumn.CellDataFeatures<Person, String>, ObservableValue<String>>() {
    @Override
    public ObservableValue<String> call(TableColumn.CellDataFeatures<Person, String> data) {
        return data.getValue().nameProperty();
    }
});
As this:
nameColumn.setCellValueFactory(data -> data.getValue().nameProperty());
That is basically as concise as the PropertyValueFactory approach, but with neither of the disadvantages discussed above. For instance, if you forgot to define Person#nameProperty(), or if it did not return an ObservableValue<String>, the error would be detected at compile time, forcing you to fix the problem before your application can run.
The lambda expression even gives you more freedom, such as being able to use expression bindings.
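For example, a column can derive its value from several properties on the fly (a sketch, assuming the model also exposes hypothetical firstNameProperty() and lastNameProperty() methods):
// fullNameColumn is a TableColumn<Person, String>
fullNameColumn.setCellValueFactory(data ->
        data.getValue().firstNameProperty()
                .concat(" ")
                .concat(data.getValue().lastNameProperty()));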
Disadvantages
There are a couple of disadvantages, though both are small.
The "number properties", such as IntegerProperty and DoubleProperty, all implement ObservableValue<Number>. This means you either have to:
Use Number instead of e.g., Integer as the column's value type. This is not too bad, since you can call e.g., Number#intValue() as needed (a short sketch follows this list).
Or use e.g., IntegerProperty#asObject(), which returns an ObjectProperty<Integer>. The other "number properties" have a similar method.
column.setCellValueFactory(data -> data.getValue().someIntegerProperty().asObject());
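And for the first option, the column is simply declared with Number as its value type (a sketch, assuming a hypothetical ageProperty() returning an IntegerProperty):
TableColumn<Person, Number> ageColumn = new TableColumn<>("Age");
// IntegerProperty implements ObservableValue<Number>, so no conversion is needed
ageColumn.setCellValueFactory(data -> data.getValue().ageProperty());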
The implementation of the Callback cannot be defined in FXML. By contrast, a PropertyValueFactory can be declared in FXML.
Kotlin
If you're using Kotlin, then the lambda may look something like this:
nameColumn.setCellValueFactory { it.value.nameProperty }
Assuming you defined the appropriate Kotlin properties in the model class. See this Stack Overflow answer for details.
Records
If the data in your TableView is read-only, then you can use a record, which is a special kind of class.
For a record, you cannot use a PropertyValueFactory and must use a custom cell value factory (e.g. a lambda).
The naming strategy for record accessor methods differs from the standard JavaBeans naming strategy. For example, for a component called name, the standard JavaBeans accessor expected by PropertyValueFactory would be getName(), but for a record, the accessor is just name(). Because records don't follow the naming conventions required by PropertyValueFactory, a PropertyValueFactory cannot be used to access data stored in records.
However, the lambda approach detailed in this answer will be able to access the data in the record just fine.
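For illustration, a minimal sketch of that approach, using a hypothetical read-only Person record and wrapping the accessor's value in an ObservableValue:
public record Person(String name) {}

// Elsewhere, when configuring the table
// (ReadOnlyStringWrapper is from javafx.beans.property):
TableColumn<Person, String> nameColumn = new TableColumn<>("Name");
nameColumn.setCellValueFactory(data ->
        new ReadOnlyStringWrapper(data.getValue().name()));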
Further information and an example of using a record with a cell value factory for a TableView can be found at:
How do you use a JavaFX TableView with java records?
Related
I am new to Flutter and Dart, coming from native Android.
Android has a very nice database abstraction architecture called the Room Persistence Library. As far as I am aware, no such database abstraction architecture exists for Flutter using the MVVM / MVC design patterns.
My solution was to create a Dart version of it myself. I got it pretty much done after a few headaches, but I cannot seem to get LiveData to work properly using generics.
I set up my class like this:
class LiveData<T> {
...
}
Now when I want to return some data, it can either be an Object or List<Object>. I found a neat hack for differentiating the two from T:
...
// Parse response
// This checks if the type is an instance of a single entity or a list.
if (entity is T) {
  cachedData = rawData.isEmpty ? null : entity.fromMap(rawData.first) as T;
} else {
  cachedData = rawData.map((e) => entity.fromMap(e)).toList() as T;
}
...
The problem lies in the second block:
cachedData = rawData.map((e) => entity.fromMap(e)).toList() as T;
With the error:
- Unhandled Exception: type 'List<Entity>' is not a subtype of type 'List<Vehicle>' in type cast
The question then becomes: how can I cast Entity to Vehicle when I do not have access to the Vehicle class? Only an instance of it is assigned to an Entity variable.
Here's a snippet to demonstrate my access to Vehicle:
final Entity entity;
...assign Vehicle instance to entity...
print(entity is Vehicle); // true
I've tried using .runtimeType to no avail. I have also thought about splitting LiveData into two classes, the second one being LiveDataList. Although this seems to be the easiest solution that wouldn't bug the code, it would bug me (bad pun intended) and break the otherwise pretty direct port of Room.
As a temporary solution, I have abstracted out the build logic into a generic function to be passed to the LiveData in the constructor.
final T Function(List<Map<String, dynamic>> rawData) builder;
And now I call that instead of the previous code to build the cachedData.
// Parse response
cachedData = builder(rawData);
With the constructor for the LiveData<List<Vehicle>> called when accessing all vehicles in the Dao<Vehicle> being:
class VehicleDao implements Dao<Vehicle> {
  ...
  static LiveData<List<Vehicle>> get() {
    return LiveData<List<Vehicle>>(
      ...
      (rawData) => rawData.map((e) => Vehicle.fromMap(e)).toList(),
      ...
    );
  }
}
In Dart (and indeed in many languages), generics interact awkwardly with inheritance. You would think that if Bar inherits from Foo, then List<Bar> would also be castable to List<Foo>.
This is not actually going to be the case because of how generics work. When you have a generic class, every time you use that class with a different type, that type is treated as a completely separate class. This is because when the compiler compiles those types, class MyGenericType<Foo> extends BaseClass and class MyGenericType<Bar> extends BaseClass are basically converted to something like class MyGenericType_Foo extends BaseClass and class MyGenericType_Bar extends BaseClass.
Do you see the problem? MyGenericType_Foo and MyGenericType_Bar are not descendants of one another. They are siblings of each other, both extending from BaseClass. This is why when you try to convert a List<Entity> to List<Vehicle>, the cast doesn't work because they are sibling types, not a supertype and subtype.
With all this being said, while you cannot directly cast one generic type to another based on the relationship of the generic type parameter, in the case of List there is a way to convert one List type to another: the cast method.
List<Entity> entityList = <Entity>[...];
List<Vehicle> vehicleList = entityList.cast<Vehicle>(); // This cast will work
One thing to note, though: if you are casting from a supertype generic to a subtype generic and not all the elements of the list are of that new type, the cast will throw an error.
I'm writing a JFace dialog, and I'd like to use databinding to a model object.
Looking at existing code, I see that sometimes PojoProperties is used to build the binding, while other times PojoObservables is used.
Looking at the Javadoc I can read:
PojoObservables: A factory for creating observable objects for POJOs (plain old java objects) that conform to idea of an object with getters and setters but does not provide property change events on change.
PojoProperties: A factory for creating properties for POJOs (plain old Java objects) that conform to idea of an object with getters and setters but does not provide property change events on change.
The same question applies to the difference between BeansObservables and BeanProperties.
The (obvious) difference seems to be that the observables factory lets you observe objects while the properties factory lets you observe properties, but since a POJO has a getter and a setter for its data, what is the difference between them? And which of them should I choose for my dialog?
Here follows a code excerpt:
The POJO:
public class DataObject {

    private String m_value;

    public String getValue() {
        return m_value;
    }

    public void setValue(String i_value) {
        m_value = i_value;
    }
}
The DIALOG (relevant part):
@Override
protected Control createDialogArea(Composite parent) {
    Composite container = (Composite) super.createDialogArea(parent);
    m_combo = new Combo(container, SWT.BORDER);
    m_comboViewer = new ComboViewer(container, SWT.NONE);
    return container;
}
The BINDING (relevant part):
// using PojoObservable
IObservableValue observeValue = PojoObservables.observeValue(m_dataObject, "value");
IObservableValue observeWidget = SWTObservables.observeSelection(m_combo);
// using PojoProperties
IObservableValue observeValue = PojoProperties.value("value").observe(m_dataObject);
IObservableValue observeWidget = ViewerProperties.singleSelection().observe(m_comboViewer);
I understand that one time I'm using a combo and another I'm using a ComboViewer, but I can get the combo from the viewer and bind the other way if I need...
Also, can I mix the two, for example use the observeValue with the ViewerProperties?
IObservableValue observeValue = PojoObservables.observeValue(m_dataObject, "value");
IObservableValue observeWidget = ViewerProperties.singleSelection().observe(m_comboViewer);
I am playing around a little with JFace viewers (especially ComboViewer) & databinding and discovered that if I use
SWTObservables.observeSelection(comboViewer.getCombo());
then databinding is not working correctly.
However, if I use
ViewersObservables.observeSingleSelection(comboViewer);
Then everything is working as expected.
Maybe this is specific to my case, so to give a better overview I'll describe my setup in the following paragraphs.
I have a modelObject with fields named selectedEntity and entities, and I bind the ComboViewer to the modelObject.
I want to display all "entities" in the model object; if I add an entity to the modelObject.entities collection, I want that entity to be added to the combo automatically.
If the user selects an item in the combo, I want modelObject.selectedEntity to be set automatically.
If I set modelObject.selectedEntity, I want the combo selection to be set automatically.
Source code can be found at: https://gist.github.com/3938502
Since Eclipse Mars, PojoObservables is deprecated in favor of PojoProperties and BeansObservables is deprecated in favor of BeanProperties, so the answer to which one should be used has become evident.
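To illustrate, the non-deprecated, property-based style from the question would then be wired up roughly like this (a sketch, reusing the m_dataObject and m_comboViewer fields from the question and assuming a DataBindingContext):
DataBindingContext bindingContext = new DataBindingContext();

IObservableValue observeValue =
        PojoProperties.value("value").observe(m_dataObject);
IObservableValue observeWidget =
        ViewerProperties.singleSelection().observe(m_comboViewer);

// Binds the viewer's selection to the POJO's "value" property in both directions
bindingContext.bindValue(observeWidget, observeValue);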
I am solidifying my understanding of the relationship between the Liskov Substitution Principle and the Open/Closed Principle. If anybody could confirm my deductions and answer my questions below, that would be great.
I have the following classes. As you can see, B is derived from A and it is overriding the DisplayMessage function in order to alter the behavior.
public class A
{
    private readonly string _message;

    public A(string message)
    {
        _message = message;
    }

    public virtual void DisplayMessage()
    {
        Console.WriteLine(_message);
    }
}

public class B : A
{
    public B(string message) : base(message) {}

    public override void DisplayMessage()
    {
        Console.WriteLine("I'm overwriting the expected behavior of A::DisplayMessage() and violating LSP >:-D");
    }
}
Now in my bootstrap program, ShowClassType is expecting an object of type A which should helpfully write out what class type it is. However, B violates LSP, so when its DisplayMessage function is called it prints a completely unexpected message and essentially interferes with the intended purpose of ShowClassType.
class Program
{
    static void Main(string[] args)
    {
        A a = new A("I am A");
        B b = new B("I am B");

        ShowClassType(b);

        Console.ReadLine();
    }

    private static void ShowClassType(A model)
    {
        Console.WriteLine("What Class are you??");
        model.DisplayMessage();
    }
}
So my question is, am I right to conclude that ShowClassType now violates the Open/Closed Principle, because now that type B can come in and change the expected behaviour of that method, it is no longer closed for modification (i.e. to ensure it maintains its expected behaviour, you would have to alter it so that it first checks it is only working with an original A object)?
Or, inversely, is this just a good example showing that ShowClassType is closed for modification and that, by passing in a derived type (albeit an LSP-violating one), we have extended what it is meant to do?
Lastly, is it bad practice to create virtual functions on base classes if the base class is not abstract? By doing so, are we not just inviting derived classes to violate the Liskov Substitution Principle?
Cheers
I'd say it's not ShowClassType that is violating the Open/Closed Principle.
It's only class B that is violating the Liskov Substitution Principle. A is Open for extension, but closed for modification.
From Wikipedia,
an entity can allow its behaviour to be modified without altering its source code.
It's obvious that the source code of A is not modified. Nor are private members of A being used (which would also be a violation of the Open/Closed principle in my book). B strictly uses the public interface of A, so although the Open/Closed principle is obeyed the Liskov Substitution Principle is violated.
The last question is worth a discussion in and of itself. A related question on SO is here.
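For contrast, here is a minimal Java sketch (hypothetical classes, not from the question) of an override that extends behaviour while still honouring the base-class contract, which is what LSP asks of subclasses:
class Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// Adds behaviour without breaking what callers of Greeter expect
class PoliteGreeter extends Greeter {
    @Override
    public String greet(String name) {
        return super.greet(name) + ". Nice to meet you!";
    }
}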
I think it violates neither LSP nor OCP in THIS context of use.
In my opinion, ShowClassType does not violate OCP:
1. A function cannot break OCP; only the class architecture can do that.
2. You can add new behaviours to classes derived from A, so it does not break OCP.
What about LSP? Your reasoning is that the user did not expect to get this message, but they still got some message. If an overriding function returns some message, I think that is OK in THIS context of your code.
If a function that adds two numbers is overridden and 1 + 1 returns 678, that is unexpected to me and is bad. BUT, for a physicist from the planet Mars it could be a good answer.
DO NOT ANALYSE A PROBLEM WITHOUT ALL OF ITS CONTEXT!!! You must see the whole picture of the problem. And, of course...
I am using POCO classes on an EF4 CTP5 project and I am having trouble deleting child properties. Here's my example (hopefully not too long).
Relevant Portions of the Tour Class
public partial class Tour
{
    public Guid TourId { get; private set; }

    protected virtual List<Agent> _agents { get; set; }

    public void AddAgent(Agent agent)
    {
        _agents.Add(agent);
    }

    public void RemoveAgent(Guid agentId)
    {
        var agent = Agents.Single(x => x.AgentId == agentId);
        _agents.Remove(agent);
    }
}
Command Handler
public class DeleteAgentCommandHandler : ICommandHandler<DeleteAgentCommand>
{
private readonly IRepository<Core.Domain.Tour> _repository;
private readonly IUnitOfWork _unitOfWork;
public DeleteAgentCommandHandler(
IRepository<Core.Domain.Tour> repository,
IUnitOfWork unitOfWork
)
{
_repository = repository;
_unitOfWork = unitOfWork;
}
public void Receive(DeleteAgentCommand command)
{
var tour = _repository.GetById(command.TourId);
tour.RemoveAgent(command.AgentId);
// The following line just ends up calling
// DbContext.SaveChanges(); on the current context.
_unitOfWork.Commit();
}
}
Here's the error that I get when my UnitOfWork calls DbContext.SaveChanges()
The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted.
This is happening because EF won't automatically delete an Agent entity from the database just because it has been removed from the Agents collection in my Tour class.
I need to explicitly call dbContext.Agents.DeleteObject(a);, but my problem is, I don't have access to the dbContext from within my POCO.
Is there any way to handle this scenario?
With your current architecture I am afraid you need to feed your DeleteAgentCommandHandler with a second repository (IRepository<Core.Domain.Agent>, I guess) and then call something like Delete(command.AgentId) on that second repository.
Or you could extend your IUnitOfWork to be a factory of repositories, so the interface would get an additional method like T CreateRepository<T>() which allows you to pull any instance of your generic repository from the unit of work. (Then you only need to inject IUnitOfWork into the DeleteAgentCommandHandler, and not the repositories anymore.)
Or stay away from generic repositories in your business/UI layer. If Agent is completely dependent on Tour it doesn't need to have a repository at all. A non-generic ITourRepository could have methods to handle the case of removing an agent from a tour in the database layer appropriately.
This does seem like something that should work. I've found this post which suggests this feature is being investigated for future versions:
http://social.msdn.microsoft.com/Forums/en-US/adonetefx/thread/58a31f34-9d2c-498d-aff3-fc96988a3ddc/
I've also found another post (somewhere, unfortunately I lost it) which suggested adding the parent entity's key to the child entity in your DbContext OnModelCreating method, like this:
modelBuilder.Entity<Agent>()
    .HasKey(a => new { a.AgentId, a.TourId });
Currently this throws an exception at runtime using code-first, although I have got this working when using an EDMX file by hacking the XML to include the parent key in the store data model as well as the conceptual data model. I think this difference in behaviour is because, in the case of the EDMX file, EF trusts that the store metadata it holds is accurate, whereas code-first checks the database to see whether its model matches.
Another way which may work, although I haven't tried it yet, is to include the parent key as part of a compound key in the child table so that code-first is happy. Obviously, changing the database or hacking the XML are both less than ideal and workarounds at best.
Pardon me if this question has already been asked. HttpContext.Current.Session["key"] returns an object and we would have to cast it to that particular Type before we could use it. I was looking at various implementations of typed sessions
http://www.codeproject.com/KB/aspnet/typedsessionstate.aspx
http://weblogs.asp.net/cstewart/archive/2008/01/09/strongly-typed-session-in-asp-net.aspx
http://geekswithblogs.net/dlussier/archive/2007/12/24/117961.aspx
and I felt that we needed to add some more code (correct me if I'm wrong) to the SessionManager if we wanted to add a new type of object into session, either as a method or as a separate wrapper. I thought we could use generics:
public static class SessionManager<T> where T : class
{
    public static void SetSession(string key, T objToStore)
    {
        HttpContext.Current.Session[key] = objToStore;
    }

    public static T GetSession(string key)
    {
        return HttpContext.Current.Session[key] as T;
    }
}
Is there any inherent advantage in
using
SessionManager<ClassType>.GetSession("sessionString")
over using
HttpContext.Current.Session["sessionString"] as ClassType
I was also thinking it would be nice
to have something like
SessionManager["sessionString"] = objToStoreInSession,
but found that a static class cannot have an indexer. Is there any other way to achieve this?
My thought was to create a SessionObject which would store the Type and the object, then add this object to Session (using a SessionManager) with the key. When retrieving, cast all objects to SessionObject, get the type (say t) and the object (say obj), then cast obj as t and return it.
public class SessionObject
{
    public Type type { get; set; }
    public object obj { get; set; }
}
This would not work either (the method signature would be the same, but the return types would be different).
Is there any other elegant way of saving/retrieving objects in session in a more type-safe way?
For a very clean, maintainable, and slick way of dealing with Session, look at this post. You'll be surprised how simple it can be.
A downside of the technique is that consuming code needs to be aware of what keys to use for storage and retrieval. This can be error prone, as the key needs to be exactly correct, or else you risk storing in the wrong place, or getting a null value back.
I actually use the strongly-typed variation, since I know what I need to have in the session, and can thus set up the wrapping class to suit. I'd rather have the extra code in the session class, and not have to worry about the key strings anywhere else.
You can simply use a singleton pattern for your session object. That way you can model your entire session from a single composite structure object. This post refers to what I'm talking about and discusses the Session object as a weakly typed object: http://allthingscs.blogspot.com/2011/03/documenting-software-architectural.html
Actually, if you were looking to type objects, place the type at the method level, like:
public T GetValue<T>(string sessionKey)
{
    return (T)HttpContext.Current.Session[sessionKey];
}
Putting the type at the class level makes more sense if you always store the same type of object, but a session can hold multiple types. I don't know that I would worry about controlling the session; I would just let it do what it has done for a while, and simply provide a means to extract and save information in a more strongly-typed fashion (at least to the consumer).
Yes, an indexer wouldn't work on a static class; you could create the manager as an instance instead, and expose it statically like this:
public class SessionManager
{
    private static SessionManager _instance = null;

    public static SessionManager Create()
    {
        if (_instance != null)
            return _instance;

        // Should use a lock when creating the instance
        _instance = new SessionManager();
        return _instance;
    }

    public object this[string key]
    {
        // Wraps the underlying ASP.NET session
        get { return HttpContext.Current.Session[key]; }
    }
}
And so this is the static factory implementation, but it also maintains a single point of contact via a static reference to the SessionManager class internally. Each method in SessionManager could wrap the existing ASP.NET session, or use your own internal storage.
I posted a solution on the Stack Overflow question "Is it a good idea to create an enum for the key names of session values?"
I think it is really slick and contains very little code to make it happen. It needs .NET 4.5 to be the slickest, but is still possible with older versions.
It allows:
int myInt = SessionVars.MyInt;
SessionVars.MyInt = 3;
to work exactly like:
int myInt = (int)Session["MyInt"];
Session["MyInt"] = 3;