In Java 8, default Stream<E> stream() was added to the Collection interface to support streams. Similar functionality is provided by public static<T> Stream<T> of(T t) in the Stream class. What different purpose does the static of() method serve?
The method stream() of the Collection interface can be called on an existing collection. Being an interface method, it can be overridden by actual collection implementations to return a Stream adapted to the specific collection type.
The standard collections of the JRE don’t take this opportunity, as the default implementation’s strategy of delegating to spliterator(), which is also an overridable method, suits their needs. But the documentation even mentions scenarios in which the collection should override stream():
This method should be overridden when the spliterator() method cannot return a spliterator that is IMMUTABLE, CONCURRENT, or late-binding.
In contrast, the static factory method(s) Stream.of(…) are designed for the case when you have a fixed number of elements without a specific collection. There is no need to create a temporary Collection when all you want is a single Stream over the elements you can enumerate.
Without collections of potentially different type, there is no need for overridable behavior, hence, a static factory method is sufficient.
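For instance, a minimal sketch of that case, with a handful of values known up front (names are made up):
Stream<String> names = Stream.of("Alice", "Bob", "Carol");  // no temporary collection needed
long count = names.filter(n -> n.startsWith("A")).count();  // ordinary stream operations follow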
Note that even if you don’t have an enumerable fixed number of elements, there is an optimized solution for the task of creating a single stream when no reusable collection is needed:
Stream.Builder<MyType> builder = Stream.builder();
builder.add(A);
if (condition) builder.add(B);
builder.add(C);
builder.build()./* stream operations */
As far as I can tell, of is just a utility method to create Streams on the fly, without the need to wrap your elements in a collection first.
Generally, static factory methods such as of are provided to skip the array creation that var-args calls incur. For example, the java-9 immutable collections provide many overloads of these methods, like:
Set.of()
Set.of(E)
Set.of(E e1, E e2)
... and so on, with fixed-arity overloads for up to 10 elements, plus the var-args fallback:
Set.of(E... elem)
Even the description of those methods is:
While this introduces some clutter in the API, it avoids array allocation, initialization, and garbage collection overhead that is incurred by varargs calls
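To make that trade-off concrete, a small sketch (element counts chosen arbitrarily):
Set<String> small = Set.of("a", "b");                           // matched by the dedicated two-element overload, no array allocated
Set<Integer> large = Set.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11); // more than ten elements, falls back to Set.of(E...)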
Since there are only two such methods in Stream:
Stream.of(T t)
Stream.of(T... values)
I consider these to be small utility methods that create Streams from var-args.
But they still provide a method that creates a Stream with a single element (instead of leaving just the var-args method), so for a single element this is already optimized.
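The same distinction in a quick sketch:
Stream<String> single  = Stream.of("only");          // single-element overload, no var-args array created
Stream<String> several = Stream.of("a", "b", "c");   // var-args overload, an array is allocated behind the scenes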
An interface can have static methods and default methods since jdk-8, and you can see some examples of applying patterns in interfaces in this question.
First, they are both calling StreamSupport.stream to create a stream.
// Collection.stream()
default Stream<E> stream() {
    return StreamSupport.stream(spliterator(), false);
}

// Stream.of(T t)
public static<T> Stream<T> of(T t) {
    return StreamSupport.stream(new Streams.StreamBuilderImpl<>(t), false);
}
The stream() method added to the Collection interface is a good example of applying the Template Method pattern in an interface via default methods. You can see in the source code that stream() calls spliterator(), which also has a default implementation in the Collection interface.
default Stream<E> stream() {
    return StreamSupport.stream(spliterator(), false);
}
default Spliterator<E> spliterator() {
    return Spliterators.spliterator(this, 0);
}
And a class derived from Collection can override spliterator() to provide a different, higher-performance algorithm instead. For example, the spliterator() in ArrayList:
public Spliterator<E> spliterator() {
    return new ArrayListSpliterator<>(this, 0, -1, 0);
}
Finally, the Stream.of() method is a good example of applying the Factory Method pattern in an interface via a static method: it is a factory method for creating a stream from an object instance.
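A rough sketch of that pattern outside the JDK (Shape here is a made-up interface, not a real API), just to show what a static factory method in an interface looks like:
interface Shape {
    double area();

    // Static factory method: callers obtain a Shape without ever naming an implementation class,
    // just like Stream.of(...) hands back some Stream implementation.
    static Shape circle(double radius) {
        return () -> Math.PI * radius * radius;
    }
}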
The other answers clearly explain the differences between Collection.stream and Stream.of, when to use one or the other, which design patterns are being applied, etc. @Holger goes even further and shows a sample usage of Stream.Builder, which I think is highly under-used.
Here I want to complement the other answers by showing a mixed usage of both Stream.of and Collection.stream methods. I hope that this example would be clear enough to show that even if both Stream.of and Collection.stream are completely different methods, they can also be used together to fulfil a more complex requirement.
Suppose you have N lists, all of them containing elements of the same type:
List<A> list1 = ...;
List<A> list2 = ...;
...
List<A> listN = ...;
And you want to create one stream with the elements of all lists.
You could create a new empty list and add the elements of all the lists into this new list:
int newListSize = list1.size() + list2.size() + ... + listN.size();
List<A> newList = new ArrayList<>(newListSize);
newList.addAll(list1);
newList.addAll(list2);
...
newList.addAll(listN);
Then, you could call stream() on this list and you would be done:
Stream<A> stream = newList.stream();
However, you would be creating an intermediate, pointless list, with the sole purpose of streaming the elements of the original list1, list2, ..., listN lists.
A much better approach is to use Stream.of:
Stream<A> stream = Stream.of(list1, list2, ..., listN)
.flatMap(Collection::stream);
This first creates a stream of lists by enumerating each one of them and then flat-maps this stream of lists into a stream of all lists' elements by means of the Stream.flatMap operation. Thus, Collection.stream is called when flat-mapping the original stream.
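A quick illustration, assuming the lists hold Strings (the names and values are made up):
List<String> list1 = List.of("a", "b");
List<String> list2 = List.of("c");
List<String> list3 = List.of("d", "e");

// One stream over all elements, no intermediate list is ever built
List<String> all = Stream.of(list1, list2, list3)
        .flatMap(Collection::stream)
        .collect(Collectors.toList());   // [a, b, c, d, e]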
Related
I am new to Flutter and Dart, coming from native Android.
Android has a very nice database abstraction architecture called the Room Persistence Library. As far as I am aware, no such database abstraction architecture exists for Flutter using the MVVM / MVC design patterns.
My solution was to create a Dart version of it myself. I got it pretty much done after a few headaches, but I cannot seem to get LiveData to work properly using generics.
I set up my class like this:
class LiveData<T> {
...
}
Now when I want to return some data, it can either be an Object or List<Object>. I found a neat hack for differentiating the two from T:
...
// Parse response
// This checks if the type is an instance of a single entity or a list.
if (entity is T) {
  cachedData = rawData.isEmpty ? null : entity.fromMap(rawData.first) as T;
} else {
  cachedData = rawData.map((e) => entity.fromMap(e)).toList() as T;
}
...
The problem lies in the second block:
cachedData = rawData.map((e) => entity.fromMap(e)).toList() as T;
With the error:
- Unhandled Exception: type 'List<Entity>' is not a subtype of type 'List<Vehicle>' in type cast
The question then becomes: how can I cast Entity to Vehicle when I do not have access to the Vehicle class? Only an instance of it is assigned to an Entity entity variable.
Here's a snippet to demonstrate my access to Vehicle:
final Entity entity;
...assign Vehicle instance to entity...
print(entity is Vehicle); // True
I've tried using .runtimeType to no avail. I have also thought about splitting LiveData into two classes, the second one being LiveDataList. Although this seems to be the easiest solution that doesn't bug the code, it would bug me (bad pun intentional) and break the otherwise pretty direct port of Room.
As a temporary solution, I have abstracted out the build logic into a generic function to be passed to the LiveData in the constructor.
final T Function(List<Map<String, dynamic>> rawData) builder;
And now I call that instead of the previous code to build the cachedData.
// Parse response
cachedData = builder(rawData);
With the constructor for the LiveData<List<Vehicle>> called when accessing all vehicles in the Dao<Vehicle> being:
class VehicleDao implements Dao<Vehicle> {
  ...
  static LiveData<List<Vehicle>> get() {
    return LiveData<List<Vehicle>>(
      ...
      (rawData) => rawData.map((e) => Vehicle.fromMap(e)).toList(),
      ...
    );
  }
}
In Dart (and indeed in many languages), generics interferes with the concept of inheritance. You would think that if Bar inherits from Foo, then List<Bar> would also be castable to List<Foo>.
This is not actually the case because of how generics work. When you have a generic class, each use of that class with a different type argument is treated as a completely separate class. Conceptually, class MyGenericType<Foo> extends BaseClass and class MyGenericType<Bar> extends BaseClass are handled as if they were class MyGenericType_Foo extends BaseClass and class MyGenericType_Bar extends BaseClass.
Do you see the problem? MyGenericType_Foo and MyGenericType_Bar are not descendants of one another. They are siblings of each other, both extending from BaseClass. This is why when you try to convert a List<Entity> to List<Vehicle>, the cast doesn't work because they are sibling types, not a supertype and subtype.
With all this being said, while you cannot directly cast one generic type to another based on the relationship of the generic type parameter, in the case of List there is a way to convert one List type to another: the cast method.
List<Entity> entityList = <Entity>[...];
List<Vehicle> vehicleList = entityList.cast<Vehicle>(); // This cast will work
One thing to note, though: if you are casting from a supertype element type to a subtype and not all the elements of the list are actually that subtype, the cast will throw an error when those elements are accessed.
I created the following code.
My objective is to check the use of iterator to remove an element while reading the collection.
Can anyone please explain why a ConcurrentModificationException is thrown when the collection of values of a HashMap is added to a LinkedList after creating the iterator for that list, but not when the iterator is obtained after adding the collection to the list?
I know the reason is probably simple and readily available, but I just want to confirm whether my understanding is correct.
There are 2 other points.
1. As HashMap is not thread-safe, is that why we can't add any element to it while the iterator is reading?
2. If yes, then how are we able to remove elements from the map?
package com.Users;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedList;

public class Temp {
    public static void main(String arg[]) {
        HashMap<String, String> map = new HashMap<>();
        map.put("A", "B");
        map.put("C", "D");
        LinkedList<String> list = new LinkedList<>();
        // ****** ConcurrentModificationException is thrown with this ordering; swapping line 1 and line 2 avoids it ******
        Iterator<String> iterator = list.iterator(); // line no. 1
        list.addAll(map.values());                   // line no. 2
        // ******************
        while (iterator.hasNext()) {
            if (iterator.next().equals("B")) {
                iterator.remove();
            }
        }
        System.out.println(list);
    }
}
Iterators over collections such as HashMap and LinkedList are fail-fast in nature, which essentially means they abort the operation as soon as possible, exposing failures immediately.
Collections maintain an internal counter called modCount. Whenever an item is added or removed from the Collection, this counter gets modified.
When iterating, on each next() call, the current value of modCount gets compared with the initial value. If there’s a mismatch, it throws ConcurrentModificationException which aborts the entire operation.
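A deliberately tiny, hypothetical collection that shows this mechanism (the real JDK iterators do the same comparison, just with more machinery):
import java.util.ConcurrentModificationException;
import java.util.Iterator;

class FailFastList<E> implements Iterable<E> {
    private final java.util.ArrayList<E> data = new java.util.ArrayList<>();
    private int modCount;                          // bumped on every structural change

    void add(E e) {
        data.add(e);
        modCount++;
    }

    @Override
    public Iterator<E> iterator() {
        final int expectedModCount = modCount;     // snapshot taken when the iterator is created
        return new Iterator<E>() {
            private int cursor;

            @Override
            public boolean hasNext() {
                return cursor < data.size();
            }

            @Override
            public E next() {
                if (modCount != expectedModCount)  // the list changed after this iterator was created
                    throw new ConcurrentModificationException();
                return data.get(cursor++);
            }
        };
    }
}
This is also why the ordering of line 1 and line 2 in your example matters: if the iterator is obtained before addAll, its snapshot is already stale by the time next() is called.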
As HashMap is not thread-safe, is that why we can't add any element to it while the iterator is reading?
We can't modify the collection while we are iterating over it, because most of the Collections, such as ArrayList and HashMap, have fail-fast iterators by default.
If yes, then how are we able to remove elements from the map?
We are able to remove elements in your example because we are using the iterator's own remove() method, iterator.remove().
If we had used the collection's remove() method instead, it would have thrown ConcurrentModificationException as well.
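A quick sketch of that difference:
List<String> list = new LinkedList<>(Arrays.asList("A", "B", "C"));
Iterator<String> it = list.iterator();
while (it.hasNext()) {
    if (it.next().equals("A")) {
        it.remove();          // OK: the iterator updates its own expected modCount
        // list.remove("A");  // would make the next call to it.next() throw ConcurrentModificationException
    }
}
System.out.println(list);     // [B, C]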
Do read this for a detailed explanation.
In Vaadin 8 Framework, and Vaadin 10 Flow, the data-binding capability lets us provide a Converter to mediate between the widget’s expected data type (such as String for a TextField) and the data type of the backing bean property (such as Integer number).
In this example, the built-in Converter implementation StringToIntegerConverter is used.
binder
    .forField( this.phaseField )
    .withConverter(
        new StringToIntegerConverter( "Must enter an integer number" )
    )
    .bind( Panel::getPhase , Panel::setPhase ) ;
But what about defining a Converter for other types? How can I easily define a short-and-sweet Converter? For example, a String-to-UUID converter. I want to show the canonical 36-character hex string in a TextField, and going the other direction, parse that string back into a UUID.
// String to UUID
UUID uuid = UUID.fromString( myString ) ;
// UUID to String
String myString = uuid.toString() ;
I see that Binder.BindingBuilder offers the pair of methods withConverter that both take a pair of SerializableFunction objects.
Binder.BindingBuilder::withConverter(SerializableFunction<TARGET,NEWTARGET> toModel, SerializableFunction<NEWTARGET,TARGET> toPresentation)
Binder.BindingBuilder::withConverter(SerializableFunction<TARGET,NEWTARGET> toModel, SerializableFunction<NEWTARGET,TARGET> toPresentation, String errorMessage)
➥ So how do I define the pair of SerializableFunction objects/classes?
I noticed that this interface lists a known subinterface ValueProvider<SOURCE,TARGET>. That looks familiar, and I have a hunch it is the key to easily defining a short simple converter. But I do not quite comprehend the syntax with lambdas and all that is going on here.
I am not asking how to write a class implementing Converter. I am asking how to write the pair of SerializableFunction arguments to pass to the Binder.BindingBuilder::withConverter methods listed above as bullet items.
Quoting that JavaDoc:
Interface Binder.BindingBuilder<BEAN,TARGET>
…
withConverter
default <NEWTARGET> Binder.BindingBuilder<BEAN,NEWTARGET> withConverter(SerializableFunction<TARGET,NEWTARGET> toModel, SerializableFunction<NEWTARGET,TARGET> toPresentation)
Maps the binding to another data type using the mapping functions and a possible exception as the error message.
The mapping functions are used to convert between a presentation type, which must match the current target data type of the binding, and a model type, which can be any data type and becomes the new target type of the binding. When invoking bind(ValueProvider, Setter), the target type of the binding must match the getter/setter types.
For instance, a TextField can be bound to an integer-typed property using appropriate functions such as: withConverter(Integer::valueOf, String::valueOf);
Type Parameters:
NEWTARGET - the type to convert to
Parameters:
toModel - the function which can convert from the old target type to the new target type
toPresentation - the function which can convert from the new target type to the old target type
Returns:
a new binding with the appropriate type
Throws:
IllegalStateException - if bind has already been called
You can do it by passing two lambda expressions to withConverter, so something like this:
binder.forField(textField)
      .withConverter(text -> UUID.fromString(text), uuid -> uuid.toString())
      .bind(/* ... */);
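If you prefer method references, the same conversion can be written even more compactly, optionally using the overload that takes an error message (MyBean and its getter/setter are made-up names here):
binder.forField(textField)
      .withConverter(UUID::fromString, UUID::toString, "Not a valid UUID")
      .bind(MyBean::getId, MyBean::setId);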
If you need a more complicated conversion, the body of the lambda can be surrounded with curly braces, e.g.
binder.forField(textField).withConverter( text -> {
        if ( text == null ) {
            return something;
        } else {
            return somethingElse;
        }
    }, uuid -> { return uuid.toString(); } )
    .bind(/* ... */);
If you need your converter multiple times, I recommend creating a separate class implementing the interface com.vaadin.data.Converter. However, using lambdas is possible too, as you already know (see the answer of @ollitietavainen). But this is not Vaadin specific; it's a Java 8+ feature you can read about e.g. here. Basically, you can use lambdas wherever an object implementing an interface with only one abstract method is required.
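For the String-to-UUID case from the question, such a reusable class could look roughly like this (a sketch against the Vaadin 8 com.vaadin.data API; the error message text is arbitrary):
import com.vaadin.data.Converter;
import com.vaadin.data.Result;
import com.vaadin.data.ValueContext;
import java.util.UUID;

public class StringToUuidConverter implements Converter<String, UUID> {

    @Override
    public Result<UUID> convertToModel(String value, ValueContext context) {
        try {
            return Result.ok(UUID.fromString(value.trim()));   // parse the 36-character hex string
        } catch (IllegalArgumentException | NullPointerException e) {
            return Result.error("Please enter a valid UUID");
        }
    }

    @Override
    public String convertToPresentation(UUID value, ValueContext context) {
        return value == null ? "" : value.toString();          // canonical 36-character form
    }
}
It can then be reused in any binding: binder.forField(uuidField).withConverter(new StringToUuidConverter()).bind(...).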
This is a Java 8 lower-intermediate question:
I have the following code in Java 6:
List<ViewWrapperContentElementTypeProperty> vwPropertyList = getFromDao();
TreeMap<Long, ArrayList<ViewWrapperContentElementTypeProperty>> mappedProperties =
        new TreeMap<Long, ArrayList<ViewWrapperContentElementTypeProperty>>();
for (ViewWrapperContentElementTypeProperty vwCetP : vwPropertyList)
{
    if (null == mappedProperties.get(vwCetP.getContentElementTypeId()))
    {
        ArrayList<ViewWrapperContentElementTypeProperty> list = new ArrayList<ViewWrapperContentElementTypeProperty>();
        list.add(vwCetP);
        mappedProperties.put(vwCetP.getContentElementTypeId(), list);
    }
    else
    {
        mappedProperties.get(vwCetP.getContentElementTypeId()).add(vwCetP);
    }
}
Can I use vwPropertyList.stream().map() to implement this more efficiently?
It seems like you are looking for a grouping-by operation. Fortunately, the Collectors class provides a way to do this:
import static java.util.stream.Collectors.groupingBy;
import static java.util.stream.Collectors.toCollection;
...
TreeMap<Long, ArrayList<ViewWrapperContentElementTypeProperty>> mappedProperties =
    vwPropertyList.stream()
                  .collect(groupingBy(ViewWrapperContentElementTypeProperty::getContentElementTypeId,
                                      TreeMap::new,
                                      toCollection(ArrayList::new)));
I used the overloaded version of groupingBy where you can provide a specific map implementation (if you really need a TreeMap).
Also, the toList() collector returns a List (which happens to be an ArrayList, but that's an implementation detail). Since you apparently need a concrete implementation because you want ArrayLists as values, you can do that with toCollection(ArrayList::new).
With regard to using Streams and lambda expressions, of course... This should look like the following:
Map<Long, List<ViewWrapperContentElementTypeProperty>> mappedProperties =
    vwPropertyList.stream()
                  .collect(Collectors.groupingBy(ViewWrapperContentElementTypeProperty::getContentElementTypeId));
Please note that using Stream API methods like above forces using interfaces (Map, List), which is a good practice anyway.
When it comes to performance, it should be roughly the same as using a traditional loop.
I'm using Flex 3.3, with hamcrest-as3 used to test for item membership in a list as part of my unit tests:
var myList: IList = new ArrayCollection(['a', 'b', 'c']).list;
assertThat(myList, hasItems('a', 'b', 'c'));
The problem is that apparently the IList class doesn't support for each iteration; for example, with the above list, this will not trace anything:
for each (var i: * in myList) { trace (i); }
However, tracing either an Array or an ArrayCollection containing the same data will work just fine.
What I want to do is (without having to tear apart my existing IList-based interface) be able to treat an IList like an Array or an ArrayCollection for the purposes of testing, because that's what hamcrest does:
override public function matches(collection:Object):Boolean
{
    for each (var item:Object in collection)
    {
        if (_elementMatcher.matches(item))
        {
            return true;
        }
    }
    return false;
}
Is this simply doomed to failure? As a side note, why would the IList interface not be amenable to iteration this way? That just seems wrong.
You will have to create a custom Matcher that's able to iterate over an IList. More specifically, extend and override the matches method of IsArrayContainingMatcher that you reference above (and you'll probably want to create IList specific versions of hasItem and hasItems as well). A bit of a pain, but perhaps it's worth it to you.
Longer term, you could file an issue with hamcrest-as3 (or fork) to have array iteration abstracted using the Iterator pattern. The right Iterator could then be chosen automatically for the common types (Proxy-subclasses, IList) with perhaps an optional parameter to supply a custom Iterator.
For the main issue: Instead of passing the ArrayCollection.list to assertThat(), pass the ArrayCollection itself. ArrayCollection implements IList and is iterable with for each.
var myList:IList = new ArrayCollection(['a', 'b', 'c']);
assertThat(myList, hasItems('a', 'b', 'c'));
In answer to part two: ArrayCollection.list is an instance of ArrayList, which does not extend Proxy and does not implement the methods required to iterate with for each. ArrayCollection extends ListCollectionView, which does extend Proxy and implements the required methods.
HTH.
I find myself coming back to this every once in a while. Rather than writing new Matchers, I find that the easiest solution is always to just call toArray() on the IList and match against the resulting array.