Why is the HashMap transient inside HashSet implementation? - collections

The HashSet class itself implements Serializable, but the HashMap it uses to store all of its data is kept transient. Wouldn't all the data be lost on deserializing it? What's the point of serializing it then?

A HashSet serializes as a basic set. Upon deserialization, a new HashMap is constructed to hold the data. There's no need to serialize the map, with key-value pairs, when all you care about is the keys.

First, you are right: the transient HashMap in HashSet will not be serialized by default, and since HashSet stores its data in that map field, you might think the data would be lost during serialization. But if you read the HashSet source code you will find two methods called writeObject and readObject. During serialization, writeObject is invoked to write out the elements, and during deserialization, readObject is invoked to rebuild the map and restore them, so serializing the HashMap field itself is not necessary:
private void writeObject(java.io.ObjectOutputStream s)
throws java.io.IOException {
// Write out any hidden serialization magic
s.defaultWriteObject();
// Write out HashMap capacity and load factor
s.writeInt(map.capacity());
s.writeFloat(map.loadFactor());
// Write out size
s.writeInt(map.size());
// Write out all elements in the proper order.
for (E e : map.keySet())
s.writeObject(e);
}
private void readObject(java.io.ObjectInputStream s)
throws java.io.IOException, ClassNotFoundException {
// Read in any hidden serialization magic
s.defaultReadObject();
// Read capacity and verify non-negative.
int capacity = s.readInt();
if (capacity < 0) {
throw new InvalidObjectException("Illegal capacity: " +
capacity);
}
// Read load factor and verify positive and non NaN.
float loadFactor = s.readFloat();
if (loadFactor <= 0 || Float.isNaN(loadFactor)) {
throw new InvalidObjectException("Illegal load factor: " +
loadFactor);
}
// Read size and verify non-negative.
int size = s.readInt();
if (size < 0) {
throw new InvalidObjectException("Illegal size: " +
size);
}
// Set the capacity according to the size and load factor ensuring that
// the HashMap is at least 25% full but clamping to maximum capacity.
capacity = (int) Math.min(size * Math.min(1 / loadFactor, 4.0f),
HashMap.MAXIMUM_CAPACITY);
// Constructing the backing map will lazily create an array when the first element is
// added, so check it before construction. Call HashMap.tableSizeFor to compute the
// actual allocation size. Check Map.Entry[].class since it's the nearest public type to
// what is actually created.
SharedSecrets.getJavaOISAccess()
.checkArray(s, Map.Entry[].class, HashMap.tableSizeFor(capacity));
// Create backing HashMap
map = (((HashSet<?>)this) instanceof LinkedHashSet ?
new LinkedHashMap<E,Object>(capacity, loadFactor) :
new HashMap<E,Object>(capacity, loadFactor));
// Read in all elements in the proper order.
for (int i=0; i<size; i++) {
@SuppressWarnings("unchecked")
E e = (E) s.readObject();
map.put(e, PRESENT);
}
}
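To see this in action, you can round-trip a HashSet through serialization yourself. This is a minimal sketch (my own demo class, not part of the JDK source above); the deserialized set ends up with a freshly built backing HashMap containing the same elements:
import java.io.*;
import java.util.HashSet;
import java.util.Set;

public class HashSetSerializationDemo {
    public static void main(String[] args) throws Exception {
        Set<String> original = new HashSet<>();
        original.add("alpha");
        original.add("beta");

        // Serialize: HashSet.writeObject writes capacity, load factor, size and the elements.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }

        // Deserialize: HashSet.readObject creates a new backing HashMap and re-inserts the elements.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            @SuppressWarnings("unchecked")
            Set<String> copy = (Set<String>) in.readObject();
            System.out.println(copy.equals(original)); // prints true
        }
    }
}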


C# Marshal byte array to struct

I found many answers to my question and they all work. My question is: are they all equal in speed and memory use? How can I tell which is faster and uses less memory? I don't normally use the Marshal and GCHandle classes, so I am totally green.
public static object RawDeserializer(byte[] rawData, int position, Type anyType)
{
int rawsize = Marshal.SizeOf(anyType);
if (rawsize > rawData.Length)
return null;
IntPtr buffer = Marshal.AllocHGlobal(rawsize);
Marshal.Copy(rawData, position, buffer, rawsize);
object retobj = Marshal.PtrToStructure(buffer, anyType);
Marshal.FreeHGlobal(buffer);
return retobj;
}
public static T RawDeserializer<T>(byte[] rawData, int position = 0)
{
int rawsize = Marshal.SizeOf(typeof(T));
if (rawsize > rawData.Length)
{
throw new DataMisalignedException("byte array is not the correct size for the requested type");
}
IntPtr buffer = Marshal.AllocHGlobal(rawsize);
Marshal.Copy(rawData, position, buffer, rawsize);
T retobj = (T)Marshal.PtrToStructure(buffer, typeof(T));
Marshal.FreeHGlobal(buffer);
return retobj;
}
public static T RawDeserializer<T>(byte[] bytes) where T : struct
{
T stuff;
GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
try
{
stuff = Marshal.PtrToStructure<T>(handle.AddrOfPinnedObject());
}
finally
{
handle.Free();
}
return stuff;
}
I am getting the desired results from all 3 implementations.
The first and second are almost identical: the difference is that you do not unbox (cast to T : struct) the result in the first example; I'd assume you'll unbox it later though.
The third option does not copy memory to the unmanaged heap, it just pins it in the managed heap, so I'd assume it will allocate less memory and be faster. I don't pretend to be a golden source of truth though, so just go and do performance testing of these options :) BenchmarkDotNet is a great framework for performance testing and may help you a lot.
Also, the third option could be more concise:
public static unsafe T RawDeserializer<T>(byte[] bytes) where T : struct
{
fixed (byte* p = bytes)
return Marshal.PtrToStructure<T>((IntPtr)p);
}
You need to change the project settings to allow unsafe code, though.
To not be totally green, I'd strongly recommend reading the book CLR via C#, Chapter 21, 'The Managed Heap and Garbage Collection'.

Intersection between two HashSets in Java 8

I have an object as follows:
public class MyObjDTO {
private Long id;
private Boolean checked;
//getter and setters
@Override
public final int hashCode() {
Long id = getId();
return (id == null ? super.hashCode() : id.hashCode());
}
@Override
public boolean equals(final Object obj) {
if (this == obj)
return true;
if (!(obj instanceof MyObjDTO))
return false;
Long id = getId();
Long objId = ((MyObjDTO) obj).getId();
if (id.equals(objId)) {
return true;
} else {
return false;
}
}
}
And I have two hash sets containing some instances of this object:
HashSet oldSet = new HashSet();
oldSet.add(new MyObjDTO(1,true));
oldSet.add(new MyObjDTO(2,true));
oldSet.add(new MyObjDTO(3,false));
HashSet newSet = new HashSet();
newSet.add(new MyObjDTO(1,false));
newSet.add(new MyObjDTO(2,true));
newSet.add(new MyObjDTO(4,true));
So what I want to do here is to select objects that are in the newSet and not in the oldSet; in this case it's new MyObjDTO(4,true), which I did using this:
Stream<MyObjDTO> toInsert = newSet.stream().filter(e -> !oldSet.contains(e));
Then I want to select objects that are in the oldSet and not in the newSet; in this case it's new MyObjDTO(3,false), which I did using this:
Stream<MyObjDTO> toRemove = oldSet.stream().filter(e -> !newSet.contains(e));
The last step is that I want to select the objects that are in both newSet and oldSet but have a different value for the attribute checked; in this case it's new MyObjDTO(1,false).
What I tried is this :
Stream<MyObjDTO> toUpdate = oldSet.stream().filter(newSet::contains);
But this one will return both new MyObjDTO(1,false) and new MyObjDTO(2,true).
How can I solve this ?
One way is to first use a map and then adjust your filter condition:
Map<MyObjDTO, Boolean> map = newSet.stream()
.collect(Collectors.toMap(Function.identity(), MyObjDTO::getChecked));
Stream<MyObjDTO> toUpdate = oldSet.stream()
.filter(old -> newSet.contains(old) && old.getChecked() != map.get(old));
Firstly, your equals() and hashCode() methods violate their basic contract. As per the javadoc of hashCode():
If two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result.
Your implementation of hashCode() does not follow this contract. Your first step should be to fix that.
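For example, a consistent pair based only on id could look like this (a sketch, not the poster's code; java.util.Objects handles a null id without throwing):
@Override
public final int hashCode() {
    return java.util.Objects.hashCode(getId()); // 0 when id is null
}
@Override
public final boolean equals(Object obj) {
    if (this == obj)
        return true;
    if (!(obj instanceof MyObjDTO))
        return false;
    return java.util.Objects.equals(getId(), ((MyObjDTO) obj).getId());
}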
Secondly, since Java 1.2 (nearly 20 years ago), Java has provided the method removeAll() that does exactly what you want for the first part:
// Given these 2 sets:
HashSet<MyObjDTO> oldSet = new HashSet<>();
HashSet<MyObjDTO> newSet = new HashSet<>();
HashSet<MyObjDTO> onlyInNew = new HashSet<>(newSet);
onlyInNew.removeAll(oldSet);
// similar for onlyInOld
For the second part, you'll need to create a Map to find and get the object out:
Map<MyObjDTO, MyObjDTO> map = new HashMap<>();
oldSet.forEach(o -> map.put(o, o));
HashSet<MyObjDTO> updated = new HashSet<>(newSet);
updated.removeIf(o -> oldSet.contains(o) && o.getChecked() != map.get(o).getChecked());
In the last step, you rely on the equals() method of the DTO:
Stream<MyObjDTO> toUpdate = oldSet.stream().filter(newSet::contains);
The method uses only the id field to determine object equality.
You don't want to do that.
You want to filter on a specific field : checked.
Besides, you should perform the operation on the result of the intersection of the two Sets.
Note that you can simply use Collection.retainAll() to compute the intersection between two collections:
Set<MyObjDTO> set = ...
Set<MyObjDTO> setTwo = ...
set.retainAll(setTwo);
Then you can remove the objects that have both the same id and the same checked value, using a double loop: an iterator over the intersection plus a for-each over the other set.
for (Iterator<MyObjDTO> it = set.iterator(); it.hasNext();){
MyObjDTO dto = it.next();
for (MyObjDTO otherDto : setTwo){
if (otherDto.getId().equals(dto.getId()) &&
otherDto.getChecked() == dto.getChecked()){
it.remove();
break;
}
}
}
You could do that with a Stream, but IMHO it could be less readable.
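For reference, a stream version that returns the newSet copies of the changed objects could look roughly like this (a sketch; it assumes the sets are declared as HashSet<MyObjDTO> as in the previous answer, uses the getters from the question, and needs java.util.stream.Collectors):
Map<Long, Boolean> oldCheckedById = oldSet.stream()
    .collect(Collectors.toMap(MyObjDTO::getId, MyObjDTO::getChecked));
Set<MyObjDTO> toUpdate = newSet.stream()
    .filter(o -> oldCheckedById.containsKey(o.getId())
            && !oldCheckedById.get(o.getId()).equals(o.getChecked()))
    .collect(Collectors.toSet());
// with the sample data, toUpdate contains only new MyObjDTO(1,false)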

ActionScript-3 : Array vs. ArrayList

Can anybody tell me which is faster: Array or ArrayList? (ActionScript 3)
I tried to find a page about this but didn't find anything.
Thank you.
The ArrayList class is a simple implementation of IList that uses a backing Array as the source of the data. Items in the backing Array can be accessed and manipulated using the methods and properties of the IList interface. Operations on an ArrayList instance modify the data source; for example, if you use the removeItemAt() method on an ArrayList, you remove the item from the underlying Array.
Apparently ArrayList class wraps an Array object - hence a plain Array would be faster than an ArrayList object.
As already stated, Array is faster. Actually it is orders of magnitude faster.
The equivalents of array access are getItemAt and setItemAt.
Implementation:
public function getItemAt(index:int, prefetch:int = 0):Object
{
if (index < 0 || index >= length)
{
var message:String = resourceManager.getString(
"collections", "outOfBounds", [ index ]);
throw new RangeError(message);
}
return source[index];
}
and:
public function setItemAt(item:Object, index:int):Object
{
if (index < 0 || index >= length)
{
var message:String = resourceManager.getString(
"collections", "outOfBounds", [ index ]);
throw new RangeError(message);
}
var oldItem:Object = source[index];
source[index] = item;
stopTrackUpdates(oldItem);
startTrackUpdates(item);
//dispatch the appropriate events
if (_dispatchEvents == 0)
{
var hasCollectionListener:Boolean =
hasEventListener(CollectionEvent.COLLECTION_CHANGE);
var hasPropertyListener:Boolean =
hasEventListener(PropertyChangeEvent.PROPERTY_CHANGE);
var updateInfo:PropertyChangeEvent;
if (hasCollectionListener || hasPropertyListener)
{
updateInfo = new PropertyChangeEvent(PropertyChangeEvent.PROPERTY_CHANGE);
updateInfo.kind = PropertyChangeEventKind.UPDATE;
updateInfo.oldValue = oldItem;
updateInfo.newValue = item;
updateInfo.property = index;
}
if (hasCollectionListener)
{
var event:CollectionEvent =
new CollectionEvent(CollectionEvent.COLLECTION_CHANGE);
event.kind = CollectionEventKind.REPLACE;
event.location = index;
event.items.push(updateInfo);
dispatchEvent(event);
}
if (hasPropertyListener)
{
dispatchEvent(updateInfo);
}
}
return oldItem;
}
There are a LOT of calls and checks involved here. Please note that _dispatchEvents == 0 is true by default (unless you call disableEvents), so writing is in fact an expensive operation.
However, ArrayList does provide a lot of features that are useful within Flex. A good compromise is to grab the underlying Array (accessible as ArrayList::source), perform your operations, and then reassign it (assuming you have listeners observing that Array).
Also, if you go with Flash Player 10, then Vector will outperform Array.
greetz
back2dos
Array is probably slightly faster, or they are equal. All an ArrayList is, is an implementation of IList that uses an... Array as a backing object.

Flex looping through object

I'm trying to extend the Flex ArrayCollection to be able to search for an object containing specific data and return it.
Here is my function:
public function getItemContaining(value: String): Object {
//Loop through the collection
for each(var i: Object in this) {
//Loop through fields
for(var j: String in i) {
//If field value is equal to input value
if(i[j] == value) {
return i;
}
}
}
//If not found
return null;
}
The problem is that j is always null, so the second loop never works. I read the Flex loop documentation and it seems this should work just fine. What could be the problem?
Try it like this:
for (var name:String in myObject){
trace(name + ":" + myObject[name];
}
Okay, that was actually the same as what you were doing. The error must be in this line:
for each(var i: Object in this) {
Try using this:
for each(var i: Object in this.source) {
My first instinct would be to have a look at data type. You're setting up a loop declaring j:String and the symptom is that j is always null. This suggests to me that Flex is failing to interpret the elements of i as strings. If Flex only recognizes the elements of i as Objects (because all Strings are Objects, and Objects are the lowest common denominator), it would return null for j:String.
Try this for your inner loop:
for(var j: Object in i) {
//If field value is equal to input value
if(i[j] is String && (i[j] as String) == value) {
return i;
}
}
If you are using ArrayCollection as your data source, you should look at using the IViewCursor interface. You can supply a custom compare function, or supply the fields to compare. This interface is well documented with examples in the Adobe livedocs.
var _cursor:IViewCursor;
var _idSortField:SortField;
var _idSort:Sort = new Sort();
_idSortField = new SortField();
_idSortField.compareFunction = this.myCompareFunction;
_idSort.fields = [_idSortField];
myArrayCollection.sort = _idSort;
myArrayCollection.refresh();
_cursor = myArrayCollection.createCursor();
if (_cursor.findAny(search))
return _cursor;
If you are searching for a value in a specific property, then it's even easier. Here's the link to the Adobe livedocs on this topic.

How to deal with Number precision in Actionscript?

I have BigDecimal objects serialized with BlazeDS to Actionscript. Once they hit Actionscript as Number objects, they have values like:
140475.32 turns into 140475.31999999999998
How do I deal with this? The problem is that if I use a NumberFormatter with precision of 2, then the value is truncated to 140475.31. Any ideas?
This is my generic solution for the problem (I have blogged about this here):
var toFixed:Function = function(number:Number, factor:int) {
return Math.round(number * factor)/factor;
}
For example:
trace(toFixed(0.12345678, 10)); //0.1
Multiply 0.12345678 by 10; that gives us 1.2345678.
When we round 1.2345678, we get 1.0,
and finally, 1.0 divided by 10 equals 0.1.
Another example:
trace(toFixed(1.7302394309234435, 10000)); //1.7302
Multiply 1.7302394309234435 by 10000; that gives us 17302.394309234435.
When we round 17302.394309234435 we get 17302,
and finally, 17302 divided by 10000 equals 1.7302.
Edit
Based on the anonymous answer below, there is a nice simplification for the parameter on the method that makes the precision much more intuitive. e.g:
var setPrecision:Function = function(number:Number, precision:int) {
precision = Math.pow(10, precision);
return Math.round(number * precision)/precision;
}
var number:Number = 10.98813311;
trace(setPrecision(number,1)); //Result is 10.9
trace(setPrecision(number,2)); //Result is 10.98
trace(setPrecision(number,3)); //Result is 10.988 and so on
N.B. I added this here just in case anyone sees this as the answer and doesn't scroll down...
Just a slight variation on Fraser's function, for anyone who is interested.
function setPrecision(number:Number, precision:int) {
precision = Math.pow(10, precision);
return (Math.round(number * precision)/precision);
}
So to use:
var number:Number = 10.98813311;
trace(setPrecision(number,1)); //Result is 10.9
trace(setPrecision(number,2)); //Result is 10.98
trace(setPrecision(number,3)); //Result is 10.988 and so on
I've used Number.toFixed(precision) in ActionScript 3 to do this: http://livedocs.adobe.com/flex/3/langref/Number.html#toFixed%28%29
It handles rounding properly and specifies the number of digits after the decimal to display, unlike Number.toPrecision(), which limits the total number of digits to display regardless of the position of the decimal.
var roundDown:Number = 1.434;
// will print 1.43
trace(roundDown.toFixed(2));
var roundUp:Number = 1.436;
// will print 1.44
trace(roundUp.toFixed(2));
I converted the Java BigDecimal to ActionScript.
We had no choice, since we do computations for a financial application.
http://code.google.com/p/bigdecimal/
You can use the property rounding = "nearest".
In NumberFormatter, rounding has 4 values from which you can choose: rounding="none|up|down|nearest". I think in your situation you can choose rounding = "nearest".
-- chary --
I discovered that BlazeDS supports serializing Java BigDecimal objects to ActionScript Strings as well. So if you don't need the ActionScript data to be Numbers (you are not doing any math on the Flex / ActionScript side) then the String mapping works well (no rounding weirdness). See this link for the BlazeDS mapping options: http://livedocs.adobe.com/blazeds/1/blazeds_devguide/help.html?content=serialize_data_2.html
GraniteDS 2.2 has BigDecimal, BigInteger and Long implementations in ActionScript3, serialization options between Java / Flex for these types, and even code generation tools options in order to generate AS3 big numbers variables for the corresponding Java ones.
See more here: http://www.graniteds.org/confluence/display/DOC22/2.+Big+Number+Implementations.
Guys, just check this solution:
protected function button1_clickHandler(event:MouseEvent):void
{
var formatter:NumberFormatter = new NumberFormatter();
formatter.precision = 2;
formatter.rounding = NumberBaseRoundType.NEAREST;
var a:Number = 14.31999999999998;
trace(formatter.format(a)); //14.32
}
I ported the IBM ICU implementation of BigDecimal for the ActionScript client. Someone else has published their nearly identical version here as a Google Code project. Our version adds some convenience methods for doing comparisons.
You can extend the Blaze AMF endpoint to add serialization support for BigDecimal. Please note that the code in the other answer seems incomplete, and in our experience it fails to work in production.
AMF3 assumes that duplicate objects, traits and strings are sent by reference. The object reference tables need to be kept in sync while serializing, or the client will lose sync of these tables during deserialization and start throwing class cast errors, or corrupting the data in fields that don't match but cast OK...
Here is the corrected code:
public void writeObject(final Object o) throws IOException {
if (o instanceof BigDecimal) {
write(kObjectType);
if(!byReference(o)){ // if not previously sent
String s = ((BigDecimal)o).toString();
TraitsInfo ti = new TraitsInfo("java.math.BigDecimal",false,true,0);
writeObjectTraits(ti); // will send traits by reference
writeUTF(s);
writeObjectEnd(); // for your AmfTrace to be correctly indented
}
} else {
super.writeObject(o);
}
}
There is another way to send a typed object, which does not require Externalizable on the client. The client will set the textValue property on the object instead:
TraitsInfo ti = new TraitsInfo("java.math.BigDecimal",false,false,1);
ti.addProperty("textValue");
writeObjectTraits(ti);
writeObjectProperty("textValue",s);
In either case, your ActionScript class will need this tag:
[RemoteClass(alias="java.math.BigDecimal")]
The ActionScript class also needs a text property matching the one you chose to send, which will initialize the BigDecimal value, or, in the case of the Externalizable object, a couple of methods like this:
public function writeExternal(output:IDataOutput):void {
output.writeUTF(this.toString());
}
public function readExternal(input:IDataInput):void {
var s:String = input.readUTF();
setValueFromString(s);
}
This code only concerns data going from server to client. To deserialize in the other direction, from client to server, we chose to extend AbstractProxy and use a wrapper class to temporarily store the string value of the BigDecimal before the actual object is created, because you cannot instantiate a BigDecimal and then assign its value, which is what the design of Blaze/LCDS expects to be possible for all objects.
Here's the proxy object to circumvent the default handling:
public class BigNumberProxy extends AbstractProxy {
public BigNumberProxy() {
this(null);
}
public BigNumberProxy(Object defaultInstance) {
super(defaultInstance);
this.setExternalizable(true);
if (defaultInstance != null)
alias = getClassName(defaultInstance);
}
protected String getClassName(Object instance) {
return((BigNumberWrapper)instance).getClassName();
}
public Object createInstance(String className) {
BigNumberWrapper w = new BigNumberWrapper();
w.setClassName(className);
return w;
}
public Object instanceComplete(Object instance) {
String desiredClassName = ((BigNumberWrapper)instance).getClassName();
if(desiredClassName.equals("java.math.BigDecimal"))
return new BigDecimal(((BigNumberWrapper)instance).stringValue);
return null;
}
public String getAlias(Object instance) {
return((BigNumberWrapper)instance).getClassName();
}
}
This statement will have to execute somewhere in your application, to tie the proxy object to the class you want to control. We use a static method:
PropertyProxyRegistry.getRegistry().register(
java.math.BigDecimal.class, new BigNumberProxy());
Our wrapper class looks like this:
public class BigNumberWrapper implements Externalizable {
String stringValue;
String className;
public void readExternal(ObjectInput arg0) throws IOException, ClassNotFoundException {
stringValue = arg0.readUTF();
}
public void writeExternal(ObjectOutput arg0) throws IOException {
arg0.writeUTF(stringValue);
}
public String getStringValue() {
return stringValue;
}
public void setStringValue(String stringValue) {
this.stringValue = stringValue;
}
public String getClassName() {
return className;
}
public void setClassName(String className) {
this.className = className;
}
}
We were able to reuse one of the available BigDecimal.as classes on the web and extended BlazeDS by subclassing AMF3Output. You'll need to specify your own endpoint class in the Flex XML configuration; in that custom endpoint you can install your own serializer that instantiates the AMF3Output subclass.
public class EnhancedAMF3Output extends Amf3Output {
public EnhancedAMF3Output(final SerializationContext context) {
super(context);
}
public void writeObject(final Object o) throws IOException {
if (o instanceof BigDecimal) {
write(kObjectType);
writeUInt29(7); // write U290-traits-ext (first 3 bits set)
writeStringWithoutType("java.math.BigDecimal");
writeAMFString(((BigDecimal)o).toString());
} else {
super.writeObject(o);
}
}
}
As simple as that! Then you have native BigDecimal support using BlazeDS, wooohoo!
Make sure your BigDecimal AS3 class implements IExternalizable.
cheers, jb
Surprisingly, the ROUND function in MS Excel gives different values than those presented above.
For example, in Excel:
ROUND(143,355; 2) = 143,36
So my workaround for Excel-style rounding is:
public function setPrecision(number:Number, precision:int):Number {
precision = Math.pow(10, precision);
const excelFactor : Number = 0.00000001;
number += excelFactor;
return (Math.round(number * precision)/precision);
}
If you know the precision you need beforehand, you could store the numbers scaled so that the smallest amount you need is a whole value. For example, store the numbers as cents rather than dollars.
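A tiny sketch of that scaled-integer idea, shown in Java only to illustrate (the same trick works with int/uint in ActionScript):
// Keep money as whole cents so ordinary integer arithmetic stays exact.
long priceCents = 14047532L;                  // 140475.32, stored as cents
long shippingCents = 599L;                    // 5.99
long totalCents = priceCents + shippingCents;
// Format only at the edges, when displaying the value.
System.out.printf("%d.%02d%n", totalCents / 100, totalCents % 100); // 140481.31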
If that's not an option, how about something like this:
function printTwoDecimals(x)
{
printWithNoDecimals(x);
print(".");
var scaled = Math.round(x * 100);
printWithNoDecimals(scaled % 100);
}
(With however you print with no decimals stuck in there.)
This won't work for really big numbers, though, because you can still lose precision.
You may vote and watch the enhancement request in the Flash Player Jira bug tracking system at https://bugs.adobe.com/jira/browse/FP-3315
Meanwhile, use the Number.toFixed() workaround (see http://livedocs.adobe.com/flex/3/langref/Number.html#toFixed%28%29)
or use the open source implementations out there: (http://code.google.com/p/bigdecimal/) or (http://www.fxcomps.com/money.html)
As for the serialization effort, it will be small if you use BlazeDS or LCDS, as they do support Java BigDecimal serialization (to String); cf. (http://livedocs.adobe.com/livecycle/es/sdkHelp/programmer/lcds/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_Parts&file=serialize_data_3.html)
It seems more like a transport problem: the number is correct but the scale is ignored. If the number has to be stored as a BigDecimal on the server, you may want to convert it server-side to a less ambiguous format (Number, Double, Float) before sending it.
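A small server-side sketch of that idea (a hypothetical DTO, not from the original answer): the earlier answers note BlazeDS can map BigDecimal to String, so here the scale is applied on the server and the value is sent as a plain String field, which keeps the binary-fraction noise away from the client.
import java.math.BigDecimal;
import java.math.RoundingMode;

public class AmountDto {
    private String amount; // e.g. "140475.32", scale already applied server-side

    public static AmountDto from(BigDecimal value) {
        AmountDto dto = new AmountDto();
        dto.amount = value.setScale(2, RoundingMode.HALF_UP).toPlainString();
        return dto;
    }

    public String getAmount() {
        return amount;
    }
}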
