We are using version 3.6. We call a rounding function to clean up the decimal part. Something like this...
private function ceilingRounding(value:Number, power:Number):Number
{
var scale:Number = Math.pow(10, power);
return (Math.ceil(value * scale) / scale);
}
The function result is unexpected for the following values:
value = 76.7549, scale = 10000.
The result should be 76.7549 but we get 76.7550
Using the debugger, we see that value * scale = 767549.0000000001. Of course this would be rounded up to 76.7550, but why are we getting .0000000001 and how can we fix this?
The "NUMBERS" values hold this approximation operation, you can modify your function and add the following into operation, plus I let the function that I use regularly to do the rounding.
public static function roundToPrecision(numberVal:Number, precision:int = 0):Number
{
var decimalPlaces:Number = Math.pow(10, precision);
return Math.round(decimalPlaces * numberVal) / decimalPlaces;
}
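For instance, applied to the value from the question (a quick check, assuming the function is in scope where it is called):
trace(roundToPrecision(76.7549, 4)); // 76.7549; Math.round absorbs the tiny 1e-10 artifact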
If you need a fixed number of decimal places, try
var myNum:Number = 1.123556789;
trace(myNum.toFixed(3)); // 1.124
If you need a given number of significant digits, try
trace(myNum.toPrecision(3)); // 1.12
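If you need to keep the ceiling behaviour from the question rather than round-to-nearest, here is a sketch of one workaround; the name ceilingRoundingSafe and the six-digit tolerance are assumptions, not code from the question:
private function ceilingRoundingSafe(value:Number, power:Number):Number
{
var scale:Number = Math.pow(10, power);
// Strip tiny representation errors such as 767549.0000000001 before ceiling.
var product:Number = Number((value * scale).toFixed(6));
return Math.ceil(product) / scale;
}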
For more rounding options, see the NumberFormatter reference.
I have constructed my crossfilter setup a bit differently than in most examples I can find, namely:
I have a data array d with multiple data sources included, among which is data1.
var cf = crossfilter(d3.range(0, d.data1.length));
Then I construct my dims like:
var dim = cf.dimension(function(i) { return d.data1[i].id; });
And I construct my groups like:
var group = dim.group().reduceSum(function(i) { return d.data1[i].total;});
This all works fine, but when I want to create custom reduce functions, the extra parameter i is giving me trouble.
var reduceAddPerc = function(p,v) {
p.sumOfSub += d.data1[i].var1;
p.sumOfTotal += d.data1[i].total;
p.finalVal = p.sumOfSub / p.sumOfTotal;
return p;
};
var reduceRemovePerc = function(p,v) {
p.sumOfSub -= d.data1[i].var1;
p.sumOfTotal -= d.data1[i].total;
p.finalVal = p.sumOfSub / p.sumOfTotal;
return p;
};
var reduceInitialPerc = function() {
return {sumOfSub:0, sumOfTotal:0, finalVal:0 };
};
And then defining the group with:
var group = dim.group().reduce(reduceAddPerc,reduceRemovePerc,reduceInitialPerc);
This obviously doesn't work, since the parameter i is not known within the function. I've tried adding the parameter (p,v,i), nesting the functions by wrapping the (p,v) function in an additional function that takes i, and creating an additional function(i) inside the (p,v) function, but I cannot get this to work.
Does anyone have any help to offer?
In the custom reduce functions, the v parameter is the record currently being "reduced". In this case, it should be your counter, so just use it where you would normally use i. Is that not working?
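For example, a sketch of the same reduce functions with v used as that index (this assumes the same d and data1 structure as in the question):
var reduceAddPerc = function(p, v) {
// v is one of the integers passed to crossfilter(d3.range(...)), i.e. the index
p.sumOfSub += d.data1[v].var1;
p.sumOfTotal += d.data1[v].total;
p.finalVal = p.sumOfSub / p.sumOfTotal;
return p;
};
var reduceRemovePerc = function(p, v) {
p.sumOfSub -= d.data1[v].var1;
p.sumOfTotal -= d.data1[v].total;
p.finalVal = p.sumOfSub / p.sumOfTotal;
return p;
};
// reduceInitialPerc stays exactly as in the question.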
var clientString = "{\"max\":1214.704958677686}";
JObject o = JObject.Parse(clientString);
var jsonString = o.ToString();
contents of jsonString:
{
"max": 1214.7049586776859
}
This happens both when inspecting the object in the debugger and when calling ToString(). Note that the 686 has mysteriously been expanded to 6859 (precision added). This is a problem for us because the numbers are not exactly the same, and a hash function computed over the JSON later does not match.
@Ilija Dimov is correct: JSON.NET parses JSON floats as doubles by default. If you still want to use JObject instead of creating a full-blown POCO for deserialization, you can use a JsonTextReader and set the FloatParseHandling option:
var reader = new JsonTextReader(new StringReader(clientString));
reader.FloatParseHandling = FloatParseHandling.Decimal;
JObject obj = JObject.Load(reader);
Console.WriteLine(obj["max"].Value<decimal>()); // 1214.704958677686
The reason your value is changed is because of the nature of floating point numbers in .NET. The JObject.Parse(clientString) method at some point executes the following line:
double d;
double.TryParse("1214.704958677686", NumberStyles.Float | NumberStyles.AllowThousands, CultureInfo.InvariantCulture, out d);
where d represents the number that you get in the JObject.
As d is of type double, and double is a floating-point type, you don't get the value you expect. Read more about binary floating point and .NET.
There is an option in JSON.NET for parsing floating-point numbers as decimals to get the precision you need, but to do that you need to create a custom class that matches your JSON string and deserialize into it. Something like this:
public class MyClass
{
[JsonProperty("max")]
public decimal Max { get; set; }
}
var obj = JsonConvert.DeserializeObject<MyClass>(clientString, new JsonSerializerSettings
{
FloatParseHandling = FloatParseHandling.Decimal
});
With this code, the value of the max property won't be changed.
You can see this behaviour for yourself just by parsing to float, double and decimal:
Assert.AreEqual(1214.705f,float.Parse("1214.704958677686"));
Assert.AreEqual(1214.7049586776859, double.Parse("1214.704958677686"));
Assert.AreEqual(1214.704958677686m, decimal.Parse("1214.704958677686"));
So json.net is using double as an intermediate type. You can change this by setting FloatParseHandling option.
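If you would rather keep working with a JObject than define a POCO, here is a sketch of setting that option through JsonSerializerSettings (assuming Newtonsoft.Json; the settings are propagated to the reader the serializer creates):
// Deserialize straight into a JObject with decimal float handling so the
// original digits survive a round-trip through ToString().
var settings = new JsonSerializerSettings { FloatParseHandling = FloatParseHandling.Decimal };
JObject o = JsonConvert.DeserializeObject<JObject>(clientString, settings);
Console.WriteLine(o.ToString(Formatting.None)); // {"max":1214.704958677686}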
Or, putting it more accurately, I want to be able to get the distance between the top of a control and the top of one of its children (adding up the height members of all the children above yields specious results!), but the process of getting the absolute coordinates and comparing them looks really messed up.
I use this function to calculate the height between the tops of 2 tags:
private static function GetRemainingHeight(oParent:Container, oChild:Container,
yParent:Number, yChild:Number):Number {
const ptParent:Point = oParent.localToGlobal(new Point(0, yParent));
const ptChild:Point = oChild.localToGlobal(new Point(0, yChild));
const nHeightOfEverythingAbove:Number = ptChild.y - ptParent.y;
trace(ptChild.y.toString() + '[' + yChild.toString() + '] - ' +
ptParent.y.toString() + '[' + yParent.toString() + '] = ' + nHeightOfEverythingAbove.toString() + ' > ' + oParent.height.toString());
return nHeightOfEverythingAbove;
}
Note that oParent.y == yParent and oChild.y == yChild but I did it this way for binding reasons.
The result I get is very surprising:
822[329] - 124[0] = 698 > 439
which is impossible, because the top of oChild does not disappear below oParent. The only figure I find unexpected is ptChild.y. All the other numbers look quite sane. So I'm assuming that my mistake was in subtracting two figures that are not supposed to be comparable.
Of course, if anyone has a method of calculating the difference between two points that doesn't involve localToGlobal(), that'd be fine, too.
I'm using the 3.5 SDK.
I found a partial answer by looking at http://rjria.blogspot.ca/2008/05/localtoglobal-vs-contenttoglobal-in.html (including the comments). It goes back and forth on whether I should be using localToGlobal() or contentToGlobal(), but it filled in a blank that Adobe's documentation left: you get the global coordinates by feeding the function new Point(0, 0). In the end, I used this:
public static function GetRemainingHeight(oParent:DisplayObject, oChild:DisplayObject,
yParent:Number, yChild:Number):Number {
const ptParent:Point = oParent.localToGlobal(new Point(0, 0));
const ptChild:Point = oChild.localToGlobal(new Point(0, 0));
const nHeightOfEverythingAbove:Number = ptChild.y - ptParent.y;
return nHeightOfEverythingAbove;
}
See the question for an explanation of the seemingly unnecessary parameters, which now seem like they might really be irrelevant.
However, I didn't need this function as often as I thought, and I'm not terribly happy with the way it works anyway. I've learned that, the way I've done it, it isn't possible to simply make all of the function's parameters Bindable and expect the function to be called whenever oChild changes. In one case I had to call this function in the handler for the updateComplete event.
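As an aside, here is a minimal sketch of the same measurement done by converting the child's origin into the parent's coordinate space instead of comparing two global points; the helper name is hypothetical and it assumes both objects are on the display list:
private static function GetChildOffsetFromParentTop(oParent:DisplayObject, oChild:DisplayObject):Number {
// Map the child's top-left corner into the parent's local coordinates.
var childTopInParent:Point = oParent.globalToLocal(oChild.localToGlobal(new Point(0, 0)));
return childTopInParent.y;
}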
I have implemented the same "DistanceBetween" function as in NerdDinner. I created an airport repository and have these methods:
public IQueryable<AllAirports> ReturnAllAirportWithIn50milesOfAPoint(double lat, double lon)
{
var airports = from d in im.AllAirports
where DistanceBetween(lat, lon, (double)d.Lat, (double)d.Lon) < 1000.00
select d;
return airports;
}
[EdmFunction("AirTravelModel.Store", "DistanceBetween")]
public static double DistanceBetween(double lat1, double long1, double lat2, double long2)
{
throw new NotImplementedException("Only call through LINQ expression");
}
When I tested it, it showed:
base {"The specified method 'Double DistanceBetween(Double, Double, Double, Double)' on the type 'AirTravelMVC3.Models.Repository.AirportRepository' cannot be translated into a LINQ to Entities
store expression because no overload matches the passed arguments."} System.SystemException {System.NotSupportedException}
Do you have any ideas on why this happens? The only difference between my work and NerdDinner is that I used a POCO plugin in Entity Framework.
The SQL UDF is as follows; it works very well in the database:
CREATE FUNCTION [dbo].[DistanceBetween](@Lat1 as real,
@Long1 as real, @Lat2 as real, @Long2 as real)
RETURNS real
AS
BEGIN
DECLARE @dLat1InRad as float(53);
SET @dLat1InRad = @Lat1 * (PI()/180.0);
DECLARE @dLong1InRad as float(53);
SET @dLong1InRad = @Long1 * (PI()/180.0);
DECLARE @dLat2InRad as float(53);
SET @dLat2InRad = @Lat2 * (PI()/180.0);
DECLARE @dLong2InRad as float(53);
SET @dLong2InRad = @Long2 * (PI()/180.0);
DECLARE @dLongitude as float(53);
SET @dLongitude = @dLong2InRad - @dLong1InRad;
DECLARE @dLatitude as float(53);
SET @dLatitude = @dLat2InRad - @dLat1InRad;
/* Intermediate result a. */
DECLARE @a as float(53);
SET @a = SQUARE (SIN (@dLatitude / 2.0)) + COS (@dLat1InRad)
* COS (@dLat2InRad)
* SQUARE(SIN (@dLongitude / 2.0));
/* Intermediate result c (great circle distance in Radians). */
DECLARE @c as real;
SET @c = 2.0 * ATN2 (SQRT (@a), SQRT (1.0 - @a));
DECLARE @kEarthRadius as real;
/* SET kEarthRadius = 3956.0 miles */
SET @kEarthRadius = 6376.5; /* kms */
DECLARE @dDistance as real;
SET @dDistance = @kEarthRadius * @c;
return (@dDistance);
END
In order to do this using POCOs with the existing NerdDinner source I had to add this to the DinnerRepository class:
public IQueryable<Dinner> FindByLocation(float latitude, float longitude)
{
List<Dinner> resultList = new List<Dinner>();
var results = db.Database.SqlQuery<Dinner>("SELECT * FROM Dinners WHERE EventDate >= {0} AND dbo.DistanceBetween({1}, {2}, Latitude, Longitude) < 1000", DateTime.Now, latitude, longitude);
foreach (Dinner result in results)
{
resultList.Add(db.Dinners.Where(d => d.DinnerID == result.DinnerID).FirstOrDefault());
}
return resultList.AsQueryable<Dinner>();
}
Assuming DistanceBetween is implemented in C#: the issue (as hinted at by @p.campbell) is that the query generator doesn't know how to calculate DistanceBetween.
To get your code to work as-is, you might need to do something like:
public IQueryable<AllAirports> ReturnAllAirportWithIn50milesOfAPoint(double lat, double lon)
{
var airports = from d in im.AllAirports.ToList()
where DistanceBetween(lat, lon, (double)d.Lat, (double)d.Lon) < 1000.00
select d;
return airports.AsQueryable();
}
The ToList() forces AllAirports to be evaluated into an in-memory list, and the where clause is then evaluated in memory using your C# function. Obviously this won't scale to a huge number of airports. If that becomes an issue, you might want to do a rough "box" query first: a cheap within-a-square expression that returns a small number of airports, ToList that, and then call your DistanceBetween to refine the results.
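A rough sketch of that box idea follows; the method name, the 0.5-degree margin, and the existence of an in-memory C# implementation of DistanceBetween are all assumptions, not code from the question:
public IQueryable<AllAirports> ReturnAirportsNearPointBoxed(double lat, double lon)
{
const double margin = 0.5; // degrees; tune to the radius you actually need
// Cheap range predicate that the provider can translate to SQL.
var candidates = im.AllAirports
.Where(d => (double)d.Lat >= lat - margin && (double)d.Lat <= lat + margin
&& (double)d.Lon >= lon - margin && (double)d.Lon <= lon + margin)
.ToList();
// Refine in memory; this needs a real C# DistanceBetween, not the
// [EdmFunction] stub that only throws.
return candidates
.Where(d => DistanceBetween(lat, lon, (double)d.Lat, (double)d.Lon) < 1000.00)
.AsQueryable();
}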
I got this same exception after I made several changes to the DB schema and "updated the model from database".
After comparing my EDMX XML with the original from Nerd Dinner, I saw that mine had changed all of the types for the DistanceBetween function to "real", where Nerd Dinner had "float". Changing them back to float resolved the issue.
Could someone explain why the Flex 4.5 XMLDecoder does this to my XML data?
var decoder:XMLDecoder = new XMLDecoder;
var $object:Object = decoder.decode( <xmltag>08.00</xmltag> );
// object = "08.00"
var decoder:XMLDecoder = new XMLDecoder;
var $object:Object = decoder.decode( <xmltag>11.00</xmltag> );
// Object = "11" (HEY! Where did my '.00' part of the string go?)
var decoder:XMLDecoder = new XMLDecoder;
var $object:Object = decoder.decode( <xmltag>11.30</xmltag> );
// Object = "11.3" (HEY! Where did my '0' part of the string go?)
The Flex deserializer also gave me issues with this. It may be interpreting the values as Number objects, so they return their shortest representation when toString() is called.
Try using .toFixed(2) whenever you need to print a value such as 11.00
var $object:Object = decoder.decode( <xmltag>11.00</xmltag> );
trace($object); //11
trace($object.toFixed(2)); //11.00
So, to answer the original question of why this is happening:
In the source code for SimpleXMLDecoder (which I'm guessing has similar functionality to XMLDecoder), there's a comment in the function simpleType():
//return the value as a string, a boolean or a number.
//numbers that start with 0 are left as strings
//bForceObject removed since we'll take care of converting to a String or Number object later
numbers that start with 0 are left as strings - I guess they thought of phone numbers but not decimals.
Also, because of some hacky implicit casting, you actually end up with three different types:
"08.00" : String
11 : int
11.3 : Number
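If you need the original text with its trailing zeros, one workaround (a sketch, not from the original answer) is to skip the decoder for that node and read the E4X text directly, so it is never coerced to a Number:
var xml:XML = <xmltag>11.00</xmltag>;
var raw:String = xml.text().toString(); // "11.00" stays a String, zeros intact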