Parsing an integer array with JsonCpp

I'm having trouble parsing integer arrays with JsonCpp. I am trying to read an array of integers from JSON input, and I'm getting the error:
ambiguous overload for 'operator[]' in 'dataArray[0]'
I've tried:
Json::Value c_val;
const Json::Value dataArray = root["data"];
c_val = dataArray[0]; int a = c_val.asInt();
c_val = dataArray[1]; int b = c_val.asInt();
and I've also tried
int a = dataArray[0];
To no avail. Sample input JSON file:
{
"data" : [ 1047, 140, 60, 60 ]
}

For future reference: forcing an unsigned index with 0u:
c_val = dataArray[0u]; int a = c_val.asInt();
solves it. A plain literal 0 can match more than one Json::Value::operator[] overload (numeric index vs. const char* key), so the compiler reports the call as ambiguous; the u suffix makes the index unambiguously unsigned (Json::ArrayIndex).
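For completeness, a minimal self-contained sketch of reading the whole array this way (parsing from a string stream is just one option; a Json::CharReaderBuilder would work as well):

#include <json/json.h>
#include <iostream>
#include <sstream>

int main()
{
    std::istringstream in("{ \"data\" : [ 1047, 140, 60, 60 ] }");
    Json::Value root;
    in >> root; // JsonCpp's stream operator parses the document into root

    const Json::Value dataArray = root["data"];
    for (Json::ArrayIndex i = 0; i < dataArray.size(); ++i)
    {
        // An unsigned index avoids the ambiguous operator[] overload.
        std::cout << dataArray[i].asInt() << std::endl;
    }
    return 0;
}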

Related

(Godot Engine) How do I know which exported enum flags are enabled in script

Using the Godot engine and writing in GDScript, let's say I have an enum declared as:
enum eTextMode {CHAR, NUMBER, SYMBOLS_TEXT, SYMBOLS_ALL}
And an export variable as:
export(eTextMode, FLAGS) var _id: int = 0
In the inspector panel I can see which flag is selected or not, but how can I know in code which specific flags are selected?
By selecting, for example, the CHAR and SYMBOLS_TEXT flags in the inspector, the _id variable will be set to 5.
My approach is the following hard-coded dictionary:
var _selected_flags: Dictionary = {
    CHAR = _id in [1, 3, 5, 7, 9, 11, 13, 15],
    NUMBER = _id in [2, 3, 6, 7, 10, 11, 14, 15],
    SYMBOLS_TEXT = _id in [4, 5, 6, 7, 12, 13, 14, 15],
    SYMBOLS_ALL = _id in [8, 9, 10, 11, 12, 13, 14, 15]
}
Resulting in:
{CHAR:True, NUMBER:False, SYMBOLS_ALL:False, SYMBOLS_TEXT:True}
The above result is exactly what I'm expecting (a dictionary with string keys as they are defined in the enum with a boolean value representing the selection state).
How could I manage to do this dynamically for any enum regardless of size?
Thank you very much,
One tacky solution I could manage is to not use an enum at all, but instead a dictionary like in the following example:
const dTextMode: Dictionary = {CHAR = false, NUMBER = false, SYMBOLS_TEXT = false, SYMBOLS_ALL = false}
export(Dictionary) var m_dTextMode: Dictionary = dTextMode setget Set_TextMode, Get_TextMode
func Get_TextMode() -> Dictionary: return m_dTextMode
func Set_TextMode(_data: Dictionary = m_dTextMode) -> void: m_dTextMode = _data
An exported dictionary is not as good-looking as an exported enum with FLAGS, and following this approach kind of sidesteps my initial problem.
By selecting CHAR and SYMBOLS_TEXT in the exported dictionary from the inspector, and then calling print(self.Get_TextMode()) the result is indeed what I expected:
{CHAR:True, NUMBER:False, SYMBOLS_ALL:False, SYMBOLS_TEXT:True}
I still can't figure out, though, how to achieve this result using the export(*enum, FLAGS) approach.
Edit: also, the setter function is not practical to use from script, since the user must know to duplicate the dTextMode constant first, edit it, and pass it as the argument.
Thanks to the comments from @Thearot on my first answer, I have managed to figure out the following solution, which meets all expectations with one caveat: it seems like an overkill solution...
enum eTestFlags {FLAG_1, FLAG_2, FLAG_3, FLAG_5, FLAG_6}
export(eTestFlags, FLAGS) var m_iTestFlags: int = 0 setget Set_TestFlags
func Get_TestFlags() -> Dictionary: return self._get_enum_flags(m_iTestFlags, eTestFlags)
func Set_TestFlags(_id: int = m_iTestFlags) -> void: m_iTestFlags = _id
func _get_enum_flags(_val_selected: int, _enum: Dictionary, _bit_check_limit: int = 32) -> Dictionary:
    var _enum_keys: Array = _enum.keys() ; _enum_keys.invert()
    var _bin_string: String = ""
    var _val_temp: int = 0
    var _val_count: int = _bit_check_limit - int(_is_pow2(_bit_check_limit))
    while(_val_count >= 0):
        _val_temp = _val_selected >> _val_count
        _bin_string += "1" if _val_temp & 1 else "0"
        _val_count -= 1
    var _bin_string_padded: String = "%0*d" % [_enum_keys.size(), int(_bin_string)]
    var _result_dict: Dictionary = {}
    for _str_id in range(_bin_string_padded.length(), 0, -1):
        _result_dict[_enum_keys[_str_id - 1]] = bool(_bin_string_padded[_str_id - 1] == "1")
    return _result_dict

func _is_pow2(_value: int) -> bool:
    return _value && (not (_value & (_value - 1)))
Now, if I print(self.Get_TestFlags()) after selecting FLAG_2 and FLAG_6 the result is:
{FLAG_1:False, FLAG_2:True, FLAG_3:False, FLAG_5:False, FLAG_6:True}
You're on the right track but overcomplicating things. Without going too much into the math (see Wikipedia), here's what you'd do in Godot:
enum eTextMode {CHAR, NUMBER, SYMBOLS_TEXT, SYMBOLS_ALL}
export(eTextMode, FLAGS) var _id: int = 0
func _ready() -> void:
    for modeName in eTextMode:
        var bit_flag_value: int = int(pow(2, eTextMode[modeName]))
        if _id & bit_flag_value:
            printt("Flagged", modeName)
You can access the named fields of your enum like elements in an Array/Dictionary by default (iterate through the keys, get their 0-based index as values). The above math trick turns the 0-based index into the correct bit flag number, and if you (single) '&' it with the combined bit-flags value you can check whether or not that flag is set.
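Building on that, here is a short sketch of how the same loop could fill a Dictionary like the one the question asks for (a generic helper, not part of the original answer):

func get_enum_flags(flags_value: int, p_enum: Dictionary) -> Dictionary:
    var result: Dictionary = {}
    for key in p_enum:
        # Turn the 0-based enum index into its bit flag and test it.
        var bit_flag_value: int = int(pow(2, p_enum[key]))
        result[key] = bool(flags_value & bit_flag_value)
    return result

# Example: get_enum_flags(_id, eTextMode) with _id == 5 returns
# {CHAR:True, NUMBER:False, SYMBOLS_TEXT:True, SYMBOLS_ALL:False}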

How to get/build a JavaRDD[DataSet]?

When I use deeplearning4j and try to train a model in Spark with
public MultiLayerNetwork fit(JavaRDD<DataSet> trainingData)
fit() needs a JavaRDD<DataSet> parameter.
I tried to build one like this:
val totalDaset = csv.map(row => {
  val features = Array(
    row.getAs[String](0).toDouble, row.getAs[String](1).toDouble
  )
  val labels = Array(row.getAs[String](21).toDouble)
  val featuresINDA = Nd4j.create(features)
  val labelsINDA = Nd4j.create(labels)
  new DataSet(featuresINDA, labelsINDA)
})
but IDEA reports: No implicit arguments of type: Encoder[DataSet]
It's an error and I don't know how to solve it.
I know a Spark RDD can be transformed into a JavaRDD, but I don't know how to build a Spark RDD[DataSet].
DataSet comes from import org.nd4j.linalg.dataset.DataSet.
Its constructor is
public DataSet(INDArray first, INDArray second) {
    this(first, second, (INDArray)null, (INDArray)null);
}
This is my code:
val spark: SparkSession = {
  SparkSession
    .builder()
    .master("local")
    .appName("Spark LSTM Emotion Analysis")
    .getOrCreate()
}
import spark.implicits._

val JavaSC = JavaSparkContext.fromSparkContext(spark.sparkContext)

val csv = spark.read.format("csv")
  .option("header", "true")
  .option("sep", ",")
  .load("/home/hadoop/sparkjobs/LReg/data.csv")

val totalDataset = csv.map(row => {
  val features = Array(
    row.getAs[String](0).toDouble, row.getAs[String](1).toDouble
  )
  val labels = Array(row.getAs[String](21).toDouble)
  val featuresINDA = Nd4j.create(features)
  val labelsINDA = Nd4j.create(labels)
  new DataSet(featuresINDA, labelsINDA)
})

val data = totalDataset.toJavaRDD
Creating a JavaRDD<DataSet> in Java, from the deeplearning4j official guide:
String filePath = "hdfs:///your/path/some_csv_file.csv";
JavaSparkContext sc = new JavaSparkContext();
JavaRDD<String> rddString = sc.textFile(filePath);
RecordReader recordReader = new CSVRecordReader(',');
JavaRDD<List<Writable>> rddWritables = rddString.map(new StringToWritablesFunction(recordReader));
int labelIndex = 5; //Labels: a single integer representing the class index in column number 5
int numLabelClasses = 10; //10 classes for the label
JavaRDD<DataSet> rddDataSetClassification = rddWritables.map(new DataVecDataSetFunction(labelIndex, numLabelClasses, false));
I tried to create it in Scala:
val JavaSC: JavaSparkContext = new JavaSparkContext()
val rddString: JavaRDD[String] = JavaSC.textFile("/home/hadoop/sparkjobs/LReg/hf-data.csv")
val recordReader: CSVRecordReader = new CSVRecordReader(',')
val rddWritables: JavaRDD[List[Writable]] = rddString.map(new StringToWritablesFunction(recordReader))
val featureColnum = 3
val labelColnum = 1
val d = new DataVecDataSetFunction(featureColnum,labelColnum,true,null,null)
// val rddDataSet: JavaRDD[DataSet] = rddWritables.map(new DataVecDataSetFunction(featureColnum, labelColnum, true, null, null))
// error: cannot resolve overloaded method 'map'
A DataSet is just a pair of INDArrays (inputs and labels).
Our docs cover this in depth:
https://deeplearning4j.konduit.ai/distributed-deep-learning/data-howto
For Stack Overflow's sake, I'll summarize what's there, since there's no one way to create a data pipeline; it depends on your problem. It's very similar to how you would create a dataset locally: generally you take whatever you do locally and put it into Spark in a function.
CSVs and images, for example, are going to be very different. But generally you use the DataVec library to do that. The docs summarize the approach for each kind.
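As a side note on the original error: Dataset.map requires an implicit Encoder for the result type, and none exists for DataSet. A sketch of one way around that, under that assumption and reusing the column indices from the question, is to drop to the RDD API before mapping:

import org.apache.spark.api.java.JavaRDD
import org.nd4j.linalg.dataset.DataSet
import org.nd4j.linalg.factory.Nd4j

// csv is the DataFrame loaded in the question; .rdd sidesteps the need for
// an Encoder[DataSet] because RDD.map has no such requirement.
val totalDataset: JavaRDD[DataSet] = csv.rdd.map { row =>
  val features = Array(row.getAs[String](0).toDouble, row.getAs[String](1).toDouble)
  val labels = Array(row.getAs[String](21).toDouble)
  new DataSet(Nd4j.create(features), Nd4j.create(labels))
}.toJavaRDD()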

How to efficiently convert a one-dimensional array to a two-dimensional array in Swift 3

What is an efficient way to convert an array of pixel values, [UInt8], into a two-dimensional array of pixel rows, [[UInt8]]?
You can write something like this:
var pixels: [UInt8] = [0,1,2,3, 4,5,6,7, 8,9,10,11, 12,13,14,15]
let bytesPerRow = 4
assert(pixels.count % bytesPerRow == 0)
let pixels2d: [[UInt8]] = stride(from: 0, to: pixels.count, by: bytesPerRow).map {
    Array(pixels[$0..<$0+bytesPerRow])
}
But with the value semantics of Swift Arrays, any attempt to create a new nested Array requires copying the contents, so it may not be "efficient" enough for your purpose.
Reconsider whether you really need such a nested Array.
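For instance, if all you need is row/column access, a thin wrapper over the flat buffer avoids the copies entirely. A sketch (PixelGrid is a hypothetical type, not part of the original answer):

struct PixelGrid {
    let pixels: [UInt8]
    let bytesPerRow: Int

    // 2-D read access computed over the original flat storage; nothing is copied.
    subscript(row: Int, col: Int) -> UInt8 {
        return pixels[row * bytesPerRow + col]
    }
}

let grid = PixelGrid(pixels: [0,1,2,3, 4,5,6,7, 8,9,10,11, 12,13,14,15], bytesPerRow: 4)
print(grid[2, 1]) // 9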
This should work
private func convert1Dto2DArray(oneDArray: [String], stringsPerRow: Int) -> [[String]]? {
    var target = oneDArray
    var outOfIndexArray: [String] = [String]()
    let remainder = oneDArray.count % stringsPerRow
    if remainder > 0 {
        let suffix = oneDArray.suffix(remainder)
        let list = oneDArray.prefix(oneDArray.count - remainder)
        target = Array(list)
        outOfIndexArray = Array(suffix)
    }
    var array2D: [[String]] = stride(from: 0, to: target.count, by: stringsPerRow).map {
        Array(target[$0..<$0+stringsPerRow])
    }
    if !outOfIndexArray.isEmpty {
        array2D.append(outOfIndexArray)
    }
    return array2D
}

Convert the datatype of values in a map in Go

I have a map in Go:
var userinputmap = make(map[string]string)
and the values in it are:
[ABCD:30 EFGH:50 PORS:60]
Note that the 30, 50, 60 are strings here.
I wish to have the same map, but with the numeric values typed as float64 instead of string.
Desired output :
var output = make(map[string]float64)
I tried to do it, but I get an error: cannot use <placeholder_name> (type string) as type float64 in assignment
You cannot do this by simple typecasting; the two maps have different representations in memory.
To solve this, you will have to iterate over every entry of the first map, convert the string representation of the float to a float64, then store the new value in the other map:
import "strconv"
var output = make(map[string]float64)
for key, value := range userinputmap {
if converted, err := strconv.ParseFloat(value, 64); err == nil {
output[key] = converted
}
}
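A complete, runnable version of the same idea (the map contents are just the example values from the question):

package main

import (
    "fmt"
    "strconv"
)

func main() {
    userinputmap := map[string]string{"ABCD": "30", "EFGH": "50", "PORS": "60"}

    output := make(map[string]float64)
    for key, value := range userinputmap {
        converted, err := strconv.ParseFloat(value, 64)
        if err != nil {
            continue // skip (or handle) values that are not valid numbers
        }
        output[key] = converted
    }

    fmt.Println(output) // map[ABCD:30 EFGH:50 PORS:60]
}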

.NET: How to change the value in a DataTable

In ADO.NET I'm using GetSchemaTable to return the schema table for a result set.
DataTable schema = rdr.GetSchemaTable();
gridSchema.DataSource = schema;
gridSchema.DataBind();
Unfortunately the "ProviderType" value is displayed as an integer, rather than the OleDbType enumeration value that it is:
ProviderType Desired Display Value
============ =====================
129 Char
3 Integer
129 Char
129 Char
3 Integer
3 Integer
129 Char
135 DBTimeStamp
129 Char
129 Char
...
All these integers are the enumeration values for the OleDbType enumeration:
public enum OleDbType
{
Empty = 0,
SmallInt = 2,
Integer = 3,
Single = 4,
Double = 5,
Currency = 6,
Date = 7,
BSTR = 8,
IDispatch = 9,
Error = 10,
Boolean = 11,
Variant = 12,
IUnknown = 13,
Decimal = 14,
TinyInt = 16,
UnsignedTinyInt = 17,
UnsignedSmallInt = 18,
UnsignedInt = 19,
BigInt = 20,
UnsignedBigInt = 21,
Filetime = 64,
Guid = 72,
Binary = 128,
Char = 129,
WChar = 130,
Numeric = 131,
DBDate = 133,
DBTime = 134,
DBTimeStamp = 135,
PropVariant = 138,
VarNumeric = 139,
VarChar = 200,
LongVarChar = 201,
VarWChar = 202,
LongVarWChar = 203,
VarBinary = 204,
LongVarBinary = 205,
}
I want to display the data type as something human readable, rather than an integer.
I've tried looping through the schema DataTable and modifying the values inside it:
DataTable schema = rdr.GetSchemaTable();
//Change providerType column to be readable
foreach (DataRow row in schema.Rows)
{
    OleDbType t = (OleDbType)row["ProviderType"];
    row["ProviderType"] = t.ToString();
}
gridSchema.DataSource = schema;
gridSchema.DataBind();
But that throws an exception:
Column 'ProviderType' is read only.
I even looked at the GridView's RowDataBound event, thinking I could change the value as it is rendered:
protected void gridSchema_RowDataBound(object sender, GridViewRowEventArgs e)
{
    //todo: magic
    //e.Row["ProviderType"]
}
But it doesn't look like you can play with rendered values.
Can anyone suggest a nice way to muck with the value of the ProviderType column so that it is human readable when I display it to humans?
Update
The workaround I'm using right now is to tack an extra column onto the end:
DataTable schema = rdr.GetSchemaTable();
schema.Columns.Add("OleDbDataType", typeof(String));
schema.Columns.Add("CLRDataType", typeof(String));
foreach (DataRow row in schema.Rows)
{
    //Show the actual provider type
    OleDbType t = (OleDbType)row["ProviderType"];
    row["OleDbDataType"] = t.ToString();

    //Show the corresponding CLR type while we're doing a hack
    row["CLRDataType"] = row["DataType"].ToString();
}
gridSchema.DataSource = schema;
gridSchema.DataBind();
I might be completely off here, but can't you just set the ReadOnly property of the column to false? Like:
schema.Columns["ProviderType"].ReadOnly = false;
Afterwards you might get a column type problem, as you're trying to put a string value into an integer column. But this should get you going in the right direction.
Edit:
Solving the column type issue:
DataTable newTable = schema.Clone();
newTable.Columns["ProviderType"].DataType = typeof(string);

foreach (DataRow dr in schema.Rows)
{
    DataRow newRow = newTable.NewRow();
    // Copy each column, replacing ProviderType with its enum name.
    foreach (DataColumn col in schema.Columns)
        newRow[col.ColumnName] = col.ColumnName == "ProviderType"
            ? (object)((OleDbType)dr["ProviderType"]).ToString()
            : dr[col.ColumnName];
    newTable.Rows.Add(newRow);
}
Or you could create an object with all the fields from the DataTable, pass all the data into your object, and either modify it there or add a read-only property that returns the string according to the integer given.
GridView and other controls also accept object arrays as data sources, so binding is automatic.
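For example, a small sketch of that wrapper idea (SchemaRowView is a hypothetical class, and only a couple of fields are shown):

// Hypothetical view model for one schema row.
public class SchemaRowView
{
    public string ColumnName { get; set; }
    public int ProviderType { get; set; }

    // Read-only property that maps the integer to the enum name for display.
    public string ProviderTypeName
    {
        get { return ((OleDbType)ProviderType).ToString(); }
    }
}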
