How do I add multiple arrays to a JSON object in Kotlin for Retrofit?

My code is:
val objSubBuyNow = JsonObject()
val arrayProducts = JsonArray()
for (i in 0 until response.arrList?.size!!) {
    objSubBuyNow.addProperty("strProductId", response.arrList[i]?._id)
    Log.i("kaaa", response.arrList[i]?._id.toString())
    objSubBuyNow.addProperty("dblQty", response.arrList[i]?.dblQty)
    objSubBuyNow.addProperty("strSize", response.arrList[i]?.strSize)
    objSubBuyNow.addProperty("strColor", response.arrList[i]?.strColor)
    objSubBuyNow.addProperty("strName", response.arrList[i]?.strName)
    objSubBuyNow.addProperty("dblAmount", response.arrList[i]?.dblMRP)
    objSubBuyNow.addProperty("strImageUrl", response.arrList[i]?.strImageUrl)
    arrayProducts.add(objSubBuyNow)
}
objBuyNow.add("arrProducts", arrayProducts)
But it adds the same values for every element of the array as the loop runs.
Thanks in advance
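The cause is that a single JsonObject is created once, before the loop, and reused on every iteration; JsonArray.add stores a reference rather than a copy, so every element of the array points at the same object, which ends up holding only the last item's values. Below is a minimal sketch of a fix that creates a fresh JsonObject inside the loop (objBuyNow, response.arrList, and the property names are taken from the question):

val arrayProducts = JsonArray()
for (item in response.arrList.orEmpty().filterNotNull()) {
    // A new JsonObject per product, so each array element is a distinct object.
    val objSubBuyNow = JsonObject()
    objSubBuyNow.addProperty("strProductId", item._id)
    objSubBuyNow.addProperty("dblQty", item.dblQty)
    objSubBuyNow.addProperty("strSize", item.strSize)
    objSubBuyNow.addProperty("strColor", item.strColor)
    objSubBuyNow.addProperty("strName", item.strName)
    objSubBuyNow.addProperty("dblAmount", item.dblMRP)
    objSubBuyNow.addProperty("strImageUrl", item.strImageUrl)
    arrayProducts.add(objSubBuyNow)
}
objBuyNow.add("arrProducts", arrayProducts)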

Related

How to get/build a JavaRDD[DataSet]?

When I use deeplearning4j and try to train a model in Spark,
public MultiLayerNetwork fit(JavaRDD<DataSet> trainingData)
fit() needs a JavaRDD<DataSet> parameter.
I tried to build one like this:
val totalDataset = csv.map(row => {
  val features = Array(
    row.getAs[String](0).toDouble, row.getAs[String](1).toDouble
  )
  val labels = Array(row.getAs[String](21).toDouble)
  val featuresINDA = Nd4j.create(features)
  val labelsINDA = Nd4j.create(labels)
  new DataSet(featuresINDA, labelsINDA)
})
but IDEA reports the error "No implicit arguments of type: Encoder[DataSet]",
and I don't know how to solve it.
I know a Spark RDD can be converted to a JavaRDD, but I don't know how to build a Spark RDD[DataSet] in the first place.
DataSet here is org.nd4j.linalg.dataset.DataSet.
Its constructor is:
public DataSet(INDArray first, INDArray second) {
    this(first, second, (INDArray)null, (INDArray)null);
}
This is my full code:
val spark: SparkSession = SparkSession
  .builder()
  .master("local")
  .appName("Spark LSTM Emotion Analysis")
  .getOrCreate()
import spark.implicits._
val JavaSC = JavaSparkContext.fromSparkContext(spark.sparkContext)
val csv = spark.read.format("csv")
  .option("header", "true")
  .option("sep", ",")
  .load("/home/hadoop/sparkjobs/LReg/data.csv")
val totalDataset = csv.map(row => {
  val features = Array(
    row.getAs[String](0).toDouble, row.getAs[String](1).toDouble
  )
  val labels = Array(row.getAs[String](21).toDouble)
  val featuresINDA = Nd4j.create(features)
  val labelsINDA = Nd4j.create(labels)
  new DataSet(featuresINDA, labelsINDA)
})
val data = totalDataset.toJavaRDD
The deeplearning4j official guide creates a JavaRDD<DataSet> in Java like this:
String filePath = "hdfs:///your/path/some_csv_file.csv";
JavaSparkContext sc = new JavaSparkContext();
JavaRDD<String> rddString = sc.textFile(filePath);
RecordReader recordReader = new CSVRecordReader(',');
JavaRDD<List<Writable>> rddWritables = rddString.map(new StringToWritablesFunction(recordReader));
int labelIndex = 5; //Labels: a single integer representing the class index in column number 5
int numLabelClasses = 10; //10 classes for the label
JavaRDD<DataSet> rddDataSetClassification = rddWritables.map(new DataVecDataSetFunction(labelIndex, numLabelClasses, false));
I tried to recreate it in Scala:
val JavaSC: JavaSparkContext = new JavaSparkContext()
val rddString: JavaRDD[String] = JavaSC.textFile("/home/hadoop/sparkjobs/LReg/hf-data.csv")
val recordReader: CSVRecordReader = new CSVRecordReader(',')
val rddWritables: JavaRDD[List[Writable]] = rddString.map(new StringToWritablesFunction(recordReader))
val featureColnum = 3
val labelColnum = 1
val d = new DataVecDataSetFunction(featureColnum, labelColnum, true, null, null)
// val rddDataSet: JavaRDD[DataSet] = rddWritables.map(new DataVecDataSetFunction(featureColnum, labelColnum, true, null, null))
// cannot resolve overloaded method 'map'
Debug error information: (screenshots omitted)
A DataSet is just a pair of INDArrays (inputs and labels).
Our docs cover this in depth:
https://deeplearning4j.konduit.ai/distributed-deep-learning/data-howto
For Stack Overflow's sake, I'll summarize what's there, since there is no single way to create a data pipeline; it's relative to your problem. It's very similar to how you would create a dataset locally: generally you take whatever you do locally and put that into Spark in a function.
CSVs and images, for example, are going to be very different, but generally you use the DataVec library for that. The docs summarize the approach for each kind.
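Building on that advice, here is a minimal sketch of the guide's CSV pipeline ending in a JavaRDD<DataSet>, written in Kotlin for illustration (any JVM language works the same way against these APIs). The HDFS path, label column, and class count are the guide's placeholders, and the imports assume the datavec-spark and dl4j-spark artifacts are on the classpath:

import org.apache.spark.api.java.JavaRDD
import org.apache.spark.api.java.JavaSparkContext
import org.datavec.api.records.reader.impl.csv.CSVRecordReader
import org.datavec.api.writable.Writable
import org.datavec.spark.transform.misc.StringToWritablesFunction
import org.deeplearning4j.spark.datavec.DataVecDataSetFunction
import org.nd4j.linalg.dataset.DataSet

fun buildDataSetRdd(sc: JavaSparkContext): JavaRDD<DataSet> {
    // Read the raw CSV lines as strings.
    val rddString: JavaRDD<String> = sc.textFile("hdfs:///your/path/some_csv_file.csv")
    // Parse each line into a list of Writables with DataVec (default delimiter is a comma).
    val recordReader = CSVRecordReader()
    val rddWritables: JavaRDD<List<Writable>> =
        rddString.map(StringToWritablesFunction(recordReader))
    // Convert each record into a DataSet: label in column 5, 10 label classes.
    val labelIndex = 5
    val numLabelClasses = 10
    return rddWritables.map(DataVecDataSetFunction(labelIndex, numLabelClasses, false))
}

Note that this avoids the Encoder[DataSet] error entirely: the pipeline stays in JavaRDD land instead of going through a typed Spark Dataset, so no implicit Encoder is needed.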

How to get the class reference from KParameter in Kotlin?

The code below is about reflection. It tries to do two things:
case1() creates an instance of the SimpleStudent class; it works.
case2() creates an instance of the Student class; it does not work.
The reason case2() does not work, and the question itself, lies inside generateValue():
I don't know how to check whether a parameter is a Kotlin type or one of my own types (I have a dirty way, checking that param.type.toString() does not contain "kotlin", but I wonder if there is a better solution).
I don't know how to get the class reference when it is a custom class. The problem is that even though param.type.toString() == "Lesson", when I try param.type::class, it is class kotlin.reflect.jvm.internal.KTypeImpl.
So, how to solve it? Thanks
==============
import kotlin.reflect.KParameter
import kotlin.reflect.full.primaryConstructor
import kotlin.test.assertEquals

data class Lesson(val title: String, val length: Int)
data class Student(val name: String, val major: Lesson)
data class SimpleStudent(val name: String, val age: Int)

fun generateValue(param: KParameter, originalValue: Map<*, *>): Any? {
    var value = originalValue[param.name]
    // if (param.type is not a Kotlin type) {
    //     // Get its ::class so that we can create an instance of it (here, the Lesson class)?
    // }
    return value
}

fun case1() {
    val classDesc = SimpleStudent::class
    val constructor = classDesc.primaryConstructor!!
    val value = mapOf<Any, Any>(
        "name" to "Tom",
        "age" to 16
    )
    val params = constructor.parameters.associateBy(
        { it },
        { generateValue(it, value) }
    )
    val result: SimpleStudent = constructor.callBy(params)
    assertEquals("Tom", result.name)
    assertEquals(16, result.age)
}

fun case2() {
    val classDesc = Student::class
    val constructor = classDesc.primaryConstructor!!
    val value = mapOf<Any, Any>(
        "name" to "Tom",
        "major" to mapOf<Any, Any>(
            "title" to "CS",
            "length" to 16
        )
    )
    val params = constructor.parameters.associateBy(
        { it },
        { generateValue(it, value) }
    )
    val result: Student = constructor.callBy(params)
    assertEquals("Tom", result.name)
    assertEquals(Lesson::class, result.major::class)
    assertEquals("CS", result.major.title)
}

fun main(args: Array<String>) {
    case1()
    case2()
}
Problem solved:
You can get that ::class by using param.type.classifier as KClass<T>, where param is a KParameter.
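A minimal sketch of how generateValue() could use that, recursing when the raw value is itself a Map (this assumes nested custom classes always arrive as Maps; it is an illustration, not the asker's final code):

import kotlin.reflect.KClass
import kotlin.reflect.KParameter
import kotlin.reflect.full.primaryConstructor

fun generateValue(param: KParameter, originalValue: Map<*, *>): Any? {
    val value = originalValue[param.name]
    if (value is Map<*, *>) {
        // A nested map means the parameter is a custom class such as Lesson:
        // resolve its KClass from the classifier and construct it recursively.
        val kClass = param.type.classifier as KClass<*>
        val ctor = kClass.primaryConstructor!!
        return ctor.callBy(ctor.parameters.associateWith { generateValue(it, value) })
    }
    return value
}

With this version, case2() resolves Lesson from param.type.classifier, builds Lesson("CS", 16), and the assertions pass.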

How to efficiently convert a one-dimensional array to a two-dimensional array in Swift 3

What is an efficient way to convert an array of pixel values, [UInt8], into a two-dimensional array of pixel rows, [[UInt8]]?
You can write something like this:
var pixels: [UInt8] = [0,1,2,3, 4,5,6,7, 8,9,10,11, 12,13,14,15]
let bytesPerRow = 4
assert(pixels.count % bytesPerRow == 0)
let pixels2d: [[UInt8]] = stride(from: 0, to: pixels.count, by: bytesPerRow).map {
    Array(pixels[$0..<$0+bytesPerRow])
}
But with the value semantics of Swift arrays, every attempt to create a new nested Array requires copying the content, so this may not be "efficient" enough for your purpose.
Reconsider whether you really need such a nested Array.
This should work:
private func convert1Dto2DArray(oneDArray: [String], stringsPerRow: Int) -> [[String]]? {
    var target = oneDArray
    var outOfIndexArray: [String] = [String]()
    let remainder = oneDArray.count % stringsPerRow
    if remainder > 0 && remainder <= stringsPerRow {
        let suffix = oneDArray.suffix(remainder)
        let list = oneDArray.prefix(oneDArray.count - remainder)
        target = Array(list)
        outOfIndexArray = Array(suffix)
    }
    var array2D: [[String]] = stride(from: 0, to: target.count, by: stringsPerRow).map {
        Array(target[$0..<$0+stringsPerRow])
    }
    if !outOfIndexArray.isEmpty {
        array2D.append(outOfIndexArray)
    }
    return array2D
}

How to convert List to Map in Kotlin?

For example, I have a list of strings like:
val list = listOf("a", "b", "c", "d")
and I want to convert it to a map, where the strings are the keys.
I know I should use the .toMap() function, but I don't know how, and I haven't seen any examples of it.
You have two choices:
The first, and most performant, is to use the associateBy function, which takes two lambdas for generating the key and value, and inlines the creation of the map:
val map = friends.associateBy({ it.facebookId }, { it.points })
The second, less performant, is to use the standard map function to create a list of Pairs, which toMap can then use to generate the final map:
val map = friends.map { it.facebookId to it.points }.toMap()
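As a self-contained illustration of both options (Friend, facebookId, and points are stand-in names implied by the answer):

data class Friend(val facebookId: String, val points: Int)

fun main() {
    val friends = listOf(Friend("fb1", 10), Friend("fb2", 20))
    val map1 = friends.associateBy({ it.facebookId }, { it.points })
    val map2 = friends.map { it.facebookId to it.points }.toMap()
    println(map1) // prints: {fb1=10, fb2=20}
    println(map2) // prints: {fb1=10, fb2=20}
}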
From List to Map with associate function
With Kotlin 1.3, List has a function called associate. associate has the following declaration:
fun <T, K, V> Iterable<T>.associate(transform: (T) -> Pair<K, V>): Map<K, V>
Returns a Map containing key-value pairs provided by transform function applied to elements of the given collection.
Usage:
class Person(val name: String, val id: Int)

fun main() {
    val friends = listOf(Person("Sue Helen", 1), Person("JR", 2), Person("Pamela", 3))
    val map = friends.associate({ Pair(it.id, it.name) })
    //val map = friends.associate({ it.id to it.name }) // also works
    println(map) // prints: {1=Sue Helen, 2=JR, 3=Pamela}
}
From List to Map with associateBy function
With Kotlin, List has a function called associateBy. associateBy has the following declaration:
fun <T, K, V> Iterable<T>.associateBy(keySelector: (T) -> K, valueTransform: (T) -> V): Map<K, V>
Returns a Map containing the values provided by valueTransform and indexed by keySelector functions applied to elements of the given collection.
Usage:
class Person(val name: String, val id: Int)

fun main() {
    val friends = listOf(Person("Sue Helen", 1), Person("JR", 2), Person("Pamela", 3))
    val map = friends.associateBy(keySelector = { person -> person.id }, valueTransform = { person -> person.name })
    //val map = friends.associateBy({ it.id }, { it.name }) // also works
    println(map) // prints: {1=Sue Helen, 2=JR, 3=Pamela}
}
If you have duplicates in your list that you don't want to lose, you can do this using groupBy.
Otherwise, like everyone else said, use associate/associateBy/associateWith (which, in the case of duplicates, will keep only the last value for each key).
An example grouping a list of people by age:
class Person(val name: String, val age: Int)

fun main() {
    val people = listOf(Person("Sue Helen", 31), Person("JR", 25), Person("Pamela", 31))
    val duplicatesKept = people.groupBy { it.age }
    val duplicatesLost = people.associateBy({ it.age }, { it })
    println(duplicatesKept)
    println(duplicatesLost)
}
Results:
{31=[Person#41629346, Person#4eec7777], 25=[Person#3b07d329]}
{31=Person#4eec7777, 25=Person#3b07d329}
Converting iterable or sequence elements to a Map in Kotlin:
associate vs associateBy vs associateWith
(Reference: Kotlin documentation)
1- associate (sets both keys and values): builds a map in which you compute both the key and the value of each entry:
IterableSequenceElements.associate { newKey to newValue } // Output => Map {newKey: newValue, ...}
If two pairs would have the same key, the last one gets added to the map.
The returned map preserves the entry iteration order of the original collection.
2- associateBy (computes just the keys): builds a map in which you compute new keys; the elements themselves become the values:
IterableSequenceElements.associateBy { newKey } // Result => Map {newKey: element, ...}
3- associateWith (computes just the values): builds a map in which you compute new values; the elements themselves become the keys:
IterableSequenceElements.associateWith { newValue } // Result => Map {element: newValue, ...}
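To make the differences concrete, here is a small runnable sketch (the words list is made up for illustration):

fun main() {
    val words = listOf("a", "bb", "ccc")
    println(words.associate { it to it.length })  // {a=1, bb=2, ccc=3}
    println(words.associateBy { it.length })     // {1=a, 2=bb, 3=ccc}
    println(words.associateWith { it.length })   // {a=1, bb=2, ccc=3}
}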
Example from Kotlin tips:
You can use associate for this task:
val list = listOf("a", "b", "c", "d")
val m: Map<String, Int> = list.associate { it to it.length }
In this example, the strings from list become the keys and their corresponding lengths (as an example) become the values inside the map.
That has changed in the RC version.
I am using: val map = list.groupByTo(destinationMap, { it.facebookId }, { it.point })
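A self-contained sketch of that approach (Friend, facebookId, and point mirror the names used above); note how duplicate keys collect their values into lists instead of being overwritten:

data class Friend(val facebookId: String, val point: Int)

fun main() {
    val list = listOf(Friend("fb1", 10), Friend("fb2", 20), Friend("fb1", 30))
    val destinationMap = mutableMapOf<String, MutableList<Int>>()
    // groupByTo fills the given mutable map and returns it.
    val map = list.groupByTo(destinationMap, { it.facebookId }, { it.point })
    println(map) // prints: {fb1=[10, 30], fb2=[20]}
}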

Field type returning numbers [Axapta]

I want to get the field types. My code is as follows:
tID = dict.tableName2Id(tableName);
counter = 0;
dt = new DictTable(tID);
if (dt)
{
    counter = dt.fieldNext(counter);
    while (counter)
    {
        df = dt.fieldObject(counter);
        if (df)
        {
            fields = conIns(fields, 1, df.baseType());
        }
        counter = dt.fieldNext(counter);
    }
}
On return to the .NET Business Connector, the types are shown as numbers instead of strings.
Kindly help.
EDIT: DictField.baseType() returns a "Types" value; can this be converted to a string and then added to the container?
EDIT 2: OK, now I'm getting a Types enumeration. Is there any way to map this enumeration in AX and add it to the container as a string?
Got it! Here's the code:
tID = dict.tableName2Id(tableName);
counter = 0;
dt = new DictTable(tID);
if (dt)
{
    counter = dt.fieldNext(counter);
    while (counter)
    {
        df = dt.fieldObject(counter);
        if (df)
        {
            t = df.baseType();
            // enum2str converts the Types enum value to its name as a string
            fields = conIns(fields, 1, enum2str(t));
        }
        counter = dt.fieldNext(counter);
    }
}
