I have a global GDScript (an autoload singleton), and inside it I have:
var number1 = 0
var number2 = 0
which can normally be accessed from any other script as global.number1 and global.number2.
I also have a different script with a Dictionary that holds some values.
How can I make it work like this:
var dict = {0: {path = "global.number1"}, 1: {path = "global.number2"}}
so that, instead of using several if-statements when I want to expand the number of objects in the dictionary, I can do this:
dict[n].path += 1
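The usual trick for this is to store the *name* of the property in the dictionary and resolve it dynamically (in GDScript that would be global.get()/global.set()). A minimal sketch of the idea in Python, where the Globals class and its fields are hypothetical stand-ins for the autoload singleton:

```python
# Stand-in for the global autoload script; the class and field names
# are hypothetical, mirroring the question's number1/number2.
class Globals:
    def __init__(self):
        self.number1 = 0
        self.number2 = 0

global_script = Globals()

# Store the property *name* as a string, not the value itself.
paths = {0: {"path": "number1"}, 1: {"path": "number2"}}

def increment(n):
    # Equivalent in spirit to GDScript's
    # global.set(path, global.get(path) + 1)
    path = paths[n]["path"]
    setattr(global_script, path, getattr(global_script, path) + 1)

increment(0)
increment(0)
increment(1)
print(global_script.number1, global_script.number2)  # 2 1
```

Adding a "NameOfChild3"-style new entry then only requires a new row in the dictionary, not new if-statements.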
I don't really know what you are actually asking, but maybe you forgot to set the path to the global in the second script?
Anyway, you should clarify what the question is.
I am writing some code where I have multiple dictionaries for my data. The reason is that I have multiple core objects and multiple smaller assets, and the user must be able to choose a smaller asset and have some function off in the distance run the code with the parent noted.
An example of one of the dictionaries: (I'm working in ROBLOX Lua 5.1 but the syntax for the problem should be identical)
local data = {
    character = workspace.Stores.NPCs.Thom,
    name = "Thom",
    npcId = 9,
    npcDialog = workspace.Stores.NPCs.Thom.Dialog
}

local items = {
    item1 = {
        model = workspace.Stores.Items.Item1.Main,
        npcName = "Thom",
    }
}
This is my function:
local function function1(item)
    if not items[item] and data[items[item[npcName]]] then return false end
end
As you can see, I try to index the dictionary using a key from another dictionary. Usually this is no problem.
local thisIsAVariable = item[item1[npcName]]
but the method I use above tries to index the data dictionary for data that is in the items dictionary.
Without a ton of local variables and clutter, is there a way to do this? I had an idea to wrap the conflicting dictionary reference in a tostring() call to separate them - would that work?
Thank you.
As I see it, your issue is that:
data[items[item[npcName]]]
is looking for data["Thom"] ... but you do not have such a key in the data table. You have a "name" key whose value is "Thom". You could reverse the key and value in the data table: ["Thom"] = "name".
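The suggested fix - keying the lookup by the NPC's name, or building a reverse index - can be sketched in Python; the table contents below are placeholder strings standing in for the actual Roblox instances:

```python
# Placeholder stand-ins for the Roblox workspace objects.
data = {"character": "Thom-model", "name": "Thom", "npcId": 9}
items = {"item1": {"model": "Item1-model", "npcName": "Thom"}}

# Build a reverse index so an NPC name maps back to its data record.
npcs_by_name = {data["name"]: data}

def function1(item):
    # Index one dictionary with a key stored in another dictionary.
    entry = items.get(item)
    if entry is None:
        return False
    return npcs_by_name.get(entry["npcName"])

print(function1("item1")["npcId"])  # 9
print(function1("nope"))            # False
```

No tostring() wrapping is needed; the inner lookup already yields a plain string key for the outer table.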
I am writing a Frama-C plugin.
I want to develop a plugin that sets the value of a local variable. With this in place I can run the value analysis afterwards, and then my second plugin can analyze reachability, do path analysis, and other things.
Is it possible to set the value of a local variable within a plugin (at the start of a function where I know the name)?
EDIT
I have now found out how to create new local variables, how to get the Varinfo of a variable, and how to create new varinfos. The only missing piece is setting the variable's value.
I started with a code like this:
match kf_cil_fun with
| Cil_types.Definition (a, b) ->
    let val_visitor = new value_set_visitor kf in
    Visitor.visitFramacFileSameGlobals (val_visitor :> Visitor.frama_c_visitor) (Ast.get ());
    let old_varinfo = Cil.makeLocalVar a "x" Cil.intType in
    let new_varinfo = Cil.makeVarinfo false false "x" Cil.intType in
    val_visitor#doStuff old_varinfo new_varinfo;
    ()
| _ -> ()
where the visitor is a simple visitor with a method doStuff, plus the built-in methods vfile, vglob_aux, and vstmt_aux, which simply call Cil.DoChildren:
method doStuff old_varinfo new_varinfo =
    Cil.set_varinfo self#behavior old_varinfo new_varinfo;
Does anyone have an idea of how to set the value of x to 1 (or some other fixed value)? Am I doing things right?
I'm trying to migrate some Hadoop MapReduce code to Spark, and I have doubts about how to manage map and reduce transformations when the schema of either the key or the value changes from input to output.
I have avro files with Indicator records that I want to process somehow. I already have this code that works:
val myAvroJob = new Job()
myAvroJob.setInputFormatClass(classOf[AvroKeyInputFormat[Indicator]])
myAvroJob.setOutputFormatClass(classOf[AvroKeyOutputFormat[Indicator]])
myAvroJob.setOutputValueClass(classOf[NullWritable])
AvroJob.setInputValueSchema(myAvroJob, Schema.create(Schema.Type.NULL))
AvroJob.setInputKeySchema(myAvroJob, Indicator.SCHEMA$)
AvroJob.setOutputKeySchema(myAvroJob, Indicator.SCHEMA$)

val indicatorsRdd = sc.newAPIHadoopRDD(myAvroJob.getConfiguration,
    classOf[AvroKeyInputFormat[Indicator]],
    classOf[AvroKey[Indicator]],
    classOf[NullWritable])

val myRecordOnlyRdd = indicatorsRdd.map(x => (doSomethingWith(x._1), NullWritable.get))

val indicatorPairRDD = new PairRDDFunctions(myRecordOnlyRdd)
indicatorPairRDD.saveAsNewAPIHadoopDataset(myAvroJob.getConfiguration)
But this code only works because the schema of the input and output keys does not change; it is always Indicator. In Hadoop MapReduce you can define map or reduce functions and change the schema from input to output. In fact, I have map functions that process every Indicator record and generate a new SoporteCartera record. How can I do this in Spark? Is it possible from the same RDD, or do I have to define 2 different RDDs and pass from one to the other somehow?
Thanks for your help.
To answer my own question... the problem was that you cannot change the RDD's type; you must define a different RDD, so I solved it with the following code:
val myAvroJob = new Job()
myAvroJob.setInputFormatClass(classOf[AvroKeyInputFormat[SoporteCartera]])
myAvroJob.setOutputFormatClass(classOf[AvroKeyOutputFormat[Indicator]])
myAvroJob.setOutputValueClass(classOf[NullWritable])
AvroJob.setInputValueSchema(myAvroJob, Schema.create(Schema.Type.NULL))
AvroJob.setInputKeySchema(myAvroJob, SoporteCartera.SCHEMA$)
AvroJob.setOutputKeySchema(myAvroJob, Indicator.SCHEMA$)

val soporteCarteraRdd = sc.newAPIHadoopRDD(myAvroJob.getConfiguration,
    classOf[AvroKeyInputFormat[SoporteCartera]],
    classOf[AvroKey[SoporteCartera]],
    classOf[NullWritable])

val indicatorsRdd = soporteCarteraRdd.map(x => (fromSoporteCarteraToIndicator(x._1), NullWritable.get))

val indicatorPairRDD = new PairRDDFunctions(indicatorsRdd)
indicatorPairRDD.saveAsNewAPIHadoopDataset(myAvroJob.getConfiguration)
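The key point - a map that changes the element type simply yields a new collection (a new RDD) of the new type, leaving the original untouched - can be sketched in plain Python. The SoporteCartera and Indicator classes and the conversion logic below are hypothetical stand-ins:

```python
# Plain-Python sketch of changing the record type with a map.
# SoporteCartera/Indicator are hypothetical stand-in record types.
from dataclasses import dataclass

@dataclass
class SoporteCartera:
    account: str
    balance: float

@dataclass
class Indicator:
    name: str
    value: float

def from_soporte_cartera_to_indicator(sc: SoporteCartera) -> Indicator:
    # Placeholder conversion logic.
    return Indicator(name=sc.account, value=sc.balance)

soporte_cartera_rdd = [SoporteCartera("A-1", 10.0), SoporteCartera("A-2", 20.0)]

# As with RDD.map, the result is a *new* collection of the new type.
indicators_rdd = [from_soporte_cartera_to_indicator(x) for x in soporte_cartera_rdd]

print(indicators_rdd[0])  # Indicator(name='A-1', value=10.0)
```

Spark's RDD.map works the same way: an RDD[SoporteCartera] mapped through the conversion function gives a distinct RDD[Indicator]; you never mutate the type of an existing RDD in place.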
I have 2 possibilities. It looks like they are the same (or am I wrong?). Which is better, and why?
var quest1:DisplayObject = FrameCanvas.baseCanvas.addChild(app.questionmark1);
quest1.x = posX;
quest1.y = posY;
or
app.questionmark1.x = posX;
app.questionmark1.y = posY;
In the first example quest1 is a reference to app.questionmark1 which you are adding to FrameCanvas.baseCanvas and then updating its x and y.
In the second example you are directly setting the x and y on app.questionmark1.
Both work to update app.questionmark1's x and y properties, but in the second example app.questionmark1 may not be on the stage unless you added it somewhere else in your code.
The second example is better because there is really no reason to store a reference to app.questionmark1 as quest1, since you can already access it as app.questionmark1.
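The point that both routes mutate the same underlying object can be sketched in Python, where the Obj class is a stand-in for a DisplayObject:

```python
# Stand-in for a DisplayObject with x/y properties.
class Obj:
    def __init__(self):
        self.x = 0
        self.y = 0

questionmark1 = Obj()

# First style: go through a stored reference.
quest1 = questionmark1
quest1.x = 100
quest1.y = 50

# Second style: set the properties directly.
questionmark1.x = 100
questionmark1.y = 50

# Both names point at the same object, so the effect is identical.
print(quest1 is questionmark1)  # True
```

The only real difference in the original ActionScript is the addChild() call in the first version, which puts the object on the display list as a side effect.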
I need to know how I can parse a variable path in Flex 3 with E4X. For example, I have two XML strings where the name of one element is the only difference.
<NameOfRoot>
    <NameOfChild1>
        <data>1</data>
    </NameOfChild1>
</NameOfRoot>
<NameOfRoot>
    <NameOfChild2>
        <data>2</data>
    </NameOfChild2>
</NameOfRoot>
Currently I am accessing variables like this:
var data1:String = NameOfRoot.*::NameOfChild1.*::data;
var data2:String = NameOfRoot.*::NameOfChild2.*::data;
I would rather make this task more abstract, so that if "NameOfChild3" is introduced I do not need to update the code. For example:
var data:String = NameOfRoot.*::{variable}.*::data;
Does anyone have insights into how this can be done?
Use the child() method (LiveDocs example here):
var tagName:String = "NameOfChild1";
var data:String = NameOfRoot.child(tagName).data;
That's with no namespacing; I'm not sure whether it's necessary in your case, but I assume you'd add some *::'s?
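For comparison, the same dynamic-child-lookup idea in Python's standard-library ElementTree, using the two XML snippets from the question:

```python
import xml.etree.ElementTree as ET

xml1 = "<NameOfRoot><NameOfChild1><data>1</data></NameOfChild1></NameOfRoot>"
xml2 = "<NameOfRoot><NameOfChild2><data>2</data></NameOfChild2></NameOfRoot>"

def read_data(xml_text, tag_name):
    # Look the child up by a tag name held in a variable,
    # analogous to NameOfRoot.child(tagName).data in E4X.
    root = ET.fromstring(xml_text)
    return root.find(tag_name).find("data").text

print(read_data(xml1, "NameOfChild1"))  # 1
print(read_data(xml2, "NameOfChild2"))  # 2
```

Introducing a NameOfChild3 document then needs no code change, only a different tag_name argument.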
This also works:
var data:String = NameOfRoot..data;
but if you have more than one data node you'll have to sort some things out.
It looks like the ".*." operation will work. I wonder if this is the easiest way to handle this problem.
var data:String = NameOfRoot.*.*::data;