I'm switching over to using the Auto-IVC component as opposed to the IndepVar component. I'd like to be able to get a list of the promoted output names of the Auto-IVC component, so I can then use them to go and pull the appropriate value out of a configuration file and set the values that way. This will get rid of some boilerplate.
p.model._auto_ivc.list_outputs()
returns an empty list. It seems that p.model.__dict__ has this information encoded in it, but I don't know exactly what is going on there, so I am wondering if there is an easier way to do it.
To avoid confusion from future readers, I assume you meant that you wanted the promoted input names for the variables connected to the auto_ivc outputs.
We don't have a built-in function to do this, but you could do it with a bit of code like this:
seen = set()
for n in p.model._inputs:                  # absolute names of all inputs
    src = p.model.get_source(n)            # absolute name of the output feeding this input
    if src.startswith('_auto_ivc.') and src not in seen:
        # map the absolute input name to its promoted name
        print(src, p.model._var_allprocs_abs2prom['input'][n])
        seen.add(src)
assuming 'p' is the name of your Problem instance.
The code above just prints each auto_ivc output name followed by the promoted input it's connected to.
Here's an example of the output when run on one of our simple test cases:
_auto_ivc.v0 par.x
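If the end goal is to set those inputs from a configuration file, a minimal sketch building on the loop above could look like the following (the config dict is hypothetical, mapping promoted input names to values, and this assumes it runs after p.final_setup()):
config = {'par.x': 3.0}   # hypothetical: promoted input name -> value read from your config file

seen = set()
for n in p.model._inputs:
    src = p.model.get_source(n)
    if src.startswith('_auto_ivc.') and src not in seen:
        prom = p.model._var_allprocs_abs2prom['input'][n]
        if prom in config:
            p.set_val(prom, config[prom])   # Problem.set_val accepts promoted names
        seen.add(src)

p.run_model()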
When I run an Xcos model containing a scifunc_block_m block, as shown below, I get an error message about inconsistent data dimensions:
"Data dimensions are inconsistent:"
" Variable size=[1,1]"
"Block output size=[100,1]."
But when I double-click on the block to see what I can change to make the dimensions correct, I get a message in the console saying
Undefined variable: scifunc_block_m
What bugs me is that scifunc_block_m is not the name of any variable, but rather the name of the block itself, as can be seen in the official docs.
Of course I double-checked that I have no variable with that name, neither in my function phase_shifter nor anywhere else.
I tried with Scilab 6.1.1 and 6.1.0, believing it might be a bug, but apparently it is not.
In your phase_shifter.sce file that generates the input variable, signalIn does not comply with the From Workspace block requirements. Its documentation says that the input variable must be a structure with time and values fields, that .time must be a column vector, and that in your case .values must also be a column.
So,
t = (0:1/fs:Npp/fs - 1/fs); // time vector
signalIn = A*%e^(%i*w*t);
should be replaced with
t = (0:1/fs:Npp/fs - 1/fs)'; // time column vector
signalIn = struct("time",t, "values",A*%e^(%i*w*t));
This fixes the inconsistent dimensions message.
In addition, I am not able to reproduce your issue about Undefined variable: scifunc_block_m. The parameters interface opens as expected.
You may get this kind of message if you try to run some Xcos parts outside of Xcos, without first loading the Xcos-related libraries.
Then we get an unclear "Output should be of complex type." message on the From workspace block.
By the way, you are trying to plot complex values. Please have a look at the MATMAGPHI block before the MUX: https://help.scilab.org/docs/6.1.1/en_US/MATMAGPHI.html
I'm trying to practice writing workflows in snakemake.
The contents of my Snakefile:
configfile: "config.yaml"

rule get_col:
    input:
        expand("data/{file}.csv", file=config["datname"])
    output:
        expand("output/{file}_col{param}.csv", file=config["datname"], param=config["cols"])
    params:
        col=config["cols"]
    script:
        "scripts/getCols.R"
The contents of config.yaml:
cols:
  [2, 4]
datname:
  "GSE3790_expression_data"
My R script:
getCols = function(input, output, col) {
  dat = read.csv(input)
  dat = dat[, col]
  write.csv(dat, output, row.names = F)
}

getCols(snakemake@input[[1]], snakemake@output[[1]], snakemake@params[['col']])
It seems like both columns are being called at once. What I'm trying to accomplish is one column being called from the list per output file.
Since the second output never gets a chance to be created (both columns are used to create the first output), snakemake throws an error:
Waiting at most 5 seconds for missing files.
MissingOutputException in line 3 of /Users/rebecca/Desktop/snakemake-tutorial/practice/Snakefile:
Job completed successfully, but some output files are missing.
On a slightly unrelated note, I thought I could write the input as:
'"data/{file}.csv"'
But that returns:
WildcardError in line 4 of /Users/rebecca/Desktop/snakemake-tutorial/practice/Snakefile:
Wildcards in input files cannot be determined from output files:
'file'
Any help would be much appreciated!
Looks like you want to run your Rscript twice per file, once for every value of col. In this case, the rule needs to be called twice as well.
The use of expand is also a bit too much here, in my opinion. expand fills your wildcards with all possible values and returns a list of the resulting files. So the output for this rule would be all possible combinations between files and cols, which the simple script can not create in one run.
This is also the reason why file can not be inferred from the output - it gets expanded there.
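To make this concrete, here is roughly what that expand call returns (expand can be imported from snakemake.io to try it outside a Snakefile; the file and param values below are taken from your config):
from snakemake.io import expand

files = expand("output/{file}_col{param}.csv",
               file=["GSE3790_expression_data"], param=[2, 4])
# ['output/GSE3790_expression_data_col2.csv',
#  'output/GSE3790_expression_data_col4.csv']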
Instead, write your rule for just one file and column, and expand on the resulting output in a rule that needs this output as an input. If this is the final output of your workflow, put it as the input of a rule all to tell the workflow what the ultimate goal is.
rule all:
    input:
        expand("output/{file}_col{param}.csv",
               file=config["datname"], param=config["cols"])

rule get_col:
    input:
        "data/{file}.csv"
    output:
        "output/{file}_col{param}.csv"
    params:
        col=lambda wc: wc.param
    script:
        "scripts/getCols.R"
Snakemake will infer from rule all (or any other rule to further use the output) what needs to be done and will call the rule get_col accordingly.
There seem to be three ways to display output in Jupyter:
By using print
By using display
By just writing the variable name
What is the exact difference, especially between number 2 and 3?
I haven't used display, but it looks like it provides a lot of controls. print, of course, is the standard Python function, with its own possible parameters.
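For instance, print takes sep, end and file parameters:
import sys

print('a', 'bcd', 'ef', sep=', ')         # a, bcd, ef
print('sent to stderr', file=sys.stderr)  # goes to the error stream, not stdout
print('no trailing newline', end='')      # suppress the default '\n'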
But let's look at a simple numpy array in an IPython console session.
Simply giving the name shows the default Out:
In [164]: arr
Out[164]: array(['a', 'bcd', 'ef'], dtype='<U3')
This is the same as the repr output for this object:
In [165]: repr(arr)
Out[165]: "array(['a', 'bcd', 'ef'], dtype='<U3')"
In [166]: print(repr(arr))
array(['a', 'bcd', 'ef'], dtype='<U3')
Looks like the default display is the same:
In [167]: display(arr)
array(['a', 'bcd', 'ef'], dtype='<U3')
print, on the other hand, shows the str of the object by default:
In [168]: str(arr)
Out[168]: "['a' 'bcd' 'ef']"
In [169]: print(arr)
['a' 'bcd' 'ef']
So, at least for a simple case like this, the key difference is between the repr and the str of the object. Another difference is which actions produce an Out and which don't: Out[164] is an array, while Out[165] (and Out[168]) are strings. print and display show the value but don't put anything on the Out list (in other words, they return None).
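You can see the same split with your own classes; a small sketch (the class is made up for illustration):
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __repr__(self):
        return f'Point(x={self.x}, y={self.y})'   # used by the bare name and display
    def __str__(self):
        return f'({self.x}, {self.y})'            # used by print

pt = Point(1, 2)
pt          # bare name shows the repr:  Point(x=1, y=2)
print(pt)   # print shows the str:       (1, 2)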
display can return a 'display' object, but I won't get into that here. You can read the docs as well as I can.
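Where display really differs is with objects that carry rich representations; in a notebook, for example (a small sketch using IPython.display):
from IPython.display import display, HTML

h = HTML('<b>bold text</b>')
display(h)   # rendered as bold HTML in a notebook
print(h)     # just the plain text form, e.g. <IPython.core.display.HTML object>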
I have a unit test for a function that adds data (untransformed) to the database. The data to insert is given to the create function.
Do I use the input data in my asserts or is it better to specify the data that I’m asserting?
For example:
$personRequest = [
    'name' => 'John',
    'age' => 21,
];
$id = savePerson($personRequest);
$personFromDb = getPersonById($id);
$this->assertEquals($personRequest['name'], $personFromDb['name']);
$this->assertEquals($personRequest['age'], $personFromDb['age']);
Or
$id = savePerson([
    'name' => 'John',
    'age' => 21,
]);
$personFromDb = getPersonById($id);
$this->assertEquals('John', $personFromDb['name']);
$this->assertEquals(21, $personFromDb['age']);
I think the 1st option is better. Your input data may change in the future, and if you go with the 2nd option, you will have to change the assertion data every time.
The 2nd option is useful when your output is going to be the same irrespective of your input data.
I got an answer from Adam Wathan by e-mail. (I took his Test-Driven Laravel course and noticed he uses the 'specify' option.)
"I think it's just personal preference, I like to be able to visually skim and see 'ok this specific string appears here in the output and here in the input', vs. trying to avoid duplication by storing things in variables. Nothing wrong with either approach in my opinion!"
So I can't choose a correct answer.
I'm trying to read or redirect the test console output. I'm doing an AssertAreEqualIgnoringOrder and I need to parse some of the values within the failures. The failures output looks like this:
Expected elements to be equal but possibly in a different order.
Ensure the list of idols matches with the db
Equal Elements : ["98932", "670945", "6747749", "6770804", "7110604", "13280109", "13280121", "13280149", "14448042", "14448336", "15726213", "17009409", "17245584", "93123", "2212314", "10129661", "13280123", "13280125", "13280135", "13280144", "17245263", "18784003", "1112597", "2885514", "8505390", "13279857", "15032800", "17009391", "17009396", "17009398", "17880635", "18340462", "3606775", "13280116", "13280133", "13280137", "14448341", "15050039", "16711731", "17008920", "17009377", "17009381", "17009402", "17245606", "17901335", "865871", "6029748", "17009372", "17009386", "17009406", "17245604", "19113286", "19865372"]
Excess Elements : ["14419207"]
Missing Elements : ["17241620"]
How can I read this directly or redirect it to a file? If I can just parse the output, I can process it afterwards with some regex; that will not be a problem.
Another solution would be to parse the Gallio report file, but I'm sure there is a faster way.
Can you help me, please?
Thanks,
Andrei