I have some trouble using CombiTimeTable.
I want to fill the table from a txt file that contains two columns: the first is the time and the second is the related value (a current sample). As the manual says, I added #1 as the first line.
I also set the following parameters:
tableOnFile=true,
fileName="C:/Users/gg/Desktop/CurrentDrivingCycle.txt"
I also have to set the parameter tableName, but I don't know how to define it. I tried using the name of the file (i.e. CurrentDrivingCycle), but I got this error message:
Table matrix "CurrentDrivingCycle" not found on file "C:/Users/ggalli/Desktop/CurrentDrivingCycle.txt".
simulation terminated by an assertion at initialization
Simulation process failed. Exited with code -1.
Do you know how I can solve this issue?
Thank you in advance!
See the documentation:
https://build.openmodelica.org/Documentation/Modelica.Blocks.Sources.CombiTimeTable.html
The name tab1 in the documentation's example (tab1(6,2)) is the tableName, and (6,2) is its dimension. So your file should look something like:
#1
double CurrentDrivingCycle(6,2) # comment line
0 0
1 0
1 1
2 4
3 9
4 16
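With that file, the corresponding parameters in the model would then be (a sketch, using the path from your question):
tableOnFile=true,
tableName="CurrentDrivingCycle",
fileName="C:/Users/gg/Desktop/CurrentDrivingCycle.txt"
The key point is that tableName must match the name declared after double inside the file, not the file name itself.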
I have the following dataset:
id  action  source      target
1   ADD     N/A         /root/dir2.trash
2   ADD     N/A         /root/dir1
3   MOVE    /home/user  /home/user.wasted
4   MOVE    /usr/bin    /usr/local/bin
5   MOVE    /usr/sbin   /usr/sbin.trash
When I run the following query, I expect to get rows 3 and 5 in response, but I get row 1 as well.
SELECT * FROM test WHERE action = 'MOVE' AND target LIKE '%.wasted' OR target LIKE '%.trash'
Your AND and OR bind differently than you seem to think: AND has higher precedence than OR.
The following query gets the desired result, probably because it matches how you think: the parentheses make sure that the first condition always has to apply.
SELECT * FROM test WHERE action = 'MOVE' AND ( target LIKE '%.wasted' OR target LIKE '%.trash' )
In other words, your original query acted like "(move AND wasted) OR trash".
This is similar to 2 * 3 + 5 being 11, not 16.
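To see the precedence explicitly, this is how the database parses your original query (AND binds tighter than OR in standard SQL):
SELECT * FROM test
WHERE (action = 'MOVE' AND target LIKE '%.wasted')
   OR target LIKE '%.trash'
Row 1 satisfies the second branch on its own (its target ends in .trash), which is why it appears even though its action is ADD.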
Chapter 9, page 163, of the AMPL book gives an example of reading a single parameter from a file:
For example, if you want to read the number of weeks and the
hours available each week for our simple production model (Figure 4-4),
param T > 0;
param avail {1..T} >= 0;
from a file week_data.txt containing
4
40 40 32 40
then you can give the command
read T, avail[1], avail[2], avail[3], avail[4] <week_data.txt;
This command fails in GLPK with the error "colon missing where expected". The GNU MathProg language reference only describes the table ... IN statement, which serves for reading tabular data. Can GLPK read a single parameter from a file?
You can use the table statement to read parameters from CSV files or SQL tables.
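For example, a minimal sketch of reading an indexed parameter from a CSV file with the table statement (the file name avail.csv and its column names t and value are assumptions):
set TS dimen 1;
param avail{TS} >= 0;
# read the set TS from column t and the parameter avail from column value
table data IN "CSV" "avail.csv": TS <- [t], avail ~ value;
where avail.csv would contain:
t,value
1,40
2,40
3,32
4,40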
You can use data files for passing parameters, e.g. see this example of writing data files in AWK and Visual Basic.
AMPL and GMPL are closely related modeling languages. GMPL implements a subset of the AMPL syntax but differs in several areas, such as the table statement.
One way to read a single parameter is to write the data into a data file using GMPL's data-section syntax. For example, the file problem.dat below defines both a single parameter and an indexed one:
param T := 4;
param avail :=
1 0
2 1
3 1
4 0;
end;
To verify the syntax, consider this code in the file problem.mod:
param T > 0;
param avail {1..T} >= 0;
var use {1..T} >= 0;
# note: this objective sums the parameter avail, so it is constant;
# GLPK reports this below as "constant term 2 ignored"
maximize usage: sum {t in 1..T} avail[t];
subject to constraint {t in 1..T}: use[t] <= avail[t];
solve;
end;
The result shows that it worked:
> glpsol -m problem.mod -d problem.dat
GLPSOL: GLPK LP/MIP Solver, v4.65
Parameter(s) specified in the command line:
-m problem.mod -d problem.dat
Reading model section from problem.mod...
13 lines were read
Reading data section from problem.dat...
9 lines were read
Generating usage...
Generating constraint...
Model has been successfully generated
glp_mpl_build_prob: row usage; constant term 2 ignored
GLPK Simplex Optimizer, v4.65
5 rows, 4 columns, 4 non-zeros
Preprocessing...
~ 0: obj = 2.000000000e+00 infeas = 0.000e+00
OPTIMAL SOLUTION FOUND BY LP PREPROCESSOR
Time used: 0.0 secs
Memory used: 0.1 Mb (110236 bytes)
Model has been successfully processed
I am aware that similar questions may have been posted, but after searching it seems the details of my question are different (or at least I did not manage to find a solution that can be adopted in my case).
I currently have two files: "messyFile" and "wantedID". "messyFile" is of size 80,000,000 x 2,500, whereas "wantedID" is of size 1 x 462. On the 253rd line of "messyFile", there are 2500 IDs. However, all I want is the 462 IDs in the file "wantedID". Assuming that the 462 IDs are a subset of the 2500 IDs, how can I process "messyFile" so that it only contains information about the 462 IDs (i.e. so that it is of size 80,000,000 x 462)?
Thank you so much for your patience!
ps: Sorry for the confusion. But yeah, the question can be boiled down to something like this: in the 1st row of "File#1" there are 10 IDs, and in the 1st (and only) row of "File#2" there are 3 IDs, which are a subset of the 10. I hope to process "File#1" so that it contains only information about the 3 IDs listed in "File#2".
ps2: "messyFile" is a VCF file, whereas "wantedID" can be a text file (I say "can be" because it is small, so I can make it almost any format)
ps3: "File#1" should look something like this:
sample#1 sample#2 sample#3 sample#4 sample#5
0 1 0 0 1
1 1 2 0 2
"File#2" should look something like this:
sample#2 sample#4 sample#5
Desired output should look like this:
sample#2 sample#4 sample#5
1 0 1
1 0 2
For parsing VCF format, use bcftools:
http://samtools.github.io/bcftools/bcftools.html
Specifically for your task see the view command:
http://samtools.github.io/bcftools/bcftools.html#view
Example:
bcftools view -Ov -S 462sample.list -r chr:pos -o subset.vcf superset.vcf
You will need to get the position of the SNP to specify chr:pos above.
You can do this using DbSNP:
http://www.ncbi.nlm.nih.gov/SNP/index.html
Just make sure to match the genome build to the one used in the VCF file.
You can also use plink:
https://www.cog-genomics.org/plink2
But, PLINK is finicky about duplicated SNPs and other things, so it may complain unless you address these issues.
I've done what you are attempting in the past using the awk programming language. For your sanity, I recommend using one of the above tools :)
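For reference, a rough sketch of that awk approach (a sketch only, assuming whitespace-separated columns and the hypothetical file names header.txt for the wanted IDs and data.txt for the big file):
awk '
  BEGIN { OFS = "\t" }                                    # emit tab-separated output
  NR == FNR { for (i = 1; i <= NF; i++) want[$i]; next }  # first file: remember wanted IDs
  FNR == 1  { for (i = 1; i <= NF; i++)                   # header line: map wanted names
                if ($i in want) keep[++n] = i }           # to their column numbers
  { line = ""
    for (k = 1; k <= n; k++) line = line (k > 1 ? OFS : "") $(keep[k])
    print line }                                          # print only the kept columns
' header.txt data.txt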
OK, I have no idea what a VCF file is, but if the File#1 and File#2 samples you gave are files containing tab-separated columns, this will work:
declare -a data=(`head -1 data.txt`)      # File#1: the full list of column names
declare -a header=(`head -1 header.txt`)  # File#2: the wanted column names
declare fields
declare -i count
for i in "${header[@]}" ; do
  count=0
  for j in "${data[@]}" ; do
    count=$count+1                        # arithmetic assignment (count is declared -i)
    if [ "$i" == "$j" ] ; then
      fields=$fields,$count               # collect the 1-based column number
    fi
  done
done
cut -f ${fields:1} data.txt               # ${fields:1} drops the leading comma
If they aren't tab-separated values, perhaps it can be amended for the actual data format.
I apologize in advance because I'm extremely new to coding and was thrust into it just a few days ago by my boss for a project.
My data set is called s1. S1 has 123 variables and 4 of them have some form of "QISSUE" in their name. I want to take these four variables and duplicate them all, adding "Rec" to the end of each one (That way I can freely play with the new variables, while still maintaining the actual ones).
Running this line of code keeps giving me an error:
b <- llply(s1[, str_c(names(s1)[str_detect(names(s1), fixed("QISSUE"))],
                      "Rec")],
           table)
The error is as such:
Error in `[.data.frame`(s1, , str_c(names(s1)[str_detect(names(s1), fixed("QISSUE")) & :
undefined columns selected
Thank you!
Use this to get the subset. (Your error comes from appending "Rec" to the names before subsetting, so you were selecting columns that do not exist yet in s1.) Of course there are other ways to do this with simpler code:
library(plyr)     # for llply
library(stringr)  # for str_detect and str_c

# grab the columns whose names contain "QISSUE"
b <- llply(s1[, names(s1)[str_detect(names(s1), fixed("QISSUE"))]], c)

# build the new names with "Rec" appended
nwnam <- str_c(names(s1)[str_detect(names(s1), fixed("QISSUE"))], "Rec")

# reassemble as a data frame under the new names
ndf <- data.frame(do.call(cbind, b)); colnames(ndf) <- nwnam
ndf
# of course you can attach the copies to the original data:
cbind(s1, ndf)
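A simpler base R sketch that does the same duplication without plyr or stringr:
# find the QISSUE columns, copy them, and append "Rec" to the copies' names
sel <- grep("QISSUE", names(s1), fixed = TRUE, value = TRUE)
ndf <- s1[sel]
names(ndf) <- paste0(sel, "Rec")
s1 <- cbind(s1, ndf)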
I have 1000 files whose names have the format framexxx.dat, such as
frame0.dat frame1.dat frame2.dat .... frame999.dat
I hope to change these file's name to
frame000.dat frame001.dat frame002.dat .... frame999.dat
Is there any way to do this with a simple Linux command?
Also, if my files are framexx.dat or framexxxx.dat (where xx is a 2-digit number and xxxx is a 4-digit number), how can I change the code to do the same?
You have to handle them in groups:
group 0: from frame100.dat to frame999.dat: nothing to do here.
group 1: from frame10.dat to frame99.dat: add one 0:
for i in {10..99}; do mv frame$i.dat frame0$i.dat; done
group 2: from frame0.dat to frame9.dat: add two 0s:
for i in {0..9}; do mv frame$i.dat frame00$i.dat; done
A general guideline is to handle the big numbers first; in some cases renaming the small numbers first could make a later pattern match files you have already renamed.
This can be extended to bigger numbers... you get the idea.
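If you prefer a single loop, here is a sketch that zero-pads any width with printf (change %03d to %04d for 4-digit names; it assumes the existing numbers have no leading zeros, since printf would read 010 as octal):
for f in frame*.dat; do
  n=${f#frame}; n=${n%.dat}              # strip prefix and suffix to get the number
  new=$(printf 'frame%03d.dat' "$n")     # rebuild the name, zero-padded to 3 digits
  [ "$f" = "$new" ] || mv "$f" "$new"    # skip names that are already the right width
done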