I have a little problem concerning gnuplot:
I have a huge data file containing several blocks of data, and I want to plot only the data in the first line of each block. So I use the every keyword:
plot "../path/to/data.dat" u 1:2 every ::1::1
The problem is that I want to use "with lines", but gnuplot doesn't join the plotted points with lines.
There are two solutions I can think of:
The first would be setting the terminal type to "table" and then plotting the resulting table file.
The second would be calling awk from within the plot command, so that awk extracts the first data line of each block from the original file.
But I'm quite sure there must be an easier solution?
Thanks in advance,
Jürgen
I think the awk solution is already quite simple:
plot "<awk -v p=1 'n==p; NF{n++} !NF{n=0}' test.dat" w l, \
"test.dat" every ::1::1
With test.dat:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
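The awk filter can also be tested on its own, outside gnuplot. A minimal sketch, assuming a file named blocks.dat (a made-up name) whose blocks are separated by blank lines:

```shell
# Create a small hypothetical data file with two blank-line-separated blocks.
cat > blocks.dat <<'EOF'
1 10
2 20
3 30

4 40
5 50
6 60
EOF

# p=1 selects the line with 0-based index 1 inside each block,
# i.e. the same points gnuplot's "every ::1::1" would pick.
awk -v p=1 'n==p; NF{n++} !NF{n=0}' blocks.dat
```

This prints "2 20" and "5 50"; with p=0 it would print the very first line of each block instead.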
Related
I have two different text files, each with a list of numbers I want to plot: one file contains the x values and the other the y values. I know how to plot them if they were in the same file, but I don't know how to go about it for separate files. How do I do it? I am using gnuplot, by the way.
If it is useful here are two small bits of data from both files:
x values
0
563
1563
2563
3563
4563
5563
corresponding y values
738500.0
683000.0
647000.0
623500.0
607500.0
I guess I have seen such a question already, but I can't find it right now.
Well, Linux (in contrast to Windows) has built-in tools with which you can easily merge two files line by line.
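For example, with paste the merge is a one-liner, and gnuplot can read the merged stream directly (the file names xvals.dat and yvals.dat are assumptions here):

```shell
# Recreate small stand-ins for the two input files from the question.
cat > xvals.dat <<'EOF'
0
563
1563
EOF
cat > yvals.dat <<'EOF'
738500.0
683000.0
647000.0
EOF

# paste glues the files together line by line (tab-separated by default).
paste xvals.dat yvals.dat

# Inside gnuplot, the same command can feed the plot directly:
#   plot "< paste xvals.dat yvals.dat" using 1:2 with linespoints
```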
If you want to do this in gnuplot only (and hence platform-independent), the following would be a suggestion.
The prerequisite is that you already have your files in datablocks. For how to get this done, see: gnuplot: load datafile 1:1 into datablock.
Code:
### merge files by line
reset session
$Data1 <<EOD
0
563
1563
2563
3563
4563
5563
EOD
$Data2 <<EOD
738500.0
683000.0
647000.0
623500.0
607500.0
EOD
maxRow = |$Data1| <= |$Data2| ? |$Data1| : |$Data2| # find the shorter datablock
set print $Data
do for [i=1:maxRow] {
print $Data1[i][1:strlen($Data1[i])-1]." ".$Data2[i]
}
set print
plot $Data u 1:2 w lp pt 7
### end of code
Result: (plot of the merged data, points joined by lines)
I'm new to R/ggplot.
I have a data set like this. Each mol-code is made of 3 components, and Copies represents how many times each mol-code appears. There are 8 unique components available, represented as SMILES strings.
full.mol.code2 Copies Pair1.Acids Pair2.Acids Pair3.Acids
1 1.301241e+23 18 OC(C1=COC(CCl)=N1)=O OC(C1=CC=C(CCl)C=C1)=O O=C(O)C1=C(C)OC=C1
2 1.303241e+23 18 OC(C1=CSC(CCl)=N1)=O OC(C1=CSC(CCl)=N1)=O OC([C@H](C)Br)=O.[R]
3 1.301241e+23 17 OC(C1=COC(CCl)=N1)=O OC(C1=COC(CCl)=N1)=O O=C(O)C1=C(C)OC=C1
4 1.304241e+23 12 ClC/C(C)=C/[C@@H](C)C(O)=O OC(C1=COC(CCl)=N1)=O OC([C@H](C)Cl)=O.[S]
5 1.309240e+23 12 OC(C1=CSC(CCl)=N1)=O OC(C1=CC=C(CCl)C=C1)=O O=C(O)C1=C(C)OC=C1
6 1.301241e+23 11 OC(C1=COC(CCl)=N1)=O OC(C1=CC=C(CCl)C=C1)=O OC([C@H](C)Cl)=O.[S]
Edit: thanks Allan for formatting this properly.
'full.mol.code2' is a number like 130124051501260617102804; it should not be treated as a numeric value.
I want to represent this data in a barplot where the x-axis is the mol-code, the y-axis is Copies, and each bar represents the combination of three components in different colors.
I hope that made sense; I'd appreciate any help.
Thanks.
I am having trouble with the findCorrelation() function. Here are my input and output:
>findCorrelation(cor(subset(segdata, select=-c(56))),cutoff=0.9)
[1] 16 17 14 15 30 51 31 25 40
>cor(segdata)[c(16,17,14,15,30,51,31,25,40),c(16,17,14,15,30,51,31,25,40)]
(screenshot of the correlation matrix)
I deleted column 56 because it is a factor variable.
In the code above I use cutoff=0.9, which should mean: report only those variables whose correlation is 0.9 or greater.
But in the resulting image, the last variable (P12002900) has a very, very low correlation. Since I used cutoff=0.9, low correlations such as P12002900's should not be output.
Why is it printed?
So I tried the Vehicle dataset that comes with R:
>library(mlbench)
>library(caret)
>data(Vehicle)
>findCorrelation(cor(subset(Vehicle,select=-c(Class))),cutoff=0.9)
[1] 3 8 11 7 9 2
>cor(subset(Vehicle,select=-c(Class)))[c(3,8,11,7,9,2),c(3,8,11,7,9,2)]
This is the resulting image:
(screenshot of the correlation matrix)
The last variable (Circ) has a correlation lower than 0.9, but it is still printed.
Please help me, and thank you!
Using:
Unix
2.6.18-194.el5
I am having an issue where this join statement omits values/indexes from the match. I found out that the missing values are between 11 and 90 (out of about 3.5 million entries), and I have looked for foreign characters but may be overlooking something (I tried cat -v to see hidden characters).
Here is the join statement I am using (only simplified the output columns for security):
join -t "|" -j 1 -o 1.1 2.1 file1 file2> fileJoined
file1 contents (first 20 values):
1
3
7
11
12
16
17
19
20
21
27
28
31
33
34
37
39
40
41
42
file2 contents (first 50 values so you can see where it would match):
1|US
2|US
3|US
4|US
5|US
6|US
7|US
8|US
9|US
10|US
11|US
12|US
13|US
14|US
15|US
16|US
17|US
18|US
19|US
20|US
21|US
22|US
23|US
24|US
25|US
26|US
27|US
28|US
29|US
30|US
31|US
32|US
33|US
34|US
35|US
36|US
37|US
38|US
39|US
40|US
41|US
42|US
43|US
44|US
45|US
46|US
47|US
48|US
49|US
50|US
From my initial testing it appears that file2 is the culprit, because when I create a new file with values 1-100 I can get the join statement to match completely against file1; however, the same file will not match against file2.
Another strange thing is that the file is 3.5 million records long, and at value 90 they start matching again. For example, the output of fileJoined looks like this (first 20 values only):
1|1
3|3
7|7
90|90
91|91
92|92
93|93
95|95
96|96
97|97
98|98
99|99
106|106
109|109
111|111
112|112
115|115
116|116
117|117
118|118
Other things I have tried:
Using vi to manually enter a new line 11 (it still doesn't match in the join statement)
Copying the lines into notepad, deleting them in vi, and then copying them back in (same result, no matches for 11-90)
Removing lines 11-90 to see if the problem then shifts to 90-170 (it does not shift)
I think there may be some hidden values I am missing, or the 11-90 in file1 is not binary-identical to the 11-90 in file2?
I am lost here, any help would be greatly appreciated.
I tried this out, and I noticed a couple things.
First: this is minor, but I think you're missing a comma in your -o specifier. I changed it to -o 1.1,2.1.
But then, running it on just the fragments you posted, I got only three lines of output:
1|1
3|3
7|7
I think this is because join assumes alphabetical sorting, while your input files look like they're numerically sorted.
Rule #1 of join(1) is to make sure your inputs are sorted, and the same way join expects them to be!
When I ran the two input files through sort and then joined again, I got 18 rows of output. (Sorting was easy, since you're joining on the first column; I didn't have to muck around with sort's column specifiers.)
Beware that, these days, sort doesn't always sort the way you expect, due to locale issues. I tend to set LC_ALL=C to make sure I get the old-fashioned behavior I'm used to.
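Putting the two fixes together (the comma in -o and the pre-sort), here is a sketch with stand-in files a.txt and b.txt holding a few of the values from the question:

```shell
# Stand-ins for file1/file2.
cat > a.txt <<'EOF'
1
3
7
11
EOF
cat > b.txt <<'EOF'
1|US
3|US
7|US
11|US
12|US
EOF

export LC_ALL=C                    # predictable, byte-wise collation
sort -o a.txt a.txt                # sort the join field alphabetically, in place
sort -t '|' -k 1,1 -o b.txt b.txt

# Note the comma in -o: "1.1,2.1", not "1.1 2.1".
join -t '|' -j 1 -o 1.1,2.1 a.txt b.txt
```

With the inputs sorted this way, 11 joins correctly; the output is 1|1, 11|11, 3|3, 7|7 in alphabetical key order.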
For different classes I have an NSCC count. Now I have to make a line chart showing these NSCC counts falling into ranges: 1-10 is low risk, 10-20 is moderate risk, 20-50 is high risk, and above 50 is horrible. How do I plot data with these ranges on the x-axis? And how do I color the different range widths?
Please help me.
One possible solution is to use multiple line series with different colors.
I suppose you have data something like this:
|NSCC| |count|
A 10
B 12
C 54
D 25
You could convert it to a matrix like:
|NSCC| |count| |LOW| |MODERATE| |HIGH|
A 10 10 null null
B 12 12 null null
C 54 null null 54
D 25 null null 25
and create multiple line series on the chart.
You may find gaps between the series; to overcome this you could add dummy boundary points.
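The conversion to that matrix could be scripted; here is a sketch with awk, using the ranges from the question (1-10 low, 10-20 moderate, 20-50 high) and a hypothetical input file nscc.dat:

```shell
# Hypothetical "NSCC count" input, one class per row.
cat > nscc.dat <<'EOF'
A 10
B 12
C 54
D 25
EOF

# Add one column per risk band; "null" marks the empty cells.
awk '{
    low = mod = high = "null"
    if      ($2 <= 10) low  = $2
    else if ($2 <= 20) mod  = $2
    else               high = $2
    print $1, $2, low, mod, high
}' nscc.dat
```

Counts above 50 land in the high column here; a fourth "horrible" band could be added the same way.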
There are also other options:
Use a customized background with different colors
Use a customized itemRenderer
Hope that helps