I have a large dataset which I need to plot on a log-log scale in Gnuplot, like this:
set log xy
plot 'A_1D_l0.25_L1024_r0.dat' u 1:($2-512)
[Figure: log-log plot of the data points. The data file itself was attached to the original post.]
The data points are equally spaced on the x axis, but because of the log scale they become very dense on the right-hand part of the graph, and as a result the output file (which I eventually export to .tex) gets very large.
On a linear scale I would simply use the every option to reduce the number of points that get plotted. Is there a similar option for a log-log scale, such that the plotted points appear equally spaced?
I am aware of a similar question which was raised a few years ago, but in my opinion the solution is unsatisfactory: the plotted points are not equally spaced along the x-axis. This seems like a fairly basic problem that deserves a cleaner solution.
As I understand it, you don't want to plot the actual data points; you just want to plot a line through them. But you want to keep the appearance of points rather than a line. Is that right?
set log xy
plot 'A_1D_l0.25_L1024_r0.dat' u 1:($2-512) with lines dashtype '.' lw 2
Amended answer
If it is important to present outliers/errors in the data set then you must not use every or any other technique that simply discards or skips most of the data points. In that case I would prefer the plot with points that you show in the original question, perhaps modified to represent each point as a dot rather than a cross. I will simulate this by modifying a single point in your 500000 point data set (first figure below). But I would also suggest that the presence of outliers is even more apparent if you plot with lines (second figure below).
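For reference, here is a minimal sketch of plotting every point as a small filled dot instead of a cross (pt 7 is a filled circle in most gnuplot terminals; the point size of 0.3 is just a guess for a 500000-point data set):
set log xy
plot 'A_1D_l0.25_L1024_r0.dat' u 1:($2-512) with points pt 7 ps 0.3 notitle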
Showing error bounds is another alternative for noisy data, but the options depend on what you have to work with in your data set. If you want to pursue that, please ask a separate question.
If you really want to reduce the number of data to be plotted, you might consider the following script.
s = 0.1            ### sampling interval in log scale
                   ### (try 0.05 for more detail)
c = log10(0.01)    ### running threshold used by sampler(x);
                   ### initialize it to a value smaller than
                   ### log10 of the smallest x in the data
sampler(x) = (x>0 && log10(x)>=c) ? (c=ceil(log10(x)/s+0.5)*s, x) : NaN
set log xy
set grid xtics
plot 'A_1D_l0.25_L1024_r0.dat' using (sampler($1)):($2-512) with points pt 7 lt 1 notitle , \
'A_1D_l0.25_L1024_r0.dat' using 1:($2-512) with lines lt 1 notitle
This script samples the data in increments of roughly 0.1 along the x-axis in log scale. It makes use of the fact that points whose x value evaluates to NaN in the using expression are not drawn.
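As a side note, the same NaN trick works for any filtering condition in a using expression; for example, a hypothetical filter that keeps only points with x > 10 would look like
plot 'A_1D_l0.25_L1024_r0.dat' using ($1 > 10 ? $1 : NaN):($2-512) with points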
I’m trying to plot multiple Gaussian functions on the same graph with Gnuplot, which should be quite a simple thing. The problem is that the peak values do not match: the result makes it look like the curves have different peak heights, which they don’t. How can I fix this?
First, it helps to understand how gnuplot generates plots of functions (or really how any computer program must do it). It must convert a continuous function into some kind of discrete representation. The mathematical function to be plotted is evaluated at various points along the independent (x) axis. This creates a set of (x,y) points. A line is then drawn between these points (think "connect the dots"). As you might imagine, the number of discrete samples used affects how accurately the curve is represented, and how smooth it looks.
The problem you have noticed is that the default sample size in gnuplot is a bit too low. The default (I believe) is 100 samples across the visible x-axis. You can adjust the number of samples (to 1000, for example) with
set samples 1000
I have made some example plots of gaussians to illustrate this point. (I made a rough estimate of your gaussian parameters.) Each plot has a different number of samples:
Notice how the lines get too jagged if the sample size is too low. Even the default value of 100 is too low. Setting to 1000 makes it plenty smooth. This is probably more than it needs to be, but it works. If you're using a terminal that generates a bitmap image (e.g. PNG), then you shouldn't need more samples than you have width in pixels used for the x-axis plot area. If you're generating vector based output, then just pick something that "looks right" for whatever you are using it in.
See the question Gnuplot x-axis resolution for more.
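As a rough illustration of that rule of thumb (the 640-pixel width is just an assumption matching the terminal size used in the example below):
set terminal pngcairo size 640,480
set samples 640    # roughly one sample per horizontal pixel of the canvas (the plot area itself is a bit narrower)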
By the way, the code to generate the above examples is:
set terminal pngcairo size 640,480 enhanced
# Line styles
set style line 1 lw 2 lc rgb "blue"
set style line 2 lw 2 lc rgb "red"
set style line 3 lw 2 lc rgb "yellow"
# Gaussian function stuff
set yrange [0:1.1]
set xrange [-20:20]
gauss(x,a) = exp(-(x/a)**2)
eqn(a) = sprintf("y = e^{-(x/%d)^2}", a)
# First example (default)
set output "example1.png"
set title "100 samples (default)"
plot gauss(x,8) ls 1 title eqn(8), \
gauss(x,2) ls 2 title eqn(2), \
gauss(x,1) ls 3 title eqn(1)
# Second example (too low)
set output "example2.png"
set title "20 samples (too low)"
set samples 20
replot
# Third example (plenty high)
set output "example3.png"
set title "1000 samples (plenty high)"
set samples 1000
replot
I'm looking for a way to plot histograms in 3d to produce something like this figure http://www.gnuplot.info/demo/surface1.17.png but where each series is a histogram.
I'm using the procedure given here https://stackoverflow.com/a/19596160 and http://www.gnuplotting.org/calculating-histograms/ to produce histograms, and it works perfectly in 2d.
Basically, the commands I use are
hist = 'u (binwidth*(floor(($2-binstart)/binwidth)+0.5)+binstart):(1) smooth freq w boxes'
plot 'data.txt' @hist
Now I would just like to add multiple histograms in the same plot, but because they overlap in 2d, I would like to space them out in a 3d plot.
I have tried to do the following command (using above procedure)
hist = 'u (1):(binwidth*(floor(($2-binstart)/binwidth)+0.5)+binstart):(1) smooth freq w boxes'
splot 'data.txt' @hist
But gnuplot complains that the z values are undefined.
I don't understand why this would not put a histogram along the value 1 on the x-axis with the bins along the y-axis, and plot the height on the z-axis.
My data is formatted simply in two columns:
Index angle
0 92.046
1 91.331
2 86.604
3 88.446
4 85.384
5 85.975
6 88.566
7 90.575
I have 10 files like this, and since the values in the files are close to each other, they will completely overlap if I plot them all in one 2d histogram. Therefore, I would like to see 10 histograms behind each other in a sort of 3d perspective.
This second answer is distinct from my first. Whereas the first addresses what the OP was trying to accomplish, this second provides an alternative approach which addresses the underlying problem the OP was trying to overcome.
I have posted an answer that addresses the ability to do this in 3d. However, that isn't usually the best way to handle multiple histograms; a 3d graph like that is difficult to compare.
We can address the overlap in 2D by staggering the position of the boxes. With default settings, the boxes will spread out to touch. We can turn that off and adjust the position of the boxes to allow more than one histogram on a graph. Remember that the coordinates you supply are the centers of the boxes.
Suppose that I have the data you have provided and this additional data set
Index Angle
0 85.0804
1 92.2482
2 90.0384
3 99.2974
4 87.729
5 94.6049
6 86.703
7 97.9413
We can set the boxwidth to 2 units with set boxwidth 2 (your bins are 4 units wide). Additionally, we will turn on box filling with set style fill solid border lc black.
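Collecting those settings in one place (the binstart and binwidth values of -100 and 4 are the ones from the linked procedure, and datafile1/datafile2 stand for the two data files):
binstart = -100
binwidth = 4
set boxwidth 2
set style fill solid border lc rgb "black"
datafile1 = 'data.txt'
datafile2 = 'data2.txt'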
Then I can issue
plot datafile1 u (binwidth*(floor(($2-binstart)/binwidth)+0.5)+binstart):(1) smooth freq w boxes, \
datafile2 u (binwidth*(floor(($2-binstart)/binwidth)+0.5)+binstart+1):(1) smooth freq w boxes
The second plot command is identical to the first, except for the +1 after binstart. This shifts the boxes of the second series 1 unit to the right. This produces
Here, both series are clear. Keeping track of which box belongs to which series is easy; there is some overlap, but not enough for one series to mask the other.
We can even move them next to each other, with no overlap, by subtracting 1 from the first plot command:
plot datafile1 u (binwidth*(floor(($2-binstart)/binwidth)+0.5)+binstart-1):(1) smooth freq w boxes, \
datafile2 u (binwidth*(floor(($2-binstart)/binwidth)+0.5)+binstart+1):(1) smooth freq w boxes
producing
This first answer is distinct from my second. This answer addresses what the OP was trying to accomplish, whereas the second addresses the underlying problem the OP was trying to overcome.
Gnuplot isn't going to be able to do this on its own, as the relevant styles (boxes and histograms) only work in 2D. You would have to do it using an external program.
For example, using your data, your first (2d) command, and the linked values of -100 and 4 for binstart and binwidth, we get
To draw these boxes on a 3d grid, we will need to use the lines style and have four points for each box: lower left, upper left, upper right, and lower right. We could use the previous command and capture the output to a table, but that only gives the upper center point of each box. We can, however, use an external program to pre-process the data. The following Python program, makehist.py, does just that.
#!/usr/bin/env python3
# makehist.py: bin the last column of a two-column data file and emit the
# four corner points of each histogram box (plus a baseline) for gnuplot.
from sys import argv
import re
from math import floor

pat = re.compile(r"\s+")

fname = argv[1]
binstart = float(argv[2])
binwidth = float(argv[3])

# read the file, skip the header line, and split each row into floats
data = [tuple(map(float, pat.split(x.strip()))) for x in open(fname, "r").readlines()[1:]]

# count how many values fall into each bin (keyed by bin center)
counts = {}
for x in data:
    bn = binwidth*(floor((x[-1]-binstart)/binwidth)+0.5)+binstart
    if bn not in counts:
        counts[bn] = 0
    counts[bn] += 1

# emit each box outline: lower-left, upper-left, upper-right, lower-right
for x in sorted(counts.keys()):
    count = counts[x]
    print(x-binwidth/2, 0)
    print(x-binwidth/2, count)
    print(x+binwidth/2, count)
    print(x+binwidth/2, 0)

# finish with a line along the bottom of all the boxes
print(max(counts.keys())+binwidth/2, 0)
print(min(counts.keys())-binwidth/2, 0)
Essentially, this program does the same thing as the smooth frequency option, but instead of getting the upper center of each box, we get the four corner points mentioned above, along with two points that draw a line along the bottom of all the boxes.
Running the following command,
plot "< makehist.py data.txt -100 4" u 1:2 with lines
produces
which looks very similar to the original graph. We can use this in a 3d plot
splot "< makehist.py data.txt -100 4" u (1):1:2 with lines
which produces
This isn't all that pretty, but does lay the histogram out on a 3d plot. The same technique can be used to add multiple data files onto it spread out. For example, with the additional data
Index Angle
0 85.0804
1 92.2482
2 90.0384
3 99.2974
4 87.729
5 94.6049
6 86.703
7 97.9413
We can use
splot "< makehist.py data.txt -100 4" u (1):1:2 with lines, \
"< makehist.py data2.txt -100 4" u (2):1:2 with lines
to produce
There are a number of questions in this forum on locating intersections between a fitted model and some raw data. However, in my case, I am at an early stage of a project where I am still evaluating the data.
To begin with, I have created a data frame that contains a ratio value whose ideal value should be 1.0. I have plotted the data frame and also used the abline() function to plot a horizontal line at y=1.0. This horizontal line and the plot of ratios intersect at some point.
plot(a$TIME.STAMP, a$PROCESS.RATIO,
xlab='Time (5s)',
ylab='Process ratio',
col='darkolivegreen',
type='l')
abline(h=1.0,col='red')
My aim is to locate the intersection point, say x, and draw two vertical lines at x±k, as abline(v=x-k) and abline(v=x+k), where k is a certain tolerance band.
Adding a grid to the plot is not really an option, because this plot will be part of a multi-panel figure and, since the ratio data is very tightly laid out, the plot would not be very readable. Finally, the x±k values will be quite valuable in my discussions with the domain experts.
Can you please guide me how to achieve this?
Here are two solutions. The first one uses locator() and will be useful if you do not have too many charts to produce:
x <- 1:5
y <- log(1:5)
df1 <-data.frame(x= 1:5,y=log(1:5))
k <-0.5
plot(df1,type="o",lwd=2)
abline(h=1, col="red")
locator()
By clicking on the intersection (and stopping the locator top left of the chart), you will get the intersection:
> locator()
$x
[1] 2.765327
$y
[1] 1.002495
You would then add abline(v=2.765327).
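To finish the figure described in the question, the vertical lines at x and x±k would then be something like the following (the dashed lty = 2 style is just an assumption; k is the tolerance defined above):
x_int <- 2.765327            # intersection x reported by locator()
abline(v = x_int)
abline(v = x_int - k, lty = 2)
abline(v = x_int + k, lty = 2)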
If you need a more programmable way of finding the intersection, we will have to estimate the function of your data. Unfortunately, you haven’t provided us with PROCESS.RATIO, so we can only guess what your data looks like. Hopefully, the data is smooth. Here’s a solution that should work with nonlinear data. As you can see in the previous chart, all R does is draw a line between the dots. So, we have to fit a curve in there. Here I’m fitting the data with a polynomial of order 2. If your data is less linear, you can try increasing the order (2 here). If your data is linear, use a simple lm.
fit <-lm(y~poly(x,2))
newx <-data.frame(x=seq(0,5,0.01))
fitline = predict(fit, newdata=newx)
est <-data.frame(newx,fitline)
plot(df1,type="o",lwd=2)
abline(h=1, col="red")
lines(est, col="blue",lwd=2)
Using this fitted curve, we can then find the closest point to y=1. Once we have that point, we can draw vertical lines at the intersection and at +/-k.
cross <-est[which.min(abs(1-est$fitline)),] #find closest to 1
plot(df1,type="o",lwd=2)
abline(h=1)
abline(v=cross[1], col="green")
abline(v=cross[1]-k, col="purple")
abline(v=cross[1]+k, col="purple")
I am trying to convey the concentration of lines in 2D space by showing the number of crossings through each pixel in a grid. I am picturing something similar to a density plot, but with more intuitive units. I was drawn to the spatstat package and its line segment class (psp) as it allows you to define line segments by their end points and incorporate the entire line in calculations. However, I'm struggling to find the right combination of functions to tally these counts and would appreciate any suggestions.
As shown in the example below with 50 lines, the density function produces values in (0,140), the pixellate function tallies the total length of line through each pixel and takes values in (0, 0.04), and as.mask produces a binary indicator of whether a line went through each pixel. I'm hoping to see something where the scale takes integer values, say 0..10.
require(spatstat)
set.seed(1234)
numLines = 50
# define line segments
L = psp(runif(numLines),runif(numLines),runif(numLines),runif(numLines), window=owin())
# image with 2-dimensional kernel density estimate
D = density.psp(L, sigma=0.03)
# image with total length of lines through each pixel
P = pixellate.psp(L)
# binary mask giving whether a line went through a pixel
B = as.mask.psp(L)
par(mfrow=c(2,2), mar=c(2,2,2,2))
plot(L, main="L")
plot(D, main="density.psp(L)")
plot(P, main="pixellate.psp(L)")
plot(B, main="as.mask.psp(L)")
The pixellate.psp function allows you to optionally specify weights to use in the calculation. I considered trying to manipulate this to normalize the pixels to take a count of one for each crossing, but the weight is applied uniquely to each line (and not specific to the line/pixel pair). I also considered calculating a binary mask for each line and adding the results, but it seems like there should be an easier way. I know that you can sample points along a line, and then do a count of the points by pixel. However, I am concerned about getting the sampling right so that there is one and only one point per line crossing of a pixel.
Is there a straightforward way to do this in R? Otherwise, would this be an appropriate suggestion for a future package enhancement? Or is this more easily accomplished in another language such as Python or MATLAB?
The example above and my testing has been with spatstat 1.40-0, R 3.1.2, on x86_64-w64-mingw32.
You are absolutely right that this is something to put in as a future enhancement. It will be done in one of the next versions of spatstat. It will probably be an option in pixellate.psp to count the number of crossing lines rather than measure the total length.
For now you have to do something a bit convoluted, e.g.:
require(spatstat)
set.seed(1234)
numLines = 50
# define line segments
L <- psp(runif(numLines),runif(numLines),runif(numLines),runif(numLines), window=owin())
# split into individual lines and use as.mask.psp on each
masklist <- lapply(1:nsegments(L), function(i) as.mask.psp(L[i]))
# convert to 0-1 image for easy addition
imlist <- lapply(masklist, as.im.owin, na.replace = 0)
rslt <- Reduce("+", imlist)
# plot
plot(rslt, main = "")
I know that in gnuplot you can plot data with circles as the plot points:
plot 'data.txt' using 1:2 ls 1 with circles
How do I then set the size of the circles? I want to plot several sets of data but with different size circles for each data set.
If you have a third column in your data, the third column specifies the size of the circles. In your case, you could have the third column have the same value for all the points in each data set. For example:
plot '-' with circles
1 1 0.2
e
will plot a circle at (1,1) with radius 0.2. Note that the radius is given in the same units as the x-axis data. (The special file name '-' lets you input data directly; typing 'e' ends the input. Type help special at the gnuplot console for more info.)
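For the several-data-sets case in the question, a constant expression in the using clause can supply a different radius per file (the file names and radii below are placeholders):
plot 'data1.txt' using 1:2:(0.2) with circles, \
     'data2.txt' using 1:2:(0.5) with circles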
You can look here for more ideas of how to use circles.
I used:
plot "file" using 1:2:($2*0+10) with circles
This fakes a third column specifying the sizes; it is probably possible to write it more simply, but this worked for me.
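For what it's worth, a constant column expression should do the same thing a little more simply:
plot "file" using 1:2:(10) with circles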