Get my monitor's info with xrandr.
xrandr
Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192
VGA-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 477mm x 268mm
1920x1080 60.00*+
There are only 1920 pixels on the x axis, only 1920 tiny boxes.
import matplotlib.pyplot as plt
y = 1*19200
x = range(0,len(x))
plt.scatter(x,y)
plt.show()
I draw a horizontal line that contains 19200 pairs of records. There are only 1920 pixels on the x axis, so how do I put 19200 items into 1920 boxes?
Does one pixel draw 10 x records?
Put 10 x records into just one box?
Ten different data records in just one pixel?
How can one pixel express ten records?
Fixed all my typos:
import matplotlib.pyplot as plt
y = [1]*19200
x = range(0,len(y))
plt.scatter(x,y)
plt.show()
Does it mean that there are 19200 pairs of data records (x,y) to draw, but at most only 1920 pairs are really shown on the monitor in this case?
How many pixels are used to draw the ten (x,y) data records (0,1), (1,1), (2,1), (3,1), (4,1), (5,1), (6,1), (7,1), (8,1), (9,1) in my case?
In my point of view, only one pixel is used to draw those ten (x,y) data records. That is to say, if only one (x,y) data record were drawn into each pixel, then with only 1920 pixels in the x-axis direction, you would need 19200 pixels in the x-axis direction to draw them all.
The code you've posted doesn't run. At least there is no x to take a len() of.
So probably it should look like this:
import matplotlib.pyplot as plt
y = [1]*19200
x = range(0,19200)
plt.scatter(x,y)
plt.show()
But that way it just draws a line.
Even if you had a monitor with 19200 pixels, you would still see a line with a dot in every pixel of it.
So I suppose you've actually meant something like this:
import matplotlib.pyplot as plt
import numpy as np
L = 19200
y = [1]*L
# np.random.randint(0,2,L) yields 0 or 1, so each point either keeps its x position (i+1) or collapses to x=0
x = [v*(i+1) for i, v in enumerate(np.random.randint(0,2,L))]
plt.scatter(x,y,alpha=0.5)
plt.show()
On a really big monitor it would look like this:
And of course you can't show separate points in one pixel, but you can show how many points fit into one particular pixel. Just add another dimension to your plot.
As your plot is just a line and you don't really utilize the y axis, you can use it as an extra dimension:
# ten random 0/1 "records" per x position; the row sum is the number of points in that pixel
x = np.random.randint(0,2,size=(L,10))
y = np.sum(x,axis=1)
x = range(0,L)
plt.fill_between(x,y, alpha=0.5)
plt.show()
It will give:
The height of a bar represents the number of points in a pixel.
Or if you really want a line you can use color as an extra dimension:
x = np.random.randint(0,2,size=(L,10))
# row sums range over 0..10 while the palette has 10 entries, so clamp the index at 9
palette = [(1,1,1),(0,0,1),(0,0.5,1),(0,1,1),(0,1,0.5),
           (0,1,0),(0.5,1,0),(1,1,0),(1,0.5,0),(1,0,0)]
colors = [palette[min(v,9)] for v in np.sum(x,axis=1)]
# or if you prefer monochrome
# colors = [(1,0,0,v/10) for v in np.sum(x,axis=1)]
x = range(0,L)
y = [1]*L
plt.scatter(x,y, c=colors, s=3)
plt.show()
It gives:
Monochrome version:
You can play with it here.
As the images are even smaller than 1920 pixels wide, I played with L to make them more vivid on random data.
Update:
There are 3 points in the first group of pixels. And there is one point in the second.
Image:
How many points do you see in the third group?
If you have good eyesight and a big screen, you'll see that there are 2 points.
But what if you have poor eyesight, a small screen, or you just step away from the screen? Can you still tell how many points there are? Yes, you can!
To some extent, of course. If you stand 1 km from the screen, you probably won't be able to see the screen itself :)
But how can you tell? By the weight of the group: it looks lighter than the first one and darker than the second.
Now, show the next image to someone and tell them that the first group has three pixels. Then ask: how many pixels are in the other groups?
Image:
They'll probably tell you that there are 2 and 1 pixels in those groups.
But that's not true. There is the same number of pixels in each group. The only difference is that those pixels have different colors.
So, it doesn't really matter how many pixels you draw. What matters is how they are perceived.
But more than that... You say "pixel", but is it a dot? No!
In most cases there are 3 dots of different colors.
So if you see a red pixel, you can be sure that there is one dot lit up. If you see yellow, there are 2 dots lit up. And so on.
Judging by the color, you can even say exactly which of the dots constituting the pixel are lit.
But again: does it really matter? If you just say "this particular color means (0,1), (1,1), (2,1), and this particular color means (3,1), (4,1)", and so on, people will understand your plot regardless of monitors and their resolutions.
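For instance (a minimal sketch of that idea, not from the original answer; the particular colors and groupings here are made up), you can spell the color-to-records mapping out in a legend:
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

# each color stands for a group of records
handles = [mpatches.Patch(color=(0, 0, 1), label="(0,1) (1,1) (2,1)"),
           mpatches.Patch(color=(1, 1, 0), label="(3,1) (4,1)")]
plt.legend(handles=handles)
plt.show()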
But, again, more than that: when you draw a pixel on your monitor, it is not even a single physical pixel, and not just 3 dots. Your monitor has a maximum resolution of 8192x8192, so at 1920x1080 one logical pixel covers more than 4 physical pixels horizontally and more than 7 vertically. That gives more than 16 physical pixels for one logical pixel. So can you
Put 10 x records into just one box?
...as you can see, the "box" is quite big. You can put 16 records into it. Physically. And logically you can add even more.
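More generally (a minimal sketch of the binning idea, not from the original post; the uniform random x data here is made up), if your x values don't come pre-grouped in tens, you can count how many records land in each pixel column with np.histogram and plot the counts the same way:
import numpy as np
import matplotlib.pyplot as plt

x = np.random.uniform(0, 19200, size=19200)   # made-up x records
counts, edges = np.histogram(x, bins=1920)    # one bin per logical pixel column
centers = (edges[:-1] + edges[1:]) / 2
plt.fill_between(centers, counts, alpha=0.5)  # bar height = records per pixel
plt.show()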
I'm trying to scale an image down in steps so as to avoid artifacts. I have an 800x800 pixel image that needs to be scaled down to 100x100 pixels. I want to perform the scaling in a variable number of iterations. So let's say that I want to go from 800 to 100 in 3 iterations. How do I find the ratio to apply to the image each time to achieve the desired size?
If you want to achieve a final ratio R in N steps, then the ratio at each step would be the N-th root of R, or equivalently, R^(1/N). For your example, R = 1/8 and N = 3, so the ratio at each step would be (1/8)^(1/3), or 1/2.
>>> import math
>>> math.exp(math.log(100./800) / 3)
0.5
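As a sketch of the whole procedure (the helper name step_ratio is mine, not from the original answer):
def step_ratio(src, dst, steps):
    """Per-step scale factor so that src * ratio**steps == dst."""
    return (dst / src) ** (1.0 / steps)

r = step_ratio(800, 100, 3)  # (1/8) ** (1/3) == 0.5
size = 800.0
for _ in range(3):
    size *= r
    print(size)  # 400.0, 200.0, 100.0 (up to floating-point rounding)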
A rather basic maths problem.
I got an image with a specific width and height in pixels:
WIDTH = 3648 px
HEIGHT = 2736 px
In order to compute the target print size in millimeters, given a specific DPI amount (200), I came up with this:
PRINT-WIDTH = IMAGE-WIDTH-PX / 200 * 2.54 * 10;
PRINT-HEIGHT = IMAGE-HEIGHT-PX / 200 * 2.54 * 10;
This works well. In our example it computes
463 x 347 mm
as target print size. Perfect.
However, I now must be able to make changes to the widths and heights in millimeters, and, based on the fact that we assume 200 DPI for printing, I must compute the new DPI value.
So for instance, when changing 463 x 347 to 400 x 300, I should somehow be able to calculate how that affects the DPI.
The only approach that came to my mind was to compute the difference between the old and the new format as a percentage, and then apply that percentage to the DPI. But the results are incorrect.
How can I compute the DPI value from the new width and height, given the original 200 DPI matching the original format?
NewDPI = 200 * 463 / 400
Or without using DPI 200 at all:
NeededDPI = IMAGE-WIDTH-PX(3648) * 25.4 / PRINT-WIDTH(400)
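Putting both directions together (a small sketch; the function names are mine, not from the original answer):
MM_PER_INCH = 25.4

def print_size_mm(pixels, dpi):
    # pixels -> millimeters at the given DPI
    return pixels / dpi * MM_PER_INCH

def needed_dpi(pixels, print_mm):
    # DPI required to fit the pixels into the given print size
    return pixels * MM_PER_INCH / print_mm

print(print_size_mm(3648, 200))  # ~463.3 mm
print(needed_dpi(3648, 400))     # ~231.6 DPI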
From Wikipedia, here's the equation of an ellipse: x^2/a^2 + y^2/b^2 = 1
I want all ellipses to have a width of 240 pixels.
And I want the height of all ellipses to be a randomly generated value between 10 and 60 pixels.
Something like this:
My question is: where do I plug in the 240, and where do I plug in my randomly generated height values?
a and b are the radii of the ellipse.
This is the answer I was looking for: x^2/120^2 + y^2/(h/2)^2 = 1
... where h = the randomly generated height between 10 and 60.
Because an ellipse has two radii, instead of just one in the case of a circle.
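For illustration (a minimal matplotlib sketch under those assumptions; the plotting library itself was not part of the question), plugging a = 240/2 and b = h/2 into the equation and solving for y:
import numpy as np
import matplotlib.pyplot as plt

a = 240 / 2                        # half of the fixed 240 px width
h = np.random.randint(10, 61)      # random height between 10 and 60
b = h / 2
x = np.linspace(-a, a, 500)
y = b * np.sqrt(1 - x**2 / a**2)   # from x^2/a^2 + y^2/b^2 = 1
plt.plot(x, y, "b", x, -y, "b")    # upper and lower halves
plt.gca().set_aspect("equal")
plt.show()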
I would like to know the approximate dimensions of a symbol in my plot area. I think that par()$ps only really refers to text size. So how is a symbol's size calculated using the cex parameter? For example, below is a plot of a single point of size cex=10. Can I determine its size from the plot device's par parameters?
plot(50, 50, ylim=c(0,100), xlim=c(0,100), cex=10)
# click on the symbol's outer x limits
p1 <- locator(n=1, type="n")
p2 <- locator(n=1, type="n")
# approx width in x units (~15)
abs(p1$x - p2$x)
Thanks for your help. -Marc
According to the documentation in ?par, we have:
cin - R.O.; character size (width, height) in inches. These are the same measurements as cra, expressed in different units.
cra - R.O.; size of default character (width, height) in ‘rasters’ (pixels). Some devices have no concept of pixels and so assume an arbitrary pixel size, usually 1/72 inch. These are the same measurements as cin, expressed in different units.
On my machine, these values appear to be:
par("cin")
[1] 0.15 0.20
> par("cra")
[1] 10.8 14.4
So character magnification via cex ought to happen relative to these dimensions, presumably by scaling the horizontal and vertical dimensions separately (although I don't know that for sure).
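As a rough sanity check (a back-of-the-envelope sketch, assuming cex does scale those base dimensions linearly): a point drawn with cex=10 should be about 10 * par("cin")[1] = 10 * 0.15 = 1.5 inches wide. Dividing that by the plot-region width in inches, par("pin")[1], and multiplying by the x-axis span in user coordinates, diff(par("usr")[1:2]), converts it to plot units, which you can compare against the ~15 units measured with locator() above.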