This is probably straightforward enough, but for some reason I just can't crack it (actually it's because I'm terrible at maths, sadly!).
I have a viewport that shows 800 pixels at a time, and I want to move it left and right over a virtual area that is, say, 2400 pixels wide.
I use the relative position of a scrollbar (0-1) to determine the virtual pixel that the viewport should have moved to.
I have a loop of n = 800 iterations that I run through.
I want to start reading an array of plotting points from the position the slider tells me and iterate through the next 800 points from there.
The problem I have is that when my scrollbar is all the way to the right, its relative position is 1 and my readpoint is 2400, which in a way is right. But that puts my read position at the last plotting point, so when I begin my iteration there are no points left to read.
Based on my analogy of a sliding 800-pixel-wide box, the left edge of the box should be at 1600 in this case and its right edge at 2400.
How can I achieve this?
Do I need some sort of value mapping formula?
private function calculateShuttleRelativePosition():void
{
    // Centre of the scroll thumb in scrollbar coordinates
    var _x:Number = _scrollthumb.x;
    _sliderCenter = _x + (_scrollthumb.width / 2);
    // Distance the thumb centre can travel
    _sliderRange = _scrollbar.width - _scrollthumb.width;
    // Relative position: 0 (far left) to 1 (far right)
    _slider_rel_pos = (_sliderCenter - _sliderCenterMinLeft) / _sliderRange;
}
pixel = slider relative position * 2400
readpoint = pixel
The range of your scrollbar should be the range that the leftmost pixel can take, not the range of all the pixels.
So your range would be 1600 (2400 - 800), not 2400 alone. This is the scaling factor you should apply to determine your offset.
As a word of warning, always be on the lookout for off-by-one errors in these kinds of things. Since your bar ranges from 0 to 1 inclusive, your scaled offset will range from 0 to 1600 inclusive; if your read loop is also inclusive you can end up one element past the end of your array, so watch out :).
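For illustration, here is a minimal sketch of that mapping in Python (using the numbers from the question and made-up names), with a clamp so the read loop can never run past the end of the array:
VIEWPORT_WIDTH = 800
VIRTUAL_WIDTH = 2400

def viewport_offset(slider_rel_pos):
    """Map a slider position in [0, 1] to the left edge of the viewport."""
    scroll_range = VIRTUAL_WIDTH - VIEWPORT_WIDTH   # 1600, the scaling factor
    offset = int(slider_rel_pos * scroll_range)
    # clamp in case floating-point noise pushes the slider value slightly outside [0, 1]
    return max(0, min(offset, scroll_range))

points = list(range(VIRTUAL_WIDTH))                 # stand-in for the plotting-point array
start = viewport_offset(1.0)                        # 1600 when the thumb is hard right
visible = points[start:start + VIEWPORT_WIDTH]      # reads indices 1600..2399, no overflow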
The answer is actually pretty easy -- instead of making the relative position slide across the entire width (which is what you have above), you start at 0 and run to width - 800 (that is, the width minus the width of your viewport).
Some C-ish code for that would look like:
int viewPortWidth = 800;
int virtualWindowWidth = 2400;
// I'm assuming your code above is right -- I didn't check
float sliderRelativePosition = calculateShuttleRelativePosition();
// The casts (int and float here) make sure you're rounding down to an integer.
// pixel ranges from 0 to virtualWindowWidth - viewPortWidth, inclusive, so reading
// viewPortWidth points starting there never runs past the end of the array.
int pixel = (int)(sliderRelativePosition * (float)(virtualWindowWidth - viewPortWidth));
readPixels(pixel, viewPortWidth); // Function that loops through the pixels you want
The idea was to put a tooltip with a value tied to the slider. I initially thought to accomplish that task by using a CSS grid. CSS gives you a grid of any size, 10 or 1000 columns, it doesn't matter. By utilizing the grid functionality, we can align our tooltip however we wish.
What we really get is:
The thumb position is somewhat unpredictable. It seems to be offset, and the direction of that offset depends on whether the input value is in the left or right half of the slider.
Note: the default thumb behaves exactly the same way; the shape of the thumb is not the concern here.
So, how does HTML calculate the position of the thumb based on its value?
For anyone else stumbling upon this problem: the issue is that the first and last positions of the thumb are offset by half the thumb's width. Because of this, all of the other positions are slightly offset to compensate for the first and last one.
One simple solution is to normalize the thumb position between the actual first and last thumb positions, which are 0 + halfThumb and width - halfThumb respectively.
To normalize, we can use the formula recommended in this answer.
So the left offset of absolutely positioned element in px would be:
const left = (((value - minValue) / (maxValue - minValue)) * ((totalInputWidth - thumbHalfWidth) - thumbHalfWidth)) + thumbHalfWidth;
Where
value is the actual input value.
minValue is the actual minimum value of the input.
maxValue is the actual maximum value of the input.
thumbHalfWidth is half the width of the thumb used to drag the slider.
totalInputWidth is the width of the input element in pixels.
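As a rough illustration of that normalization (a Python sketch rather than JavaScript, with made-up example numbers):
def thumb_left_offset(value, min_value, max_value, total_input_width, thumb_width):
    """Left offset in px of the thumb centre, compensating for the half-thumb inset at each end."""
    half_thumb = thumb_width / 2
    ratio = (value - min_value) / (max_value - min_value)   # 0..1
    track = total_input_width - 2 * half_thumb              # distance the thumb centre can actually travel
    return ratio * track + half_thumb

# e.g. a 300 px wide input with a 20 px thumb, value halfway through its range:
print(thumb_left_offset(50, 0, 100, 300, 20))   # 150.0, the exact centre of the input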
You have to take the size of your thumb into consideration.
It is possible to achieve this using a little bit of JavaScript (unless there is a way to get the range ratio in pure CSS, which I am not aware of as of now).
Theory
Calculate the ratio of your current value relative to the min and max of your input (between 0 and 1)
(value - el.min) / (el.max - el.min)
So its position (starting from the arrow relative to the input container) would be:
thumbSize / 2 + ratio * 100% - ratio * thumbSize
Method
This might give an idea of how to implement it in JavaScript:
const thumbSize = 10
const range = document.querySelector('input[type=range]')
const tooltip = document.querySelector('.tooltip')
range.addEventListener('input', e => {
const ratio = (range.value - range.min) / (range.max - range.min)
tooltip.style.left = `calc(${thumbSize / 2}px + ${ratio * 100}% - ${ratio * thumbSize}px)`
})
The code above has not been tested, but the same method has been implemented.
My project is a large map that can be panned around, containing "info spots" that can be clicked. For now I use four large images, each spanning 5000x5000 pixels (so the total map size is 20'000x20'000 pixels). On my AMD Phenom 9950 Quad-Core with 8GB RAM and an NVIDIA GeForce 610 this takes quite a while to load, although panning is fast once it is loaded. I tried tiling it up, but there is no visible improvement in loading speed, as the image still has to be loaded completely before it is split into tiles.
The only way to get a real improvement in speed and memory usage would be to load only those parts of the map image that are actually shown.
Does PyGame offer any way of doing this? I'm thinking of a "theoretical" tile map which contains the needed x and y values of each tile (I group them a little, so there is less to compute each frame) and theoretical image information (like which image and which position within it). Only when a tile comes near the visible part of the screen is its image information loaded; otherwise it remains just a number and a string value.
Would this make any sense? Is there any way to achieve this?
The only way to accomplish this with Pygame would be to break the images themselves into smaller squares (say 250x250) and then, as the user pans, take the current top-left x,y coordinates and the screen size, load any tiles that fit on the screen or within a buffer around its edge into memory, and clear out any tiles outside that range. The math is fairly straightforward unless you add support for rotation and/or zooming. I would name the tiles after their location as a multiple of the square size (for example, the tile at 500, 500 would be named 2-2.png). That makes it trivial to generate the name of the tile needed at each location: take the current x/y coordinates, integer-divide by 250, subtract the buffer tile amount, and then loop over the screen width integer-divided by 250, plus 1, plus the buffer tile amount, for each row; do the same loop for each column.
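A minimal sketch of that bookkeeping in Python (the tile size, buffer and file names are just placeholders):
import pygame

TILE = 250      # square tile size in pixels
BUFFER = 1      # extra ring of tiles kept loaded around the visible area

def needed_tile_names(cam_x, cam_y, screen_w, screen_h):
    """File names of the tiles required for the current camera position."""
    first_col = cam_x // TILE - BUFFER
    first_row = cam_y // TILE - BUFFER
    cols = screen_w // TILE + 1 + 2 * BUFFER
    rows = screen_h // TILE + 1 + 2 * BUFFER
    return {"%d-%d.png" % (col, row)
            for row in range(first_row, first_row + rows)
            for col in range(first_col, first_col + cols)
            if row >= 0 and col >= 0}   # (a real map would also clamp to the last row/column)

tiles = {}   # simple cache: tile name -> loaded Surface

def update_cache(cam_x, cam_y, screen_w, screen_h):
    """Load tiles entering the visible area and drop the ones that scrolled away."""
    needed = needed_tile_names(cam_x, cam_y, screen_w, screen_h)
    current = set(tiles)
    for name in needed - current:
        tiles[name] = pygame.image.load(name)
    for name in current - needed:
        del tiles[name]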
After reading @lukevp's reply, I was interested and tried this:
http://imgur.com/Q1N2UtU
Go get this image and create a folder named 'test_data'. Now place this image and the code outside of the test_data folder and run it. The output will be the cropped pieces, named in order (it's a bit off at the edges, as the image is 1920 x 1080). You can try it with your own sizes, though. Also note that I am on Ubuntu, so adjust the paths as appropriate.
OUTPUT: http://imgur.com/2v4ucGI Final link: http://imgur.com/a/GHc9l
import pygame, os

pygame.init()

original_image = pygame.image.load('test_pic.jpg')
x_max = 1920
y_max = 1080
current_x = 0
current_y = 0
count = 1

# Copy the source image onto a working surface, then repeatedly blit
# 100x100 regions of it onto a small surface and save each one as a tile.
begin_surf = pygame.Surface((x_max, y_max), flags=pygame.SRCALPHA)
begin_surf.blit(original_image, (0, 0))
cropped_surf = pygame.Surface((100, 100), flags=pygame.SRCALPHA)

while current_y + 100 <= y_max:
    while current_x + 100 <= x_max:
        # Copy the 100x100 area at (current_x, current_y) and save it as the next tile.
        cropped_surf.blit(begin_surf, (0, 0), (current_x, current_y, 100, 100))
        pygame.image.save(cropped_surf, os.path.join("test_data", str(count) + '.jpg'))
        current_x += 100
        count += 1
    current_x = 0
    current_y += 100
The next step would be to actually load those images and span them across the screen as he said.
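Something along these lines (a rough, untested sketch that assumes the sequentially numbered tiles produced above) could then blit only the tiles that overlap the camera rectangle:
import os, pygame

TILE = 100
COLS = 1920 // TILE   # tiles per row, as produced by the cropping script above

def draw_visible_tiles(screen, cam_x, cam_y):
    """Blit only the tiles that overlap the current camera rectangle."""
    screen_w, screen_h = screen.get_size()
    for row in range(cam_y // TILE, (cam_y + screen_h) // TILE + 1):
        for col in range(cam_x // TILE, (cam_x + screen_w) // TILE + 1):
            count = row * COLS + col + 1                    # the tiles were saved starting at 1
            path = os.path.join("test_data", str(count) + ".jpg")
            if row < 0 or col < 0 or col >= COLS or not os.path.exists(path):
                continue                                    # off the edge of the map
            # in practice you would cache these Surfaces instead of loading them every frame
            tile = pygame.image.load(path)
            screen.blit(tile, (col * TILE - cam_x, row * TILE - cam_y))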
I am working with OpenGL and I wanted to invert the image. So I went here, asked a question and finally I had the following code:
glMatrixMode(GL_PROJECTION);
glScalef(-1,1,1);
glTranslatef(-width(),0,0);
From what I understand, the position of every pixel gets mirrored, so the pixels that were on the right of the image keep the same absolute positions but are now on the left of the image. So I have to move the whole thing back by exactly as many pixels as it is wide: 360 (which is the size of the "canvas"; that is why the snippets use the function width()). So to undo this process I would invert the image again and then move it back to where it came from:
glMatrixMode(GL_PROJECTION);
glScalef(-1,1,1);
glTranslatef(width(),0,0);
Nope, black screen. I have to do exactly the same thing twice to undo the flipping: I have to translate by -360 every time I flip the image. Why?
It's exactly as Daniel Fischer mentioned in the comment. Here is an illustration of the process.
What you must keep in mind is that the transformations operate on the transformed coordinate system.
We start with the image (grey) on the screen (green):
Then we scale the image. So the origin is preserved, but the x-axis is mirrored.
Now we have to move the image back onto the screen. Because the x-axis now points to the left (but we want to move the image to the right), we have to use a negative offset for the translation:
If we flip the image again, the following happens. The origin is preserved and the x-axis is mirrored:
So we must translate the image by a negative offset:
Another way of undoing the flip is undoing the operations (but in the opposite order):
glTranslatef(width, 0, 0);
glScalef(-1,1,1);
The mathematical reason for this is that inversion reverses the order: if A = B * C, then A^-1 = C^-1 * B^-1.
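A quick way to convince yourself of that (a small NumPy check rather than OpenGL itself, using the 360-pixel width from the question):
import numpy as np

w = 360  # canvas width

# homogeneous 2D matrices for the mirror and the translation
S = np.diag([-1.0, 1.0, 1.0])            # glScalef(-1, 1, 1)
T = np.array([[1.0, 0.0, -w],            # glTranslatef(-w, 0, 0)
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
T_back = np.array([[1.0, 0.0, w],        # glTranslatef(+w, 0, 0)
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])

flip = S @ T   # the original flip: scale then translate, applied right-to-left to vertices

# repeating the same two calls undoes the flip (the flip is its own inverse)...
assert np.allclose(flip @ (S @ T), np.eye(3))
# ...and so does applying the inverses in the opposite order: translate back, then mirror
assert np.allclose(flip @ (T_back @ S), np.eye(3))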
I have a player who can rotate and move around a 2D Cartesian grid, I need to calculate where to draw the enemies on screen.
The player should have a certain viewpoint, which is the size of the screen, in front of the direction the player is facing (and a little behind).
I've tried tons of ways to implement this, messing with bipolar coordinates and trig, but I haven't been able to solve the problem of calculating where on the screen the enemies should be drawn.
The problem is best represented in the form of a graph, with green being the viewpoint (a rectangle that can rotate and move around the grid) and dots representing the player and an enemy.
So I need to work out the positions of the enemies on screen relative to the player's rotation and position.
If you're going for a Doom-like perspective, you should imagine the viewing area as a parallelogram, rather than a rectangle. Imagine that behind your character is a camera man with its own position and angle.
The enemy's screen position is related to the angle between the camera and the enemy.
//indicates where on the screen an enemy should be drawn.
//-1 represents the leftmost part of the screen,
//and 1 is the rightmost.
//Anything larger or smaller is off the edge of the screen and should not be drawn.
float calculateXPosition(camera, enemy){
    //the camera man can see anything 30 degrees to the left or right of its line of sight.
    //This number is arbitrary; adjust to your own tastes.
    frustumWidth = 60;
    //the angle between the enemy and the camera, in relation to the x axis.
    //atan2 returns radians, so convert to degrees to match frustumWidth.
    angle = atan2(enemy.y - camera.y, enemy.x - camera.x) * 180 / PI;
    //the angle of the enemy, in relation to the camera's line of sight.
    //If the enemy is on-camera, this should be less than frustumWidth/2 in magnitude.
    objectiveAngle = camera.angle - angle;
    //scale down from [-frustumWidth/2, frustumWidth/2] to [-1, 1]
    return objectiveAngle / (frustumWidth / 2);
}
These diagrams visualize what the variables I'm using here represent:
Once you have an "X position" in the range of [-1, 1], it should be easy enough to convert that into pixel coordinates. For example, if your screen is 500 pixels wide, you can do something like ((calculateXPosition(camera, enemy) + 1) / 2) * 500;
Edit:
You can do something similar to find the y-coordinate of a point, based on the point's height and distance from the camera.
(I'm not sure how you should define the height of the enemy and camera - any number should be fine as long as they somewhat match the scale set by the x and y dimensions of the cartesian grid.)
//this gives you a number between -1 and 1, just as calculateXPosition does.
//-1 is the bottom of the screen, 1 is the top.
float getYPosition(pointHeight, cameraHeight, distanceFromCamera){
    //the vertical view angle; see the aside below on how it relates to frustumWidth
    frustumHeight = 60;
    relativeHeight = pointHeight - cameraHeight;
    //again, atan2 returns radians, so convert to degrees
    angle = atan2(relativeHeight, distanceFromCamera) * 180 / PI;
    return angle / (frustumHeight / 2);
}
You can call the method twice to determine the y position of both the top and the bottom of the enemy:
distanceFromCamera = sqrt((enemy.x - camera.x)^2 + (enemy.y - camera.y)^2);
topBoundary = convertToPixels(getYPosition(enemy.height, camera.height, distanceFromCamera));
bottomBoundary = convertToPixels(getYPosition(0, camera.height, distanceFromCamera));
That should give you enough information to properly scale and position the enemy's sprite.
(aside: the frustum angles in the two methods don't need to be the same; in fact, they should be different if the screen you are drawing to is rectangular. The ratio of the horizontal frustum angle to the vertical one should be equal to the ratio of the width to the height of the screen.)
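Pulling the pieces together, a small Python sketch of this camera model (the screen size and view angles are just example numbers):
import math

SCREEN_W, SCREEN_H = 500, 400    # example screen size in pixels
FOV_X, FOV_Y = 60.0, 48.0        # horizontal/vertical view angles, keeping the 500:400 ratio

def x_position(camera, enemy):
    """-1 .. 1 across the screen; values outside that range are off-screen."""
    angle = math.degrees(math.atan2(enemy["y"] - camera["y"], enemy["x"] - camera["x"]))
    objective = (camera["angle"] - angle + 180) % 360 - 180   # wrap into [-180, 180)
    return objective / (FOV_X / 2)

def y_position(point_height, camera, distance):
    """-1 (bottom) .. 1 (top) for a point at the given height and distance."""
    angle = math.degrees(math.atan2(point_height - camera["height"], distance))
    return angle / (FOV_Y / 2)

def to_pixels(x_rel, y_rel):
    px = (x_rel + 1) / 2 * SCREEN_W
    py = (1 - y_rel) / 2 * SCREEN_H    # screen y grows downwards
    return px, py

camera = {"x": 0.0, "y": 0.0, "angle": 0.0, "height": 1.0}
enemy = {"x": 10.0, "y": 2.0, "height": 2.0}

distance = math.hypot(enemy["x"] - camera["x"], enemy["y"] - camera["y"])
x_rel = x_position(camera, enemy)
top = to_pixels(x_rel, y_position(enemy["height"], camera, distance))
bottom = to_pixels(x_rel, y_position(0.0, camera, distance))
print(top, bottom)   # pixel positions for the top and bottom of the enemy sprite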
I need to know what the visible height of a display object will be after I change its rotationX value.
I have an application that allows users to lay out a floor in 3D space. I want the size of the floor to automatically stretch after a 3D rotation so that it always covers a certain area.
Anyone know a formula for working this out?
EDIT: I guess what I am really trying to do is convert degrees to pixels.
On a 2D plane of, say, 100 x 100 pixels, a -10 degree change to rotationX means the plane has a gap at the top where it is no longer visible. I want to know how many pixels high this gap will be so that I can stretch the plane.
In Flex, the value of the display object's height property remains the same both before and after applying the rotation, which may in fact be a bug.
EDIT 2: There must be a general maths formula to work this out rather than something Flash/Flex-specific. When viewing an object in 3D space, if the object rotates backwards (the top of the object somersaults away from the viewer), what would the new visible height be, based on the degrees of rotation? This could be in pixels, metres, cubits or whatever.
I don't have a test case, but off the top of my head I'd guess something like:
var d:DisplayObject;
var rotationRadians:Number = d.rotationX * Math.PI / 180;
var visibleHeight:Number = d.height * Math.cos(rotationRadians);
This doesn't take any other transformations into account, though.
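As a quick sanity check of that formula (a worked example rather than Flex code, assuming the same simple view with no perspective), a 100-pixel-tall plane rotated by 10 degrees keeps about 98.5 visible pixels, leaving a gap of roughly 1.5 pixels to stretch over:
import math

height = 100                 # original height in pixels
rotation_x = 10              # degrees; the sign does not matter for the cosine

visible_height = height * math.cos(math.radians(rotation_x))
gap = height - visible_height

print(round(visible_height, 2))   # 98.48
print(round(gap, 2))              # 1.52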
Have you tried using the object's bounding rectangle and testing that?
var dO:DisplayObject = new DisplayObject();
dO.rotationX = 10;
var rect:Rectangle = dO.getRect(dO.parent);
// rect.topLeft.y is now the new top point.
// rect.width is the new width.
// rect.height is the new height.
As to the floor, I would need more information, but have you tried setting floor.percentWidth = 100? That might work.
Have you checked DisplayObject.transform.pixelBounds? I haven't tried it, but it might be more likely to take the rotation into account.
Rotation actually rotates the DisplayObject's axes (i.e. the x and y axes are rotated). That is why you are not seeing the difference in height. So to get the visual height and y you might try this:
var dO:DisplayObject = new DisplayObject();
addChild(dO);
var rect1:Rectangle = dO.getRect(dO.parent);
dO.rotationX = 10;
var rect2:Rectangle = dO.getRect(dO.parent);
rect1 and rect2 should be different in this case. If you want to check the visual coordinates of dO, just replace dO.parent with root.