Verify size and coordinates in Selenium - css

I have a question: does Python + Selenium give me a way to verify the size and coordinates of elements on a web page?
I mean, I have a mockup in Figma with the size of an element and its coordinates, and I want to know whether I can verify these values with Python + Selenium on the web page.
Maybe we can't use Figma directly, but perhaps I can read the coordinates in the browser and then write them into the test cases?
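Yes: Selenium's WebElement exposes the rendered size and position of an element, so you can assert against values taken from a mockup. A minimal sketch (the URL, selector, and expected numbers are placeholders; the reported values are CSS pixels relative to the page, so they depend on the browser window and zoom):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder page

# "#logo" is a hypothetical selector; use whatever identifies your element
element = driver.find_element(By.CSS_SELECTOR, "#logo")

print(element.size)      # {'height': ..., 'width': ...} in CSS pixels
print(element.location)  # {'x': ..., 'y': ...} relative to the page's top-left corner
print(element.rect)      # size and location combined

# Compare against the values taken from the Figma mockup (example numbers):
assert element.size["width"] == 120
assert element.location["x"] == 40

driver.quit()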

Related

Any way to find out how skewed the document is using Textract?

Is there a way to make Amazon Textract return the skew angle of the PDF document it is processing?
When I run detection on a document that has been scanned, Textract attempts to de-skew it, but its calculations are slightly off. There doesn't seem to be any way I can find to access the raw bounding box or geometry of each LINE or WORD; it is all fixed to the de-skewed position.
Can I find the skew angle another way, or can I disable de-skewing in the request somehow?

How to get each pixel value of raster, and compare with another image using gdal/python/bash/freeware?

I need to grab every pixel value of a raster image (.tif, single band, with the pixel value as an elevation value) and compare it with another image to see whether the pixel values are identical or not. I tried gdalcompare.py, but this only gives generic differences such as file name, file type, file size etc.
I only have access to freeware, so it would be awesome to find a way to do this, as my Google searches have been futile.
You can probably use ImageMagick's compare tool for this. (If the usage examples on that page aren't enough, there's more here.)
For example, this command compares image1.tiff and image2.tiff, outputs the number of differing pixels (other metrics are available too) to the console, and writes a difference map to differing_pixels.tiff:
compare -metric AE image1.tiff image2.tiff differing_pixels.tiff
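If you'd rather stay with GDAL and Python (both free), a rough sketch that reads both bands into NumPy arrays and counts differing pixels; it assumes the two rasters have the same dimensions and a single band:

from osgeo import gdal
import numpy as np

a = gdal.Open("image1.tif").GetRasterBand(1).ReadAsArray()
b = gdal.Open("image2.tif").GetRasterBand(1).ReadAsArray()

diff = a != b                          # boolean mask of differing pixels
print("differing pixels:", int(diff.sum()))

# Optionally inspect where and by how much they differ (first few only)
rows, cols = np.nonzero(diff)
for r, c in zip(rows[:10], cols[:10]):
    print(r, c, a[r, c], b[r, c])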

Qt EvdevTouch plugin altering coordinates

I have an embedded Qt application that used to interface with my touch screen just fine using the EvdevTouch plugin, but for some reason it has now started to alter the coordinates received from the driver in a very strange fashion.
For me to now get something close to the actual Y coordinate of a touch, I have to calculate it as:
y = y + (y / 640 * 150)
I should mention, though, that this is actually the X coordinate coming from the screen, because my scene is rotated 90 degrees.
I can confirm that the Linux driver is fine by looking at the evtest output, and when I run the application through xinput rather than evdevtouch the coordinates are also correct. I don't want to use xinput, though, because it doesn't handle multitouch as nicely as EvdevTouch used to.
PS. My Qt build is the latest one on the Jethro branch of meta-qt5 (5.5) for Yocto.

Moving a spinning 3D object across the screen, making it face the correct way when it stops

The best example of what I am trying to achieve is in this YouTube video:
http://www.youtube.com/watch?v=53Tk-oGL2Uo
The letters that make up the word 'Atari' fly in from the edges of the screen spinning and then line up to make the word at the end.
I know how to make an object move across the screen, but how do I calculate the spinning so that when the object gets to its end position it's facing the correct direction?
The trick is to actually have the object(s) in the right position at a specific time (say t = 5.0 seconds) and then calculate backwards for the previous frames.
i.e. before 5.0 seconds, you rotate the object(s) by [angular velocity] * (5.0 - t) and translate by [velocity] * (5.0 - t).
If you do this, it will look like the objects fly together and line up perfectly. But what you've actually done is blow them apart in random directions and play the animation backwards in time :-)
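A rough sketch of that idea (the class, units, and numbers below are made up for illustration): give each letter a random velocity and spin, then compute every earlier pose by extrapolating backwards from the final pose, so everything lands exactly at t = 5.0.

import random

T_END = 5.0  # the time at which every letter must be in its final pose

class FlyingLetter:
    def __init__(self, final_position, final_angle):
        self.final_position = final_position                 # (x, y, z) of the final layout
        self.final_angle = final_angle                       # final orientation in degrees
        self.velocity = [random.uniform(-3.0, 3.0) for _ in range(3)]  # units per second
        self.spin = random.uniform(180.0, 720.0)              # degrees per second

    def pose_at(self, t):
        # Extrapolate backwards: at t = T_END the offsets vanish, so the letter
        # sits exactly in its final position with its final orientation.
        remaining = max(T_END - t, 0.0)
        position = [p - v * remaining for p, v in zip(self.final_position, self.velocity)]
        angle = self.final_angle - self.spin * remaining
        return position, angle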
The CORRECT way of doing this is using keyframes. You can create the keyframes in any 3D editor (I use MAX, but you could use Blender). You don't necessarily need to use the actual characters; even a cuboid would suffice. You will then need to export those animation frames (again, in MAX I would use ASE; COLLADA would work with Blender) and either load them at runtime or convert them to code.
Then it's a simple matter of running that animation based on the current time.
Here's a sample from my own library that illustrates this technique. Doing this once will last you far longer and give you more benefits in the long run than figuring out how to do this procedurally.
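In case that sample isn't to hand, a bare-bones sketch of playing exported keyframes back by time; the keyframe layout here is hypothetical, a list of (time, position, angle) tuples sorted by time:

def pose_at(keyframes, t):
    """Return the pose at time t by linear interpolation between the two
    surrounding keyframes, clamping at the ends."""
    if t <= keyframes[0][0]:
        return keyframes[0][1], keyframes[0][2]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1], keyframes[-1][2]
    for (t0, p0, a0), (t1, p1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            position = [x0 + (x1 - x0) * f for x0, x1 in zip(p0, p1)]
            angle = a0 + (a1 - a0) * f
            return position, angle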

Augmented Reality Demo

I'm trying to build an Augmented Reality Demonstration, like this iPhone App:
http://www.acrossair.com/acrossair_app_augmented_reality_nearesttube_london_for_iPhone_3GS.htm
However my geometry/math is a bit rusty nowadays.
This is what I know:
If I have my Android phone in landscape mode (with the home button on the left), my z axis points in the direction I'm looking.
From the sensors of my phone I know the angle my z axis makes with the North axis; let's call this angle theta.
If I have a vector from my current position to the point I want to show on my screen, I can calculate the angle this vector makes with my z axis. Let's call this angle alpha.
So, based on the alpha angle, I have a sense of where the point is, and I'm able to show it on the screen (like the Nearest Tube app).
This is the basic theory of a simple demonstration (of course it's nothing like the App, but it's the first step).
Can someone shed some light on this matter?
[Update]
I've found this very interesting example; however, I need to have movement on both the xx and yy axes. Any hints?
The basics are easy. You need the angle between your location and your destination (arctangent), and the heading (from the digital compass in your phone). See this answer: Augmented Reality movement. There is some Objective-C code down there that you can read if you come from Java.
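A sketch of that arctangent step in Python; the screen mapping at the end is a simplification I'm assuming (a linear projection over the camera's field of view), not part of the linked answer:

import math

def bearing_to_target(my_lat, my_lon, target_lat, target_lon):
    """Initial compass bearing (degrees clockwise from North) from my position to the target."""
    d_lon = math.radians(target_lon - my_lon)
    lat1, lat2 = math.radians(my_lat), math.radians(target_lat)
    x = math.sin(d_lon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    return math.degrees(math.atan2(x, y)) % 360.0

def screen_x(bearing, heading, horizontal_fov=60.0, screen_width=800):
    """Map the signed angle between the target bearing and the device heading (alpha)
    onto a horizontal pixel position, assuming a simple linear projection."""
    alpha = (bearing - heading + 180.0) % 360.0 - 180.0   # signed angle in [-180, 180)
    return screen_width / 2.0 + (alpha / horizontal_fov) * screen_width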
What you want is a 3D space-filling curve, for example a Hilbert curve. That is a spatial index over 3 coordinates, and it is comparable to an octree. You want to store the objects in that octree and do a depth-first search on the coordinate you have recorded with your iPhone as the fixed coordinate, probably the center of the screen. An octree subdivides the space continuously in eight directions, and a 3D space-filling curve is a Hamiltonian path through the space, which is like a fractal but is clearly distinguishable from the regions of the octree. I use a 2D Hilbert curve to speed up searches in geospatial databases. Maybe you want to start with this first?
