Qt with Python (PySide): rotation check fails

I have a problem checking rotation with a QGraphicsItemGroup. Several items are grouped into a group that is rotated in the scene. After the rotation, QGraphicsItem.rotation() always returns 0. The group is flagged with setHandlesChildEvents(False), if that matters.
Furthermore, all child items are rotated along with the group, and the same method returns 0 for them as well (maybe that is expected).
Am I doing something wrong in checking the group rotation?
EDIT:
item_group.rotate(90)
print item_group.rotation()  # prints 0
or
for i in item_group.childItems():
    i.rotate(90)
    print i.rotation()  # also prints 0 for each
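For reference, a minimal PySide sketch (not from the thread) illustrating the likely cause: in Qt 4 / PySide 1.x, rotate() only multiplies the item's transform, while rotation() reports the value set by setRotation():
from PySide import QtGui

app = QtGui.QApplication([])
scene = QtGui.QGraphicsScene()
group = scene.createItemGroup([scene.addRect(0, 0, 10, 10)])

group.rotate(90)         # deprecated convenience: multiplies transform() only
print group.rotation()   # still 0.0 - the rotation property was never set

group.setRotation(90)    # sets the rotation property
print group.rotation()   # now 90.0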


Sum graph from bottom to top

Given the following graph modeled in Neo4j.
Goal:
Calculate the sum of all node values multiplied by the edge percentages, from the bottom up.
E.g.
(((30*0.6) + (50*0.1) + 100) * 0.5) + 10 = 71.5
Status:
I found the REDUCE function (http://neo4j.com/docs/stable/query-functions-collection.html#functions-reduce),
but as far as I can tell it sums from the top to the bottom instead of bottom up.
Is this a common problem with a well-known name that I just don't know?
Is there any solution in Neo4j or in another (graph) database/language?
This was a really interesting one:
I assumed two things: first, all nodes have the :A label; second, the property on both nodes and relationships has the key p.
Here is a working query:
MATCH p=(:A)-[r]->(pike)
WITH pike, collect(p) as paths
OPTIONAL MATCH (pike)-[r]->()
WITH
CASE r WHEN null THEN 1 ELSE r.p END as multiplier,
CASE r WHEN null THEN last(nodes(paths[0])).p
ELSE reduce(x=0, path in paths | x + (head(nodes(path)).p * head(rels(path)).p)) + last(nodes(paths[0])).p END as total
RETURN sum(total*multiplier) as total
The logic behind it:
Find the one-depth paths and aggregate the children by their "pike" (the first WITH).
If the OPTIONAL MATCH doesn't match, the multiplier will be 1 instead of a possible float value from the relationship property.
The second CASE does the math: if this is the top of the pikes (hence here node A) it will just add the value of the top node, otherwise it will also take the values of the children.
Then it sums total * multiplier.
You can test it here: http://console.neo4j.org/r/ih8obf
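For intuition, here is a small Python sketch (mine, not part of the thread) of the bottom-up recursion the question asks for; the node values and edge percentages are made up to reproduce the 71.5 example:
# Hypothetical in-memory version of the example graph: each node has a value p,
# and each edge to a child carries a percentage.
graph = {
    "top":   {"p": 10,  "children": [("mid", 0.5)]},
    "mid":   {"p": 100, "children": [("leaf1", 0.6), ("leaf2", 0.1)]},
    "leaf1": {"p": 30,  "children": []},
    "leaf2": {"p": 50,  "children": []},
}

def bottom_up_sum(name):
    # value(n) = n.p + sum(value(child) * edge percentage), evaluated leaves-first
    node = graph[name]
    return node["p"] + sum(bottom_up_sum(child) * pct for child, pct in node["children"])

assert bottom_up_sum("top") == 71.5   # (((30*0.6) + (50*0.1) + 100) * 0.5) + 10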

Programming a Dot Probe for PsychoPy in Builder

I am new to using PsychoPy and have programmed a few simple tasks, but I am currently really struggling to program a word dot probe. I do not want to use the Coder view, simply because the rest of my research team need to be able to easily edit and use the program.
In case anyone is wondering what my specific problem is: I cannot get the pictures to load at the same time correctly, and I do not know how to make a probe appear behind one of the pictures once the pictures have disappeared.
Timing
The timing issue can be solved by inserting an ISI period at the beginning of the trial, e.g. during a fixation cross. This allows PsychoPy to load the images in the background so that they are ready for presentation.
Truly random dot position
In your case, you want the dot position to be random, independently of the image. This is one of the cases that TrialHandler does not handle, so I suspect you need to insert a code component to make it work. For true randomness (each trial independent, so only approaching 50 % in the limit of many trials), simply put this in a code component under "begin routine":
x = (np.random.binomial(1, prob) - 0.5) * xdist  # 0 or 1, shifted to -0.5/+0.5, scaled to the left/right position
y = 0
dot.pos = [x, y]
Change dot to the name of your dot stimulus. Here y is the vertical offset, x is the horizontal offset (varying between trials), xdist is the distance between the two dot positions, and prob is the chance of the dot appearing on the right; you probably want to set it to 0.5, i.e. 50 %.
Balanced dot position
If you want the dot to appear at each side exactly the same number of times, you can do the following in the code component:
Under "begin experiment", make a list with the exact length of the number of trials:
dotPos = [0, 1] * int(round(numberOfTrials/2)) # create the correct number of left/right (coded as 0 and 1). [0,1] yields 50%. [0,0,0,1] and /4 would yield 25 % etc.
np.random.shuffle(dotPos) # randomize order
Then under "begin routine" do something akin to what we did above:
x = (dotPos.pop() - 0.5) * xdist  # dotPos.pop() returns the last element and removes it from the list
y = 0
dot.pos = [x, y]
Naturally, if the number of trials is odd, one position will be used one more time than the other.
Two dot positions for each condition
For the record, if the dot is to be shown at each position for each image combination, simply treat each of these situations as a separate condition, i.e. give them separate rows in the conditions file.
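For illustration (a sketch, not from the answer; the column names are made up and not required by PsychoPy), such a conditions file could contain one row per image pair and probe side:
leftImage,rightImage,probeSide
threat1.png,neutral1.png,left
threat1.png,neutral1.png,right
threat2.png,neutral2.png,left
threat2.png,neutral2.png,right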

How to get the negative position value in a Group?

The dashed rectangle is the parent group, and inside it there is a label whose x is negative.
What I want to do is relocate the outer group to the content's top-left point and, at the same time, move the content back to the outer group's (0,0) point. The result should look as if everything keeps the same position as before, but in fact both the inner content and the outer group have moved.
This is easy to achieve in Flash; in Flex, however, I ran into trouble.
The function getRect returns wrong values; it never returns the correct position of the inner content (as the thumbnail shows, the position should be something like [-70, 50]).
(Feel free to correct me because I'm not sure what you want to accomplish here)
If your Label (let's say it is called myLabel) is correctly located directly inside of your Group, simply calling myLabel.x will return the X-coordinate of the label compared to its parent (which is your Group here, so you should get -70).
Then if you want to move the label so that it fits into your Group viewport, you have two options:
Either you manually set myLabel.x = 0 and myLabel.y = 0; in this case the label will actually be moved to the Group origin.
Or you retrieve the matrix of your label component and call its translate(dx, dy) function. Using the matrix functions will modify the way your Label is displayed, but its reported position will remain unchanged (more information about that on this page).
Short answer: If you don't care about keeping the original position of your label, just set myLabel.x = 0 and myLabel.y = 0 and it should be moved correctly.

Different types of smooth object movement

In a game I have a specific object and two positions the object will move from and to.
I already have a function for calculating the current position at a specific time.
It works like this:
Inputting 0 will move the object to Position 1.
Inputting 1 will move the object to Position 2.
Inputting 0.5 will move the object in the middle of the two positions.
etc...
(In the examples below, time is varying from 0 to 1)
When I want the object to start fast and slow down as it approaches the second position, I use:
MoveObject(sin(time * 90))
When I want it to start slowly and speed up as it approaches the second position, I use:
MoveObject(1 - cos(time * 90))
Without the effects, it's:
MoveObject(time)
How do I make the object start moving slowly, move fastest in the middle between the two positions, and then slow down again as it reaches the second position?
It would be:
MoveObject(time * time * (3 - 2 * time))
sol.gfxile.net/interpolation
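For reference, a small Python sketch (function names are mine, not from the thread) of the three easing curves discussed above; each maps a time value in [0, 1] to a position fraction in [0, 1] that can be passed to MoveObject:
import math

def ease_out(t):     # sin(time * 90 degrees): starts fast, ends slow
    return math.sin(t * math.pi / 2)

def ease_in(t):      # 1 - cos(time * 90 degrees): starts slow, ends fast
    return 1 - math.cos(t * math.pi / 2)

def ease_in_out(t):  # smoothstep: slow at both ends, fastest in the middle
    return t * t * (3 - 2 * t)

# All three map 0 -> 0 and 1 -> 1.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, ease_out(t), ease_in(t), ease_in_out(t))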

Calculating a LookAt matrix

I'm in the midst of writing a 3d engine and I've come across the LookAt algorithm described in the DirectX documentation:
zaxis = normal(At - Eye)
xaxis = normal(cross(Up, zaxis))
yaxis = cross(zaxis, xaxis)
xaxis.x yaxis.x zaxis.x 0
xaxis.y yaxis.y zaxis.y 0
xaxis.z yaxis.z zaxis.z 0
-dot(xaxis, eye) -dot(yaxis, eye) -dot(zaxis, eye) 1
Now I get how it works on the rotation side, but what I don't quite get is why it puts the translation component of the matrix to be those dot products. Examining it a bit it seems that it's adjusting the camera position by a small amount based on a projection of the new basis vectors onto the position of the eye/camera.
The question is why does it need to do this? What does it accomplish?
Note that the example given is a left-handed, row-major matrix.
So the operation is: translate to the origin first (move by -eye), then rotate so that the vector from eye to At lines up with +z.
Basically you get the same result if you pre-multiply the rotation matrix by a translation by -eye:
[ 1       0       0       0 ]   [ xaxis.x  yaxis.x  zaxis.x  0 ]
[ 0       1       0       0 ] * [ xaxis.y  yaxis.y  zaxis.y  0 ]
[ 0       0       1       0 ]   [ xaxis.z  yaxis.z  zaxis.z  0 ]
[ -eye.x  -eye.y  -eye.z  1 ]   [ 0        0        0        1 ]

    [ xaxis.x          yaxis.x          zaxis.x          0 ]
  = [ xaxis.y          yaxis.y          zaxis.y          0 ]
    [ xaxis.z          yaxis.z          zaxis.z          0 ]
    [ dot(xaxis,-eye)  dot(yaxis,-eye)  dot(zaxis,-eye)  1 ]
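For what it's worth, here is a small numpy sketch (mine, not from any of the answers; it assumes the left-handed, row-vector convention shown above, v' = v * M) that builds the LookAt matrix and checks that translating by -eye and then rotating yields exactly those dot products in the bottom row:
import numpy as np

def look_at_lh(eye, at, up):
    # Left-handed, row-major convention (row vectors: v' = v * M), as in the D3D docs above.
    eye, at, up = (np.asarray(v, dtype=float) for v in (eye, at, up))
    zaxis = (at - eye) / np.linalg.norm(at - eye)
    xaxis = np.cross(up, zaxis)
    xaxis /= np.linalg.norm(xaxis)
    yaxis = np.cross(zaxis, xaxis)
    m = np.identity(4)
    m[:3, 0], m[:3, 1], m[:3, 2] = xaxis, yaxis, zaxis     # rotation part (axes in the columns)
    m[3, :3] = [-xaxis @ eye, -yaxis @ eye, -zaxis @ eye]  # translation row
    return m

eye, at, up = [1.0, 2.0, 3.0], [4.0, 0.0, 10.0], [0.0, 1.0, 0.0]
T = np.identity(4)
T[3, :3] = np.negative(eye)                                 # pure translation by -eye
R = look_at_lh([0.0, 0.0, 0.0], np.subtract(at, eye), up)   # pure rotation (eye at the origin)
assert np.allclose(T @ R, look_at_lh(eye, at, up))          # translate-then-rotate == LookAt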
Additional notes:
Note that a viewing transformation is (intentionally) inverted: you multiply every vertex by this matrix to "move the world" so that the portion you want to see ends up in the canonical view volume.
Also note that the rotation matrix component of the LookAt matrix (call it R) is an inverted change-of-basis matrix, where the rows of R are the new basis vectors expressed in terms of the old basis vectors (hence the variable names: xaxis is the new x axis after the change of basis). Because of the inversion, however, the rows and columns are transposed.
I build a look-at matrix by creating a 3x3 rotation matrix as you have done here and then expanding it to a 4x4 with zeros and the single 1 in the bottom right corner. Then I build a 4x4 translation matrix using the negative eye point coordinates (no dot products), and multiply the two matrices together. My guess is that this multiplication yields the equivalent of the dot products in the bottom row of your example, but I would need to work it out on paper to make sure.
The 3D rotation transforms your axes. Therefore, you cannot use the eye point directly without also transforming it into this new coordinate system. That's what the matrix multiplications -- or in this case, the 3 dot-product values -- accomplish.
That translation component helps you by creating an orthonormal basis with your "eye" at the origin and everything else expressed in terms of that origin (your "eye") and the three axes.
The concept isn't so much that the matrix is adjusting the camera position. Rather, it is trying to simplify the math: when you want to render a picture of everything that you can see from your "eye" position, it's easiest to pretend that your eye is the center of the universe.
So, the short answer is that this makes the math much easier.
Answering the question in the comment: the reason you don't just subtract the "eye" position from everything has to do with the order of the operations. Think of it this way: once you are in the new frame of reference (i.e., the head position represented by xaxis, yaxis and zaxis) you now want to express distances in terms of this new (rotated) frame of reference. That is why you use the dot product of the new axes with the eye position: that represents the same distance that things need to move but it uses the new coordinate system.
Just some general information:
The lookat matrix is a matrix that positions / rotates something to point to (look at) a point in space, from another point in space.
The method takes a desired "center" of the camera's view, an "up" vector which represents the direction "up" for the camera (up is almost always (0,1,0), but it doesn't have to be), and an "eye" vector which is the location of the camera.
This is used mainly for the camera but can also be used for other techniques like shadows, spotlights, etc.
Frankly I'm not entirely sure why the translation component is being set as it is in this method. In gluLookAt (from OpenGL), the translation component is set to 0,0,0 since the camera is viewed as being at 0,0,0 always.
The dot product simply projects a point onto an axis to get the x-, y- or z-component of the eye. You are moving the camera backwards, so looking at (0, 0, 0) from (10, 0, 0) and from (100000, 0, 0) would have a different effect.
The lookat matrix does these two steps:
Translate your model to the origin,
Rotate it according to the orientation set up by the up-vector and the looking direction.
The dot products simply mean that the translation is applied first and then the rotation. Instead of multiplying two full matrices, each dot product just multiplies a row with a column.
A 4x4 transformation matrix contains two or three components:
1. a rotation matrix,
2. a translation to add,
3. a scale (many engines do not use this directly in the matrix).
The combination of them transforms a point from space A to space B, hence it is a transformation matrix M_ab.
Now, the location of the camera is given in space A, so it is not valid for space B; you need to multiply this location by the rotation transform.
The only remaining question is: why the dot products?
Well, if you write the three dot products out on paper, you'll see that three dots with X, Y and Z are exactly a multiplication with a rotation matrix.
An example for that fourth row/column would be taking the zero point, (0,0,0), in world space. It is not the zero point in camera space, so you need to know its representation in camera space, since rotation and scale leave it at zero!
cheers
It is necessary to put the eye point in your axis space, not in world space. When you dot a vector with a coordinate unit basis vector (one of x, y, z), it gives you the coordinate of the eye along that axis. You apply the translation last, in this case in the last row. Moving the eye backwards, with a negative sign, is equivalent to moving all the rest of the space forwards. Just like moving up in an elevator makes you feel like the rest of the world is dropping out from underneath you.
Using a left-handed matrix, with translation as the last row instead of the last column, is a religious difference which has absolutely nothing to do with the answer. However, it is a dogma that should be strictly avoided. It is best to chain global-to-local (forward kinematic) transforms left-to-right, in a natural reading order, when drawing tree sketches. Using left-handed matrices forces you to write these right-to-left.
