I'm trying to turn the following mockup into working, dynamic code, but I'm having a few problems right now.
Mockup: http://www.imagebanana.com/view/a6yuqvgm/chat.png
The goal here is to implement the "negative margin" each message box has so that the messages overlap a bit. So, if person A (me) and person B have a conversation, all messages from person B should be on the right side and all of my messages (person A) should be on the left side. This part is obviously rather easy.
Also, if I reply to a message from my chat partner, my message should have a negative margin so that it sort of "goes into" my partner's message, but on the other side. This is for design and space-saving reasons. The longer the message, the greater the margin should be; shorter messages need a smaller margin.
I'm currently a bit puzzled as to how to successfully implement this. A simple negative margin is not enough, because when a user sends two messages in a row, the messages overlap (the second one goes into the first one). The mockup shows the ideal situation, alternating messages (person A, person B, person A, person B, and so on), but obviously that's not always the case.
My question now is: is that even possible with pure CSS? I guess I need to add some dynamic part, either in PHP or JS; either is fine. I just need some hints in the right direction.
You can do it in pure CSS if you don't need the margins sized according to the height of each message. The key in either case is the adjacent sibling (+) selector, used to target from-messages that follow to-messages and vice versa, which avoids overlap between consecutive messages from the same person.
Here's how: http://jsbin.com/ujonoj/14/edit
Note the commented-out bit of CSS: you can use that to have static negative margin (however much you want) and avoid the JS, if need be.
Edit: added two safety checks, one to cover cases of very long messages following very short ones, and one to stop setMargin running on consecutive to-to/from-from messages. The long-short safety check simply never sets the negative margin to more than some percentage (80 in my example) of the previous message's height.
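For reference, here's a minimal sketch of the dynamic part in script form. The .to/.from class names, the container structure, and exactly where the 80% cap is applied are my assumptions, not necessarily what the jsbin does:

// Pull each message up into the previous one, but only when the sender changes.
// Consecutive messages from the same person keep their normal flow (no overlap).
function setMargins(container: HTMLElement): void {
    const messages = Array.from(container.children) as HTMLElement[];
    for (let i = 1; i < messages.length; i++) {
        const prev = messages[i - 1];
        const curr = messages[i];
        const senderChanged =
            (prev.classList.contains("to") && curr.classList.contains("from")) ||
            (prev.classList.contains("from") && curr.classList.contains("to"));
        if (!senderChanged) {
            curr.style.marginTop = "0";
            continue;
        }
        // Slide the reply up alongside the previous message, capped at 80% of the
        // previous message's height so a long message can't swallow a short one.
        const overlap = Math.min(curr.offsetHeight, prev.offsetHeight * 0.8);
        curr.style.marginTop = `-${overlap}px`;
    }
}

The sender-change check is the same idea the adjacent (+) selector expresses in the pure-CSS variant.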
I have noticed that every computer graphics system I have ever used uses a left-handed coordinate system with its origin in the upper left corner. Cairo, Java, Microsoft XYZ, and most graphics programs all use this system. I assume they all date back to a common ancestor, but I can't find any references about this.
If I had to guess I'd say it came from VGA graphics mode, using the same coordinates as text, which were naturally based on how the English language is read top-down, left-right, with the "second line" below the "first line"... but I'm making that up.
Was anyone around to tell the tale, or can point me in the direction of the correct history book?
It's an old convention, and the reasons might be a bit apocryphal. Here are some hypotheses I've found:
It's derived from CRT electron beam sweep behavior.
Scanning from top to bottom means you don't have to wait for an entire frame to be sent first; you can begin scanning as soon as you begin receiving data. (Which raises the question again: why scan from top to bottom?)
It allows a right-handed coordinate system with the Z axis going into the screen rather than coming out of it.
Annoyingly, Cocoa and Quartz use lower-left origin.
I doubt it's just an old convention kept for legacy reasons.
UpperLeft has the advantage that there is no writing system that goes from bottom to top. So with UpperLeft it is easier to:
Place multiline text
Work with pages of unknown or infinite height
If the page height changes (i.e. a bigger or smaller device), with BottomLeft you have to translate every object's coordinates, while with UpperLeft you don't.
The last point also extends to dynamic placement and layouts, where a graphics object's coordinates are offsets from its parent.
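A tiny sketch of that resize point (my own illustration, with made-up names):

// With a top-left origin, an object laid out from the top keeps its y coordinate
// when the page height changes. With a bottom-left origin, y is measured from the
// bottom, so every such object has to be re-translated:
function retranslateForBottomLeft(y: number, oldHeight: number, newHeight: number): number {
    return y + (newHeight - oldHeight); // shift by however much the page grew or shrank
}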
No idea. I don't think there is a definitive answer. It's likely that when people still had console based machines it made sense to go from the top left corner down to the bottom right. It's how a lot of people in the world read, as you've said. It makes sense to put the origin there.
http://en.wikipedia.org/wiki/Memory-mapped_I/O
The Wikipedia article has some information about memory-mapped displays. Say, for example, we dedicate part of our memory to turning pixels on the screen on and off, and we let address 0 be the upper-left part of the screen, moving across in chunks and turning pixels on and off depending on whether they're set in memory. That's basically what the article is saying.
I don't know whether they actually let address 0 be the upper-left corner of the display, but it makes sense, and it might simply have carried over.
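As a purely illustrative sketch of what such a layout looks like (row-major, one cell per pixel, names are mine):

// Row-major framebuffer addressing with address 0 at the top-left:
// the pixel in column x of row y lives at offset y * width + x, so offsets
// run left-to-right, top-to-bottom, matching the scan order described above.
function pixelOffset(x: number, y: number, width: number): number {
    return y * width + x;
}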
I was also wondering about the same question. Here is another source:
The origin is always in the upper left. And that comes from the fact that, kind of, TVs, when they were first built, scan from left to right and then top to bottom. So it doesn't work the same way you saw, kind of, in high school geometry, where the origin was always in the lower left....
An Introduction to Interactive Programming in Python
I'm not a specialist, but as far as I know, a bit of information in a QR code is encoded more than once, and how much is determined by the redundancy level.
How can I estimate a QR code's redundancy level? Is there a mobile app or a website where I can easily test my QR code's redundancy level? If not, is there an easy algorithm that I can implement?
Redundancy is sorted into different categories according to this website, but I'd like to get the direct percentage value if possible.
There are some pixels next to the lower left positioning block which indicate the redundancy level. Quote from https://blog.qrstuff.com/2011/12/14/qr-code-error-correction
Quite conveniently, there’s also 2 modules down in the bottom left-hand corner of every QR code that display what the error correction level used in that QR code is.
There is a very nice graphic on that page which visualizes this, which I won't include here as I assume that I'm not licensed to do so.
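If all you're after is a rough percentage, the four standard error correction levels have well-known approximate recovery capacities (the exact figure for a particular code also depends on its version, so treat this as a lookup sketch):

// Approximate fraction of codewords that each QR error correction level can restore.
const qrRecoveryCapacity: Record<string, number> = {
    L: 0.07, // Low:      ~7%
    M: 0.15, // Medium:   ~15%
    Q: 0.25, // Quartile: ~25%
    H: 0.30, // High:     ~30%
};

So once you've read the level off those modules, the percentage is just a table lookup.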
I'm working on a game (using Game Maker: Studio Professional v1.99.355) that needs to have both user-modifiable level geometry and AI pathfinding based on platformer physics. Because of this, I need a way to dynamically figure out which platforms can be reached from which other platforms in order to build a node graph I can feed to A*.
My current approach is, more or less, this:
1. For each platform, consider each other platform in the level.
2. For each of those platforms, if it is obviously unreachable (due to being higher than the maximum jump height, for example), do not form a link and move on to the next platform.
3. If a link seems possible, place an ai_character instance on the starting platform and (within the current step event) simulate a jump attempt.
3.a. Repeat this jump attempt for each possible starting position on the starting platform.
4. If this attempt is successful, record the data necessary to replicate it in real time and move on to the next platform.
5. If not, do not form a link.
Repeat for all platforms.
This approach works, more or less, and produces a link structure that when visualised looks like this:
linked platforms (Hyperlink because no rep.)
In this example the mostly-concealed pink ghost in the lower right corner is trying to reach the black and white box. The light blue rectangles are just there to highlight where recognised platforms are, the actual platforms are the rows of grey boxes. Link lines are green at the origin and red at the destination.
The huge, glaring problem with this approach is that for a level of only 17 platforms (as shown above) it takes over a second to generate the node graph. The reason is obvious from the yellow text in the screen centre, which shows how long the graph took to build: over 24,000(!) simulated frames, each with attendant collision checks against every block. I literally just run the character's step event in a while loop, so everything it would normally do to handle platformer movement in one frame, it now does 24,000 times.
This is, clearly, unacceptable. If it scales this badly at a mere 17 platforms then it'll be a joke at the hundreds I need to support. Heck, at this geometric time cost it might take years.
In an effort to speed things up, I've focused on the other important debugging number, the tests counter: 239. If I simply tried every possible combination of starting and destination platforms, I would need to run 17 * 16 = 272 tests. By figuring out various ways to predict whether a jump is impossible I have managed to lower the number of expensive tests run by a whopping 33 (12%!). However the more exceptions and special cases I add to the code the more convinced I am that the actual problem is in the jump simulation code, which brings me at long last to my question:
How would you determine, with complete reliability, whether it is possible for a character to jump from one platform to another, preferably without needing to simulate the whole jump?
My specific platform physics:
Jumps are fixed height, unless you hit a ceiling.
Horizontal movement has no acceleration or inertia.
Horizontal air control is allowed.
Further info:
I found this video, which describes a similar problem but which doesn't provide a good solution. This is literally the only resource I've found.
You could limit the number of comparisons by only comparing nearby platforms. I would probably only check the horizontal distance between platforms; if it is wider than the longest possible jump, don't bother checking for a link between those two. But you may already be doing this, since you check for the maximum height of a jump.
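As a sketch of that pre-filter, with assumed names for the character's physics values (deliberately generous, since it ignores everything between the two platforms):

// Cheap pre-filter: platforms further apart horizontally than the widest possible
// jump can never be linked. moveSpeed and maxAirTime are stand-ins for whatever
// your character's physics actually are.
function couldPossiblyReach(horizontalGap: number, moveSpeed: number, maxAirTime: number): boolean {
    return Math.abs(horizontalGap) <= moveSpeed * maxAirTime;
}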
I glanced at the video and it gave me an idea. Instead of looking at all platforms to find which jumps are impossible, what if you did the opposite? Try placing an AI character on all platforms and see which other platforms they can reach. That's certainly easier to implement if your enemies can't change direction in midair though. Oh well, brainstorming is the key to finding something.
Several ideas you could try out:
Limit the amount of comparisons you need to make by using a spatial data structure, like a quad tree. This would allow you to severely limit how many platforms you're even trying to check. This is mostly the same as what you're currently doing, but a bit more generic.
Try to pre-compute some jump trajectories ahead of time. This will not catch every case you have, since you allow full horizontal control, but it might let you handle some common cases more quickly (a sketch follows this list).
Consider some kind of walkability grid instead of a link-generation scheme. When geometry is modified, compute which parts of the level are walkable and which are not, at some resolution (something close to the dimensions of your agent might be a good starting point). You could also filter tiles by height, so that grid tiles that are higher than your jump height, and that you can't drop onto from a higher place, are marked as unwalkable. Then, as part of the pathfinding step, you can check whether a jump is actually executable ("start a jump; I can go vertically no more than 5 tiles, and after the peak of the jump I always fall down vertically with some speed").
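Building on the pre-computed trajectory idea, here's a sketch of a closed-form plausibility check for the physics described in the question (fixed jump height, constant horizontal speed, full air control). All names are assumptions, and it ignores blocking geometry, so it can only rule jumps out cheaply or mark them as worth simulating, not guarantee them:

// Closed-form plausibility check for a fixed-height jump with constant gravity,
// constant horizontal speed and full air control. Ignores blocking geometry, so a
// "true" result still needs confirming; a "false" result can be skipped outright.
function jumpIsPlausible(
    dx: number,        // horizontal distance to the landing spot on the target platform
    dy: number,        // height of the landing spot above the takeoff point (positive = higher)
    jumpSpeed: number, // initial upward speed of a jump
    gravity: number,   // downward acceleration, in units consistent with the speeds
    moveSpeed: number  // horizontal speed (no acceleration, as in the question)
): boolean {
    const apexHeight = (jumpSpeed * jumpSpeed) / (2 * gravity);
    if (dy >= apexHeight) return false;                            // can't jump that high at all
    const timeUp = jumpSpeed / gravity;                            // time to reach the apex
    const timeDown = Math.sqrt((2 * (apexHeight - dy)) / gravity); // fall from apex down to the target
    return Math.abs(dx) <= moveSpeed * (timeUp + timeDown);        // enough air time to cover the gap?
}

Anything that fails this test can be skipped outright; anything that passes still needs your existing simulation (or a coarser version of it) to confirm there's no ceiling or wall in the way.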
When the player character goes into the Staffroom at the orphanage/boarding school they live at, said player has two turns before they hear the manager's footsteps coming down the hall and they are urged to hide. I've done this through use of number variables. At this point I have another number variable (set up like a true/false thingy by only using 0 and 1) to govern whether or not trying to do anything except 'hiding' or 'hiding wrongly' gives the response 'There's no time for that, just hide!'. The problem is this: Whenever I start the game, ANY ACTION is rejected and met by 'There's no time for that, just hide!'.
Code:
NOTSITS is a number variable.

When play begins:
    now NOTSITS is 0.

Every turn when the location is the Staffroom:
    increase NOTSITS by 1.

Every turn when the location is the Staffroom:
    if NOTSITS is 2:
        now HYF is 1;
        say "From the hall outside, you hear footsteps... Shit, that sounds like Rodger![paragraph break]HIDE!".

HYF is a number variable.

When play begins:
    now HYF is 0.

Every turn:
    if HYF is 1:
        instead of doing anything other than hiding or hiding wrongly:
            say "There's no time for that, just hide!".

Hiding is an action applying to nothing.
Understand "hide" as hiding.

Hiding wrongly is an action applying to one thing.
Understand "hide in [something]" as hiding wrongly.

Instead of hiding:
    try entering the empty cupboard;
    now HYF is 0.

Instead of hiding wrongly, say "Don't waste time with stupidity, just hide!"
Please don't suggest using Inform 7's own time system to solve this. I tried that and it was a far bigger shizztorm of problems than this has been.
I think the problem is that you're relying too much on every turn rules; they run after the actions have all been processed, so it's too late for them to do what you want them to. I also defined hiding as a synonym for entering, because that action already exists and it's what you want to happen. So try this instead:
First turn is a truth state variable. First turn is true.

The staffroom is a room.
In the staffroom is an enterable container called the empty cupboard.

Understand "hide" as entering.

Carry out entering when first turn is true:
    now first turn is false.

Understand "hide in [something]" as a mistake ("Don't waste time with stupidity, just hide!").

Instead of doing something other than looking or entering when first turn is true:
    say "There's no time for that, just hide!"
(Also in the future it will help if you provide the full source code, or at least all that's relevant. This time you left out the staffroom and cupboard.)
You can specify an action and its time of appearance after entering a room:
After going to Staffroom for the first time:
    manager comes in three turns from now.

At the time when manager comes:
    YOUR STUFF
This has been greatly bothering me in the past few weeks. In this time I've been researching online, even reading books in the Computers section at Borders to try to find an answer, but I haven't had much luck.
I programmed a 2D level editor for side-scroller video games. Now I want to turn it into a game where I have a player who can run and jump to explore the level, similar to "Mario".
The thing that is really giving me trouble is the collision response (not detection: I already know how to tell if two blocks are colliding). Here are some scenarios that I am going to illustrate so that you can see my problems (the shaded blocks are the ground, the arrow is the velocity vector of the player, the dashed lines are the projected path of the player).
See this collision response scenarios image:
http://dl.dropbox.com/u/12556943/collision_detection.jpg
Assume that the velocity vectors in scenarios (1) and (2) are equal (same direction and magnitude). Yet, in scenario (1), the player is hitting the side of the block, and in scenario (2), the player is landing on top of the block. This allows me to conclude that determining the collision response is dependent not only on the velocity vector of the player, but also the player's relative position to the colliding block. This leads to my first question: knowing the velocity vector and the relative position of the player, how can I determine from which direction (either left side, right side, top, or bottom) the player is colliding with the block?
Another problem that I'm having is how to determine the collision response if the player collides with multiple blocks in the same frame. For instance, assume that in scenario (3), the player collides with both of those blocks at the same time. I'm assuming that I'm going to have to loop through each block that the player is colliding with and adjust the reaction accordingly from each block. To sum it up, this is my second question: how do I handle collision response if the player collides with multiple blocks?
Notice that I never revealed the language I'm programming in; this is because I'd prefer you not to know (nothing personal, though :] ). I'm more interested in pseudo-code than in language-specific code.
Thanks!
I think the way XNA's example platform game handles collisions could work well for you. I posted this answer to a very similar question elsewhere on Stack Overflow but will relay it here as well.
After applying movement, check for and resolve collisions.
Determine the tiles the player overlaps based on the player's bounding box.
Iterate through all of those tiles doing the following: (it's usually not very many unless your player is huge compared to your world tiles)
If the tile being checked isn't passable:
Determine how far on the X and Y axes the player is overlapping the non-passable tile
Resolve collision by moving the player out of that tile only on the shallow axis (whichever axis is least penetrated)
For example, if Y is the shallow axis and the collision is below, shift the player up to no longer overlap that tile.
Something like this: if(abs(overlap.y) < abs(overlap.x)) { position.y += overlap.y; } else { position.x += overlap.x; }
Update the bounding box's position based on the player's new position
Move on to the next tile...
If the tile being checked is passable, do nothing
If it's possible that resolving a collision could move the player into another collision, you may want to run through the above algorithm a second time. Or redesign your level.
The XNA version of this logic is in player.cs in the HandleCollisions() function if you are interested in grabbing their code to see what they specifically do there.
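Here's a sketch of the overlap-and-resolve step in generic form; the AABB helpers are mine, not the XNA sample's actual code, and they assume screen coordinates (y grows downward):

interface AABB { x: number; y: number; width: number; height: number; }

// Signed penetration depth of `a` into `b` on each axis, or null if they don't overlap.
function intersectionDepth(a: AABB, b: AABB): { x: number; y: number } | null {
    const dx = (a.x + a.width / 2) - (b.x + b.width / 2);
    const dy = (a.y + a.height / 2) - (b.y + b.height / 2);
    const overlapX = (a.width + b.width) / 2 - Math.abs(dx);
    const overlapY = (a.height + b.height) / 2 - Math.abs(dy);
    if (overlapX <= 0 || overlapY <= 0) return null;
    return { x: dx < 0 ? -overlapX : overlapX, y: dy < 0 ? -overlapY : overlapY };
}

// Push the player out of a solid tile along whichever axis is least penetrated.
function resolveAgainstTile(player: AABB, tile: AABB): void {
    const overlap = intersectionDepth(player, tile);
    if (!overlap) return;
    if (Math.abs(overlap.y) < Math.abs(overlap.x)) {
        player.y += overlap.y; // shallow axis is Y: pop up (or down) out of the tile
    } else {
        player.x += overlap.x; // shallow axis is X: pop left (or right) out of the tile
    }
}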
So what makes this a little trickier is the constant force of gravity adjusting your player's position. If your player jumps on top of a block, they shouldn't bounce off; they should land on top of the block and stay there. However, if the player hits a block on the left or right, they shouldn't just stay there; gravity must pull them down. I think that's roughly your question at a high level.
I think you'll want to separate the two forces of gravity and player velocity from the collision detection/response algorithm. Using the player's velocity, if they collide with a block, regardless of direction, simply move the player's position to the edge of the collision and subtract an equal and opposite vector from the player's velocity, since not doing this would cause them to collide with the object yet again. You will want to calculate the intersection point and place the player there, against the block.
On a side note, you could vary that opposing force by the type of block the player collided with, allowing for interesting responses, such as the player breaking through the block if they are running fast enough (i.e. the player's velocity is greater than the force of the block).
Then continue to apply the constant force of gravity to the player's position, and continue doing your normal calculation to determine whether the player has reached a floor.
I think that by separating these two concepts you get a really simple, straightforward collision response algorithm and a fairly simple gravity/floor algorithm. That way you can vary gravity without having to redo your collision response algorithm, say for a water level or a space level, while collision detection and response stay the same.
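A minimal sketch of that separation, assuming you already know which axis the collision was resolved on (as in the shallow-axis approach from the other answer); one common reading of "subtract an equal and opposite vector" is to cancel only the velocity component along that axis:

// After snapping the player to the edge of the block, cancel only the velocity
// component that pointed into it; gravity keeps being applied separately each frame.
function cancelVelocityInto(velocity: { x: number; y: number }, resolvedAxis: "x" | "y"): void {
    if (resolvedAxis === "x") {
        velocity.x = 0; // hit a wall: stop horizontal motion, gravity still pulls down
    } else {
        velocity.y = 0; // landed on a floor (or bumped a ceiling): stop vertical motion
    }
}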
I thought about this for a long time recently.
I am using the separating axis theorem, so if I detect a collision I project the object onto the normalized velocity vector and move the object back by that distance, in the direction of the negative velocity vector. Assuming the object came from a safe place, this positions the object in a safe place post-collision.
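One simple way to realize that, sketched with made-up names; instead of computing the exact projection it bisects along the negative velocity until the SAT test no longer reports an overlap:

// Step the object back along its negative (normalized) velocity until the overlap
// test passes. Bisection keeps it cheap, assuming the previous position was safe.
function backOutAlongVelocity(
    position: { x: number; y: number },
    velocity: { x: number; y: number },
    overlaps: (p: { x: number; y: number }) => boolean, // whatever SAT test you already have
    iterations: number = 16
): void {
    const len = Math.hypot(velocity.x, velocity.y);
    if (len === 0) return;
    const dir = { x: velocity.x / len, y: velocity.y / len };
    let lo = 0;   // known-overlapping back-off distance
    let hi = len; // known-safe back-off distance (the whole last move)
    for (let i = 0; i < iterations; i++) {
        const mid = (lo + hi) / 2;
        if (overlaps({ x: position.x - dir.x * mid, y: position.y - dir.y * mid })) {
            lo = mid;
        } else {
            hi = mid;
        }
    }
    position.x -= dir.x * hi;
    position.y -= dir.y * hi;
}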
This may not be the answer you're looking for, but hopefully it'll point you in the right direction.