Did I create this MSS graph right? - networking

[image: the asker's cwnd graph]
(Translation)
The following graph should represent the size of the cwnd in units of one maximum segment size (MSS).
Fill in the graph up to round 15 and add the following events:
Rounds 3, 6 and 11: three duplicate acknowledgments
I am not sure whether my graph is correct, since I don't know if the event takes effect at round 3 itself, or straight after (so that the cwnd gets halved at round 4).
I would expect the cwnd to change in the round the event happens, not the round after. Is that right?
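For comparison, here is a minimal sketch of the textbook TCP Reno model. All concrete values are assumptions of mine (initial cwnd = 1 MSS, initial ssthresh = 16 MSS), and it uses the common textbook convention that a triple-duplicate-ACK event in round r is reflected in the graph from round r+1 onward:

```python
# Sketch of TCP Reno cwnd evolution in MSS units per transmission round.
# Assumptions (not from the exercise): initial cwnd = 1 MSS,
# initial ssthresh = 16 MSS; a triple-duplicate-ACK event recorded in
# round r takes effect from round r+1.

def cwnd_trace(rounds=15, ssthresh=16, dup_ack_rounds=(3, 6, 11)):
    cwnd = 1
    trace = []
    for r in range(1, rounds + 1):
        trace.append((r, cwnd))
        if r in dup_ack_rounds:
            # fast retransmit / fast recovery (Reno):
            # halve ssthresh, set cwnd to the new ssthresh
            ssthresh = max(cwnd // 2, 1)
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2          # slow start: double each round
        else:
            cwnd += 1          # congestion avoidance: +1 MSS per round

    return trace

for r, w in cwnd_trace():
    print(r, w)
```

Under this convention the halved cwnd first appears one round after each event (e.g. cwnd is still 4 MSS at round 3 and drops to 2 MSS at round 4).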


DICOM protocol and CT scans: how to sort axial slices according to z position?

I have a CT scan of the chest where I can't figure out how to sort the axial slices such that the first slices are the ones closest to the head.
The scan has the following dicom parameters:
Patient Position Attribute (0018,5100): FFS (Feet First Supine)
Image Position (Patient) (0020,0032): -174\-184\-15 (one slice)
Image Orientation (Patient) (0020,0037): 1\0\0\0\1\0
The most cranial slice (anatomically, closest to the head) has z position 13 and the most caudal (anatomically lower) has z position -188.
However, when the Patient Position is FFS, shouldn't the slice with the lowest z position (i.e. -188) be the most cranially located one (anatomically, i.e. closest to the head)?
Can anyone enlighten me?
Kind regards
DICOM very clearly defines that the x-axis has to go from the patient's right to left, the y-axis from front to back, and the z-axis from feet to head.
So the lower z position of -188 has to be closer to the feet than the higher position of 13. You should always rely on this.
The Patient Position attribute is rather an informational annotation. If you do all the math yourself, you can ignore it.
If a viewer does not do the math (there are a lot of them) and just loads the images and shows them sorted by ImageNumber, then the Patient Position attribute is the information that indicates whether the image with ImageNumber 1 is the one closer to the head or the one closer to the feet. Meaning: when the patient went through the CT scanner, which image was acquired first, the one of the head or the one of the feet.
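Since larger z is always more cranial in the DICOM patient coordinate system, sorting head-first just means sorting by descending z. A minimal sketch in Python (the dictionaries below stand in for DICOM datasets; with pydicom you would read each file with dcmread and access ds.ImagePositionPatient the same way):

```python
# Sketch: sort CT slices head-first by the z component of
# Image Position (Patient). "slices" can be any sequence of objects
# exposing ImagePositionPatient as (x, y, z); plain dicts are used here
# so the example is self-contained.

def sort_head_first(slices, get_ipp=lambda s: s["ImagePositionPatient"]):
    # larger z is more cranial, regardless of Patient Position,
    # so descending z puts the most cranial slice at index 0
    return sorted(slices, key=lambda s: float(get_ipp(s)[2]), reverse=True)

slices = [
    {"ImagePositionPatient": (-174.0, -184.0, -188.0)},
    {"ImagePositionPatient": (-174.0, -184.0, 13.0)},
    {"ImagePositionPatient": (-174.0, -184.0, -15.0)},
]
print([s["ImagePositionPatient"][2] for s in sort_head_first(slices)])
# -> [13.0, -15.0, -188.0]
```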

Translate screen coordinates to graph coordinates

tl;dr: Is there a function to get the same output as identify() or locator(), but without a mouse click (say a mouse hover position instead)?
I am generating plots, and saving them to a PNG file, and embedding them into my application. When the user interacts with the image in my application, I'd like to send those screen coordinates back to the graphics device in R to understand where the user is in the data coordinates.
I need a version of identify() or locator() that lets me pass in the mouse coordinates explicitly.
For example, if the user is hovering over pixel (1000, 2000), are they hovering over a point corresponding to year 2015 and birth rate 90?
Have a look at ?grconvertX, which with enough care should allow you to implement something like this. Here is an answer in which I used it and grconvertY() to go from plot ("user") coordinates to normalized device ("ndc") coordinates, basically the reverse of the operation you'll likely want to use.
– Josh O'Brien Apr 2 '15 at 17:05
I'm finding that 'dev' or device coordinates give me the exact pixel values for 'user' coordinates, which are the values from the graph, so going the other way around should work. It seems like the 'ndc' coordinates are basically the same, but divided by the width of the image in order to normalize to the range of 0 to 1.
– Neil Apr 12 '15 at 0:14
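The question is about R, but the mapping grconvertX()/grconvertY() perform for a linear axis is just interpolation between the device (pixel) extent of the plot region and the user (data) range. A language-agnostic sketch in Python (the pixel and data ranges below are made-up values):

```python
# Sketch of what grconvertX(..., from="device", to="user") does for a
# linear axis: map a pixel position inside the plot region back to data
# coordinates. For the y axis of a PNG the device origin is usually the
# top-left corner, so the pixel range runs top-to-bottom; pass the pixel
# of the axis minimum as dev_min and it works unchanged.

def device_to_user(px, dev_min, dev_max, usr_min, usr_max):
    frac = (px - dev_min) / (dev_max - dev_min)   # normalized 0..1 ("ndc"-like)
    return usr_min + frac * (usr_max - usr_min)

# made-up example: plot region spans pixels 100..900, years 2000..2020
year = device_to_user(700, 100, 900, 2000, 2020)
print(year)   # -> 2015.0
```

The remaining work is obtaining the plot region's pixel extent and data range for the saved PNG, which is exactly what grconvertX()/grconvertY() encapsulate on a live R graphics device.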

Parallel coordinate plot (seqpcplot) using TraMineR: How are event sequences without any transition represented?

I'm using the R package TraMineR to compute and analyze event sequences. My alphabet consists of 7 states; however, some individuals do not experience any transition along the 84 months considered, staying always in the same state. The event sequences for these cases are, for example:
[1] (full_time)-84
[2] (part_time)-84
If one of those sequences is at the same time one of the most frequent, how is it represented by the command seqpcplot? Is it simply ignored because no transition appears along the sequence, with the plots showing only the most frequent trajectories of those who change state?
Thank you very much.
Empty zero-event sequences are represented with a black square south-west of the bottom left translation zone.
However, the two event sequences given in your example are NOT zero-event sequences: they each have a start event, namely full_time and part_time. Such patterns are represented with a square on the first coordinate axis, at full_time and part_time respectively.
If you don't use the embedding trick, they will appear as isolated squares with size proportional to their frequencies. With option ltype = "non-embeddable", the pattern will be embedded in some other pattern starting with the same event. This is reflected by the start square being bigger than the next one on the same path.
So in your case, if say the first one is the most frequent pattern: with ltype = "unique", you should have a relatively large isolated square next to full_time on the first coordinate. With ltype = "non-embeddable", you should have an even bigger square next to full_time, but with a path continuing to some point on the second coordinate, where you should observe a smaller square (since it represents fewer cases than the start point).
Hope this helps.

Recover data from a QR code with the last 2 lines missing

Bitcoin.
I have 250 BTC on a QR code that, I have only now discovered, has its last 2 rows missing.
If my math is correct: the code is around 25 modules wide, so 2 rows = 50 modules that can each be only black or white,
giving 2^50 ≈ 10^15 combinations.
The QR code encodes a 30-character hash; I have the first 13 characters of the hash.
Is there any way you would suggest I try to recover the money?
Part of the last two rows belongs to the finder pattern at the bottom left, which carries no information (you can easily draw it back in). It is surrounded by a white gutter of 1 module, and the next column over (moving right) is part of the format information section. That section is error-correctable itself and is also replicated at the top right, so you won't need this bit.
The rest is indeed data in a version 2 code. You're missing only 16*2 = 32 bits, or 4 codewords. The minimal error correction level for a QR code, level L, has 10 EC codewords in version 2: enough to correct up to 5 codeword errors at unknown positions (or up to 10 erasures at known positions). Just leave the missing area white; the affected codewords will come out wrong, but 4 bad codewords are easy to correct, with room to spare, by any decoder.
Just draw back in the finder pattern.
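A short sketch of the answer's accounting (version 2 = 25x25 modules and error-correction level L with 10 EC codewords are the answer's assumptions, not something verifiable from the question alone):

```python
# Sketch: check that the data lost in the bottom two rows of a
# version 2, level L QR code is within the Reed-Solomon error budget.
# All figures follow the answer's accounting.

MODULES = 25        # version 2 QR code is 25x25 modules
FINDER_COLS = 9     # bottom-left finder (7) + separator (1) + format column (1)
MISSING_ROWS = 2

data_modules_lost = MISSING_ROWS * (MODULES - FINDER_COLS)  # 2 * 16 = 32 bits
codewords_lost = data_modules_lost // 8                     # 4 codewords

EC_CODEWORDS = 10                         # version 2, level L
correctable_errors = EC_CODEWORDS // 2    # 5 unknown-position errors

print(codewords_lost, correctable_errors)  # -> 4 5
assert codewords_lost <= correctable_errors  # decoding should succeed
```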

Different rendering speed Qt widgets

I'm building an app (in Qt) that includes a few dynamic graphs (meaning they refresh to new values rapidly), which get their values from a background thread.
I want the first graph, whose details are important, to refresh at a high rate (100 Hz), and 4 other graphs to refresh at a lower rate (10 Hz).
The problem is that when I refresh them all at the same rate (100 Hz) the app can't handle it and the computer freezes, but when the refresh rates are different, the first signal gets artifacts on it (compared to, for example, running them all at 10 Hz).
The artifacts are in the form of waves (instead of a straight line, for example, I get a "snake").
Any suggestions as to why it has artifacts (rendering limits, I guess) and what can be done about it?
I'm writing this as an answer even if this doesn't quite answer your question, because this is too long for a comment.
When the goal is to draw smooth moving graphics, the basic unit of time is the frame. At a 60 Hz drawing rate, the frame is 16.67 ms. The drawing rate needs to match the monitor refresh rate; drawing faster than the monitor is totally unnecessary.
When drawing graphs, the movement speed of the graph must be kept constant. If you wonder why, walk fast for 1 second, then slowly for 1 second, then fast for 1 second, and so on. That doesn't look smooth.
Let's say the data sample rate is 60 Hz and each sample is represented as one pixel. In each frame all new samples (in this case 1 sample) are drawn and the graph moves one pixel. The movement speed is one pixel per frame, in every frame. The speed is constant, and the graph looks very smooth.
But if the data sample rate is 100 Hz, then during one second 2 pixels are drawn in 40 frames and 1 pixel in 20 frames. Now the graph movement speed is not constant anymore; it varies like this: 2,2,1,2,2,1,... pixels per frame. That looks bad. You might think that the frame time is so small (16.67 ms) that you can't see this kind of small variation, but it is very clearly visible. Even single frames of varying speed can be seen.
So how is data with a 100 Hz sample rate drawn smoothly? By keeping the speed constant; in this case it would be 1.67 (100/60) pixels per frame. That of course requires subpixel drawing. So in every frame the graph moves by 1.67 pixels. If some samples are missing at the time of drawing, they are simply not drawn. In practice that will happen quite often; for example, USB data acquisition cards can deliver samples in bursts.
What if the graph drawing is so slow that it cannot be done at 60 Hz? Then the next best option is to draw at 30 Hz. Then you are drawing one frame for every 2 images the monitor draws. The 3rd best option is 20 Hz (one frame for every 3 images the monitor draws), then 15 Hz (one frame for every 4 images) and so on. Drawing at 30 Hz does not look as smooth as drawing at 60 Hz, but the speed can still be kept constant and it looks better than drawing faster with varying speed.
In your case, the drawing rate of 20 Hz would probably be quite good. In each frame there would be 5 new data samples (if you can get the samples at a constant 100 Hz).
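The uneven 2,2,1,... pattern described above can be checked with a short sketch (idealized timing assumed, no jitter):

```python
# Sketch: count how many samples from a fixed-rate source fall into each
# successive monitor frame, to show the uneven per-frame counts when the
# sample rate is not an integer multiple of the frame rate.

def samples_per_frame(sample_rate, frame_rate, n_frames):
    """New samples available in each of the first n_frames frames."""
    counts = []
    taken = 0
    for frame in range(1, n_frames + 1):
        # total samples whose timestamp falls before the end of this frame
        available = (sample_rate * frame) // frame_rate
        counts.append(available - taken)
        taken = available
    return counts

print(samples_per_frame(100, 60, 6))   # -> [1, 2, 2, 1, 2, 2]
print(samples_per_frame(100, 20, 4))   # -> [5, 5, 5, 5]
```

At 100 Hz into 60 fps the per-frame count oscillates between 1 and 2 (40 twos and 20 ones per second), while at 20 fps it is a constant 5 samples per frame, which is why the 20 Hz drawing rate suggested above keeps the speed constant.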
