Are writes of large blocks affected by 4k alignment of an SSD? - solid-state-drive

As is well known, an SSD should be partitioned with 4k alignment, because writes may be amplified if they are not 4k aligned.
But I wonder whether the side effect of non-4k alignment diminishes as the write block size becomes larger.
For example, if each write is 4k, a misaligned write is amplified to 2 physical blocks. But if each write is 128k, does it involve only 128/4 + 1 = 33 blocks?

The issue only arises on the first part and possibly the last part of the write, where you are writing partial blocks. For example, if you're at position 2048 and you write 8k, you have to write 2048 non-aligned bytes at the start, one fully aligned block, and another 2048 bytes at the end. If you wrote 16k instead, it would be the same 2048-byte issue at the start, more aligned blocks in the middle, and the same 2048-byte issue at the end.
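To make that concrete, here is a small sketch (independent of any particular SSD or filesystem) that counts how many 4k physical blocks a write touches, given its byte offset and length. Only the first and last blocks can be partial, so the relative overhead shrinks as writes get larger.

#include <cstdio>

const unsigned long long kBlock = 4096;   // physical block size in bytes

unsigned long long BlocksTouched(unsigned long long offset, unsigned long long length)
{
    if (length == 0) return 0;
    unsigned long long first = offset / kBlock;                 // block holding the first byte
    unsigned long long last  = (offset + length - 1) / kBlock;  // block holding the last byte
    return last - first + 1;
}

int main()
{
    // 4k write starting at byte 2048: 2 blocks instead of 1 (100% overhead).
    std::printf("%llu\n", BlocksTouched(2048, 4 * 1024));
    // 128k write starting at byte 2048: 33 blocks instead of 32 (~3% overhead).
    std::printf("%llu\n", BlocksTouched(2048, 128 * 1024));
    return 0;
}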

Related

DICOM pixel data lossless rendering and representation

I quote:
DICOM supports up to 65,536 (16 bits) shades of gray for monochrome image display, thus capturing the slightest nuances in medical imaging. In comparison, converting DICOM images into JPEGs or bitmaps (limited to 256 shades of gray) often renders the images unacceptable for diagnostic reading. - Digital Imaging and Communications in Medicine (DICOM): A Practical Introduction and Survival Guide by Oleg S. Pianykh
As a beginner in image processing, I am used to processing color and monochrome images with 256 levels. So for DICOM images, in which representation should I process the pixels without rendering them down to 256 levels, given the loss of information that would cause?
Note: if you can think of a better title for this question, please feel free to change it; I had a hard time and didn't come up with a good one.
First you have to put the image's pixels through the Modality LUT transform (Rescale Slope/Intercept, or an explicit Modality LUT) in order to convert modality-dependent stored values into known units (e.g. Hounsfield units or Optical Density).
Then do all your processing on the full range of values (do not convert the 16-bit values to 8-bit).
The presentation (visualization) can be performed with scaled 8-bit values, usually obtained by passing the data through the VOI LUT transform (window center/width or an explicit LUT).
See this for the Modality transform: rescale slope and rescale intercept
See this for Window/Width: Window width and center calculation of DICOM image
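To make the two steps concrete, here is a minimal sketch, assuming the Rescale Slope/Intercept and Window Center/Width attributes have already been read from the dataset and the pixel data is unsigned 16-bit. The function names are illustrative and do not belong to any particular DICOM toolkit, and the window mapping is a simplified linear form of the one in the standard.

#include <algorithm>
#include <cstdint>

// Step 1: Modality LUT (linear form): stored value -> modality units (e.g. HU).
double ToModalityUnits(std::uint16_t stored, double rescaleSlope, double rescaleIntercept)
{
    return stored * rescaleSlope + rescaleIntercept;
}

// Step 2: VOI windowing, for display only: modality units -> 8-bit gray level.
// Processing should keep working on the full-range values from step 1.
std::uint8_t ToDisplay(double value, double windowCenter, double windowWidth)
{
    double lower = windowCenter - windowWidth / 2.0;
    double t = (value - lower) / windowWidth;          // position inside the window, 0..1
    t = std::clamp(t, 0.0, 1.0);
    return static_cast<std::uint8_t>(t * 255.0 + 0.5);
}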

Editing size of image without losing its quality and dimensions

I have an image of 5760 x 3840 px with a file size of 1.14 MB, but I would like to reduce it to 200 KB without changing the dimensions of the image. How could I do this using Photoshop? Please help me.
This answer assumes a large-format file with a resolution of 300 DPI. If the image is 5760 x 3840 pixels, your canvas size will be 19.2 x 12.8 inches. In order not to "reduce" the dimensions (printable area), you are going to have to open Image Size [ALT + CTRL/CMD + I] and reduce the resolution from there.
(Screenshots of the Image Size dialog at 300 DPI and at 72 DPI omitted.)
This reduction in resolution can decrease the file size dramatically. There is a chance of artifacts, but since you are starting at a high resolution, the compression algorithms will smooth them out to the point of being almost non-existent.
NOW... if you are starting at 72 DPI and you are looking for a good way to generate a lower file size for the web, your best bet may be the Save for Web option [ALT + CTRL/CMD + SHIFT + S] and to generate a .jpg, .gif or .png based on the final needs of the file. If it is a photograph with a lot of colors, I would go with .jpg. If you have a lot of areas of solid color (a logo, perhaps), I would go with .png or .gif.
The Save for Web option allows you to see, side by side, the results of the export BEFORE going through the save process. It also allows you to alter the settings of the save process to dial in your results. Best of all, it gives you a pretty good preview of the expected file size.
Either way, this should help you keep that larger file size available for future use.
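For reference, a short sketch of the arithmetic behind the numbers above (the pixel dimensions come from the question; everything else is just the pixels/DPI relationship): with the print size in inches held constant, lowering the resolution lowers the pixel count, and that is what actually shrinks the file.

#include <cstdio>

int main()
{
    const double widthPx = 5760.0, heightPx = 3840.0, originalDpi = 300.0;
    const double widthIn  = widthPx  / originalDpi;   // 19.2 in
    const double heightIn = heightPx / originalDpi;   // 12.8 in

    const double dpis[] = {300.0, 72.0};
    for (double dpi : dpis) {
        // 300 DPI keeps 5760 x 3840 px; 72 DPI needs only about 1382 x 922 px.
        std::printf("At %.0f DPI: %.1f x %.1f in -> %.0f x %.0f px\n",
                    dpi, widthIn, heightIn, widthIn * dpi, heightIn * dpi);
    }
    return 0;
}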

Responsive Printing in millimeters

I am going to print from the browser to a receipt printer. I want to support everything from 58 mm receipt paper up to full-size paper using responsive design. I used http://www.unitconversion.org/typography/pixels-x-to-millimeters-conversion.html and it says 58 mm is approximately 219 pixels.
Is this an accurate way to measure pixels in the browser?
That converter is misleading - it can produce only approximations (or at the very best, results that work for your specific display), but never generally accurate results.
The number of pixels per mm (or any other physical unit) will vary from display to display, as different displays will have a different number of differently sized pixels. See "Pixel Density" in Wikipedia.
You can specify physical measures in CSS and when printing they should come out OK if the browser and the printer driver are handling things right:
.mysheet { width: 19.2cm; height: 8cm; }
Some browser/OS/display combinations (I don't know exactly which ones do this) can apparently also interpret physical measures and render them at their correct size on screen.
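As a rough illustration of why the converter's figure is only an approximation: 219 px is simply what 58 mm works out to at the CSS reference density of 96 px per inch, and the number changes with the actual pixel density. The 203 DPI value below is an assumption, chosen because it is a common thermal receipt printer density.

#include <cstdio>

double MmToPx(double mm, double pixelsPerInch)
{
    return mm / 25.4 * pixelsPerInch;   // 25.4 mm per inch
}

int main()
{
    std::printf("58 mm at 96 dpi  = %.1f px\n", MmToPx(58.0, 96.0));   // ~219.2, the converter's figure
    std::printf("58 mm at 203 dpi = %.1f px\n", MmToPx(58.0, 203.0));  // ~463.5 on a 203 DPI thermal printer
    return 0;
}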

Should barcode font sizes match?

I am trying to convert a string into a Code 39 barcode. To increase reliability, I am increasing the barcode font size from 40 to 60. Would this cause any issues, given that the width and height of the bars will change compared to the previous version at size 40?
No; the scanner reads the ratios between the widths of the bars and spaces. As long as everything scales the same way, you're fine. I doubt that you'll see increased reliability, though. I hope you'll post your results.
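A tiny sketch of that point: Code 39 uses only wide and narrow elements, and a uniform scale (such as going from font size 40 to 60) leaves the wide:narrow ratio unchanged. The widths below are made-up illustrative numbers, not measurements of a real barcode font.

#include <cstdio>

int main()
{
    const double narrow40 = 1.0, wide40 = 3.0;        // hypothetical element widths at size 40
    const double scale = 60.0 / 40.0;                 // font-size change
    const double narrow60 = narrow40 * scale, wide60 = wide40 * scale;

    std::printf("ratio at 40: %.2f, ratio at 60: %.2f\n",
                wide40 / narrow40, wide60 / narrow60);   // both print 3.00
    return 0;
}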

GDI+'s amazing decode speed and terrible draw speed!

Thanks for the answers. Actually, I am not puzzled that drawing 1024*768 pixels is slower than drawing 100*100 pixels... that much is obvious.
What puzzles me is that DrawImage's interpolation seems very slow even though plenty of better algorithms exist, while its decoder apparently can decode a JPG directly at a reduced resolution, which is really cool. I searched for some time but did not find any free library that does this...
It is really strange!
I added the following code to the OnPaint method. c:\1.jpg is a 5 MB JPG file, about 4000*3000.
//--------------------------------------------------------------
HDC hdc = pDC->GetSafeHdc();
Bitmap* bitmap = Bitmap::FromFile(L"c:\\1.jpg", true);   // load (and nominally decode) the JPG
Graphics graphics(hdc);
graphics.SetInterpolationMode(InterpolationModeNearestNeighbor);
graphics.DrawImage(bitmap, 0, 0, 200, 200);              // draw scaled down to 200x200
The above is really fast, even real-time! I didn't think decoding a 5 MB JPG could be that fast!
//--------------------------------------------------------------
HDC hdc = pDC->GetSafeHdc();
Bitmap* bitmap = Bitmap::FromFile(L"c:\\1.jpg", true);
Graphics graphics(hdc);
graphics.SetInterpolationMode(InterpolationModeNearestNeighbor);
graphics.DrawImage(bitmap, 0, 0, 2000, 2000);            // same image, drawn at 2000x2000
The above code becomes really slow.
//--------------------------------------------------------------
If I move bitmap = Bitmap::FromFile(L"c:\\1.jpg", true); into the constructor (with bitmap as a member variable) and leave only
HDC hdc = pDC->GetSafeHdc();
Graphics graphics(hdc);
graphics.SetInterpolationMode(InterpolationModeNearestNeighbor);
graphics.DrawImage(bitmap, 0, 0, 2000, 2000);
in the OnPaint method, the code is still a bit slow...
//------------------------------------------------------------------
Compared with decoding, the DrawImage step is really slow...
Why, and how did they do that? Did Microsoft pay the people in charge of the decoder twice the salary of the people writing DrawImage?
So, what you're really wondering is why
graphics.DrawImage(bitmap,0,0,200,200);
is faster than
graphics.DrawImage(bitmap,0,0,2000,2000);
Correct?
Well, the fact that you are drawing 100 times more pixels in the second case could have something to do with it.
You don't need to fully decode JPGs if you're scaling down by a factor of 8. JPEG images consist of 8-by-8 pixel blocks, DCT-transformed; the average value of each block is the (0,0) coefficient of its DCT. So scaling down by a factor of 8 is merely a matter of throwing away all the other coefficients. Scaling down even further (e.g. 4000 -> 200) is just a matter of scaling down from 4000 to 500 that way, and then scaling normally from 500 to 200 pixels.
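For what it's worth, free libraries do expose this trick: libjpeg, for example, can decode directly at 1/2, 1/4 or 1/8 scale through its scale_num/scale_denom fields. Below is a rough sketch of that (error handling omitted, file assumed to open successfully); it illustrates the technique only and is not a claim about how GDI+ implements its own decoder.

#include <cstdio>
#include <vector>
#include <jpeglib.h>

// Decode a JPEG at 1/8 of its full resolution, skipping most of the IDCT work.
std::vector<unsigned char> DecodeJpegEighth(const char* path, int* outWidth, int* outHeight)
{
    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);

    FILE* f = std::fopen(path, "rb");
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);

    cinfo.scale_num = 1;      // request 1/8 scale; for a 4000x3000 image the
    cinfo.scale_denom = 8;    // decoder outputs about 500x375 directly
    jpeg_start_decompress(&cinfo);

    *outWidth  = cinfo.output_width;
    *outHeight = cinfo.output_height;
    std::vector<unsigned char> pixels(
        cinfo.output_width * cinfo.output_height * cinfo.output_components);

    while (cinfo.output_scanline < cinfo.output_height) {
        unsigned char* row = &pixels[cinfo.output_scanline *
                                     cinfo.output_width * cinfo.output_components];
        jpeg_read_scanlines(&cinfo, &row, 1);
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    std::fclose(f);
    return pixels;
}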
It could be that the decoding is deferred until it is actually needed; that would explain why loading appears so fast.
Maybe in the 200x200 case GDI+ only decodes enough blocks to paint 200x200, and for 2000x2000 it decodes more.
Graphics routines always contain some obscure optimizations; you can never know.
Maybe Reflector will tell you?
Just a guess, but could you try drawing at 4000x3000 or 2000x1500? Perhaps the fact that 4000 and 3000 are both divisible by 200 speeds up the first case, while 3000 not being divisible by 2000 slows down the second (although that really would be weird).
Generally, do some profiling or time measurement. If 2000x2000 is about 100 times slower than 200x200, everything is okay. And don't worry if 2000x2000 is too slow: if your screen is at 1024x768 you can't see the whole image anyway, so you had better pick the part of the image that is visible on screen and draw only that; 1024x768 is roughly 5 times fewer pixels than 2000x2000.
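A hedged sketch of that last suggestion: draw only the source region that maps onto the visible client area instead of scaling the whole bitmap on every paint. The intended 2000-pixel on-screen width and the scroll offsets are assumptions made for illustration, not GDI+ requirements.

#include <windows.h>
#include <gdiplus.h>

// Draw only the part of the (large) bitmap that is currently visible.
void DrawVisiblePart(Gdiplus::Graphics& graphics, Gdiplus::Bitmap* bitmap,
                     int visibleW, int visibleH,   // e.g. the 1024 x 768 client area
                     int scrollX, int scrollY)     // view origin, in screen pixels
{
    using namespace Gdiplus;
    const double targetW = 2000.0;                        // intended full on-screen width
    const double scale = targetW / bitmap->GetWidth();    // screen pixels per source pixel

    Rect dest(0, 0, visibleW, visibleH);
    graphics.SetInterpolationMode(InterpolationModeNearestNeighbor);
    graphics.DrawImage(bitmap, dest,
                       static_cast<INT>(scrollX / scale),   // source rectangle covering
                       static_cast<INT>(scrollY / scale),   // only the visible part
                       static_cast<INT>(visibleW / scale),
                       static_cast<INT>(visibleH / scale),
                       UnitPixel);
}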
