what are the algorithm differences between JPEG and GIF? - multimedia

I am currently doing an assignment and cannot find the answer to this question, as "algorithm" is supposed to mean a way of solving problems as such.

The main difference is that JPEG uses a lossy algorithm, and GIF uses a lossless algorithm (LZW). In addition, GIF is limited to 256 colors, while JPEG is truecolor (8 bits per color channel per pixel).

Some info is here.
Basically, JPEG is good for real-life (photographic) images, and GIF is good for computer-generated images with solid areas, or when you need text to stay sharp (JPEG is lossy, GIF is not). There are many other differences too.
See also Wikipedia:
GIF
JPEG
For bonus points in your assignment you might want to mention other commonly used standards such as PNG.
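To make the lossy/lossless point concrete, here is a minimal Pillow/NumPy sketch (the two-colour synthetic test image and the file names are purely illustrative, not from the answers above): the flat-colour image should survive the GIF round trip bit-for-bit, while the JPEG round trip usually will not.

    from PIL import Image
    import numpy as np

    # Build a small synthetic image with flat colour blocks -- the kind of
    # content GIF handles well.
    arr = np.zeros((128, 128, 3), dtype=np.uint8)
    arr[:, :64] = (255, 0, 0)   # left half: red
    arr[:, 64:] = (0, 0, 255)   # right half: blue
    original = Image.fromarray(arr)

    original.save("demo.gif")              # palette (<=256 colours) + lossless LZW coding
    original.save("demo.jpg", quality=90)  # truecolor, but lossy DCT-based coding

    gif_back = np.array(Image.open("demo.gif").convert("RGB"))
    jpg_back = np.array(Image.open("demo.jpg").convert("RGB"))

    print("GIF round trip identical: ", np.array_equal(gif_back, arr))   # should be True here
    print("JPEG round trip identical:", np.array_equal(jpg_back, arr))   # usually False
    print("max JPEG error:", np.abs(jpg_back.astype(int) - arr.astype(int)).max())

For a photograph the comparison flips the other way: JPEG at a reasonable quality looks fine and is much smaller, while GIF first has to quantise the image down to 256 colors.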

I found a very good website that explains the difference between GIF and JPEG and shows image examples for several scenarios. Enjoy.
http://www.siriusweb.com/tutorials/gifvsjpg/

Related

Operating on SVG files with a large number of elements

Assume an SVG file that was generated via R, represents a graph with about 160,000 data points, and has a file size of more than 20 MiB. Specifically, let us assume that this SVG file contains 160,000 XML circle definitions. For example, see this graph. The file is thus not atypical for a scientific project.
Assume further that you wish to post-process this file in an SVG editor (e.g., Inkscape).
I have found that an SVG file larger than 20 MiB is virtually impossible to work on in a typical SVG editor on a typical user system (x86_64 GNU/Linux, 4 CPUs, 20 GiB RAM), as the file barely loads in Inkscape.
Several potential solutions to this problem come to mind, each with a severe drawback:
Optimize the SVG with tools such as svgo beforehand. While applying svgo does decrease the file size by about 20%, it also messes up the graph itself (as happens with the above-linked example file).
Use a different file format, such as PDF. However, editors such as Inkscape typically convert the PDF back into an SVG.
Save the graph via a different SVG renderer in R. However, both the base command svg() and the svglite() command from the R package of the same name generate files of approximately the same size.
Does anyone have a suggestion as to how to open and manually edit such SVG files with a large number of XML elements?
You've certainly managed to find a good stress test for SVG renderers :)
Your SVG contains what appears to be a totally unnecessary clip path that is applied to every data point.
If I surround the points with a group and apply the clip path to the group of points instead, rendering times are significantly reduced.
Chrome: 255 secs -> 58 secs
Firefox: 188 secs -> 14 secs
If I remove that clip path completely, I get:
Chrome: 27 secs
Firefox: 10 secs.
These changes unfortunately don't help rendering times in Inkscape, but hopefully they help you somehow. If you need rendering times faster than that, you likely need to do as Robert says and reduce the number of data points somehow.
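For anyone wanting to script the regrouping rather than do it by hand in an editor, here is a rough ElementTree sketch of the idea. The file names are hypothetical, and it assumes the clip-path attribute sits directly on each circle element, which may not match every R-generated SVG; it also re-appends the circles at the end of the document, which only works if that does not change the rendering of anything else.

    import xml.etree.ElementTree as ET

    SVG_NS = "http://www.w3.org/2000/svg"
    ET.register_namespace("", SVG_NS)          # keep the default namespace on output

    tree = ET.parse("plot.svg")
    root = tree.getroot()

    # Collect the data-point circles that carry their own clip-path attribute.
    circles = [el for el in root.iter(f"{{{SVG_NS}}}circle") if "clip-path" in el.attrib]
    if circles:
        clip_ref = circles[0].get("clip-path")             # e.g. "url(#clip123)"
        group = ET.Element(f"{{{SVG_NS}}}g", {"clip-path": clip_ref})

        # ElementTree has no parent pointers, so build a child->parent map first.
        parent_of = {child: parent for parent in root.iter() for child in parent}
        for c in circles:
            del c.attrib["clip-path"]                      # drop the per-point attribute
            parent_of[c].remove(c)                         # detach from its old parent
            group.append(c)                                # re-attach under the shared group
        root.append(group)

    tree.write("plot_grouped.svg", xml_declaration=True, encoding="utf-8")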

Lossless JPEG is a special case of JPEG image compression

I've read that lossless JPEG is invoked when the user selects a 100% quality factor in an image tool.
Which image tool did they mean?
Thanks in advance.
Image compression is sort of like a ZIP file: a JPG takes up less space but has lower quality than a PNG or TIFF. In short, it discards data to save bytes; the higher the quality setting, the more space the compressed file takes. Read more here:
https://en.wikipedia.org/wiki/Lossless_compression
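As a sanity check on the "100% quality" claim in the question: in most common encoders the top quality setting still goes through the lossy DCT path, and true lossless JPEG is a separate mode that typical tools do not use. A quick Pillow sketch (the source file name is hypothetical; exact numbers depend on the input image):

    from PIL import Image
    import numpy as np
    import os

    src = np.array(Image.open("photo.png").convert("RGB"))   # some lossless source image

    for q in (50, 75, 95, 100):
        Image.fromarray(src).save("out.jpg", quality=q)
        back = np.array(Image.open("out.jpg").convert("RGB"))
        max_err = np.abs(back.astype(int) - src.astype(int)).max()
        print(f"quality={q:3d}  size={os.path.getsize('out.jpg'):8d} bytes  max pixel error={max_err}")

    # Even at quality=100 the max error is usually non-zero: baseline JPEG still
    # quantises DCT coefficients (and may subsample chroma), so it is not lossless.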

DICOM pixel data lossless rendering and representation

I quote :
DICOM supports up to 65,536 (16 bits) shades of gray for monochrome image display, thus capturing the slightest nuances in medical imaging. In comparison, converting DICOM images into JPEGs or bitmaps (limited to 256 shades of gray) often renders the images unacceptable for diagnostic reading. - Digital Imaging and Communications in Medicine (DICOM): A Practical Introduction and Survival Guide by Oleg S. Pianykh
As a beginner in image processing I'm used to processing color and monochrome images with 256 levels, so for DICOM images, in which representation do I have to process the pixels without rendering them down to 256 levels, given the loss of information?
Note: if you can come up with a better title for this question, please feel free to change it; I had a hard time and didn't come up with a good one.
First you have to put the image's pixels through the Modality LUT transform (rescale slope/intercept) in order to transform modality-dependent values into known units (e.g. Hounsfield units or optical density).
Then, all your processing must be done on the entire range of values (do not convert 16 bit values to 8 bit).
The presentation (visualization) can be performed using scaled 8-bit values, usually by passing the data through the VOI/Presentation LUT (window center/width or an explicit LUT).
See this for the Modality transform: rescale slope and rescale intercept
See this for window/width: Window width and center calculation of DICOM image
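A minimal pydicom/NumPy sketch of the two stages described above, assuming a simple file with RescaleSlope/RescaleIntercept and a single WindowCenter/WindowWidth (real files may carry an explicit Modality LUT or multi-valued window settings, which this does not handle):

    import numpy as np
    import pydicom

    ds = pydicom.dcmread("slice.dcm")          # hypothetical file name
    raw = ds.pixel_array.astype(np.float64)

    # 1) Modality transform: stored values -> physical units (e.g. Hounsfield).
    slope = float(getattr(ds, "RescaleSlope", 1))
    intercept = float(getattr(ds, "RescaleIntercept", 0))
    values = raw * slope + intercept           # do all processing on these full-range values

    # 2) Window/level transform, ONLY for visualisation (8-bit preview).
    center = float(getattr(ds, "WindowCenter", values.mean()))
    width = float(getattr(ds, "WindowWidth", (values.max() - values.min()) or 1))
    lo, hi = center - width / 2.0, center + width / 2.0
    display = np.clip((values - lo) / (hi - lo), 0.0, 1.0) * 255.0
    display = display.astype(np.uint8)         # 8-bit image for the screen only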

How to avoid strange structure artifacts in scaled images?

I create a big image stitched out of many single microscope images.
Suddenly (after several months of working properly), the stitched overview images became blurry and contain strange structural artifacts like askew lines (not the rectangles; those are due to imperfect stitching).
If I open any particular tile at full size, it is not blurry and the artifacts are hardly observable. (Note that the image below is already scaled 4x.)
The overview image is created manually by scaling each tile using QImage::scaled and copying all of them to the corresponding region in the big image. I'm not using OpenCV's stitching.
I assume this happens because of the image content, because most of the overview images are OK.
The question is, how can I prevent such barely observable artifacts from becoming clearly visible after scaling? Is there some means of doing this in OpenCV or QImage?
Is there any algorithm to find out whether the image content could lead to such an effect for a given scale factor?
Many thanks in advance!
Are you sure the camera is calibrated properly? That the lighting is uniform? Is the lens clear? Do you have electrical components that interfere with the camera connection?
If you sum image frames taken of a uniform material (or of a non-uniform material moved randomly for a significant time), the resulting integrated image should be completely uniform.
If your produced image is not uniform, especially if you get systematic noise (like the apparent sinusoidal noise in the provided pictures), write a calibration function that transforms image -> calibrated image.
Filtering in Fourier space is another way to filter out the noise, but considering that the image is rotated you will lose precision, and you'll be cutting off components of the real signal too. The following empirical method will reduce the noise in your particular case significantly:
1. ground_output: composite image with the per-pixel sum of >10 frames (more is better) over a uniform material (e.g. an excited slab of phosphorus)
2. ground_input: the average (or sqrt(sum of px^2)) of ground_output
3. calib_image: ground_input / (per px) ground_output. Saved for the session, or persisted to a file (important: ensure no lossy compression such as JPEG!)
4. work_input: the images to work on
5. work_output = work_input * (per px) calib_image: the images calibrated for the systematic noise
If you can't create a perfect ground target, such as a uniform material on hand, do not worry too much. If you move any material uniformly (or randomly) for enough time, it will act as a uniform material in this case (think of a blurred photo).
This method has the added advantage of calibrating out the solitary faulty pixels that CCD cameras have (e.g. NormalPixel.value(signal)).
If you want to have more fun you can always fit the calibration function to something more complex than a zero-intercept line (steps 3 and 5).
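A NumPy sketch of steps 1-5 above (essentially flat-field correction); the function names and the epsilon guard against division by dead pixels are my own additions, not part of the original recipe:

    import numpy as np

    def build_calibration(ground_frames):
        """ground_frames: list of >10 frames taken of a uniform target."""
        ground_output = np.sum([f.astype(np.float64) for f in ground_frames], axis=0)  # step 1
        ground_input = ground_output.mean()                                            # step 2
        eps = 1e-12                                  # guard against division by dead pixels
        calib_image = ground_input / (ground_output + eps)                             # step 3
        return calib_image   # persist losslessly (e.g. .npy or 16-bit TIFF), never JPEG

    def calibrate(work_input, calib_image):
        """Apply the per-pixel calibration to a work image (step 5)."""
        return work_input.astype(np.float64) * calib_image

    # usage (hypothetical frames):
    # calib = build_calibration(uniform_frames)
    # fixed = calibrate(raw_tile, calib)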
I suggest scaling the image with some other software to verify if the artifacts are in fact caused by Qt or are inherent in the image you've captured.
The askew lines look a lot like analog TV interference, or CCTV noise induced by 50 or 60 Hz power lines running alongside the signal cable, or some other electrical interference on the signal.
If the image distortion is caused by signal interference then you can try to mitigate it by moving the signal lines away from whatever could be the source of the problem, or fit something to try to filter the noise (baluns for example).

Is there an easy (and not too slow) way to compare two images in Qt/QML to detect motion

I would like to implement a motion-detecting camera in Qt/QML for the Nokia N9. I hoped that there would be some built-in methods for computing image differences, but I can't find any in the Qt documentation.
My first thoughts were to downscale two consecutive images, convert to one bit per pixel, compute XOR, and then count the black and white pixels.
Or is there an easy way of using a library from somewhere else to achieve the same end?
Edit:
I've just found some example code on the Qt developer network that looks promising:
Image Composition Example.
To compare images, Qt has QImage::operator==(const QImage &). But I don't think it will work for motion detection.
But this may help: Python Motion Detection Library + Demo.
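For what it's worth, here is a small Pillow/NumPy illustration of the downscale / 1-bit / XOR / count idea from the question. It is not Qt code, but the same steps map onto QImage::scaled plus per-pixel access in C++ (the grid size and threshold are arbitrary choices):

    import numpy as np
    from PIL import Image

    def changed_fraction(frame_a, frame_b, size=(64, 48), threshold=128):
        """Return the fraction of downscaled pixels that differ between two frames."""
        def to_bits(img):
            small = img.convert("L").resize(size)        # downscale + grayscale
            return np.array(small) >= threshold          # 1 bit per pixel
        diff = np.logical_xor(to_bits(frame_a), to_bits(frame_b))
        return diff.mean()                               # 0.0 .. 1.0

    # hypothetical usage: report motion if more than 2% of the coarse pixels flipped
    # if changed_fraction(Image.open("prev.jpg"), Image.open("curr.jpg")) > 0.02:
    #     print("motion detected")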
