Assume an SVG file that was generated via R, that represents a graph with about 160,000 data points, and whose file size exceeds 20 MiB. Specifically, let us assume that this SVG file contains 160,000 XML circle definitions. For example, see this graph. The file is thus not atypical for a scientific project.
Assume further that you wish to post-process this file in an SVG editor (e.g., Inkscape).
I have found that an SVG file larger than 20 MiB is virtually impossible to work on in a typical SVG editor on a typical user system (x86_64 GNU/Linux, 4 CPUs, 20 GiB RAM); the file barely even loads in Inkscape.
Several potential solutions to this problem come to mind, each with a severe drawback:
Optimize the SVG with tools such as svgo beforehand. While applying svgo does decrease the file size by about 20%, it also messes up the graph itself (as happens with the above-linked example file).
Use a different file format, such as PDF. However, editors such as Inkscape typically convert the PDF back into an SVG.
Save the graph via a different SVG renderer in R. However, both the base svg() device and svglite() from the R package of the same name generate files of approximately the same size.
Does anyone have a suggestion as to how to open and manually edit such SVG files with a large number of XML elements?
You've certainly managed to find a good stress test for SVG renderers :)
Your SVG contains what appears to be a totally unnecessary clip path that is applied to every data point.
If I surround the points with a group and apply the clip path to the group of points instead, rendering times are significantly reduced.
Chrome: 255 secs -> 58 secs
Firefox: 188 secs -> 14 secs
If I remove that clip path completely, I get:
Chrome: 27 secs
Firefox: 10 secs.
Unfortunately, these changes don't help rendering times in Inkscape, but hopefully they help you somehow. If you need rendering times faster than that, you likely need to do as Robert says and reduce the number of data points somehow.
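If you would rather not make that change by hand in a 20 MiB file, the regrouping can be scripted. Here is a minimal Python sketch, assuming the circles are direct children of the root element and all share the same clip-path attribute (file names are hypothetical):

import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

tree = ET.parse("plot.svg")  # hypothetical input file
root = tree.getroot()

# Move every circle into one group and hoist the shared clip path
# onto that group, instead of applying it per data point.
group = ET.SubElement(root, f"{{{SVG_NS}}}g")
clip = None
for circle in root.findall(f"{{{SVG_NS}}}circle"):
    clip = clip or circle.get("clip-path")
    circle.attrib.pop("clip-path", None)  # drop the per-point clip
    root.remove(circle)
    group.append(circle)
if clip:
    group.set("clip-path", clip)  # one clip for the whole group

# Note: the group is appended last, which may change stacking order.
tree.write("plot_grouped.svg", encoding="utf-8", xml_declaration=True)

If the circles sit inside a nested group in your file, point the findall/remove calls at that parent element instead.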
I have a TableView with around 40 rows and 4 columns. All of the 160 cells have a Rectangle with a gradient. I use Qt 5.13 with the Quick compiler enabled. Yet, when I animate all of these 160 cells at relatively large time intervals (100 ms), the UI becomes unresponsive. This means that rendering the gradients takes too long. In fact, if I only render 40 such cells, I can update at 100 ms intervals with ease.
The rectangles represent progress bars. They have gradients from top to bottom. However, the value (length) of the progress bars changes the gradients, too. This is why the gradients have to be recreated and re-rendered for each value (length) point.
Clearly, this is slow. What I would like to do is have the gradients cached for each value (length) point. They represent percentages, so I would only need to cache 101 of them. I am quite certain that this would improve performance here.
However, how can I cache gradients (or any objects) myself in QML? The more general (or bonus) question is: how can I have a shared QML resource between multiple QML files?
You can try loading pre-rendered images instead of rendering the gradients, if you have access to enough memory. Maybe you can also try scaling SVGs.
I'm currently working on a paper using Google Earth Engine, but when I try to collect Landsat imagery, the results come back as "transparent" maps. When you zoom in, you see that the transparency comes from lines without image data. I figured out that this happens only with Landsat 7 data from later than 2003. Does anyone know what this is and how it can be solved?
My code is simply:
// Median composite of the filtered Landsat 7 SR collection.
var image = landsat7_SurfaceReflectance
    .filterBounds(geometry)
    .filterDate('2004-06-01', '2004-08-01')
    .median();
Map.addLayer(image, imageParams, "image");
I've added two images showing the issue.
What you observe is the Scan Line Corrector (SLC) failure (https://landsat.usgs.gov/slc-products-background); there are simply no valid measurements available for these dates.
The only solution is to replace these missing pixels with valid pixels from the closest images (in time). One of the algorithms to do this was discussed here: Algorithm to improve the gaps in Landsat 7 images.
Here is a script that tries to achieve this, but it is already a variation of the original USGS algorithm, which was designed to process RAW images, not SR (surface reflectance): https://code.earthengine.google.com/17ee7142a98fdb1c37b7da4aa679587c. You may need to mask and fill cloud and cloud-shadow pixels as well to create a good-looking composite.
Another solution is to increase the time interval.
You can also try combining Landsat 7 with Landsat 5. Unfortunately, Landsat 5 has no images for the above location/time combination, but it may work for other location/time combinations, because the two missions overlap in time.
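For the Landsat 5 + Landsat 7 combination, a minimal sketch with the Earth Engine Python API could look like the following; the collection IDs are assumptions (check the data catalog for the current names), and the geometry is a placeholder:

import ee

ee.Initialize()

# Placeholder area of interest; replace with your own geometry.
geometry = ee.Geometry.Point([5.0, 52.0])

# Assumed surface-reflectance collection IDs for Landsat 5 and 7.
l5 = ee.ImageCollection('LANDSAT/LT05/C01/T1_SR')
l7 = ee.ImageCollection('LANDSAT/LE07/C01/T1_SR')

merged = (l5.merge(l7)
            .filterBounds(geometry)
            .filterDate('2004-06-01', '2004-08-01'))

# In the median composite, Landsat 5 pixels (where available) fill
# the Landsat 7 SLC-off stripes instead of leaving them masked.
image = merged.median()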
I create a big image stitched out of many single microscope images.
Suddenly (after several months of working properly), the stitched overview images became blurry, and they contain strange structural artefacts like askew lines (not the rectangles; those are due to imperfect stitching).
If I open any individual tile at full size, it is not blurry and the artefacts are hardly observable. (Note that the image below is already scaled 4x.)
The overview image is created manually by scaling each tile using QImage::scaled and copying all of them to the corresponding region of the big image. I'm not using OpenCV's stitching.
I assume this happens because of the image content, because most of the overview images are OK.
The question is: how can I prevent such barely observable artefacts from becoming clearly visible after scaling? Are there any means to do this in OpenCV or QImage?
Are there any algorithms to determine whether image content could lead to such effects for a given scale factor?
Many thanks in advance!
Are you sure the camera is calibrated properly? That the lighting is uniform? Is the lens clear? Do you have electrical components that interfere with the camera connection?
If you sum image frames of photos of a uniform material (or of a non-uniform material moved randomly for a significant time), the resulting integrated image should be completely uniform.
If your produced image is not uniform, especially if you get systematic noise (like the apparent sinusoidal noise in the provided pictures), write a calibration function that transforms image -> calibrated image.
Filtering in Fourier space is another way to filter out the noise, but considering that the image is rotated, you will lose precision, and you'll be cutting off components of the real signal, too. The following empirical method will reduce the noise in your particular case significantly:
1. ground_output: a composite image with the per-pixel sum of >10 frames (more is better) over a uniform material (e.g. an excited slab of phosphorus)
2. ground_input: the average (or sqrt(sum of px^2)) of ground_output
3. calib_image: ground_input /(per px) ground_output. Saved for the session, or persisted in a file (important: make sure there is no lossy compression, e.g. JPEG)
4. work_input: the images to work on
5. work_output = work_input *(per px) calib_image: images calibrated for systematic noise
If you can't create a perfect calibration target, i.e. you don't have a uniform material on hand, do not worry too much: if you move any material uniformly (or randomly) for enough time, it will act as a uniform material in this case (think of a blurred photo).
This method has the added advantage of calibrating out the solitary faulty pixels that CCD cameras have (e.g. NormalPixel.value(signal)).
If you want to have more fun, you can always fit the calibration function to something more complex than a zero-intercept line (steps 3 and 5).
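To illustrate, here is a minimal NumPy sketch of steps 1-5; it assumes the frames are grayscale arrays of identical shape, and all function and variable names are mine rather than a library API:

import numpy as np

def build_calibration(frames):
    # Step 1, ground_output: per-pixel sum of many uniform-target frames.
    ground_output = np.sum(np.stack(frames).astype(np.float32), axis=0)
    # Step 2, ground_input: a single scalar for the ideal uniform level.
    ground_input = ground_output.mean()
    # Step 3, calib_image: per-pixel gain; guard against dead (zero) pixels.
    return ground_input / np.maximum(ground_output, 1e-6)

def apply_calibration(work_input, calib_image):
    # Steps 4 and 5, work_output: image corrected for systematic noise.
    return work_input.astype(np.float32) * calib_image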
I suggest scaling the image with some other software to verify if the artifacts are in fact caused by Qt or are inherent in the image you've captured.
The askew lines look a lot like analog TV interference, or CCTV noise induced by 50 or 60 Hz power lines running alongside the signal cable, or some other electrical interference on the signal.
If the image distortion is caused by signal interference then you can try to mitigate it by moving the signal lines away from whatever could be the source of the problem, or fit something to try to filter the noise (baluns for example).
Internally, Flash obviously keeps a list of the primitives drawn using Graphics. So I wondered: if you have many such primitives in a Sprite, can you reposition/remove/alter individual items rather than clearing and redrawing everything? Or is this deeper into the bowels of Flash than you're allowed (or recommended) to go?
Drawing primitives aren't accessible to user code once they've been committed to the graphics context, but if you need fast drawing objects, you should use Shapes instead of Sprites. Sprites are containers that can hold other sprites and graphics contexts, whereas Shapes are objects with only a graphics context and are non-interactive.
Sprite -> DisplayObjectContainer -> InteractiveObject -> DisplayObject
Shape -> DisplayObject
Unfortunately, it is impossible: Once the items are drawn, you can only modify the full shape, but not the drawing itself.
To give you more of an explanation, I googled how Flash actually computes display objects. Unfortunately, I couldn't find anything specific.
But I found enough to make an educated guess: [EDIT]: I found a very interesting PDF on the Anatomy of a Flash. It explains the rendering tree and how graphics objects are treated internally.
I know for a fact that all shape tweens created in the IDE are compiled into shape sequences (each frame is stored as a separate image). And it makes sense to do it that way: each new frame of the movie must be calculated; all vector images are added to a tree, each is rendered as a bitmap, and these are combined and drawn as one final bit plane in order to be displayed. So it is only logical to do every possible shape calculation at compile time, rather than at runtime.
Then again, a bitmap stores 32 bits of color information for every single pixel, while vectors are stored as simple values: x and y coordinates, line style, fill style, etc. Some vectors can be grouped, so that for more complex shapes, line and fill styles only have to be stored once, and only coordinates are necessary for the rest. Also, primitive shapes like circles and rectangles require less information than objects combined from many individual points and lines.
[EDIT]: The above mentioned PDF says this:
Both AS3 and AS3 DisplayObjects are converted to SObjects internally. SObjects have a character associated. Based on the character type it has different drawing methods, but it all resumes to drawing fills with different source of colors.
It would take a very, very complex vector shape to require more individual pieces of information than its bitmap representation, provided it is larger than a few pixels in width and height. Therefore, keeping simple shapes as vector representations consumes considerably less memory than storing full bitmaps - and so it is logical not to do shape rendering at compile time either (except for complicated shapes - then the "cacheAsBitmap" property comes into play).
Consider what I've said about vectors, line style and fill style, etc. - sounds quite a lot like the sequence of commands we have to write when drawing in ActionScript, right? I would assume these commands are simply converted 1:1 into exactly the kind of vector representations I was talking about. This would make the compiler faster, the binaries smaller, and the handling of both the IDE shapes and the AS shapes exactly the same.
[EDIT]: Turns out I was not quite right on that:
Edge & Colors
LSObjects tree is traversed and a list of edges is created
Edges have colors associated
Strokes are converted to edges
Colors are sources of display data, eg. Bitmaps, Video, Solid fills, Gradients
Rasterization
Edges are sorted and a color is calculated for each pixel – pixels are touched only once
Presentation
After the main rasterizer is done painting, the memory buffer is copied to the screen
Now imagine all of those vectors were freely editable:
The sequence of commands would no longer be final! What if you were to add, move or erase one at runtime? For example: Having a rectangle inside of a filled rectangle subtracts the inner shape from the outer shape. What if you moved one of the corner points to the outside? The result would be a completely different shape! Or if you added one point? You could not store the shape as a rectangle any longer, requiring 5 point items to draw the same thing that once had been one rect item. In short: All the groupings and memory optimizations would no longer work. And it would also slow down runtime graphics considerably. That's why it is only allowed to add new elements to the shape, but not to modify them once they are drawn. And why you have to clear and redraw your graphics, if you want existing shapes to change.
[EDIT]: You can always do complex stuff by doing the calculations yourself. I still believe it was a good decision not to integrate those into basic graphics functionality.
With Flash CS5, and the XFL file format, this data is now accessible as XML.
For my example, you could make a tile map composed of 'Graphic' items from a MovieClip with various frames being various tiles. Instantly you come to the problem of needing to access those inaccessible frame indexes from 'Shape' objects.
If you put them into a symbol (even one that is not exported), you can find it in a file in your LIBRARY folder (after saving as 'xfl'). It mirrors the Library contents.
<DOMSymbolItem xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://ns.adobe.com/xfl/2008/" name="Tileset_Level_Test" itemID="4e00fe7f-00000450" linkageExportForAS="true" linkageClassName="Tileset_Level_Test" sourceLibraryItemHRef="Symbol 1" lastModified="1308719656" lastUniqueIdentifier="3">
  <timeline>
    <DOMTimeline name="Tileset_Level_Test">
      <layers>
        <DOMLayer name="Layer 1" color="#4FFF4F" current="true" isSelected="true" autoNamed="false">
          <frames>
            <DOMFrame index="0" keyMode="9728">
              <elements>
                <DOMSymbolInstance libraryItemName="Tileset_Test" name="" symbolType="graphic" firstFrame="8" loop="play once">
                  <transformationPoint>
                    <Point/>
                  </transformationPoint>
                </DOMSymbolInstance>
                <DOMSymbolInstance libraryItemName="Tileset_Test" name="" symbolType="graphic" firstFrame="4" loop="play once">
                  <matrix>
                    <Matrix tx="48"/>
                  </matrix>
                  <transformationPoint>
                    <Point/>
                  </transformationPoint>
                </DOMSymbolInstance>
                ... lots more...
              </elements>
            </DOMFrame>
          </frames>
        </DOMLayer>
      </layers>
    </DOMTimeline>
  </timeline>
</DOMSymbolItem>
The XML looks quite complex, but you can process it down to something much simpler with the XML class and, for instance, construct a collision mask from a MovieClip mirroring those frame indexes, or identify spawn points and other special classes of things. Or you might process the data and draw the whole map yourself, having only needed a way to build it visually. All you might really care about are the tx and ty attributes in the Matrix (for where a tile is placed) and the firstFrame attribute in the DOMSymbolInstance (for which tile).
Anyways, you could preprocess it with an AIR applet to make just the data you want, and then either poop out a .as file to include in the project, or simplified XML, or whatever you like. Or use whatever other tools/languages you prefer, and add that processing step to your build scripting.
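As a sketch of that processing step outside Flash, here is how the tile placements could be pulled out of the sample XML above with Python; the namespace URI comes from the sample, and the file name is hypothetical:

import xml.etree.ElementTree as ET

XFL_NS = '{http://ns.adobe.com/xfl/2008/}'

tree = ET.parse('Tileset_Level_Test.xml')  # the symbol's file from LIBRARY

tiles = []
for inst in tree.iter(XFL_NS + 'DOMSymbolInstance'):
    frame = int(inst.get('firstFrame', '0'))  # which tile
    matrix = inst.find(XFL_NS + 'matrix/' + XFL_NS + 'Matrix')
    tx = float(matrix.get('tx', '0')) if matrix is not None else 0.0
    ty = float(matrix.get('ty', '0')) if matrix is not None else 0.0
    tiles.append((frame, tx, ty))  # where the tile goes

# 'tiles' now lists (tile index, x, y) for every placed instance.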
The XFL file format is also handy for tracking down and fixing all manner of things which Flash is too broken/buggy/AFU to fix, such as leftover font references in obscure parts of parts of parts.... You can either fix them in the library, or literally delete the file of the offending part, or edit the XML by hand. grep, sed, find, and xargs are all your friends for these tasks, especially for things like snapping all coordinates to integer values or to proper cell boundaries, since all of Flash's 'snapping' is horribly broken, too. Piping XML files through sed can be quite hazardous to files that you have not backed up, but quite rewarding for evil people who know what they're up to and use version control.
Well, every DisplayObject has only one graphics reference. So if you want to move (or scale, etc.) several graphic objects in one Sprite, I suggest you use the display tree as it was intended.
Just add several children (Sprites or MovieClips or ...) in one Sprite each being redrawn when necessary.
I am currently doing an assignment and cannot find the answer to this question, as "algorithm" is supposed to mean a way of solving problems as such.
The main difference is that JPEG uses a lossy algorithm, while GIF uses a lossless algorithm (LZW). In addition, GIF is limited to 256 colors, while JPEG is truecolor (8 bits per color channel per pixel).
Some info is here.
Basically, JPEG is good for real-life images, and GIF is good for computer-generated images with solid areas, or when you need some text not to be blurred (JPEG is lossy, GIF is not). There are many other differences, too.
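If you want to see the difference for yourself, here is a small sketch using the Python Pillow library (file names are hypothetical) that saves the same picture in both formats:

from PIL import Image

img = Image.open('photo.png')  # hypothetical truecolor input

# JPEG: lossy and truecolor; 'quality' trades file size against artifacts.
img.convert('RGB').save('photo.jpg', quality=85)

# GIF: lossless LZW, but the image must first be quantized to <= 256 colors.
img.convert('RGB').quantize(256).save('photo.gif')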
See also Wikipedia:
GIF
JPEG
For bonus points in your assignment you might want to mention other commonly used standards such as PNG.
I found a very good website that explains the difference between GIF and JPEG, plus it shows image examples of several scenarios. Enjoy.
http://www.siriusweb.com/tutorials/gifvsjpg/