Why won't AutoCAD print a georeferenced JPEG that has been reduced? - r

On my projects I require an aerial photo of the site. I usually use ones in the public record, specifically the USGS high-resolution orthophotos available at https://earthexplorer.usgs.gov/. I have them uploaded to my server; they are TIFs with associated TFW and XML files (I am unsure what the XML is for). I can load these into AutoCAD and print them just fine. The average file size of these appears to be in the 250,000 KB range.
On some of my projects I need more detail, so I get privately flown aerial photos of the site. These come in JPG format and are georeferenced by a .jgw world file. They are about 25,000 KB depending on the site (I did not notice this at first, as I was told they are very large relative to the TIFs). When one of these is loaded into AutoCAD and I try to plot, the whole system freezes and won't plot for about 15-20 minutes. At first I thought this was a file size issue, so I tried to reduce the size in R. My code is as follows:
library(jpeg)
library(tiff)
# re-encode the aerial photo at a much lower JPEG quality to shrink the file
img <- readJPEG("ortho.jpg", native = TRUE)
writeJPEG(img, target = "ortho_reduced.jpg", quality = 0.2)
This got the file size down to about 9,000 KB. I loaded this into AutoCAD and it still would not plot, which leads me to believe that size is not the issue. With this in mind, what property of this photo could be freezing AutoCAD, and how could I fix it in R or in AutoCAD?

First, I would check the first and third causes listed here: https://knowledge.autodesk.com/support/autocad/troubleshooting/caas/sfdcarticles/sfdcarticles/Some-OLE-objects-do-not-plot.html and see if that fixes your issue.
Second, I would convert to a PNG (in my limited experience those seem to be the most stable in AutoCAD):
library(png)
# img is the image read with readJPEG() above; write the same data out as a PNG
writePNG(img, target = "ortho.png")
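One caveat, since the georeferencing in the question comes from a world file: the PNG needs its own world file. World files are plain text and their contents are the same regardless of image format, so a copy with the matching extension is enough. A minimal sketch, with file names following the ortho.jpg example above (the .jgw name is an assumption):
# reuse the JPG's world file for the PNG; only the extension changes
file.copy("ortho.jgw", "ortho.pgw")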
If you really need it as a JPG, I would also try the solutions here: https://www.landfx.com/kb/autocad-fxcad/images/item/1926-raster-disappear.html

Related

Is there a way to (de)serialize and hardcode an image in an R Markdown file?

I want to load an image into a .rmd file.
I am working on a university project where I have to hand in this .rmd file at the end. As a restriction, this file has to run out of the box, so unfortunately it is not an option to load the image from a given file path as I can't submit a folder containing the image or similar (and I don't want to upload it somewhere and access it via URL either).
I was looking for a way to serialize the image information and hard code it into the file but I couldn't find anything helpful related to that.
So in short, I want to do the following:
serialize image
hard code serialized image as variable in .rmd
deserialize hard coded variable and plot image data in .rmd
Is there a simple way to do this?
This seems to be infeasible. I've tried:
library(imager)  # provides load.image()
image <- load.image(path)
serialized_image <- serialize(image, NULL)
# write the serialized bytes out to a file named "test"
write(serialized_image, "test")
I then wanted to open test and copy its contents manually (Ctrl+C, Ctrl+V) into a variable in my .rmd, but it turned out the content is way too large for that (around 30 million lines of bytes). So in theory this would be a solution, but unless you want a 30-million-line .rmd, it is not an option.
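One workable route (a minimal sketch, not a definitive answer: it assumes the base64enc and png packages and uses a placeholder file name) is to encode the image file as a single base64 string, paste that string into the .rmd as a variable, and decode it back to raw bytes before plotting. The string is still long, but it is one value rather than millions of lines:
library(base64enc)
library(png)

# one-off step outside the .rmd: turn the image file into one base64 string
encoded <- base64encode("image.png")   # "image.png" is a placeholder path
cat(encoded, file = "encoded.txt")     # copy this string into the .rmd

# inside the .rmd: decode the hard-coded string and draw it
img_raw <- base64decode(encoded)       # back to the raw PNG bytes
img <- readPNG(img_raw)                # readPNG accepts a raw vector
plot(as.raster(img))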

Why does R raster::writeRaster() generate a picture that can't be displayed in Windows 10?

I read my hyperspectral (.raw) file and combine three bands into "gai_out_r". Then I write it out as follows:
writeRaster(gai_out_r, filepath, format = "GTiff")
Finally I get gai_out_r.tif.
But why can't Windows 10 display this small TIF, when it displays the TIF that I export the same way from ENVI (Save Image As > TIF) without a problem?
(Screenshots of how Windows 10 displays the two TIFFs were attached here.)
The default Windows image-viewing applications don't support hyperspectral images. Since you are just reading and combining three bands from your .raw file, the resulting image will be a hyperspectral image. You need dedicated software to view hypercubes, or you can view it with Spectral Python (SPy).
In SPy, using envi.save_image will save it as an ENVI-type file only. To save it as an RGB image file (readable in Windows) we need to use other methods.
You are using writeRaster to write a GTiff (GeoTIFF) format file. To write a standard TIF file you can use the tiff package instead. With writeRaster you could also write to a PNG:
writeRaster(gai_out_r, "gai.png")
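A minimal sketch of the tiff-package route (this assumes gai_out_r holds the three combined bands from the question and uses an arbitrary output name; the values are rescaled because writeTIFF expects them in the 0-1 range):
library(raster)
library(tiff)
arr <- as.array(gai_out_r)                       # rows x cols x 3 bands
arr <- (arr - min(arr)) / (max(arr) - min(arr))  # rescale to 0-1 (add na.rm = TRUE if there are NAs)
writeTIFF(arr, "gai_plain.tif")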
Cause of the issue:
I had a similar issue and realised that the exported .tif files had a different bit depth than .tif images I could open. The images could not be displayed by common applications, although they were not broken; I could still open them in R or QGIS. In other words, the values were encoded in a way Windows did not expect.
When you type ?writeRaster you will find there are various options for saving a .tif (or other format) with raster::writeRaster(). Follow the links therein to the dataType {raster} help page and you'll see there are various integer types to choose from.
Solution (write a Windows-readable GeoTIFF):
I set the following options to make the resulting .tif file readable (note the datatype option):
writeRaster(raster, filename = "/path/to/your/output.tif",
            format = "GTiff", datatype = "INT1U")
Note:
I realise your post is from two and a half years ago... Anyway, may this answer help others who encounter this problem.

Font Awesome 5: Differences fa-svg-with-js.css / fontawesome.min.js / fontawesome-all.min.js

I've read the official documentation but I didn't get much information regarding fa-svg-with-js.css.
Currently I'm using fontawesome-all.min.js and that's all I need to get going. However, the file size is ridiculously big (680 KB).
I noticed there was another folder with a file named fa-svg-with-js.css. What is this for? Do I need it?
I also noticed there's another file called fontawesome.min.js with a much smaller file size (27 KB). How do I use this, and why is it smaller than the SVG one?
I don't want to use the webfont version. Any suggestions?

Extract Zip File with 100% Compression Ratio

I noticed this problem when trying to run the following R script.
library(downloader)
download('http://download.cms.gov/nppes/NPPES_Data_Dissemination_Feb_2016.zip',
         dest = 'dataset.zip', mode = 'wb')
npi <- read.csv(unz('dataset.zip', 'npidata_20050523-20160207.csv'),
                as.is = TRUE)
The script kept spinning for some reason so I manually downloaded the data and noticed the compression ratio was 100%.
I am not certain that Stack Overflow is the best Exchange for this question, so I am open to moving it if another Exchange is suggested. The Open Data Exchange might be appropriate, but there isn't very much activity on that site.
My question is this: I work a lot with government-curated data from the Centers for Medicare and Medicaid Services (CMS). The data downloads from this site are zip files, and occasionally they have zip ratios of 100%. This is clearly impossible, since the uncompressed size is ~800PB. (CMS notes on their site that they estimate the uncompressed size to be ~4GB.) This has affected me on my work computer, and I have replicated the problem on a co-worker's computer as well as my own personal computer.
One example can be found here. (Click the link and then click on NPPES Data Dissemination.) There are other examples I've noticed, and I've emailed CMS about this. They respond that the files are large and can't be handled with Excel. I am aware of this, and it isn't really the problem I'm facing.
Does anyone know why this would be happening and how I can fix it?
Per cdeterman's point, how much system memory do you have available for R to do the uncompressing and subsequent loading of the data? Looking at both the image you posted and the link to the actual data, which reads as ~560 MB compressed, it did not pose a problem on my system (Win 10, 16 GB, Core i7, R v3.2.3) to download, uncompress, and read the uncompressed CSV into a table.
I would recommend, if nothing else works, decoupling your uncompressing and data loading steps. You might even go as far as invoking (depending on your OS) a system command from R to decompress the data, inspect it manually, and then separately issue piecewise read.table calls on the dataset.
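A minimal sketch of that decoupled approach (it assumes the file names from the question, that the archive extracts cleanly, and an arbitrary chunk size of 100,000 rows):
# step 1: decompress as its own step so the extracted file can be inspected
unzip("dataset.zip", files = "npidata_20050523-20160207.csv", exdir = "npi_data")

# step 2: read the large CSV piecewise from an open connection
con <- file("npi_data/npidata_20050523-20160207.csv", "r")
first_chunk <- read.csv(con, nrows = 100000, as.is = TRUE)   # includes the header
next_chunk  <- read.csv(con, nrows = 100000, header = FALSE,
                        col.names = names(first_chunk), as.is = TRUE)
close(con)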
Best of luck
rudycazabon

Slow rendering with image | MigraDoc

I am creating a PDF using MigraDoc and have now run into a little problem. I am using an A4-size image (2480 px x 3508 px, 96 KB in size) as a background for my PDF, using the following code:
Dim frame = Section.Headers.FirstPage.AddTextFrame
frame.AddImage("background.png")
frame.WrapFormat.Style = WrapStyle.Through
frame.RelativeHorizontal = RelativeHorizontal.Page
Using this causes the PDF to take around 10 times longer to render (say 10 seconds) than without it or with a smaller file (say 1 second). Is there any way to speed this up?
Thinking the frame could be the problem, I have also tried displaying the image without one, using:
Dim backing As Image = Section.Headers.FirstPage.AddImage("background.png")
But I still get the same results. The reason I want the time cut down is that I create up to 1,000 of these, and that can take a long time at the current speed.
I can't downsize the image any more, but I don't see why the size should be a problem. If it is the problem and there is no way around it, please do let me know.
Maybe it goes faster when you use a JPEG file (if that is an option).
JPEG files are copied into the PDF as they are. PNGs and other formats have to be converted into "PDF images".
You can use pages from PDF files just like images. This is another option you can try: create a PDF with your background image once, then create all the other files from that PDF instead of the PNG (if JPEG is not appropriate for your image).
There are two builds of MigraDoc: one using GDI+, one using WPF. You could try both to see if that makes a difference.
BTW: Images can be positioned like TextFrames, so there is no need to put an Image into a TextFrame.
