Import tiff + tfw + prj files in R

I want to import into R a map that I have downloaded from
http://www.naturalearthdata.com/downloads/10m-raster-data/10m-natural-earth-1/
When I download it I get 3 files with the following extensions:
.tif
.tfw
.prj
How should I read them? I can read the .tif file with
library(raster)
imported_raster <- raster('NE1_HR_LC_SR_W.tif')
but then the colours and the projection are different from the original tif.
Thanks

I was looking for some information on another topic when I came across this one.
It's quite normal that the colours appear different from the original tif. A colour scheme or colour distribution was probably applied to the original tif that isn't exported to or with the output tif; it's up to the user to set one (just like in ArcMap, for example).
I guess your exported tif has no projection at all when you load it in R? You need to use the information from the .tfw file to give each pixel (row, column) a coordinate.
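For what it's worth, the Natural Earth 1 tif is a 3-band RGB image, and raster() only reads the first band, which would also explain the odd colours. A minimal sketch, assuming the raster package and the file names from the question (whether crs() accepts the raw WKT from the .prj depends on your raster/PROJ versions):
library(raster)
# read all three bands instead of just the first one
imported_raster <- brick('NE1_HR_LC_SR_W.tif')
# display the bands as a red/green/blue composite
plotRGB(imported_raster)
# if the projection is missing, try assigning it from the .prj file
crs(imported_raster) <- paste(readLines('NE1_HR_LC_SR_W.prj'), collapse = '')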
Read in the .tfw file
Assume that your .tfw (ascii file) is something like this:
10.000000
0.000000000000000
0.000000000000000
-10.000000000000
137184.00000000000
180631.000000000
The last two rows are the X/Y coordinates of the center of the upper-left pixel of your tif.
The first row tells you what your spatial resolution is, in this case 10; the fourth row is the Y resolution, negative because row numbers increase downwards.
So if you know the coordinate of the center of the upper-left pixel, then the coordinate of the center of pixel (row = i, column = j) is
(137184 + j*10, 180631 - i*10).
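A minimal R sketch of that arithmetic, assuming the example .tfw above was saved next to the tif (rows and columns counted from zero):
tfw <- as.numeric(readLines('NE1_HR_LC_SR_W.tfw'))
x_size <- tfw[1]  # pixel size in x (10)
y_size <- tfw[4]  # pixel size in y (-10, rows run downwards)
x_ul   <- tfw[5]  # x of the center of the upper-left pixel
y_ul   <- tfw[6]  # y of the center of the upper-left pixel
# coordinate of the center of pixel (row = i, column = j)
pixel_center <- function(i, j) c(x = x_ul + j * x_size, y = y_ul + i * y_size)
pixel_center(0, 0)  # 137184, 180631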

Related

How to resize a NetCDF file so it matches the grid and coordinates of another NetCDF file in R

I have two .nc files, file1.nc and file2.nc. The dimensions of the files are the following:
file1 [lon,lat,t] = [21,9,t1], 0.25 x 0.25 grid
file2 [lon,lat,t] = [9,3,t2], 0.5 x 0.5 grid
Each netcdf file has a different time range, but I'm only interested in the xy grid.
I want to transform file1 so it has the same grid size and coordinates as file2. (A picture illustrating this was attached to the original question.)
Some remarks:
Some recommend using CDO (Climate Data Operators), but since I'm using my company's computer I don't have the permissions to install what's required to run CDO.
Others recommend resample(), but they apply it to rasters that only show one point in time, and I want to resize the entire NetCDF.
I would like to perform the transformation either on the netcdf file itself or on the multivariate array that results from extracting one variable from the netcdf file.
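For what it's worth, raster's resample() is not limited to a single time step: brick() reads every time slice of a netCDF variable, and resample() regrids all layers at once. A hedged sketch, assuming a variable named var and the ncdf4 package installed:
library(raster)
b1 <- brick('file1.nc', varname = 'var')  # 21 x 9 cells, 0.25 x 0.25 grid
b2 <- brick('file2.nc', varname = 'var')  # 9 x 3 cells, 0.5 x 0.5 grid
# regrid every time slice of file1 onto file2's grid in one call
b1_regridded <- resample(b1, b2, method = 'bilinear')
writeRaster(b1_regridded, 'file1_on_file2_grid.nc', format = 'CDF', overwrite = TRUE)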

QGIS gdal_contour not respecting scale_factor/offset for netCDF

I am working with some netCDF files and want to import netCDF parameter's data as a Raster and build a contour layer for it. I am using gdal_contour for this.
When I import the netCDF and choose a parameter (water_temp) in QGIS, the raster is loaded into the map with no problem and displays values in the range of roughly 4 degC to 31.25 degC.
However, when I use gdal_contour to make a contour layer for it, the values are in the range of -15944 to 11250. It certainly doesn't help that among other issues, it takes forever to generate the layer because I'm specifying an interval of 1.0 and the value range is far larger than the expected temperatures for Celsius.
From what I can tell, it looks like perhaps gdal_contour either isn't respecting the raster band's offset and scale_factor or has no knowledge of it. I understand that the netCDF is storing the temperature values as integers instead of floats to optimize file size, but I'm a bit confused by why QGIS can understand the offset when reading the netCDF into a raster layer, but not when generating a contour layer.
Am I missing something, or is there perhaps a caveat to using gdal_contour of which I'm unaware?
The command I am using to generate the contour layer is:
gdal_contour -b 1 -a water_temp -i 1.0 -snodata -30000.0 -f "ESRI Shapefile" NETCDF:"C:/path/to/input/netcdf/INPUT.nc":water_temp C:/path/to/output/layer/OUTPUT.shp
The scale_factor, offset, and associated metadata for the band are:
add_offset=20
missing_value=-30000
NETCDF_VARNAME=water_temp
scale_factor=0.001
STATISTICS_MAXIMUM=11250
STATISTICS_MEAN=5475.011083141
STATISTICS_MINIMUM=-15944
STATISTICS_STDDEV=5863.9957197805
units=degC
_FillValue=-30000
This question was answered here.
TL;DR: convert the netCDF to a GeoTIFF first using gdal_translate with the -unscale option to get GDAL to unpack the values, then run gdal_contour on the GeoTIFF to get a contour layer with the correctly unpacked values.
However, one thing that may be important to note is the scaled/unscaled data types: the unscaled type may have to be set explicitly for gdal_translate (using the -ot option) so that precision isn't lost during unscaling when the scaled data type is smaller than the unscaled one.
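For illustration, a sketch of that two-step workflow with the paths from the question (the intermediate file name is made up). Note that the fill value -30000 unscales to -10 (-30000 * 0.001 + 20), so that is the nodata value to carry forward; how -unscale interacts with the stored nodata metadata may vary by GDAL version, so check the output with gdalinfo:
gdal_translate -unscale -ot Float32 -a_nodata -10 NETCDF:"C:/path/to/input/netcdf/INPUT.nc":water_temp C:/path/to/output/UNSCALED.tif
gdal_contour -b 1 -a water_temp -i 1.0 -snodata -10.0 -f "ESRI Shapefile" C:/path/to/output/UNSCALED.tif C:/path/to/output/layer/OUTPUT.shp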

How to create an irregular raster with gdal using csv points

I am trying to create an irregularly shaped .tiff from a csv list of points (xyz data). I am doing this using gdal_grid.
I can generate the .tiff file no problem, but I cannot preserve the outline/shape of the original csv points.
Every time I generate the .tiff file it creates a raster of size (xmax-xmin) x (ymax-ymin) and assigns interpolated values to pixels that fall far away from my initial points.
Is it possible to generate a .tiff file of ONLY the points I provide?
For context, I am trying to generate a raster of xyz data for a river, and only want the raster in the river (not the entire bounding box of the river). I am only providing xyz data in the river.
I tried playing with the -nodata flag, and limiting -max_points to the number of points I've provided.
My final code (once everything is imported and declared):
gdal_grid -a invdist:power=2.0:smoothing=1.0:nodata=-999:max_points=2128164 -txe 582387.4 591069.4 -tye 4505028.08 4515344.079999999 -outsize 50 50 -zfield "z" -of GTiff -ot Float64 -l Book2 Book2.vrt Book2.tiff
Welcome to Stack Overflow, Derek!
Maybe there is a creation option inside gdal_grid that would do it, but I think you will have to achieve the desired result with an additional calculation:
Run the gdal_grid as you have it.
Create a concave hull from the given points. If this is a one time job, I suggest using QGIS (with grass tools), because there is some tweaking of concave hull parameters required.
Cut the raster with the created shapefile by using gdalwarp, as sketched below.
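A sketch of the gdalwarp call for step 3, assuming the hull from step 2 was saved as concave_hull.shp (the file names are made up):
gdalwarp -cutline concave_hull.shp -crop_to_cutline -dstnodata -999 Book2.tiff Book2_clipped.tiff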
Let me know if this got you through!

Using R for extracting data from a colour image

I have a scanned map from which I would like to extract the data in the form of long/lat and the corresponding value. Can anyone please tell me how I can extract the data from the map? Are there any packages in R that would enable me to extract data from the scanned map? Unfortunately, I cannot find the person who made this map.
Thank you very much for your time and help.
Take a look at OCR. I doubt you'll find anything for R, since R is primarily a statistical programming language.
You're better off with something like OpenCV.
Once you find the appropriate OCR package, you will need to identify the x and y positions of your characters, which you can then use to classify them as being on the x or y axis of your map.
This is not trivial, but good luck.
Try this:
Read in the image file using the raster package
Use the locator() function to click on all the lat-long intersection points.
Use the locator data plus the lat-long data to create a table of lat-long to raster x-y coordinates.
Fit a radial (x,y) -> (r,theta) transformation to the data. You'll be assuming the projected latitude lines are circular; they seem very close to circular, though not exact, based on some overlaying I tried earlier.
To sample from your image at a lat-long point, invert the transformation.
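A rough sketch of the first three steps, assuming a 3-band scan called map.png that GDAL can read, and three clicked graticule intersections whose lat-long values you know (the values below are made up):
library(raster)
img <- brick('map.png')
plotRGB(img)
# click each lat-long intersection on the plot, then finish (Esc)
clicks <- locator()
# pair the clicked pixel positions with the known graticule values,
# entered in the same order as the clicks
control <- data.frame(x = clicks$x, y = clicks$y,
                      lon = c(-10, 0, 10), lat = c(50, 50, 50))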
The next hard problem is getting from an image sample to the value of the thing being mapped. Maybe take a 5x5 grid of pixels and average, leaving out any grey pixels. It's even harder than that because some of the colours look like they are made by combining pixels of two different colours to make a new shade. Is this the best image you have?
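That averaging idea could look something like this, assuming img holds the image as a rows x cols x 3 array with values in [0, 1] (the function name and the 0.1 grey threshold are illustrative):
sample_colour <- function(img, row, col, half = 2) {
  win <- img[(row - half):(row + half), (col - half):(col + half), , drop = FALSE]
  r <- win[, , 1]; g <- win[, , 2]; b <- win[, , 3]
  # treat pixels whose channels are nearly equal as grey and drop them
  keep <- (pmax(r, g, b) - pmin(r, g, b)) > 0.1
  c(r = mean(r[keep]), g = mean(g[keep]), b = mean(b[keep]))
}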
I'm wondering what top-secret information has been blanked out from the top left corner. If it did say what the projection was that would help enormously.
Note you may be able to do a lot of the process online with mapwarper:
http://mapwarper.net
but I'm not sure if it can handle your map's projection.

Using ggplot's "annotation_raster" and reaching R's "memory ceiling"

I am using R to create a floorplan of a house with several layers like below, starting from the bottom layer:
basemap: a scanned version of the floorplan, which I put at the bottom layer to aid the reading
bed: the house has several dozen beds, scattered in different rooms; the beds have different colours based on the characteristics of the residents
piechart: each bed has a piechart on top of it; the piecharts are created from another set of the residents' characteristics, and some beds have piecharts while some don't.
The bed and piechart layers were created from a shp file that was itself based on the basemap (i.e. I used MapWindow to create a vector layer, imported the basemap as a raster layer at the bottom, then drew the beds one by one). The bed shp file is then imported into R, the bed polygons' centroids are calculated, and those centroids are used to position the piecharts.
I used read.jpeg to import the basemap into an imagematrix object, then used the new annotation_raster function in ggplot2 0.9 to put the basemap at the bottom map layer. Since the bed layer was created from the basemap, the bed layer superimposes on the basemap layer perfectly in ggplot2.
I can create the map without problem if the basemap is small enough (3000 x 3000 pixels). Now I have a basemap of 8000+ x 3000+ pixels (object.size 241823624 bytes); I was not aware of the R memory issue when I was creating the shp file. The ggplot object can be compiled with annotation_raster disabled, but R keeps saying that it cannot allocate memory of xxx MB when I try to include the basemap in the ggplot object.
I think this has nothing to do with the compression of the jpg file, as the dimensions do not change even if I compress the jpg file further. And I can't resize the jpg file, as my bed layer was created based on the original jpg file's dimensions.
Can anyone help me shrink the size of the basemap's imagematrix without changing the jpeg's dimensions, or suggest some other trick to deal with R's memory limitation? Thanks.
I fixed it.
I first created a new basemap image file with width and height halved, then in the annotation_raster I did the following:
chart <- chart + annotation_raster(db$temp.basemap,
                                   xmin = 0,
                                   xmax = basemap.xlength * 2,  # I stretched the image in R
                                   ymin = 0,
                                   ymax = basemap.ylength * 2)  # I stretched the image in R
Now the map can be compiled within R's memory limit. The only drawback I can think of is the reduction in image quality, but that is bearable, as it was 8000 x 3000 originally.
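For reference, the halving step can also be done in R; a minimal sketch, assuming the jpeg package and a hypothetical basemap.jpg (the original used read.jpeg, which returns a similar array):
library(jpeg)
img <- readJPEG('basemap.jpg')
# keep every second row and column to halve both dimensions
small <- img[seq(1, dim(img)[1], by = 2), seq(1, dim(img)[2], by = 2), ]
writeJPEG(small, 'basemap_small.jpg')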
