which file in the raster stack do I use? - raster

I have a *.zip file from ArcMap. It contains a bunch of files: in the top directory, one *.xml and one *.ovr, plus two nested directories (*.dat and *.nit files in one, and six *.adf files in the other).
I drag the *.zip into QGIS and there it sits in the layers box as a stack of 6 rasters (see picture).
My big bad question is:
If I want to perform raster calculations on this, or clip the raster, or warp it, or any other such thing, which file/layer do I choose, and why?

The answer: the largest file in the directory of spatial files is the one holding the actual raster data (in an ESRI ArcInfo Binary Grid this is normally w001001.adf; the other *.adf files hold the header, index and statistics). That's the one to drag into QGIS and manipulate.

Related

How to resize a NetCDF file so it matches the grid and coordinates of another NetCDF file in R

I have two .nc files, file1.nc and file2.nc. The dimensions of the files are the following:
file1 [lon,lat,t] = [21,9,t1], 0.25x0.25 grid
file2 [lon,lat,t] = [9,3,t2], 0.5x0.5 grid
Each NetCDF file has a different time range, but I'm only interested in the xy grid.
I want to transform file1 so it has the same grid size and coordinates as file2. I have attached a picture to make my explanation clearer.
see picture
Some remarks:
- Some recommend using CDO (Climate Data Operators), but since I'm using my company's computer I don't have the permissions to install what's required to run CDO.
- Others recommend resample(), but they apply it to rasters, which visualize only one point in time, and I want to resize the entire NetCDF.
- I would like to perform the transformation either on the NetCDF file itself or on the multidimensional array obtained by extracting one variable from the NetCDF file.

Java or Scala, GeoTools or GeoTrellis: how to convert Sentinel 2 data to a multiband GeoTIFF

I am trying to process Sentinel 2 data from (example)
http://sentinel-s2-l1c.s3-website.eu-central-1.amazonaws.com/#tiles/10/S/EG/2016/10/12/0/
The jp2 files are not georeferenced, and I need to put all of them as bands in a GeoTIFF. I have googled aplenty and found no way to do this in Java or Scala.
I am pretty familiar with GeoTools (I've done a lot of GeoTIFF processing with it), but I can't figure out how to:
a. make a GeoTIFF raster out of a jp2 file (given coordinates for the envelope), and
b. take those and make a multiband GeoTIFF out of them.
I am decent with Scala, so I've looked at GeoTrellis, but I don't see a solution there either.
Does anyone know how to make GeoTIFFs out of JP2 files (given a polygon) and then make a multiband GeoTIFF?
Thanks.
I've never tried this, but I would break the problem down into:
Import the JP2 image
Georeference the image
For each band in the image data, convert to GeoTIFF
Step 1 will need you to make sure that you have the JP2K plugin; that page also gives some sample code showing how to use it.
Step 2 should just be a case of building a GridCoverage using a GridCoverageFactory; see the user guide for an example (I am assuming you know the bounds of the grid, its projection, etc.).
Step 3 is a simple coverage writer; there is an example here.

Crop large raster in network drive

Is it possible to use R to crop out a subsection from a raster currently stored on a network drive without the need to download the file to the local drive first?
It is possible to read a raster from a network drive using the call:
img <- raster("Z:/path/file.bsq")
However, the raster I am working with exceeds the file size limit of my network drive. I would have thought that, because raster() does not actually read the entire file into memory, it should be possible to read just the metadata and then use:
img_crop <- crop(x=img, y=extent)
to read a cropped part of the image into memory. Of course, this fails at the stage of reading the raster, so I can't even try to crop it. I guess crop() has no way of knowing in advance how large the cropped raster will be, which is part of the reason this doesn't work.
Is there a way of doing this or do I just need to increase the file size limit of my network drive?

Is it possible to import a raster of a PDF file?

Our office scans data entry forms, and we lack any proprietary software able to do automated double entry (primary entry is done by hand, of course). We are hoping to provide a tool for researchers to highlight regions on forms and use the scanned versions to determine what the participant's entry was.
To do this, all I need for a very rough attempt is a way to read PDFs in as raster files, with coordinates as X and Y components and black-and-white "intensities" as a Z axis.
We use R mainly for statistical analysis and data management, so options in R would be great.
You could use the raster package in R. It doesn't support .pdf files, but it does read .tif, .jpg and .png (among many others).
Converting your PDFs into PNGs shouldn't be a big problem; look here for more information.
Once you have your png files ready, you can do the following:
png <- raster("your/png/file.png")
and then use the extract() function to get your brightness value from the picture. For example, say your PNG is 200x200 px and you want the pixel value at x = 150, y = 100 (note that extract() expects a two-column matrix of coordinates; a bare vector like c(150, 100) would be read as cell numbers instead):
value <- extract(png, cbind(150, 100))
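A common stumbling block with this approach is mixing up map coordinates (x, y, with y increasing upward from the bottom of the extent) and pixel indices (row, col, with rows counted down from the top). The conversion is simple; here is an illustrative, language-neutral sketch in Python (the function name and signature are hypothetical, not part of any library):

```python
def xy_to_rowcol(x, y, xmin, ymax, res_x, res_y, nrows, ncols):
    """Map a point (x, y) to the 0-based (row, col) of the cell containing it.

    Rasters index rows from the top, so the row is measured down from
    ymax, while the column is measured right from xmin.
    """
    col = int((x - xmin) / res_x)
    row = int((ymax - y) / res_y)
    if not (0 <= row < nrows and 0 <= col < ncols):
        raise ValueError("point falls outside the raster")
    return row, col
```

So for a 200x200 image with a unit-resolution extent starting at the origin, "row 100, column 150" and "x = 150, y = 100" name nearby but not identical cells; it pays to be explicit about which one you mean.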

QGLWidget. Update display while drawing.

I am reading 3D points from several files and want to display them, updating the view after each file is read: after file1 I want to see its points before adding those from file2, and before adding the points from file3 I want to see the points from files 1 and 2.
How can I do that with QGLWidget functions?
I searched around and found the updateGL() function. I thought of triggering a redraw every time before new points are added, but that seemed inefficient. Is there any way to save the context (or whatever it is called)?
I am using the library libQGLViewer, which uses QGLViewer class, inherits from QGLWidget.
You should call updateGL() every time you want the viewport to be redrawn.
Even a basic modern GPU can render millions of points, so don't worry about inefficiency: loading the point data from the files will be orders of magnitude slower than rendering it.
