I really need some advice. I have a raster with many pixels, each holding one value, and I want to do a spatial analysis of them: I want to see which regions have the most pixels and which do not. It sounds simple, but it's not.
My idea was to do this with a kernel density estimate, but that does not work on a RasterLayer. It doesn't work with ppp either, because you can't transform a raster into that data type. I'm really lost and don't know what could work, so I would be very grateful for some help.
My pixels look like this:
There must be a way to show the regions with the most pixels and so on, but I don't know how to do it.
Short answer: convert your raster object to a pixel image of class im in the spatstat package. Then use Smooth.im. Example:
library(spatstat)
Z <- as.im(my_raster_data)
S <- Smooth(Z)
plot(S)
Long answer: you're using the term "pixel" in a nonstandard sense. The pixels are the small squares that make up the image. Your illustration shows a pixel image in which the majority of pixels have the value 0 (represented by white colour), but a substantial number of individual pixels have values greater than 0 (ranging up to about 0.3).
If I understand correctly, you would like to generate a colour image or heat map which has a brighter/warmer colour in those places where more of the pixels have positive values.
The simplest way is to use Gaussian smoothing of the pixel values in the image. This will calculate a spatially-varying average of the values of the nearby pixels, including the zero pixels. To do this, convert the raster to a pixel image of class im in the spatstat package:
Z <- as.im(my_raster_object)
then apply Smooth.im
S <- Smooth(Z)
plot(S)
Look at the help for Smooth.im for options to control the degree of smoothing.
If you wanted to ignore the actual colours (pixel values) in the input data, you could just transform them to binary values before smoothing:
B <- (Z > 0)
SB <- Smooth(B)
plot(SB)
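For readers working in Python rather than R, the same binarise-then-smooth idea can be sketched with scipy (scipy.ndimage.gaussian_filter standing in for Smooth; the raster values below are invented toy data):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy stand-in for the raster: mostly zeros, one cluster of positive pixels.
raster = np.zeros((100, 100))
raster[40:50, 40:50] = 0.2

# Binarise, then Gaussian-smooth; bright areas of the result are the
# regions where positive pixels are concentrated (cf. Smooth(Z > 0)).
binary = (raster > 0).astype(float)
density = gaussian_filter(binary, sigma=5)
```

Plotting `density` as a heat map highlights the regions where positive pixels cluster; the bandwidth `sigma` plays the same role as the smoothing parameter of Smooth.im.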
I have a background in mathematics and machine learning, but I'm quite new to image compression. I was thinking about the optimal way to compress an image using just a lookup table. This means: given an original image with N unique values, replace it with a new image with M unique values, where M < N. For a fixed M, my question is how to pick those values. I realized that if we take the total error (MSE) over all pixels as the figure of merit, all the information must be in the histogram of pixel intensities. Intuitively, the most common values should be mapped to closer new values than the uncommon ones, making the new values denser in the high regions of the histogram than in the low ones. Hence I was wondering whether there exists a mathematical formula that:
- given the histogram h(x) of all the pixel intensities,
- given the number M of new unique values,
defines the set of M new values {X_new} that minimizes the total error.
I tried to define the loss function and take its derivative, but some argmax operations appeared that I don't know how to differentiate. However, my intuition tells me that a closed-form formula should exist.
Example:
Say we have an image with just 10 pixels, with values {1,1,1,1,2,2,2,2,3,3}, so initially N=3, and we are asked to select the M=2 unique values that minimize the error. It seems clear that we should pick the two most common ones, so {X_new}={1,2} and the new image will be "compressed" as {1,1,1,1,2,2,2,2,2,2}. If we are asked to pick M=1, we pick {X_new}=2 to minimize the error.
Thanks!
This is called color quantization or palettization. It is essentially a clustering problem, usually in the 3D RGB space. Each cluster becomes a single color in the downsampled image. The GIF and PNG image formats both support palettes.
There are many clustering algorithms out there, with a lot of research behind them. For this, I would first try k-means and DBSCAN.
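As a rough illustration of the clustering view, here is a minimal 1-D Lloyd's (k-means) sketch in plain numpy, applied to the example from the question; a real image would use an optimized implementation such as scikit-learn's KMeans:

```python
import numpy as np

def kmeans_1d(values, m, iters=100):
    """Plain Lloyd's algorithm on scalar intensities (a sketch, not
    production code). Centers are initialised at evenly spaced quantiles."""
    values = np.asarray(values, dtype=float)
    centers = np.percentile(values, np.linspace(0, 100, 2 * m + 1)[1::2])
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        new = np.array([values[labels == k].mean() if np.any(labels == k)
                        else centers[k] for k in range(m)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, centers[labels]

pixels = [1, 1, 1, 1, 2, 2, 2, 2, 3, 3]
centers, quantized = kmeans_1d(pixels, m=2)
```

On this example with M=2 it converges to centers {1, 7/3}: the total squared error (4/3) is lower than snapping the 3s to 2 (error 2), which illustrates that the MSE-optimal palette values need not be values already present in the image.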
Note that palettization would only be one part of an effective image compression approach. You would also want to take advantage of both the spatial correlation of pixels (often done with a 2-D spatial frequency analysis such as a discrete cosine transform or wavelet transform), as well as taking advantage of the lower resolution of the human eye in color discrimination as opposed to grayscale acuity.
Unless you want to embark on a few years of research to improve the state of the art, I recommend that you use existing image compression algorithms and formats.
I am pretty new to R, and have been attempting to use the mask function on a raster image with 250 m x 250 m resolution. My problem is that for some reason I am getting overhang: there are pixels which lie both inside and outside of the polygon. Is there a way to tighten the tolerance of mask so that only pixels that are at least a certain percentage inside the polygon are accepted?
green is my polygon, blue is the resulting mask
I am guessing that you are using the rasterize function from the raster package.
The grid cells are rather large relative to the polygons you are using. rasterize uses the center of each cell to determine whether it is covered. However, if you use the argument getCover=TRUE you will get a value between 1 and 100 indicating the percentage of each cell that is covered by the polygon. You can then apply a threshold of your choice.
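Once you have the percent-cover layer, the thresholding step itself is a one-liner. A language-neutral numpy sketch (the cover values here are invented):

```python
import numpy as np

# Hypothetical percentage-cover grid, as produced by
# rasterize(..., getCover=TRUE): values from 0 to 100.
cover = np.array([[ 0,  5, 60],
                  [10, 80, 95],
                  [ 0, 40, 100]])

# Keep only cells that are at least half inside the polygon.
mask = cover >= 50
```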
While I use R quite a bit, I have just started an image analysis project using the EBImage package. I need to collect a lot of data from circular/elliptical images. The built-in function computeFeatures gives the maximum and minimum radius, but I need all of the radii it computes.
Here is the code. I have read the image, thresholded and filled.
actual.image = readImage("xxxx")
image = actual.image[,2070:4000]
image1 = thresh(image)
image1 = fillHull(image1)
As there are several objects in the image, I used the following to label them:
image1 = bwlabel(image1)
I generated features using the built-in function:
features = data.frame(computeFeatures(image1,image))
Now, computeFeatures gives the max and min radius, but I need all the radii of all the objects it has computed for my analysis. At least if I can get the coordinates of the boundaries of all objects, I can compute the radii with some other code.
I know images are stored as matrices, and I could come up with a convoluted way to find the boundaries and then compute the radii, but I was wondering if there is a more elegant method?
You could try extracting each object plus some padding, and plotting the x and y intensity profiles for each object. An intensity profile is simply the sum over rows/columns, which can be computed using rowSums and colSums in R.
Then you could find where it drops off by splitting each intensity profile in half and finding the nearest minimum value.
Maybe an example would help clear things up:
Hopefully this makes sense
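One way to make the "coordinates of boundaries" route concrete: a Python/scipy sketch (EBImage itself is R; the function and toy mask below are mine) that takes a labelled mask, extracts each object's one-pixel boundary, and measures the distance from the centroid to every boundary pixel:

```python
import numpy as np
from scipy import ndimage

def boundary_radii(labels, label):
    """Distances from an object's centroid to each of its boundary
    pixels (a rough stand-in for radii from computeFeatures)."""
    obj = labels == label
    # Boundary = object pixels removed by a one-pixel erosion.
    boundary = obj & ~ndimage.binary_erosion(obj)
    cy, cx = ndimage.center_of_mass(obj)
    ys, xs = np.nonzero(boundary)
    return np.hypot(ys - cy, xs - cx)

# Toy mask: a filled 11x11 square object labelled 1.
mask = np.zeros((15, 15), dtype=int)
mask[2:13, 2:13] = 1
radii = boundary_radii(mask, 1)
```

In EBImage itself, ocontour() should give you the boundary coordinates of each labelled object directly, after which the same centroid-distance computation applies.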
I have been using Matlab 2011b and contourf/contourfm to plot 2D data on a map of North America. I started from the help page for contourfm on the MathWorks website, and it works great if you use their default data called "geoid" and reference vector "geoidrefvec."
Here is some simple code that works with the preset data:
figure
axesm('MapProjection','lambert','maplo',[-175 -45],'mapla',[10 75]);
framem; gridm; axis off; tightmap
load geoid
%geoidrefvec=[1 90 0];
load 'TECvars.mat'
%contourfm(ITEC, geoidrefvec, -120:20:100, 'LineStyle', 'none');
contourfm(geoid, geoidrefvec, -120:20:100, 'LineStyle', 'none');
coast = load('coast');
geoshow(coast.lat, coast.long, 'Color', 'black')
whitebg('w')
title(sprintf('Total Electron Content Units x 10^1^6 m^-^2'),'Fontsize',14,'Color','black')
%axis([-3 -1 0 1.0]);
contourcbar
The problem arises when I try to use my data. I am quite sure the reference vector determines where the data should be plotted on the globe but I was not able to find any documentation about how this vector works or how to create one to work with different data.
Here is a .mat file with my data. ITEC is the matrix of values to be plotted. Information about the position of the grid relative to the earth can be found in the cell array called RT, but the basic idea is: ITEC(1,1) refers to Lat = 11, Long = -180, and ITEC(58,39) refers to Lat = 72.5, Long = -53, with evenly spaced data in between.
Does anyone know how the reference vector defines where the data is placed on the map? Or perhaps there is another way to accomplish this? Thanks in advance!
OK, so I figured it out. I realized that, given that there are only three elements in the vector, the spacing (in degrees) between latitude data points must be the same as the spacing between longitude data points; that is, the horizontal spacing must equal the vertical spacing, for instance 1 degree.
The first value in the reference vector is the spacing (in degrees) between data points (I think... this works in my case), and the second and third values are the minimum latitude and minimum longitude respectively.
In my case the data was equally spaced in each direction, but the vertical and horizontal spacings differed, so I simply interpolated the data to a 1x1 degree grid and set the first value in the vector to 1.
Hopefully this will help someone with the same problem.
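To make the interpretation above concrete, here is a small sketch of the index-to-coordinate mapping it implies, treating the vector as [spacing in degrees, minimum latitude, minimum longitude]. This is purely illustrative and not a substitute for the Mapping Toolbox documentation, which defines the first entry as cells per degree (the two coincide at 1-degree spacing):

```python
def cell_to_latlon(row, col, refvec):
    """Map 1-based MATLAB-style grid indices to (lat, lon) under the
    [spacing_deg, min_lat, min_lon] reading of the reference vector."""
    spacing, min_lat, min_lon = refvec
    return min_lat + (row - 1) * spacing, min_lon + (col - 1) * spacing

# The poster's interpolated grid: ITEC(1,1) should map to lat 11, lon -180.
lat, lon = cell_to_latlon(1, 1, (1, 11, -180))
```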
Quick question though: since I answered my own question, do I get the bounty? I'd hate to lose 50 'valuable' reputation points, haha.
I have a gray-scale image and I want to make a function that:
- closely follows the image,
- is always greater than the image, and
- is smooth at some given scale.
In other words, I want a smooth function that approximates the local maximum of another function while overestimating that function at all points.
Any ideas?
My first pass at this amounted to picking the "high spots" (by comparing the image to a least-squares fit of a high-order 2-D polynomial) and fitting a 2-D polynomial to them and their slopes. As the first fit required more working space than I had address space, I don't think it's going to work and I'm going to have to come up with something else...
What I did
My end target was to do a smooth adjustment on an image so that each local region uses the full range of values. The key realization was that an "almost perfect" function would do just fine for me.
The following procedure (that never has the max function explicitly) is what I ended up with:
Find the local mean and standard deviation at each point using a "blur"-like function.
Offset the image to get a zero local mean (image -= mean;).
Divide each pixel by its local stdev (image /= stdev;).
Most of the image should now be in [-1, 1]. (Oddly enough, most of my test images have better than 99% of pixels in that range, rather than the ~68% that would be expected for a normal distribution.)
Find the standard deviation of the whole image.
Map some span +/- n*sigma to your output range.
With a little manipulation, that can be converted to find the Max function I was asking about.
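A sketch of those steps in Python, with gaussian_filter as my stand-in for the "blur"-like function and the local variance computed via the blur-of-squares identity var = E[x^2] - E[x]^2 (parameter values are arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_normalize(img, sigma=10.0, n=2.0, eps=1e-8):
    """Local contrast stretch: subtract a blurred mean, divide by a
    blurred standard deviation, then map +/- n*sigma to [0, 1]."""
    img = img.astype(float)
    mean = gaussian_filter(img, sigma)            # step 1: local mean
    var = gaussian_filter(img ** 2, sigma) - mean ** 2
    std = np.sqrt(np.clip(var, 0, None)) + eps    # step 1: local stdev
    z = (img - mean) / std                        # steps 2-3: zero mean, unit stdev
    g = z.std()                                   # step 5: global stdev
    return np.clip((z + n * g) / (2 * n * g), 0, 1)  # step 6: map +/- n*sigma

rng = np.random.default_rng(0)
img = rng.random((64, 64))
out = local_normalize(img)
```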
Here's something that's easy; I don't know how good it is.
To get smooth, use your favorite blurring algorithm. E.g., average points within radius 5. Space cost is order the size of the image and time is the product of the image size with the square of the blurring radius.
Take the difference of each pixel between the original and blurred images, find the maximum value of (original[i][j] - blurred[i][j]), and add that value to every pixel in the blurred image. The sum is guaranteed to overapproximate the original image. Time cost is proportional to the size of the image, with constant additional space (if you overwrite the blurred image after computing the max).
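A minimal sketch of this blur-plus-offset construction (uniform_filter as the blurring step; the radius and image are arbitrary):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_upper_bound(img, radius=5):
    """Blur, then shift up by the worst-case underestimate so the
    result dominates the original everywhere."""
    blurred = uniform_filter(img.astype(float), size=2 * radius + 1)
    return blurred + (img - blurred).max()

rng = np.random.default_rng(0)
img = rng.random((64, 64))
bound = smooth_upper_bound(img)
```

The result is exactly as smooth as the blurred image, just lifted by a constant, which is why the overestimate can be loose far from the worst pixel.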
To do better (e.g., to minimize the square error under some set of constraints), you'll have to pick some class of smooth curves and do some substantial calculations. You could try quadratic or cubic splines, but in two dimensions splines are not much fun.
My quick and dirty answer would be to start with the original image, and repeat the following process for each pixel until no changes are made:
If an overlarge delta in value between this pixel and its neighbours can be resolved by increasing the value of the pixel, do so.
If an overlarge slope change around this pixel and its neighbours can be resolved by increasing the value of the pixel, do so.
In one dimension it would look something like this (the 2-D version applies the same tests to each neighbour):
for all x:
    d = img[x-1] - img[x]
    if d > DMAX:
        img[x] += d - DMAX
    d = img[x+1] - img[x]
    if d > DMAX:
        img[x] += d - DMAX
    dleft  = img[x-1] - img[x]
    dright = img[x] - img[x+1]
    d = dright - dleft
    if d > SLOPEMAX:
        img[x] += d - SLOPEMAX
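For what it's worth, a runnable version of the idea. One caveat: the curvature test as literally written fires at peaks, where raising the pixel only sharpens the kink, so this sketch triggers it at sharp valleys instead, where raising the pixel does help (boundary pixels are left untouched):

```python
import numpy as np

def raise_pixels(img, dmax, slopemax, max_iters=10000):
    """Repeatedly raise interior pixels until no neighbour exceeds them
    by more than dmax and no valley kink (second difference) is sharper
    than slopemax. Pixels only ever increase, so the result dominates
    the input."""
    img = np.asarray(img, dtype=float).copy()
    for _ in range(max_iters):
        changed = False
        for x in range(1, len(img) - 1):
            d = img[x - 1] - img[x]
            if d > dmax:
                img[x] += d - dmax
                changed = True
            d = img[x + 1] - img[x]
            if d > dmax:
                img[x] += d - dmax
                changed = True
            curv = img[x - 1] + img[x + 1] - 2 * img[x]  # positive at valleys
            if curv > slopemax:
                # Raising by delta changes curv by -2*delta.
                img[x] += (curv - slopemax) / 2
                changed = True
        if not changed:
            break
    return img

smoothed = raise_pixels([0, 0, 0, 10, 0, 0, 0], dmax=3, slopemax=4)
```

On the spike example this relaxes to a ramp, [0, 4, 7, 10, 7, 4, 0], with every step bounded by dmax.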
Maximum filter the image with an RxR filter, then use an order R-1 B-spline smoothing on the maximum-filtered image. The convex hull properties of the B-spline guarantee that it will be above the original image.
Can you clarify what you mean by your desire that it be "smooth" at some scale? Also, over how large of a "local region" do you want it to approximate the maximum?
Quick and dirty answer: weighted average of the source image and a windowed maximum.
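Sketched in Python (window size and weight are arbitrary). Since the windowed maximum dominates the image pointwise, any convex blend of the two stays at or above the original, though the result is only piecewise smooth:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def soft_envelope(img, window=9, alpha=0.7):
    """Blend the image with its windowed maximum; alpha -> 1 hugs the
    local maxima more tightly. Stays >= img, but has max-filter kinks."""
    mx = maximum_filter(img.astype(float), size=window)
    return alpha * mx + (1 - alpha) * img

rng = np.random.default_rng(1)
img = rng.random((32, 32))
env = soft_envelope(img)
```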