How to reduce an image collection by numbers matching conditions in Google Earth Engine - google-earth-engine

Is there a way to reduce an image collection by counting how many images match a condition?
For example, I want to create a new image that visualizes how many days in a period meet some condition.

When you want to convert an image collection to an image by combining all the pixels in some way for each spatial location, the solution is ImageCollection.reduce. In this case, we can use the ee.Reducer.count reducer to count the number of pixels meeting the condition.
JavaScript example:
var daysMeetingConditionImage = imageCollectionOfDayImages
  .map(function (image) {
    // Apply the condition, whatever it is, as a mask.
    // Masked-off pixels will not be counted.
    return image.updateMask(
      image.select('B2').gt(7) // Change this to whatever your per-pixel condition is
    );
  })
  .reduce(ee.Reducer.count());

Related

Best for my counter? An array of objects or a simple object?

I need to keep track of and display how many times a price point is used, e.g. "$15 (2 options)". Given that I need to display the price and the count, and increment the count each time a new option is added at the given price, should I go with:
or this:
Note: do not pay attention to the field names "array" and "object-priceCount", this is for testing purposes only.
I'd actually keep a map for these counters, so that you can update an individual counter with:
updateDoc(docRef, { 'priceCounts.USD15': increment(1) });
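As a plain-JavaScript sketch of this map-of-counters shape (the `USD15` key and the helper names are illustrative, not part of the Firestore API):

```javascript
// Sketch of the map-of-counters document shape: keys are price labels,
// values are counts. addOption mirrors what increment(1) does server-side.
function addOption(priceCounts, priceLabel) {
  return {
    ...priceCounts,
    // Create the key on first use, bump it afterwards.
    [priceLabel]: (priceCounts[priceLabel] || 0) + 1,
  };
}

function formatPrice(priceLabel, count) {
  // Render "$15 (2 options)" from a 'USD15'-style key.
  const amount = priceLabel.replace('USD', '$');
  return `${amount} (${count} option${count === 1 ? '' : 's'})`;
}

let counts = {};
counts = addOption(counts, 'USD15');
counts = addOption(counts, 'USD15');
console.log(formatPrice('USD15', counts.USD15)); // "$15 (2 options)"
```

A map keyed by price also keeps each counter independently updatable, which is what makes the single-field `updateDoc` call above possible.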

DynamoDB data structure / architecture to support set of particular queries

I currently have a lambda function pushing property data into a DynamoDB with streams enabled.
When a new item is added, the stream triggers another Lambda function which should query against a second DynamoDB table to see if there is a 'user query' in the table matching the new object in the stream.
The items in the first table which are pushed into the stream look like this...
{
Item: {
partitionKey: 'myTableId',
bedrooms: 3,
latitude: 52.4,
longitude: -2.6,
price: 200000,
toRent: false,
},
}
The second table contains active user queries. For example one user is looking for a house within a 30 mile radius of his location, between £150000 and £300000.
An example of this query object in the second table looks like this...
{
Item: {
partitionKey: 'myTableId',
bedrooms: 3,
minPrice: 150000,
maxPrice: 300000,
minLatitude: 52.3,
maxLatitude: 52.5,
minLongitude: -2.7,
maxLongitude: -2.5,
toRent: false,
userId: 'userId',
},
}
When a new property enters the stream, I want to trigger a lambda which queries against the second table. I want to write something along the lines of...
get me all user queries where bedrooms == streamItem.bedrooms AND minPrice < streamItem.price AND maxPrice > streamItem.price AND minLatitude < streamItem.latitude AND maxLatitude > streamItem.latitude.
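The informal query above can be written as a plain predicate (a sketch for clarity only; DynamoDB cannot evaluate all of these conditions in a single Query, which is the crux of the problem):

```javascript
// The matching condition, as an ordinary predicate over a stream item and
// a stored user query. Assumes min/max bounds are stored with min < max.
function queryMatches(query, item) {
  return query.bedrooms === item.bedrooms &&
    query.toRent === item.toRent &&
    query.minPrice < item.price && item.price < query.maxPrice &&
    query.minLatitude < item.latitude && item.latitude < query.maxLatitude &&
    query.minLongitude < item.longitude && item.longitude < query.maxLongitude;
}
```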
Ideally I want to achieve this via queries and filters, without scanning.
I'm happy to completely restructure the tables to suit the above requirements.
Been reading and reading and haven't found a suitable answer, so hoping an expert can point me in the right direction!
Thank you in advance
There's no silver bullet with DynamoDB here. Your only tools are the PK/SK lookup by value and range, filters to brute force things after that, and GSIs to give an alternate point of view. You're going to have to get creative. The details depend on your circumstances.
Like if you know you're getting all those specific values every time, you can construct a PK like bed#&lt;bedrooms&gt;#rent#&lt;toRent&gt; and an SK of price. Then for those three attributes you get exact index-based resolution, and you filter on the geo attributes.
If you wanted, you could quantize the price values (store pre-determined price ranges as singular values) and put those into the PK as well. For example, divide prices into 50k chunks, each named after its leading value. If someone wanted 150,000 to 250,000, you'd look up using two PKs, the "150" and "200" blocks.
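Under those assumptions (50k buckets named by their leading value in thousands, and a hypothetical bed#...#rent#... key scheme), the bucket fan-out might be computed like this:

```javascript
// Hypothetical helper: map a price range onto 50k-wide partition-key buckets,
// treating the range as inclusive of min and exclusive of max.
const BUCKET = 50000;

function priceBucketKeys(minPrice, maxPrice, bedrooms, toRent) {
  const keys = [];
  const first = Math.floor(minPrice / BUCKET);
  const last = Math.floor((maxPrice - 1) / BUCKET);
  for (let b = first; b <= last; b++) {
    // One Query per PK; exact price stays in the SK for range conditions.
    keys.push(`bed#${bedrooms}#rent#${toRent}#price#${(b * BUCKET) / 1000}`);
  }
  return keys;
}

console.log(priceBucketKeys(150000, 250000, 3, false));
// A 150,000-250,000 search fans out to the "150" and "200" buckets.
```

You would issue one Query per returned key, then apply the geo conditions as a filter expression on each.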
You get PK/SK + GSI + filter. That's it. So it's up to you to invent a solution using them, aiming for efficiency.

Reapply actions/action history with Redux

I'm using Redux (with React) to handle state in my application. I have the following scenario:
Load a list of items
Apply transform(s) to a list (arbitrary number of transforms)
Reduce displayed items in list
Increase displayed items in list
At step 4: How do I best achieve to again increase displayed items, with transforms from step 1 still applied/reapplied?
An example:
Load list with 50 items
Uppercase items
Filter items to display items with less than 4 chars (=> results in 30 items)
Apply filter again to display items with less than 10 chars (=> should result in 50 items, all still uppercased)
Based on your description, the only actual state that should be kept in the store is the initial data and the information about current filters. For example, your state shape might look like:
{
  items: ['April', 'Jake', 'Mary', 'Zooey', 'Dan'],
  filters: {
    isUppercase: false,
    maxLength: 10
  }
}
As you change the data, the items reducer would handle adding and deleting items. As you change the filters, the filter reducer would record the new filter setting.
Now comes the important part. The actual filtering happens when the data is selected for the components.
We suggest storing the minimal possible state in Redux. In this case, the list itself and the information about the filters is that minimal state. The current list with the filters applied can always be calculated from the state, so it shouldn’t be in the state.
Instead, you can write a pure function that selects the data according to the current state:
function filterItems(items, filters) {
  // Concrete logic for the example filters above (maxLength and isUppercase):
  return items
    .filter(item => item.length < filters.maxLength)
    .map(item => filters.isUppercase ? item.toUpperCase() : item)
}
Now, if you use React, you can call it in your render() method:
var filteredItems = filterItems(this.props.items, this.props.filters)
You might find that re-computing this on every render can be inefficient. Thankfully, the solution is simple: memoization. Make sure the derived data is only recalculated when the inputs change. Reselect is a tiny library that lets you do that, and it is often used together with Redux.
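A minimal sketch of that idea, using a hand-rolled single-entry memoizer in the spirit of Reselect (the names are illustrative, not the Reselect API):

```javascript
// Single-entry memoizer: recompute only when an input selector returns
// a different reference than last time, otherwise return the cached result.
function createSelector(inputFns, compute) {
  let lastArgs = null;
  let lastResult;
  return function (state) {
    const args = inputFns.map(fn => fn(state));
    const changed = !lastArgs || args.some((a, i) => a !== lastArgs[i]);
    if (changed) {
      lastResult = compute(...args);
      lastArgs = args;
    }
    return lastResult;
  };
}

// Hypothetical selector matching the state shape above.
const selectVisibleItems = createSelector(
  [state => state.items, state => state.filters],
  (items, filters) => items.filter(item => item.length < filters.maxLength)
);

const state = {
  items: ['April', 'Jake', 'Mary', 'Zooey', 'Dan'],
  filters: { maxLength: 4 },
};
selectVisibleItems(state); // computes the filtered list
selectVisibleItems(state); // same references: returns the cached array
```

Because the cached array is returned by reference, connected components that compare props by identity also skip re-rendering.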
You can find more information about this topic with some examples in the official Computing Derived Data recipe on Redux website, and in the Reselect README.

How can I add layers to Here Maps?

I am starting out with the HERE API. I followed the examples and added some markers to the map, but I need a layer switcher to select multiple layers with different markers and show them on the map, and I can't get it to work. The markers are static and the map does not reload the first markers.
I tried putting more than one map in tabs, but that did not work either. Any ideas?
As far as I know, the HERE JS API does not support this kind of layer switching out of the box, but you can implement it quite simply.
You can use something called a Group.
From the documentation:
Groups are logical containers which can hold a collection of child objects (markers or spatials, but also sub-groups). Groups make it easy to add, remove, show or hide whole sets of map objects in an atomic operation, without the need to manipulate each object individually. In addition, a group allows you to calculate a bounding box enclosing all the objects it contains and to listen for events dispatched by the group's child objects.
It means that you can add some objects (markers, polylines, polygons) into one group and some into another group. Then you can use the addObject and removeObject methods on the map accordingly to add or remove each group (Group extends the Object class).
group = new H.map.Group();
group.addObject(marker1);
group.addObject(marker2);
// add to map
map.addObject(group);
// remove from map
map.removeObject(group);
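To build a layer switcher on top of this, one option is a small helper (a hypothetical sketch, not part of the HERE API) that tracks named groups and adds or removes them from the map on toggle:

```javascript
// Minimal layer-switcher sketch: each named "layer" is a H.map.Group,
// and toggling a layer adds or removes the whole group atomically.
function createLayerSwitcher(map) {
  const layers = {}; // name -> { group, visible }
  return {
    register(name, group) {
      layers[name] = { group, visible: false };
    },
    toggle(name) {
      const layer = layers[name];
      layer.visible = !layer.visible;
      if (layer.visible) {
        map.addObject(layer.group);    // show every marker in the group
      } else {
        map.removeObject(layer.group); // hide them again
      }
      return layer.visible;
    },
  };
}
```

Wiring each checkbox or tab in your UI to `toggle('layerName')` then gives you the switcher behaviour without rebuilding the map.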

modify field value in a crossfilter after insertion

I need to modify a field value for all records in a crossfilter before inserting new records.
The API doesn't say anything about it. Is there a way to do that ?
Even if it's a hack that would be really useful to me.
Looking at the code, the data array is held as a private local variable inside the crossfilter function so there's no way to get at it directly.
With that said, it looks like Crossfilter really tries to minimize the number of copies of the data it makes. Callback functions like the ones passed into crossfilter.dimension or dimension.filter receive the actual records themselves from the data array (via the native Array.map), so any changes you make to those records are made to the main records.
You therefore need to be very careful not to change anything relied on by the existing dimensions, filters, or groups. Otherwise you'll end up with data that doesn't agree with Crossfilter's internal structures, and chaos will ensue.
The cool thing about .remove is it only removes entries that match the currently applied filters. So if you create a 'unique dimension' that returns a unique value for every entry in your dataset (like an ID column), you can have a function like this:
function editEntry(id, changes) {
  uniqueDimension.filter(id); // filter down to the item you want to change
  var selectedEntry = uniqueDimension.top(1)[0]; // get the item
  _.extend(selectedEntry, changes); // apply changes (Underscore/Lodash; Object.assign works too)
  ndx.remove(); // remove all items passing the current filter (i.e. just this item)
  ndx.add([selectedEntry]); // re-add the modified item
  uniqueDimension.filter(null); // clear the filter
  dc.redrawAll(); // redraw the UI
}
At the least, you could do a cf.remove() and then re-add the adjusted data with cf.add().
