Add source to existing layer in Mapbox GL - asynchronous

The title pretty much says what I intend to do.
I am using Firebase as the backend for markers on a map and use the on('child_added') method to monitor them. on('child_added') fires once for every existing child node at a given location in the database, and again for each new node that is created, so it is perfect for asynchronously adding new markers to the map as they are added to the database.
In order to display these on a map, Mapbox GL requires me to transform the data to GeoJSON, create a source, and then add this source to a layer. The code below shows this, and it actually displays the markers on the map.
markersRef.on('child_added', function(childSnapshot) { // fires once for every child node
  var currentKey = childSnapshot.key; // the key of the current child
  var entry = childSnapshot.val(); // the value of the current child
  // creates a GeoJSON object from the child
  // (note: per the GeoJSON spec, "properties" belongs on each Feature, not on the FeatureCollection)
  var geojson = {
    "type": "FeatureCollection",
    "features": [{
      "type": "Feature",
      "geometry": {
        "type": "Point",
        "coordinates": [entry.position.long, entry.position.lat]
      },
      "properties": {
        "title": entry.title,
        "text": entry.text
      }
    }]
  };
  // creates a source with the GeoJSON object from above
  map.addSource(currentKey, { // currentKey is the name of this source
    "type": "geojson",
    "data": geojson,
    "cluster": true // clusters the points in this source
  });
  // adds a layer that displays the source defined above
  map.addLayer({
    "id": currentKey, // sets the id to the current child's key
    "type": "circle", // a layer type is required
    "source": currentKey // the source defined above
  });
});
The problem is that each marker ends up in its own source, which puts them on separate layers. Therefore, I cannot cluster them or, for example, search across them.
What I am looking for is a way to add a source to an existing layer. This would enable me to create a layer outside the on('child_added') callback and then add the sources to this layer.
I have looked at the Mapbox GL docs, but I cannot find anything in there that will enable me to do this. It seems very limited in this respect compared to Mapbox.js.
I see this as a pretty important feature and don't understand why it is not possible. I hope some of you have a workaround or another way to asynchronously add markers to a map in Mapbox GL.

I have the same problem. I did some searching on this and I found the setData method on GeoJSONSource:
https://www.mapbox.com/mapbox-gl-js/api/#geojsonsource#setdata
map.addSource("points", markers);
map.addLayer({
"id": "points",
"type": "symbol",
"source": "points",
"layout": {
"icon-image": "{icon}-15",
"icon-allow-overlap": true,
"icon-ignore-placement": true,
"icon-size": 2,
"icon-offset": [0, -10],
}
});
Then later I update the source, without creating a new layer like so:
map.getSource('points').setData(newMarkers)
So this updates the source without creating a new layer, and you can then search across that layer. The only problem I encountered was that setData replaces all the previous data (there is no "addData" functionality), so you need to save the previous markers and add them again (one sketch of that is below). Let me know if you find a better workaround for this.
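One way to handle that, as a minimal sketch (it assumes a single GeoJSON source named "points" created once at load, as above, and the childSnapshot shape from the question): keep all features in a local FeatureCollection, push each new child into it, and call setData with the whole collection:
// Local FeatureCollection that accumulates every marker seen so far.
var allFeatures = {
  "type": "FeatureCollection",
  "features": []
};
markersRef.on('child_added', function(childSnapshot) {
  var entry = childSnapshot.val();
  // Append the new child as one more feature...
  allFeatures.features.push({
    "type": "Feature",
    "geometry": {
      "type": "Point",
      "coordinates": [entry.position.long, entry.position.lat]
    },
    "properties": { "title": entry.title, "text": entry.text }
  });
  // ...and replace the source's data with the full collection,
  // so everything stays in one source/layer and clustering still works.
  map.getSource('points').setData(allFeatures);
});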

As the documentation at https://www.mapbox.com/mapbox-gl-js/api/#geojsonsource states, the data property takes "A GeoJSON data object or a URL to one. The latter is preferable in the case of large GeoJSON files."
GeoJSON sources loaded via a URL are parsed on a background worker thread, so they do not affect the main thread. In short, always load your data via a URL (or a Mapbox style) to offload all the JSON parsing and layer loading to another thread. Then, any time a change event fires from your Firebase monitoring, you can simply reload the URL you originally used to load the source.
In addition, Vladimir Agafonkin, the creator of Leaflet and an amazing Mapbox developer, discusses this here: https://github.com/mapbox/mapbox-gl-js/issues/2289, and it is essentially what they do in their real-time example: https://www.mapbox.com/mapbox-gl-js/example/live-geojson/.
Furthermore, here is an example with socket.io that I use client side:
const url = /* server URL that retrieves GeoJSON */,
      socket = /* set up all your socket initiation, etc. */;
socket.on('msg', function(data) {
  if (data) {
    // Here is where you can manipulate the JSON object returned from the socket server
    console.log("Message received is: %s", JSON.stringify(data));
    if (data.fetch) {
      // Re-fetch the URL; the parsing happens on a background worker
      map.getSource('events').setData(url);
    }
  } else {
    console.log("Message received is empty: so it is %s", JSON.stringify(data));
  }
});
map.on('load', function() {
  map.addSource('events', {
    type: 'geojson',
    data: url
  });
  map.addLayer({
    "id": "events",
    "type": "symbol",
    "source": "events",
    "layout": {
      "icon-image": "{icon}"
    }
  });
});

Related

Adobe Analytics 2.0 API endpoint to get report suite events, props, and evars

I'm having a hard time finding a way in the 2.0 API to get a list of eVars, props, and events for a given report suite. The 1.4 version has the reportSuite.getEvents() endpoint, and similar ones for eVars and props.
Please let me know if there is a way to get the same data using the 2.0 API endpoints.
The API v2.0 GitHub docs aren't terribly useful, but the Swagger UI is a bit more helpful: it shows the endpoints and the parameters you can pass to them, and you can interact with it (logging in with your OAuth creds) and see requests/responses.
The two API endpoints in particular you want are metrics and dimensions. There are a number of options you can specify, but to just get a dump of them all, the full endpoint URL for those would be:
https://analytics.adobe.io/api/[client id]/[endpoint]?rsid=[report suite id]
Where:
[client id] - The client id for your company. This should be the same value as the legacy username:companyid (the companyid part) from v1.3/v1.4 API shared secret credentials, with the exception that it is suffixed with "0", e.g. if your old username:companyid was "crayonviolent:foocompany", the [client id] would be "foocompany0", because..reasons? I'm not sure what that's about, but it is what it is.
[endpoint] - Value should be "metrics" to get the events, and "dimensions" to get the props and eVars. So you will need to make 2 API endpoint requests.
[rsid] - The report suite id you want to get the list of events/props/eVars from.
Example:
https://analytics.adobe.io/api/foocompany0/metrics?rsid=fooglobal
One thing to note about the responses: they aren't like the v1.3/v1.4 methods, where you query for a list of only those specific things. The API returns a JSON array of objects for every single metric and dimension respectively, even the native ones, calculated metrics, classifications for a given dimension, etc. AFAIK there is no baked-in way to filter the API query (none in any documentation I can find, anyway), so you will have to loop through the array and select the relevant ones yourself.
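For reference, here is a rough sketch of making those two requests from JavaScript. This is not from the original answer: the header set (a Bearer token plus x-api-key; some setups also want an x-proxy-global-company-id header) follows Adobe's OAuth docs as I understand them, and the token and ids below are placeholders:
// Placeholder credentials; obtain a real token via Adobe's OAuth flow.
var accessToken = "YOUR_ACCESS_TOKEN";
var headers = {
  "Authorization": "Bearer " + accessToken,
  "x-api-key": "YOUR_API_KEY" // your client id
};
// One request per endpoint: "dimensions" for props/eVars, "metrics" for events.
fetch("https://analytics.adobe.io/api/foocompany0/dimensions?rsid=fooglobal", { headers: headers })
  .then(function(res) { return res.json(); })
  .then(function(dimensionsList) { /* filter as in the loops below */ });
fetch("https://analytics.adobe.io/api/foocompany0/metrics?rsid=fooglobal", { headers: headers })
  .then(function(res) { return res.json(); })
  .then(function(metricsList) { /* filter as in the loops below */ });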
I don't know what language you are using, but here is a javascript example for what I basically do:
var i, l, v, data = { prop: [], evar: [], events: [] };
// dimensionsList - the JSON object returned from the dimensions API call
// for each dimension in the list..
for (i = 0, l = dimensionsList.length; i < l; i++) {
  // the .id property shows the dimension id to eval
  if (dimensionsList[i].id) {
    // the ones we care about are e.g. "variables/prop1" or "variables/evar1"
    // note that if you have classifications on a prop or eVar, there are entries
    // that look like e.g. "variables/prop1.1", so the regex is written to ignore those
    v = ('' + dimensionsList[i].id).match(/^variables\/(prop|evar)[0-9]+$/);
    // if the id matches what we're looking for, push it to our data.prop or data.evar array
    v && v[1] && data[v[1]].push(dimensionsList[i]);
  }
}
// metricsList - the JSON object returned from the metrics API call
// basically the same song and dance as above, but for events.
for (i = 0, l = metricsList.length; i < l; i++) {
  if (metricsList[i].id) {
    // event ids look like e.g. "metrics/event1"
    v = ('' + metricsList[i].id).match(/^metrics\/event[0-9]+$/);
    v && data.events.push(metricsList[i]);
  }
}
And then the resulting data object will have data.prop, data.evar, and data.events, each an array of the respective props/eVars/events.
Example object entry for a data.events[n]:
{
  "id": "metrics/event1",
  "title": "(e1) Some event",
  "name": "(e1) Some event",
  "type": "int",
  "extraTitleInfo": "event1",
  "category": "Conversion",
  "support": ["oberon", "dataWarehouse"],
  "allocation": true,
  "precision": 0,
  "calculated": false,
  "segmentable": true,
  "supportsDataGovernance": true,
  "polarity": "positive"
}
Example object entry for a data.evar[n]:
{
  "id": "variables/evar1",
  "title": "(v1) Some eVar",
  "name": "(v1) Some eVar",
  "type": "string",
  "category": "Conversion",
  "support": ["oberon", "dataWarehouse"],
  "pathable": false,
  "extraTitleInfo": "evar1",
  "segmentable": true,
  "reportable": ["oberon"],
  "supportsDataGovernance": true
}
Example object entry for a data.prop[n]:
{
  "id": "variables/prop1",
  "title": "(c1) Some prop",
  "name": "(c1) Some prop",
  "type": "string",
  "category": "Content",
  "support": ["oberon", "dataWarehouse"],
  "pathable": true,
  "extraTitleInfo": "prop1",
  "segmentable": true,
  "reportable": ["oberon"],
  "supportsDataGovernance": true
}

JSON Path not working properly with Athena

I have a lambda function that converts my logs to this format:
{
  "events": [
    {
      "field1": "value",
      "field2": "value",
      "field3": "value"
    }, (...)
  ]
}
When I query it on S3, I get it in this format:
[
  {
    "events": [
      { (...) }
    ]
  }
]
And I'm trying to run a custom classifier for it, because the data I want is inside the objects held by 'events', not 'events' itself.
So I started with the simplest path I could think of, which worked in my tests (https://jsonpath.curiousconcept.com/):
$.events[*]
And, sure, it worked in the tests, but when I run a crawler against the file, the table created includes only an events field with a struct inside it.
So I tried a bunch of other paths:
$[*].events
$[*].['events']
$[*].['events'].[*]
$.[*].events[*]
$.events[*].[*]
Some of these do not even make sense, and absolutely every one of them got me a schema with an events field marked as array.
Can anyone point me in a better direction to handle this issue?

POST request to Firebase without unique key

I want to post new data to my Firebase API, but every time I do so, a new key like -L545gZW7E6Ed6iqXRok is generated with my object inside it. I would like to save my object directly to the API without this new key. This SO question explains how to do it using the set() method, but I would like to achieve the same using Postman, posting directly to Firebase.
URL: https://my-firebase-project.firebaseio.com/galaxies.json with method POST.
// currently saving like this in Firebase
"0000001": {
  "active": false,
  "name": "tp-milky-way",
  "time": 60
},
"-L545gZW7E6Ed6iqXRok": {
  "0000011": {
    "active": false,
    "name": "tp-andromeda",
    "time": 60
  }
}

// I want it without the generated key
"0000001": {
  "active": false,
  "name": "tp-milky-way",
  "time": 60
},
"0000011": {
  "active": false,
  "name": "tp-andromeda",
  "time": 60
}
EDIT: I found out I can use PUT with the entire JSON object that was originally 'put' to Firebase, including any additions or deletions; Firebase compares the new PUT request with what's already there and updates accordingly. I don't know if the behaviour really is as I understand it, or whether there is a better way to add data without auto-generated keys.
When you use the POST verb, Firebase generates a new location. This is in line with REST-ful idioms: POST is used to create a new object in a server-defined new location.
If you want to write to an existing location, or a new location you control, use the PUT verb. In this case the data will be written to exactly the location you specify in the URL, and it will overwrite any existing data at that location.
If you want to update part of the data at an existing location, but leave other pieces of the data unmodified, use the PATCH verb.
If your HTTP client doesn't support specifying a verb, you can optionally pass the verb in an X-HTTP-Method-Override header.
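To illustrate (a sketch using JavaScript's fetch rather than Postman, with the URL and payload from the question):
// PUT writes exactly to the location in the URL, overwriting what's there,
// so the child key is the one you choose, not an auto-generated one.
fetch('https://my-firebase-project.firebaseio.com/galaxies/0000011.json', {
  method: 'PUT',
  body: JSON.stringify({ active: false, name: 'tp-andromeda', time: 60 })
});

// PATCH updates only the fields you send and leaves the rest untouched.
fetch('https://my-firebase-project.firebaseio.com/galaxies/0000011.json', {
  method: 'PATCH',
  body: JSON.stringify({ active: true })
});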

Meteor autocomplete server-side

I'm writing a Meteor app and I'm trying to add an autocomplete feature to a search box. The data is very large and lives on the server, so I can't have all of it on the client. It's basically a database of users. If I'm not mistaken, the mizzao:autocomplete package should make that possible, but I can't seem to get it to work.
Here's what I have on the server:
Meteor.publish('autocompleteViewers', function(selector, options) {
  Autocomplete.publishCursor(viewers.find(selector, options), this);
  this.ready();
});
And here are the settings I use for the search box on the client:
getSettings: function() {
  return {
    position: 'bottom',
    limit: 5,
    rules: [{
      subscription: 'autocompleteViewers',
      field: '_id',
      matchAll: false,
      options: '',
      template: Template.vLegend
    }]
  };
}
But I keep getting this error on the client:
Error: Collection name must be specified as string for server-side search at validateRule
I don't really understand the problem. When I look at the package code, it just seems to test whether the subscription field is a string rather than a variable, which it is. Any idea what the problem could be? Otherwise, is there a minimal working example I could start from somewhere? I couldn't find one in the docs.
Error: Collection name must be specified as string for server-side search at validateRule
You get this error because you don't specify a Collection name in quotes.
getSettings: function() {
  return {
    position: 'bottom',
    limit: 5,
    rules: [{
      subscription: 'autocompleteViewers',
      field: '_id',
      matchAll: false,
      collection: 'viewers', // <- specify your collection; in your case it is the "viewers" collection
      options: '',
      template: Template.vLegend
    }]
  };
}
For more information please read here.
Hope this helps!

Marionette js itemview not defined: then on browser refresh it is defined and all works well - race condition?

Yeah it's just the initial browser load or two after a cache clear. Subsequent refreshes clear the problem up.
I'm thinking the item views just aren't fully constructed in time to be used in the collection views on the first load. But then they are on a refresh? Don't know.
There must be something about the code sequence or loading or the load time itself. Not sure. I'm loading via require.js.
Have two collections - users and messages. Each renders in its own collection view. Each works, just not the first time or two the browser loads.
The first time you load after clearing browser cache the console reports, for instance:
"Uncaught ReferenceError: MessageItemView is not defined"
A simple browser refresh clears it up. The same goes for the user collection: its collection view says it doesn't know anything about its item view, but after a simple browser refresh all is well.
My views (item and collection) are in separate files. Is that the problem? For instance, here is my message collection view in its own file:
messagelistview.js
var MessageListView = Marionette.CollectionView.extend({
  itemView: MessageItemView,
  el: $("#messages")
});
And the message item view is in a separate file:
messageview.js
var MessageItemView = Marionette.ItemView.extend({
  tagName: "div",
  template: Handlebars.compile(
    '<div>{{fromUserName}}:</div>' +
    '<div>{{message}}</div>'
  )
});
Then in my main module file, which references each of those files, the collection view is constructed and displayed:
main.js
// Define a model
MessageModel = Backbone.Model.extend();
// Make an instance of MessageItemView - code in separate file, messageview.js
MessageView = new MessageItemView();
// Define a message collection
var MessageCollection = Backbone.Collection.extend({
  model: MessageModel
});
// Make an instance of MessageCollection
var collMessages = new MessageCollection();
// Make an instance of a MessageListView - code in separate file, messagelistview.js
var messageListView = new MessageListView({
  collection: collMessages
});
App.messageListRegion.show(messageListView);
Do I just have things sequenced wrong? I'm thinking it's some kind of race condition, if only because over 3G to an iPad the item views are always undefined; they never seem to get constructed in time. A PC on a hard-wired connection does see success after a browser refresh or two. Is it the load times, or maybe the difference in browsers? Chrome, IE, and Firefox on a PC all exhibit the success-on-refresh behavior. Safari on iPad always fails.
PER COMMENT BELOW, HERE IS MY REQUIRE BLOCK:
in file application.js
require.config({
  paths: {
    jquery: '../../jquery-1.10.1.min',
    'jqueryui': '../../jquery-ui-1.10.3.min',
    'jqueryuilayout': '../../jquery.layout.min-1.30.79',
    underscore: '../../underscore',
    backbone: '../../backbone',
    marionette: '../../backbone.marionette',
    handlebars: '../../handlebars',
    "signalr": "../../jquery.signalR-1.1.3",
    "signalr.hubs": "/xyvidpro/signalr/hubs?",
    "debug": '../../debug',
    "themeswitchertool": '../../themeswitchertool'
  },
  shim: {
    'jqueryui': {
      deps: ['jquery']
    },
    'jqueryuilayout': {
      deps: ['jquery', 'jqueryui']
    },
    underscore: {
      exports: '_'
    },
    backbone: {
      deps: ["underscore", "jquery"],
      exports: "Backbone"
    },
    marionette: {
      deps: ["backbone"],
      exports: "Marionette"
    },
    "signalr": {
      deps: ["jquery"],
      exports: "SignalR"
    },
    "signalr.hubs": {
      deps: ["signalr"],
      exports: "SignalRHubs"
    },
    "debug": {
      deps: ["jquery"]
    },
    "themeswitchertool": {
      deps: ["jquery"]
    }
  }
});
require(["marionette", "jqueryui", "jqueryuilayout", "handlebars", "signalr.hubs", "debug", "themeswitchertool"], function (Marionette) {
window.App = new Marionette.Application();
//...more code
})
Finally, inside the module that creates the collection views in question, the list of external file dependencies is as follows:
var dependencies = [
  "modules/chat/views/userview",
  "modules/chat/views/userlistview",
  "modules/chat/views/messageview",
  "modules/chat/views/messagelistview"
];
Clearly the item views are listed before the collection views. This seems correct to me. I'm not sure what accounts for the collection views needing their item views before those are defined. And why is everything OK after a browser refresh?
The sequence in which you load files is most likely wrong: you need to load the item view before the collection view.
Try putting all of your code in the same file in the proper order, and see if it works.
The free preview of my book on Marionette can also guide you through displaying a collection view.
Edit based on clarification:
The dependencies listed for the module are NOT loaded linearly. That is precisely what RequireJS was designed to avoid. Instead, the way to get the files loaded properly (i.e. in the correct order) is to define a "chain" of dependencies that RequireJS will compute and load.
What you need to do is define (e.g.) your userlistview to depend on userview. That way, RequireJS will load them in the proper order. You can see an example of a RequireJS app here (from my book on RequireJS and Marionette). Take a look at how each module definition declares which modules it depends on (and which RequireJS therefore needs to load first). Once again, listing the modules sequentially within a dependency array does NOT make them load in that sequence; you really need to use the dependency chain mechanism.
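To make that concrete, here is a minimal sketch (the file paths match the question; the module bodies are illustrative) of the two message view files rewritten as AMD modules, so RequireJS is forced to load the item view before the collection view that needs it:
// modules/chat/views/messageview.js
// "handlebars" is listed only to guarantee it has loaded (it is used via its global, as in the question's config).
define(["marionette", "handlebars"], function (Marionette) {
  return Marionette.ItemView.extend({
    tagName: "div",
    template: Handlebars.compile(
      '<div>{{fromUserName}}:</div>' +
      '<div>{{message}}</div>'
    )
  });
});

// modules/chat/views/messagelistview.js
// Declaring messageview as a dependency guarantees it loads first.
define(["marionette", "modules/chat/views/messageview"], function (Marionette, MessageItemView) {
  return Marionette.CollectionView.extend({
    itemView: MessageItemView,
    el: $("#messages")
  });
});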
