Pickable not working with Deck.GL custom Trip Routes layer - webgl2

I was working on deck.gl trip routes example
http://deck.gl/#/examples/custom-layers/trip-routes
I want to implement pickable functionality in the layers.
Setting pickable: true in the layer properties and increasing the picking radius did not give the expected results.
Then I followed https://github.com/uber/deck.gl/blob/master/docs/get-started/interactivity.md and https://github.com/uber/deck.gl/blob/master/docs/developer-guide/picking.md to implement custom picking, but that didn't work either.
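For reference, this is roughly what opting a deck.gl layer in to picking usually looks like. The prop names (pickable, autoHighlight, onHover) follow the deck.gl docs; the data and callback bodies here are placeholders, not the trip-routes example's actual code:

```javascript
// Hypothetical sketch of picking-related layer props in deck.gl.
// makePickableLayerProps and the data shape are my own inventions.
function makePickableLayerProps(data) {
  return {
    id: 'trips',
    data: data,
    pickable: true,      // opt this layer in to picking
    autoHighlight: true, // highlight the hovered object automatically
    // onHover receives an info object; info.object is null when nothing is picked
    onHover: function(info) {
      return info.object ? info.object.id : null;
    }
  };
}
```

Note that for a fully custom layer with its own shaders (as in the trips example), setting these props is not enough by itself: the layer must also render picking colors so deck.gl can identify the object under the pointer, which is what the picking developer-guide page covers.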

Related

Google map LocalContextMapView : no refresh when panning the map

When using the new beta LocalContextMapView feature from Google Maps, as described in https://developers.google.com/maps/documentation/javascript/local-context:
How does one refresh the localContext when the map is panned/zoomed?
LocalContextMapView place search is strictly bound to the map viewport by default. You can use the locationRestriction parameter to set bounds much larger than the map's initial viewport.
You can find the sample implementation here: https://developers.google.com/maps/documentation/javascript/local-context/samples/location-restriction
And, since Local Context Search is still in beta, I highly suggest that you file a feature request for a function to set locationRestriction programmatically/dynamically.
For example, a localContextMapView.setLocationRestriction() method would be a great addition to the Local Context library. You can use the Google Public Issue Tracker to file a feature request.
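In the meantime, one way to build an enlarged locationRestriction is to pad the map's current bounds before constructing the view. The helper below is a sketch of my own (the name and the padding factor are not part of the Maps API); it produces a plain {north, south, east, west} bounds literal of the kind the JavaScript Maps API accepts:

```javascript
// Hypothetical helper: expand a LatLngBoundsLiteral ({north, south, east, west})
// by a factor around its center, to pass as locationRestriction so the place
// search is not clipped to the initial viewport. Name and factor are my own.
function expandBounds(bounds, factor) {
  const latPad = (bounds.north - bounds.south) * (factor - 1) / 2;
  const lngPad = (bounds.east - bounds.west) * (factor - 1) / 2;
  return {
    north: bounds.north + latPad,
    south: bounds.south - latPad,
    east: bounds.east + lngPad,
    west: bounds.west - lngPad
  };
}

// e.g. pass expandBounds(viewportBounds, 3) as the locationRestriction option
// when constructing the LocalContextMapView.
```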

QObjectPicker slows down application

QObjectPicker set to point picking slows down my application, because I use entities with very many points, triangles, and meshes.
Is there a way to make my application faster? I assume it is possible to turn the picker off and turn it on only when the user clicks the mouse (or triggers it in some other way).
My code is below.
I set picking settings in Qt3DExtras.Qt3DWindow:
render_settings = self.renderSettings()
picking_settings = render_settings.pickingSettings()
picking_settings.setFaceOrientationPickingMode(Qt3DRender.QPickingSettings.FrontAndBackFace)
picking_settings.setPickMethod(Qt3DRender.QPickingSettings.PointPicking)
picking_settings.setPickResultMode(Qt3DRender.QPickingSettings.NearestPick)
In my entities (Qt3DCore.QEntity) I implemented a picker:
self.picker = Qt3DRender.QObjectPicker(self)
self.picker.setHoverEnabled(True)
self.picker.setDragEnabled(True)

Why does QObject::installEventFilter use d->eventFilters.prepend(obj) and not append(obj)?

Why does QObject::installEventFilter use d->eventFilters.prepend(obj) rather than append(obj)? I want to know why it was designed this way; I'm just curious about it.
void QObject::installEventFilter(QObject *obj)
{
    Q_D(QObject);
    if (!obj)
        return;
    if (d->threadData != obj->d_func()->threadData) {
        qWarning("QObject::installEventFilter(): Cannot filter events for objects in a different thread.");
        return;
    }
    // clean up unused items in the list
    d->eventFilters.removeAll((QObject*)0);
    d->eventFilters.removeAll(obj);
    d->eventFilters.prepend(obj);
}
It's done that way because the most recently installed event filter is to be processed first, i.e. it needs to be at the beginning of the filter list. The filters are invoked by traversing the list in sequential order from begin() to end().
The most recently installed filter is to be processed first because the only two simple choices are to either process it first or last. And the second choice is not useful: when you filter events, you want to decide what happens before anyone else does. Well, but then some new user's filter will go before yours, so how can that be? As follows: event filters are used to amend functionality - functionality that already exists. If you added a filter somewhere inside the existing functionality, you'd effectively be interfacing with a partially defined system, with unknown behavior. After all, even Qt's implementation uses event filters. They provide the documented behavior. If your event filter were inserted last, you couldn't be sure at all what events it would see - it would all depend on implementation details of every layer of functionality above your filter.
A system with some event filter installed is like a layer of skin on the onion - the user of that system only sees the skin, not what's inside, not the implementation. But they can add their own skin on top if they wish so, and implement new functionality that way. They can't dig into the onion, because they don't know what's in it. Of course that's a generalization: they don't know because it doesn't form an API, a contract between them and the implementation of the system. They are free to read the source code and/or reverse engineer the system, and then insert the event filter anywhere in the list they wish. After all, once you get access to QObjectPrivate, you can modify the event filter list as you wish. But then you're responsible for the behavior of not only what you added on top of the public API, but of many of the underlying layers too - and your responsibility broadens. Updating the toolkit becomes next to impossible, because you'd have to audit the code and/or verify test coverage to make sure that something somewhere in the internals didn't get broken.
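The prepend-then-iterate behavior described above can be modeled in a few lines. This is a toy sketch in JavaScript, not Qt's actual code; the names FilterChain, install, and dispatch are my own:

```javascript
// Toy model of Qt's event-filter list: install() removes duplicates and then
// prepends (mirroring removeAll + prepend), and dispatch() walks the list
// front-to-back, so the most recently installed filter sees the event first.
// A filter handler returning true "eats" the event, like QObject::eventFilter.
function FilterChain() {
  this.filters = [];
}
FilterChain.prototype.install = function(filter) {
  this.filters = this.filters.filter(f => f !== filter); // no duplicates
  this.filters.unshift(filter);                          // prepend, not append
};
FilterChain.prototype.dispatch = function(event, seen) {
  for (const f of this.filters) {
    seen.push(f.name);
    if (f.handler(event)) return true; // event filtered out
  }
  return false; // event reaches the target object itself
};
```

Installing a "base" filter first and an "override" filter second makes dispatch visit "override" before "base", which is exactly the layering argument made above.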

A-Frame: How to provide sound fadeout to eliminate audio click upon sound termination

A-frame provides easy-to-use and powerful audio capabilities via its sound component.
After playing around with various sound options such as native html5 for my game (in progress), I came to the conclusion that A-frame sound is the best option because it automatically provides spatialized sound (e.g. that varies with head rotation), as well as varying in intensity as you near the sound source -- things that increase VR presence and all for the cost of defining a simple html tag.
Unfortunately, A-frame doesn't provide a fadeout utility to taper the sound upon stoppage, and thus can generate a distinctly audible and annoying click on some waveforms, especially sounds that are of variable length and not tapered in the waveform itself (for instance, a spaceship's thrust). This is a well-known problem with computer audio.
I was able to find some html5 audio solutions and a really good three.js audio solution, but I could find none specific to A-frame.
What's the best way to taper out a sound in A-frame to reduce/eliminate this click?
Introduction
A-frame sound audio wraps the three.js positional audio API, which in turn wraps native html5 audio. Most solutions out there are tailored for either pure html5 or pure three.js. Since A-frame is a hybrid of the two APIs, none of the available solutions are a great fit for A-frame.
After two false starts at coming up with something, I discovered tween.js, which is not only built in to A-frame (you don't even have to download the library), but is also a useful API to know for other forms of computer animation. I provide the main solution here as well as a plunker in the hope that others can find something useful.
Note that you don't need to do this for short burst sounds like bullets firing. Those sounds have a fixed lifetime, so presumably whoever creates the waveform makes sure to taper them in and out. Also, I only deal with fade-out, not fade-in, because the sound I needed only had problems with fade-out. A general solution would include fade-in as well.
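Concretely, the fade-out is just a linear gain ramp applied over the taper window. The sketch below shows the interpolation the tween performs; the function name is my own, and in practice tween.js does this for you via .to({vol: 0}, fadeMs):

```javascript
// Minimal sketch of a linear fade-out: volume ramps from initVol down to 0
// over fadeMs milliseconds, then stays at 0. This is the math the tween
// animates each frame before handing the value to setVolume().
function fadedVolume(initVol, elapsedMs, fadeMs) {
  if (elapsedMs >= fadeMs) return 0;
  return initVol * (1 - elapsedMs / fadeMs);
}
```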
Solution
1) We start off by creating a really basic scene onto which we can attach our audio:
<a-scene>
<a-assets>
<audio id="space-rumble"
src="https://raw.githubusercontent.com/vt5491/public/master/assets/sounds/space-rumble.ogg"
crossorigin="anonymous"
type="audio/ogg"></audio>
</a-assets>
<a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"
sound="src: #space-rumble; volume: 0.9"
></a-box>
</a-scene>
The cube and scene in this solution are really just placeholders -- you don't need to enter VR mode to click the buttons and test the sound.
2) The code presents three buttons: one to start the sound, one to "hard" stop it using the A-frame default, and a third to "easy" stop it using tween to taper it down to zero. A fourth input allows you to vary the taper time. While it might look like quite a bit of code, keep in mind that about 50% is just html boilerplate for the buttons and is not part of the solution proper:
// created 2017-10-04
function init() {
  let main = new Main();
}

function Main() {
  let factory = {};
  console.log("entered main");
  factory.boxEntity = document.querySelector('a-box');
  factory.sound = factory.boxEntity.components.sound;
  factory.volume = {vol: factory.sound.data.volume};
  factory.boxEntity.addEventListener('sound-loaded', () => {console.log('sound loaded')});

  factory.startBtn = document.querySelector('#btn-start');
  factory.startBtn.onclick = (function() {
    this.sound.stopSound();
    let initVol = factory.sound.data.volume;
    this.volume = {vol: initVol}; // need to do this every time
    this.sound.pool.children[0].setVolume(initVol);
    console.log(`onClick: volume=${this.sound.pool.children[0].getVolume()}`);
    this.sound.currentTime = 0.0;
    if (this.tween) {
      this.tween.stop();
    }
    this.sound.playSound();
  }).bind(factory);

  factory.hardStopBtn = document.querySelector('#btn-hard-stop');
  factory.hardStopBtn.onclick = (function() {
    this.sound.stopSound();
  }).bind(factory);

  factory.easyStopBtn = document.querySelector('#btn-easy-stop');
  factory.easyStopBtn.onclick = (function() {
    let sound = factory.sound;
    this.tween = new TWEEN.Tween(this.volume);
    this.tween.to(
      {vol: 0.0},
      document.querySelector('#fade-out-duration').value);
    this.tween.onUpdate(function(obj) {
      console.log(`onUpdate: this.vol=${this.vol}`);
      sound.pool.children[0].setVolume(this.vol);
      console.log(`onUpdate: pool.children[0].getVolume=${sound.pool.children[0].getVolume()}`);
    });
    // Note: do *not* bind to the parent context, as tween passes its info via
    // 'this' and not just via callback params.
    // .bind(factory));
    this.tween.onComplete(function() {
      sound.stopSound();
      console.log(`tween is done`);
    });
    this.tween.start();
    // animate() is actually optional in this case. Tween will count down on its
    // own clock, but you might want to synchronize with your other updates. If
    // this is an A-frame component, you can just use the 'tick' method.
    this.animate();
  }).bind(factory);

  factory.animate = () => {
    let id = requestAnimationFrame(factory.animate);
    console.log(`now in animate`);
    let result = TWEEN.update();
    // cancelAnimationFrame is optional. You might want to invoke it to avoid
    // the overhead of repeated animation calls. If you are putting this in an
    // A-frame 'tick' callback and there's other tick activity, you don't want
    // to call this.
    if (!result) cancelAnimationFrame(id);
  };

  return factory;
}
Analysis
Here are some relevant items to be aware of.
Mixed APIs
I am making some native A-frame level calls:
sound.playSound()
sound.stopSound()
and one html5 level call:
this.sound.currentTime = 0.0;
but most of the "work" is in three.js level calls:
this.sound.pool.children[0].setVolume(initVol);
This does make it a little confusing, but no single API is "complete", and thus I had to use all three. In particular, we have to do a lot at the three.js level that is wrapped by A-frame. I learned most of this by looking at the A-frame source for the sound component.
Sound Pools
A-frame allows multiple simultaneous instances of each sound, so that the same sound can fire off before the prior one has completed. This is controlled by the poolSize property on the sound component. I'm only dealing with the first sound in the pool. I should probably loop over the pool elements like so:
this.pool.children.forEach(function (sound) {
  // ..do stuff
});
But doing the first one has worked well enough so far. Time will tell if this is sustainable.
'this' binding
I chose to implement all the functionality using a factory object pattern, rather than placing all the methods and variables in the global document space. This mimics the environment you would have if you were implementing in Angular2 or as a native A-frame component. I mention this because we now have callbacks nested inside functions nested inside a wrapping "main" function, so be aware that "this" binding can come into play. I bound most of the support functions to the factory object, but did not bind the tween callbacks, as they are passed information in their "this" context rather than via params. I had to resort to closures for the callbacks to get access to the instance variables of the containing function. This is just standard javascript "callback hell" stuff; just keep in mind it can get confusing if you're not careful.
canceled animation
If you have a tick function already, use that to call TWEEN.update(). If you're only fading out sound, then it's overkill to have an animation loop running all the time, so in this example I dynamically start and stop the animation loop.
Tweens can be chained
Tweens can be chained in jQuery fluent-API style as well.
Conclusion
Using tween.js to phase out the sound definitely feels like the right solution. It takes care of a lot of the overhead and design considerations. It also feels much faster, smoother, and more robust than the native html5 calls I previously used. However, it's pretty obvious that it's not trivial to get this working at the application level. A fadeout property, implemented with tween.js, seems like it should be part of the A-frame sound component itself. But until then, maybe some people will find some of what I provide here useful in some form. I'm only just learning about html audio myself, so apologies if I'm making this seem harder than it really is.

How to create a custom layer in Google Earth so I can set its visibility

I am trying to render a whole heap of vectors in the Google Earth plugin. I use the parseKml method to create my KML feature objects and store them in an array. The code looks something like below. I loop over a list of 10,000 KML objects returned from a database and draw them in the plugin.
// 'currentKml' is a kml string returned from my DB.
// I iterate over 10,000 of these
currentKmlObject = ge.parseKml(currentKml);
currentKmlObject.setStyleSelector(gex.dom.buildStyle({
line: { width: 8, color: '7fff0000' }
}));
ge.getFeatures().appendChild(currentKmlObject);
// After this, I store the currentKmlObject in an array so
// I can manipulate the individual features.
This seems to work fine. But when I want to turn the visibility of all these features on or off at once, I have to iterate over all of these kml objects in my array and set their individual visibilities on or off. This is a bit slow. If I am zoomed out, I can slowly see each of the lines disappearing and it takes about 5 - 10 seconds for all of them to disappear or come back.
I was wondering if I could speed up this process by adding a layer and adding all my objects as children of this layer. This way I set the visibility of the whole layer on or off.
I have been unable to find out how to create a new layer in code, though. If someone can point me to the appropriate methods, that would be great. I am not sure if a layer is the right approach to speed up the process either. If you have any other suggestions on how I can speed up turning all these objects on/off at once, that would be very helpful as well.
Thanks in advance for your help.
OK, I found out how to do this myself.
In the Google Earth extensions library I use the 'buildFolder' method.
var folder = gex.dom.buildFolder({ name: folderName });
ge.getFeatures().appendChild(folder);
Now, when I iterate over my object array, I add the objects to the folder instead, using the following:
folder.getFeatures().appendChild(currentKmlObject);
This way, later on I can turn the visibility on and off at the folder level using
folder.setVisibility(false); // or true
This works quite well. There is no delay; I can see all the objects turning on and off at once. It is quite quick and performant.
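Building on the folder approach, the toggle can be wrapped in a one-line helper. getVisibility/setVisibility are the Earth API's KmlFeature methods; the helper name is my own:

```javascript
// Hypothetical convenience wrapper: flip a folder's visibility in one call
// and return the new state. Since every feature appended to the folder
// inherits the folder's visibility, this toggles all 10,000 objects at once.
function toggleFolderVisibility(folder) {
  const visible = !folder.getVisibility();
  folder.setVisibility(visible);
  return visible;
}
```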