Google vr view using click on hotspot to change content - google-vr

Once I have an embedded VR View on my website, how do I implement clicking on a hotspot?
function onVrViewLoad() {
  var v = new VRView.Player('#vrview', {
    image: '/images/firstImage.jpg',
    is_stereo: false
  });

  v.on('ready', function() {
    v.addHotspot('hotspotOne', {
      pitch: 0,     // In degrees. Up is positive.
      yaw: 180,     // In degrees. To the right is positive.
      radius: 0.05, // Radius of the circular target in meters.
      distance: 2   // Distance of target from camera in meters.
    });
  });

  v.on('click', function(event) {
    if (event.id == 'hotspotOne') {
      v.setContentInfo({
        image: '/images/secondImage.jpg',
        is_stereo: false
      });
    }
  });
}
The hotspot shows up and can be clicked, but it does not change the image. If I replace the v.setContentInfo() call with window.location.href = '/secondImage.html', where I have set up another VR viewer with that image, the page loads. So I know the click event is registering correctly, but setContentInfo() is not changing the content.

Whilst the documentation says to use setContentInfo(), I've found it's actually setContent() that works. This gave me a headache for a while!
v.setContent({
  image: '/images/secondImage.jpg',
  is_stereo: false
});
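For completeness, here is the click handler from the question with the working call swapped in (a sketch reusing the same ids and image paths as above):
v.on('click', function(event) {
  if (event.id == 'hotspotOne') {
    // setContent() is what actually swaps the displayed panorama here.
    v.setContent({
      image: '/images/secondImage.jpg',
      is_stereo: false
    });
  }
});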

Related

Aframe mesh rotation and animation

I am trying to rotate a mesh inside an A-Frame glTF model, but it does not seem to work. Is it possible to rotate a mesh of a glTF model that was added to the scene at runtime? I can get the mesh whose pivot is set, but I am unable to apply a rotation to it.
Issue: I have a door model with two meshes, a left door and a right door. I want to rotate the door 180 degrees when the user clicks on the door mesh. At the moment I get the click event on the entire 3D object, check which mesh was clicked, look at its parent, and try to rotate the left door, but nothing happens. Any idea what I am missing?
So
object.parent
returns the parent object, which is what I am trying to rotate. Is that the right way?
Here is what I have so far.
const newElement = document.createElement('a-entity')
// The raycaster gives a location of the touch in the scene
const touchPoint = event.detail.intersection.point
newElement.setAttribute('position', touchPoint)
//const randomYRotation = Math.random() * 360
//newElement.setAttribute('rotation', '0 ' + randomYRotation + ' 0')
newElement.setAttribute('visible', 'false')
newElement.setAttribute('scale', '4 4 4')
newElement.setAttribute('gltf-model', '#animatedModel')
this.el.sceneEl.appendChild(newElement)

newElement.addEventListener('model-loaded', () => {
  // Once the model is loaded, we are ready to show it popping in using an animation
  newElement.setAttribute('visible', 'true')
  newElement.setAttribute('id', 'model')
  newElement.setAttribute('class', 'cantap')
  newElement.setAttribute('hold-drag', '')
  newElement.setAttribute('two-finger-spin', '')
  newElement.setAttribute('pinch-scale', '');
  /* newElement.setAttribute('animation', {
    property: 'scale',
    to: '4 4 4',
    easing: 'easeOutElastic',
    dur: 800,
  }) */

  newElement.addEventListener('click', event => {
    const animationList = ["Action", "Action.001"];
    /* newElement.setAttribute('animation-mixer', {
      clip: animationList[0],
      loop: 'once',
    })
    newElement.addEventListener('animation-loop', function() {
      newElement.setAttribute('animation-mixer', {
        timeScale: 0
      })
    }); */
    var object = event.detail.intersection.object;
    document.getElementById("btn").innerHTML = object.parent;
    /* object.setAttribute('animation', {
      property: 'rotation',
      to: '0 180 0',
      loop: true,
      dur: 6000,
      dir: 'once'
    });*/
    object.parent.setAttribute('rotation', {x: 0, y: 180, z: 0});
    /* object.traverse((node) => {
      console.log(node.name);
      document.getElementById("btn").innerHTML = ;
    }); */
    console.log(this.el.getObject3D('mesh').name);
    // name of object directly clicked
    console.log(object.name);
    // name of object's parent
    console.log(object.parent.name);
    // name of object and its children
  });
})
The trick to doing anything to parts of a glTF model is to traverse the glTF and isolate the object inside it that you want to manipulate.
You can do this by writing a component, attached to the glTF entity, that gets the underlying three.js object and traverses all the objects within the glTF group; you can then select an object by its name.
You do this inside a "model-loaded" event listener, like this:
el.addEventListener("model-loaded", e =>{
let tree3D = el.getObject3D('mesh');
if (!tree3D){return;}
console.log('tree3D', tree3D);
tree3D.traverse(function(node){
if (node.isMesh){
console.log(node);
self.tree = node;
}
});
This selects one of the meshes and assigns it to a variable that can be used later to rotate the model (or do whatever you like with it):
tick: function() {
  if (this.tree) {
    this.tree.rotateY(0.01);
  }
}
here is the glitch
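Putting the two snippets above together, a complete component might look like this (a sketch only; the component name and the rotation speed are illustrative):
// Illustrative component name; attach it to the glTF entity.
AFRAME.registerComponent('rotate-gltf-part', {
  init: function() {
    var self = this;
    this.el.addEventListener('model-loaded', function() {
      var tree3D = self.el.getObject3D('mesh');
      if (!tree3D) { return; }
      // Walk the glTF group and keep a reference to the mesh we want,
      // either via node.isMesh or by comparing node.name to the mesh name.
      tree3D.traverse(function(node) {
        if (node.isMesh) {
          self.tree = node;
        }
      });
    });
  },
  tick: function() {
    // Rotate the isolated mesh a little every frame.
    if (this.tree) {
      this.tree.rotateY(0.01);
    }
  }
});
Usage would be something like <a-entity gltf-model="#animatedModel" rotate-gltf-part></a-entity>.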

Aframe How do I simulate a 6dof controller on the Oculus GO controller using the touchpad to move the controller forward, back, left, and right?

What if we could simulate 6DoF control on the Oculus Go controller? Imagine that we turn the controller's 3D model into a hand. It still just rotates, but we use the touchpad to move the hand forward, left, right, or backwards in space: the touchpad gives you movement along the z and x axes, while the accelerometer/gyro gives you the y and x axes. So the accelerometer/gyro serves as the arm and the touchpad serves as the hand/wrist: a hand that can move forward and back, only twist left and right, and grip with the trigger. The hand can't tilt up or down, but the arm can make up for that. So how do I build this?
There is https://www.npmjs.com/package/aframe-thumb-controls-component, which provides events for pressing the thumbpad in different directions (e.g., thumbupstart, thumbleftend).
Write a component that listens to those events and sets a property (e.g., this.buttons.left = true); the tick handler then needs to update this.el.object3D.position based on which buttons are held down.
You also need to take into consideration the direction the camera is facing. https://github.com/aframevr/aframe/blob/master/src/components/wasd-controls.js is a good starting point as it is similar: it listens for key downs and translates the position. You just need to modify it to use thumb-controls instead.
More hints:
<script src="https://unpkg.com/aframe-thumb-controls-component#1.1.0/dist/aframe-thumb-controls-component.min.js">
AFRAME.registerComponent('thumb-movement-controls', {
  init: function () {
    this.buttons = {
      left: false, right: false, up: false, down: false
    };
    this.el.addEventListener('thumbleftstart', () => {
      this.buttons.left = true;
    });
    this.el.addEventListener('thumbleftend', () => {
      this.buttons.left = false;
    });
  },
  tick: function () {
    // Really simplified movement. Does not take into account camera heading
    // or velocity / time, but wasd-controls shows how.
    if (this.buttons.left) {
      this.el.object3D.position.x -= 0.001;
    }
  }
});
<a-entity daydream-controls thumb-controls thumb-movement-controls></a-entity>

Google maps expand limitation of rectangle boundary

Please see the Google Maps API documentation: https://developers.google.com/maps/documentation/javascript/shapes#editable
Zoom out to world view and then expand the region selection towards the right in a single drag. At some point you can observe that the selection becomes unstable and selects an entirely different section of the world.
By default the rectangle selection tool seems to look for the shortest possible path to complete the shape. This creates strange behavior when attempting to draw a very large region.
I wanted to click and drag a very large region that covered a large geography. I was dragging west to east. Once the object became very large, the selection reversed and covered a completely different section of the world.
When I attempt to expand a boundary to include the entire world and the boundary goes far enough, the region again snaps to the minimal/smaller area.
The expected behavior is for the selector to continue expanding in the direction the user intends; in this case I would expect it to continue its west-to-east expansion.
// Define a rectangle and set its editable property to true.
var bounds = {north: 44.599, south: 44.490, east: -78.443, west: -78.649};
var rectangle = new google.maps.Rectangle({bounds: bounds, editable: true});
Try to expand this rectangle further and further to the right.
Is there a solution for the scenario described above?
Please let me know if further details are required.
As I said in my comment, when you drag it "too far", the rectangle left and right coordinates (longitude) get inverted.
In other words, if you drag it too far to the right, right will become left and left will be where you dragged the right side to, and the opposite in the other direction. So by comparing where the left was with where the right is now (or vice-versa), you can detect whether your rectangle's left and right got inverted and invert them again... This way you can achieve what you want.
And of course if you drag the right side further to the right than where the left was (or the other way around), it will reset, as you can't have a rectangle overlapping itself around the globe.
The UI can be a bit confusing though, as you can see the rectangle lines get inverted but you can't do much about that.
var map;

function initialize() {
  var mapOptions = {
    center: new google.maps.LatLng(0, 0),
    zoom: 2,
    zoomControl: false
  };
  map = new google.maps.Map(document.getElementById('map-canvas'), mapOptions);

  // Set origin bounds
  var originBounds = new google.maps.LatLngBounds(
    new google.maps.LatLng(-20, -100),
    new google.maps.LatLng(20, 20)
  );

  // Get left/right coords
  var left = originBounds.getSouthWest().lng();
  var right = originBounds.getNorthEast().lng();

  // Create editable rectangle
  var rectangle = new google.maps.Rectangle({
    bounds: originBounds,
    fillColor: 'white',
    fillOpacity: .5,
    editable: true,
    map: map
  });

  // Check for rectangle bounds changed
  google.maps.event.addListener(rectangle, 'bounds_changed', function() {
    // Get current bounds and left/right coords
    var newBounds = rectangle.getBounds();
    var newLeft = newBounds.getSouthWest().lng();
    var newRight = newBounds.getNorthEast().lng();
    if ((newRight === left) || (newLeft === right)) {
      // User dragged "too far" left or right and rectangle got inverted.
      // Invert left and right coordinates.
      rectangle.setBounds(invertBounds(newBounds));
    }
    // Reset current left and right
    left = rectangle.getBounds().getSouthWest().lng();
    right = rectangle.getBounds().getNorthEast().lng();
  });
}

function invertBounds(bounds) {
  // Invert the rectangle bounds
  var invertedBounds = new google.maps.LatLngBounds(
    new google.maps.LatLng(bounds.getNorthEast().lat(), bounds.getNorthEast().lng()),
    new google.maps.LatLng(bounds.getSouthWest().lat(), bounds.getSouthWest().lng())
  );
  return invertedBounds;
}

initialize();
#map-canvas {
  height: 150px;
}
<div id="map-canvas"></div>
<script src="https://maps.googleapis.com/maps/api/js"></script>

Restrict panning and zoom on Here Maps 3.0

I generated a map with the HERE Maps JS API 3.0. I want to restrict the zoom to a min/max value and the panning to a given rectangle. Is there a way to do that?
For example:
map.setMinZoom(4);
map.setMaxZoom(14);
map.setPanRestriction(rectangle);
I am guessing you're trying to restrict the viewport to where you now have your image overlay...
The easiest way is to listen to view model and viewport updates, similar to this other example:
Custom map overlay heremaps js api v3
Now you can look at the map's center coordinate and see if it is within the boundaries you wish to display. If not, set it to within the boundaries. Something along these lines may work for you:
var bounds = new H.geo.Rect(45, -45, -45, 45);

map.getViewModel().addEventListener('sync', function() {
  var center = map.getCenter(),
      adjustLat,
      adjustLng;

  if (!bounds.containsPoint(center)) {
    if (center.lat > bounds.getTop()) {
      adjustLat = bounds.getTop();
    } else if (center.lat < bounds.getBottom()) {
      adjustLat = bounds.getBottom();
    } else {
      adjustLat = center.lat;
    }
    if (center.lng < bounds.getLeft()) {
      adjustLng = bounds.getLeft();
    } else if (center.lng > bounds.getRight()) {
      adjustLng = bounds.getRight();
    } else {
      adjustLng = center.lng;
    }
    map.setCenter({
      lat: adjustLat,
      lng: adjustLng
    });
  }
});
//Debug code to visualize where your restriction is
map.addObject(new H.map.Rect(bounds));
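The question also asks for a min/max zoom. A similar sketch, using the same 'sync' listener pattern and H.Map's getZoom()/setZoom() methods, could clamp the zoom level (the limits of 4 and 14 are taken from the question):
// Clamp the zoom level to the range requested in the question (4 to 14).
var MIN_ZOOM = 4;
var MAX_ZOOM = 14;

map.getViewModel().addEventListener('sync', function() {
  var zoom = map.getZoom();
  if (zoom < MIN_ZOOM) {
    map.setZoom(MIN_ZOOM);
  } else if (zoom > MAX_ZOOM) {
    map.setZoom(MAX_ZOOM);
  }
});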

OpenSeadragon image coordinates

I am doing a project using OpenSeadragon; check out the example below.
a sample OpenSeadragon image
In the onclick method I want to find the coordinates (px, py) of the image. Is there any method for this? Please help; this is my first OpenSeadragon project.
Thanks.
When you get a click, it'll be in window pixel coordinates. You can then translate it into viewport coordinates (which go from 0.0 on the left to 1.0 on the right). You can then translate those into image coordinates. Here's how it would look all together:
viewer.addHandler('canvas-click', function(event) {
  var viewportPoint = viewer.viewport.pointFromPixel(event.position);
  var imagePoint = viewer.viewport.viewportToImageCoordinates(viewportPoint.x, viewportPoint.y);
  console.log(imagePoint.x, imagePoint.y);
});
For more info on the coordinate systems, see: http://openseadragon.github.io/examples/viewport-coordinates/
The following code, adapted from @iangilman's answer, worked for me with OpenSeadragon 2.0.0. It seems that the second argument of the handler function was removed in more recent versions. I added the quick === true condition to keep it from firing on a drag start. It might also be a good idea to switch off the default single-click-to-zoom behaviour in the gestureSettingsMouse object.
viewer = OpenSeadragon({
  id: "osd1",
  prefixUrl: "/path/to/seadragon/images/",
  tileSources: "/path/to/tif/images/image.tif.dzi",
  showNavigator: true,
  gestureSettingsMouse: {
    clickToZoom: false,
    dblClickToZoom: true
  }
});

viewer.addHandler('canvas-click', function(target) {
  if (target.quick === true) {
    var viewportPoint = viewer.viewport.pointFromPixel(target.position);
    var imagePoint = viewer.viewport.viewportToImageCoordinates(viewportPoint.x, viewportPoint.y);
    console.log(parseInt(imagePoint.x), parseInt(imagePoint.y));
  }
});
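If you want to see where a click landed, one option (a sketch, not part of the question) is to convert the image point back to viewport coordinates and drop a small overlay there; the marker element and its styling below are illustrative:
viewer.addHandler('canvas-click', function(event) {
  if (event.quick !== true) { return; }
  var viewportPoint = viewer.viewport.pointFromPixel(event.position);
  var imagePoint = viewer.viewport.viewportToImageCoordinates(viewportPoint.x, viewportPoint.y);

  // Illustrative marker element: a small red dot centred on the click.
  var marker = document.createElement('div');
  marker.style.width = '10px';
  marker.style.height = '10px';
  marker.style.borderRadius = '50%';
  marker.style.background = 'red';

  // Convert the image point back to viewport coordinates for the overlay.
  viewer.addOverlay({
    element: marker,
    location: viewer.viewport.imageToViewportCoordinates(imagePoint.x, imagePoint.y),
    placement: OpenSeadragon.Placement.CENTER
  });
});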
