I'm making a virtual tour using A-Frame, with an <a-sky> for the 360° images, some <a-circle> elements as hotspots, and <a-text> below the circles for labels.
My goal is to make the texts always parallel to the screen. I already tried the aframe-look-at-component pointed at the camera, but it's not what I was looking for, because the texts face a point instead of facing the screen.
So my next idea was to create an invisible cursor and copy its rotation to the texts, but I'm not sure about this, because I don't know whether the cursor updates its own rotation or whether it's only based on the camera rotation.
Anyway, the main source of this problem is that I don't know how to change the rotation of my text after creation. I tried mytext.object3D.rotation, mytext.setAttribute('rotation', newRotation), and also object3D.lookAt(), but either it did nothing or it wasn't what I was looking for.
What is the best way to achieve this?
Here is my hotspot component (which creates the texts based on some props):
AFRAME.registerPrimitive('a-hotspot', {
  defaultComponents: {
    hotspot: {}
  },
  mappings: {
    for: 'hotspot.for',
    to: 'hotspot.to',
    legend: 'hotspot.legend',
    'legend-pos': 'hotspot.legend-pos',
    'legend-rot': 'hotspot.legend-rot'
  }
});
AFRAME.registerComponent('hotspot', {
  schema: {
    for: { type: 'string' },
    to: { type: 'string' },
    legend: { type: 'string' },
    'legend-pos': { type: 'vec3', default: {x: 0, y: -0.5, z: 0}},
    'legend-rot': { type: 'number', default: 0 },
    positioning: { type: 'boolean', default: false }
  },
  init: function () {
    this.shiftIsPress = false
    window.addEventListener('keydown', this.handleShiftDown.bind(this))
    window.addEventListener('keyup', this.handleShiftUp.bind(this))
    this.tour = document.querySelector('a-tour');
    if (this.data.legend)
      this.addText();
    this.el.addEventListener('click', this.handleClick.bind(this));
  },
  // Creates the text, based on the hotspot's props
  addText: function () {
    var hotspot = this.el,
        position = new THREE.Vector3(hotspot.object3D.position.x, hotspot.object3D.position.y, hotspot.object3D.position.z),
        text = document.createElement('a-text'),
        loadedScene = document.querySelector('a-tour').getAttribute('loadedScene')
    position.x += this.data['legend-pos'].x
    position.y += this.data['legend-pos'].y
    position.z += this.data['legend-pos'].z
    console.log(this.data['legend-rot'])
    // Set text attributes
    text.id = `text_${this.data.for}_to_${this.data.to}`
    text.setAttribute('position', position)
    text.setAttribute('color', '#BE0F34')
    text.setAttribute('align', 'center')
    text.setAttribute('value', this.data.legend)
    text.setAttribute('for', this.data.for)
    if (loadedScene && loadedScene !== this.data.for) text.setAttribute('visible', false)
    // Insert text after hotspot
    hotspot.parentNode.insertBefore(text, hotspot.nextSibling)
  },
  // This part is supposed to edit the rotation
  // so it always fits my idea
  tick: function () {
    if (this.el.getAttribute('visible')) {
      var cursorRotation = document.querySelector('a-cursor').object3D.getWorldRotation()
      //document.querySelector(`#text_${this.data.for}_to_${this.data.to}`).object3D.lookAt(cursorRotation)
      this.updateRotation(`#text_${this.data.for}_to_${this.data.to}`)
    }
  },
  // This part manages the click event.
  // When shift is pressed while clicking on a hotspot, it enables another component
  // that sticks a hotspot to the camera to help me place it in the scene;
  // otherwise, it changes the 360° image and enables/disables hotspots.
  handleShiftDown: function (e) {
    if (e.keyCode === 16) this.shiftIsPress = true
  },
  handleShiftUp: function (e) {
    if (e.keyCode === 16) this.shiftIsPress = false
  },
  handleClick: function (e) {
    var target = 'target: #' + this.el.id
    var tour = this.tour.components['tour']
    if (this.shiftIsPress)
      tour.el.setAttribute('hotspot-helper', target)
    else
      tour.loadSceneId(this.data.to, true);
  }
});
I really don't know what to do.
EDIT: I found a partial solution:
If I add a geometry to my text (and a material with alphaTest: 1 to hide it), setAttribute('rotation') works, and I base it on the camera rotation. The problem is that after that, the camera is locked; I don't understand why ^^
var cursorRotation = document.querySelector('a-camera').object3D.rotation
document.querySelector(`#text_${this.data.for}_to_${this.data.to}`).setAttribute('rotation', cursorRotation)
Thanks,
Navalex
I finally found the solution!
Instead of document.querySelector('a-camera').object3D.rotation, I used document.querySelector('a-camera').getAttribute('rotation') and it works nicely!
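For reference, here is a minimal sketch of how that fix can be wrapped in a component (the component name face-screen and the <a-camera> selector are assumptions, not code from the project above). The key point is that getAttribute('rotation') returns a plain {x, y, z} object in degrees, whereas object3D.rotation is a live THREE.Euler in radians.
// Sketch: keep an entity parallel to the screen by copying the camera's
// rotation (in degrees) onto it every frame.
AFRAME.registerComponent('face-screen', {
  tick: function () {
    var camEl = document.querySelector('a-camera');  // assumes an <a-camera> in the scene
    if (!camEl) { return; }
    // getAttribute('rotation') gives {x, y, z} in degrees; setAttribute accepts the same shape.
    this.el.setAttribute('rotation', camEl.getAttribute('rotation'));
  }
});
Usage would then be something like <a-text face-screen value="Legend"></a-text>.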
Be sure to check out the example here: https://stemkoski.github.io/A-Frame-Examples/sprites.html
The 'box' sign there is always visible to the user.
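The linked example uses sprites; if you would rather stay with plain entities such as <a-text>, a rough equivalent (just a sketch under that assumption, not the example's own code) is to copy the active camera's world quaternion each frame:
// Sketch of a billboard behaviour: orient the entity exactly like the active camera,
// so it stays parallel to the screen wherever it sits in the scene.
AFRAME.registerComponent('billboard', {
  tick: function () {
    var camera = this.el.sceneEl.camera;  // the active THREE.js camera
    if (!camera) { return; }
    camera.getWorldQuaternion(this.el.object3D.quaternion);
  }
});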
This may have been asked numerous times, but I can't get a clear "newbie" plan of action.
I'm building A-Frame experiences to showcase some interiors for numerous client presentations. This will be shown in a desktop browser only, and I need to be able to control pan/rotate/turn-around camera movement with the left and right arrow keys instead of relying on the mouse, as many clients have found the mouse cumbersome. I just need to control this like an old first-person shooter, with the four arrow keys.
Is there a simple way to do this? I've seen various permutations of this question but no simple solution so far. Thanks!
A simple keyboard input look component:
AFRAME.registerComponent('kbd-look-controls', {
  schema: {
    speed: {type: 'number', default: 2}
  },
  init: function () {
    this.bindFunctions();
    this.addEventListeners();
    this.keyPressed = {
      'ArrowUp': false,
      'ArrowDown': false,
      'ArrowLeft': false,
      'ArrowRight': false
    }
  },
  remove: function () {
    this.removeEventListeners();
  },
  tick: function(time, delta) {
    var data = this.data;
    var object3D = this.el.object3D;
    const angleDelta = 0.01 * data.speed * (delta / 16);
    if (this.keyPressed['ArrowUp']) {
      object3D.rotation.x = object3D.rotation.x + angleDelta;
    }
    if (this.keyPressed['ArrowDown']) {
      object3D.rotation.x = object3D.rotation.x - angleDelta;
    }
    if (this.keyPressed['ArrowLeft']) {
      object3D.rotation.y = object3D.rotation.y + angleDelta;
    }
    if (this.keyPressed['ArrowRight']) {
      object3D.rotation.y = object3D.rotation.y - angleDelta;
    }
  },
  bindFunctions() {
    this.onKeyUp = this.onKeyUp.bind(this);
    this.onKeyDown = this.onKeyDown.bind(this);
  },
  addEventListeners() {
    window.addEventListener('keydown', this.onKeyDown);
    window.addEventListener('keyup', this.onKeyUp);
  },
  removeEventListeners() {
    window.removeEventListener('keydown', this.onKeyDown);
    window.removeEventListener('keyup', this.onKeyUp);
  },
  onKeyUp: function (evt) {
    this.keyPressed[evt.code] = false;
  },
  onKeyDown: function (evt) {
    this.keyPressed[evt.code] = true;
  }
})
Sample usage:
<a-entity camera kbd-look-controls="speed: 2.5" position="0 1 0"></a-entity>
This is one approach to achieving this functionality.
Using the wasd-controls component alongside it can be undesirable, since wasd-controls also listens to the arrow keys.
It also doesn't work well with the look-controls component, since look-controls adjusts the rotation as well.
I want to be able to move around my VR world by looking in a direction and holding down the Cardboard button to move. I have tried for two hours and couldn't figure it out. I really don't want to use teleportation as my solution.
I'd throw this into an A-Frame component and use the three.js API:
In init, track whether the mouse (Cardboard button) is up or down.
In tick, extract the rotation as a world matrix using extractRotation(mesh.matrix), apply it to a forward vector using direction.applyMatrix4(matrix), and add it to the current camera position.
AFRAME.registerComponent("foo", {
  init: function() {
    this.mouseDown = false
    this.el.addEventListener("mousedown", (e) => {
      this.mouseDown = true
    })
    this.el.addEventListener("mouseup", (e) => {
      this.mouseDown = false
    })
  },
  tick: function() {
    if (this.mouseDown) {
      let pos = this.el.getAttribute("position")
      let mesh = this.el.object3D
      var matrix = new THREE.Matrix4();
      var direction = new THREE.Vector3(0, 0, -0.1);
      matrix.extractRotation(mesh.matrix);
      direction.applyMatrix4(matrix)
      direction.add(new THREE.Vector3(pos.x, pos.y, pos.z))
      this.el.setAttribute("position", direction)
    }
  }
})
Working fiddle here.
The recent v0.3.0 blog post mentions that WebVR 1.0 support allows "us to have different content on the desktop display than the headset, opening the door for asynchronous gameplay and spectator modes." This is precisely what I'm trying to get working. I'm looking to have one camera in the scene represent the viewpoint of the HMD and a secondary camera represent a spectator of the same scene, and render that view to a canvas on the same webpage. 0.3.0 removes the ability to render a-scene to a specific canvas in favor of the embedded component. Any thoughts on how to accomplish two cameras rendering a single scene simultaneously?
My intention is to have the desktop display show what a user is doing from a different perspective. My end goal is to be able to build a mixed-reality green-screen component.
While there may be a better or cleaner way to do this in the future, I was able to get a second camera rendering by looking at examples of how this is done in the THREE.js world.
I add a component called spectator to a non-active camera. In the init function I set up a new renderer and attach it to a div outside the scene to create a new canvas. I then call the render method inside the tick() part of the lifecycle.
I have not worked out how to isolate the movement of this camera yet. The default look controls of the 0.3.0 A-Frame scene still control both cameras.
Source code:
https://gist.github.com/derickson/334a48eb1f53f6891c59a2c137c180fa
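The gist has the full code; below is only a minimal sketch of the idea described above (the #spectatorDiv container, the canvas size, and the camera offset are assumptions, not the gist's exact values). It renders the same scene graph a second time into a separate canvas via its own renderer, at the cost of a second WebGL context.
// Sketch: render the scene from a second, independent camera into its own canvas.
AFRAME.registerComponent('spectator', {
  init: function () {
    var sceneEl = this.el.sceneEl;
    // Secondary THREE.js camera, positioned independently of the HMD camera.
    this.camera = new THREE.PerspectiveCamera(45, 4 / 3, 0.1, 1000);
    this.camera.position.set(0, 2, 5);                // assumed spectator viewpoint
    this.camera.lookAt(new THREE.Vector3(0, 1, 0));
    sceneEl.object3D.add(this.camera);
    // Separate renderer attached to a div outside the <a-scene>.
    this.renderer = new THREE.WebGLRenderer({antialias: true});
    this.renderer.setSize(640, 480);
    document.querySelector('#spectatorDiv').appendChild(this.renderer.domElement);
  },
  tick: function () {
    // Render the same scene graph from the spectator camera every frame.
    this.renderer.render(this.el.sceneEl.object3D, this.camera);
  }
});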
I've created a set of components that can help with this. https://github.com/diarmidmackenzie/aframe-multi-camera
Here's an example showing usage with A-Frame 1.2.0 to display the main camera on the left half of the screen, and a secondary camera on the right half.
<!DOCTYPE html>
<html>
  <head>
    <script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
    <script src="https://cdn.jsdelivr.net/gh/diarmidmackenzie/aframe-multi-camera#latest/src/multi-camera.min.js"></script>
  </head>
  <body>
    <div>
      <a-scene>
        <a-entity camera look-controls wasd-controls position="0 1.6 0">
          <!-- first secondary camera is a child of the main camera, so that it always has the same position / rotation -->
          <!-- replace main camera (since main camera is rendered across the whole screen, which we don't want) -->
          <a-entity
            id="camera1"
            secondary-camera="outputElement:#viewport1;sequence: replace"
          >
          </a-entity>
        </a-entity>
        <!-- PUT YOUR SCENE CONTENT HERE -->
        <!-- position of 2nd secondary camera -->
        <a-entity
          id="camera2"
          secondary-camera="outputElement:#viewport2"
          position="8 1.6 -6"
          rotation="0 90 0"
        >
        </a-entity>
      </a-scene>
    </div>
    <!-- standard HTML to control layout of the two viewports -->
    <div style="width: 100%; height:100%; display: flex">
      <div id="viewport1" style="width: 50%; height:100%"></div>
      <div id="viewport2" style="width: 50%; height:100%"></div>
    </div>
  </body>
</html>
Also here as a glitch: https://glitch.com/edit/#!/recondite-polar-hyssop
It's also been suggested that I post the entire source code for the multi-camera component here.
Here it is...
/* System that supports capture of the main A-Frame render() call
   by add-render-call */
AFRAME.registerSystem('add-render-call', {
  init() {
    this.render = this.render.bind(this);
    this.originalRender = this.el.sceneEl.renderer.render;
    this.el.sceneEl.renderer.render = this.render;
    this.el.sceneEl.renderer.autoClear = false;
    this.preRenderCalls = [];
    this.postRenderCalls = [];
    this.suppresssDefaultRenderCount = 0;
  },
  addPreRenderCall(render) {
    this.preRenderCalls.push(render)
  },
  removePreRenderCall(render) {
    const index = this.preRenderCalls.indexOf(render);
    if (index > -1) {
      this.preRenderCalls.splice(index, 1);
    }
  },
  addPostRenderCall(render) {
    this.postRenderCalls.push(render)
  },
  removePostRenderCall(render) {
    const index = this.postRenderCalls.indexOf(render);
    if (index > -1) {
      this.postRenderCalls.splice(index, 1);
    }
    else {
      console.warn("Unexpected failure to remove render call")
    }
  },
  suppressOriginalRender() {
    this.suppresssDefaultRenderCount++;
  },
  unsuppressOriginalRender() {
    this.suppresssDefaultRenderCount--;
    if (this.suppresssDefaultRenderCount < 0) {
      console.warn("Unexpected unsuppression of original render")
      this.suppresssDefaultRenderCount = 0;
    }
  },
  render(scene, camera) {
    renderer = this.el.sceneEl.renderer
    // set up THREE.js stats to correctly count across all render calls.
    renderer.info.autoReset = false;
    renderer.info.reset();
    this.preRenderCalls.forEach((f) => f());
    if (this.suppresssDefaultRenderCount <= 0) {
      this.originalRender.call(renderer, scene, camera)
    }
    this.postRenderCalls.forEach((f) => f());
  }
});
/* Component that captures the main A-Frame render() call
   and adds an additional render call.
   Must specify an entity and component that exposes a render() function. */
AFRAME.registerComponent('add-render-call', {
  multiple: true,
  schema: {
    entity: {type: 'selector'},
    componentName: {type: 'string'},
    sequence: {type: 'string', oneOf: ['before', 'after', 'replace'], default: 'after'}
  },
  init() {
    this.invokeRender = this.invokeRender.bind(this);
  },
  update(oldData) {
    // first clean up any old settings.
    this.removeSettings(oldData)
    // now add new settings.
    if (this.data.sequence === "before") {
      this.system.addPreRenderCall(this.invokeRender)
    }
    if (this.data.sequence === "replace") {
      this.system.suppressOriginalRender()
    }
    if (this.data.sequence === "after" ||
        this.data.sequence === "replace")
    {
      this.system.addPostRenderCall(this.invokeRender)
    }
  },
  remove() {
    this.removeSettings(this.data)
  },
  removeSettings(data) {
    if (data.sequence === "before") {
      this.system.removePreRenderCall(this.invokeRender)
    }
    if (data.sequence === "replace") {
      this.system.unsuppressOriginalRender()
    }
    if (data.sequence === "after" ||
        data.sequence === "replace")
    {
      this.system.removePostRenderCall(this.invokeRender)
    }
  },
  invokeRender()
  {
    const componentName = this.data.componentName;
    if ((this.data.entity) &&
        (this.data.entity.components[componentName])) {
      this.data.entity.components[componentName].render(this.el.sceneEl.renderer, this.system.originalRender);
    }
  }
});
/* Component to set layers via HTML attribute. */
AFRAME.registerComponent('layers', {
  schema : {type: 'number', default: 0},
  init: function() {
    setObjectLayer = function(object, layer) {
      if (!object.el ||
          !object.el.hasAttribute('keep-default-layer')) {
        object.layers.set(layer);
      }
      object.children.forEach(o => setObjectLayer(o, layer));
    }
    this.el.addEventListener("loaded", () => {
      setObjectLayer(this.el.object3D, this.data);
    });
    if (this.el.hasAttribute('text')) {
      this.el.addEventListener("textfontset", () => {
        setObjectLayer(this.el.object3D, this.data);
      });
    }
  }
});
/* This component has code in common with viewpoint-selector-renderer.
   However it's a completely generic stripped-down version, which
   just delivers the 2nd camera function.
   i.e. it is missing:
   - The positioning of the viewpoint-selector entity.
   - The cursor / raycaster elements.
*/
AFRAME.registerComponent('secondary-camera', {
  schema: {
    output: {type: 'string', oneOf: ['screen', 'plane'], default: 'screen'},
    outputElement: {type: 'selector'},
    cameraType: {type: 'string', oneOf: ['perspective, orthographic'], default: 'perspective'},
    sequence: {type: 'string', oneOf: ['before', 'after', 'replace'], default: 'after'},
    quality: {type: 'string', oneOf: ['high, low'], default: 'high'}
  },
  init() {
    if (!this.el.id) {
      console.error("No id specified on entity. secondary-camera only works on entities with an id")
    }
    this.savedViewport = new THREE.Vector4();
    this.sceneInfo = this.prepareScene();
    this.activeRenderTarget = 0;
    // add the render call to the scene
    this.el.sceneEl.setAttribute(`add-render-call__${this.el.id}`,
                                 {entity: `#${this.el.id}`,
                                  componentName: "secondary-camera",
                                  sequence: this.data.sequence});
    // if there is a cursor on this entity, set it up to read this camera.
    if (this.el.hasAttribute('cursor')) {
      this.el.setAttribute("cursor", "canvas: user; camera: user");
      this.el.addEventListener('loaded', () => {
        this.el.components['raycaster'].raycaster.layers.mask = this.el.object3D.layers.mask;
        const cursor = this.el.components['cursor'];
        cursor.removeEventListeners();
        cursor.camera = this.camera;
        cursor.canvas = this.data.outputElement;
        cursor.canvasBounds = cursor.canvas.getBoundingClientRect();
        cursor.addEventListeners();
        cursor.updateMouseEventListeners();
      });
    }
    if (this.data.output === 'plane') {
      if (!this.data.outputElement.hasLoaded) {
        this.data.outputElement.addEventListener("loaded", () => {
          this.configureCameraToPlane()
        });
      } else {
        this.configureCameraToPlane()
      }
    }
  },
  configureCameraToPlane() {
    const object = this.data.outputElement.getObject3D('mesh');
    function nearestPowerOf2(n) {
      return 1 << 31 - Math.clz32(n);
    }
    // 2 * nearest power of 2 gives a nice look, but at a perf cost.
    const factor = (this.data.quality === 'high') ? 2 : 1;
    const width = factor * nearestPowerOf2(window.innerWidth * window.devicePixelRatio);
    const height = factor * nearestPowerOf2(window.innerHeight * window.devicePixelRatio);
    function newRenderTarget() {
      const target = new THREE.WebGLRenderTarget(width,
                                                 height,
                                                 {
                                                   minFilter: THREE.LinearFilter,
                                                   magFilter: THREE.LinearFilter,
                                                   stencilBuffer: false,
                                                   generateMipmaps: false
                                                 });
      return target;
    }
    // We use 2 render targets, and alternate each frame, so that we are
    // never rendering to a target that is actually in front of the camera.
    this.renderTargets = [newRenderTarget(),
                          newRenderTarget()]
    this.camera.aspect = object.geometry.parameters.width /
                         object.geometry.parameters.height;
  },
  remove() {
    this.el.sceneEl.removeAttribute(`add-render-call__${this.el.id}`);
    if (this.renderTargets) {
      this.renderTargets[0].dispose();
      this.renderTargets[1].dispose();
    }
    // "Remove" code does not tidy up adjustments made to the cursor component.
    // This is rarely necessary, as the cursor is typically put in place at the same time
    // as the secondary camera, and so will be disposed of at the same time.
  },
  prepareScene() {
    this.scene = this.el.sceneEl.object3D;
    const width = 2;
    const height = 2;
    if (this.data.cameraType === "orthographic") {
      this.camera = new THREE.OrthographicCamera( width / - 2, width / 2, height / 2, height / - 2, 1, 1000 );
    }
    else {
      this.camera = new THREE.PerspectiveCamera( 45, width / height, 1, 1000);
    }
    this.scene.add(this.camera);
    return;
  },
  render(renderer, renderFunction) {
    // don't bother rendering to screen in VR mode.
    if (this.data.output === "screen" && this.el.sceneEl.is('vr-mode')) return;
    var elemRect;
    if (this.data.output === "screen") {
      const elem = this.data.outputElement;
      // get the viewport relative position of this element
      elemRect = elem.getBoundingClientRect();
      this.camera.aspect = elemRect.width / elemRect.height;
    }
    // Camera position & layers match this entity.
    this.el.object3D.getWorldPosition(this.camera.position);
    this.el.object3D.getWorldQuaternion(this.camera.quaternion);
    this.camera.layers.mask = this.el.object3D.layers.mask;
    this.camera.updateProjectionMatrix();
    if (this.data.output === "screen") {
      // "bottom" position is relative to the whole viewport, not just the canvas.
      // We need to turn this into a distance from the bottom of the canvas.
      // We need to consider the header bar above the canvas, and the size of the canvas.
      const mainRect = renderer.domElement.getBoundingClientRect();
      renderer.getViewport(this.savedViewport);
      renderer.setViewport(elemRect.left - mainRect.left,
                           mainRect.bottom - elemRect.bottom,
                           elemRect.width,
                           elemRect.height);
      renderFunction.call(renderer, this.scene, this.camera);
      renderer.setViewport(this.savedViewport);
    }
    else {
      // target === "plane"
      // store off current renderer properties so that they can be restored.
      const currentRenderTarget = renderer.getRenderTarget();
      const currentXrEnabled = renderer.xr.enabled;
      const currentShadowAutoUpdate = renderer.shadowMap.autoUpdate;
      // temporarily override renderer properties for rendering to a texture.
      renderer.xr.enabled = false; // Avoid camera modification
      renderer.shadowMap.autoUpdate = false; // Avoid re-computing shadows
      const renderTarget = this.renderTargets[this.activeRenderTarget];
      renderTarget.texture.encoding = renderer.outputEncoding;
      renderer.setRenderTarget(renderTarget);
      renderer.state.buffers.depth.setMask( true ); // make sure the depth buffer is writable so it can be properly cleared, see #18897
      renderer.clear();
      renderFunction.call(renderer, this.scene, this.camera);
      this.data.outputElement.getObject3D('mesh').material.map = renderTarget.texture;
      // restore original renderer settings.
      renderer.setRenderTarget(currentRenderTarget);
      renderer.xr.enabled = currentXrEnabled;
      renderer.shadowMap.autoUpdate = currentShadowAutoUpdate;
      this.activeRenderTarget = 1 - this.activeRenderTarget;
    }
  }
});
I'm writing a JS app where I draw a lot of polylines from arrays of points, but every point has some additional properties (GPS data, speed, etc.).
I want to show these additional props on a mouseover or click event.
I see two ways:
use the standard polylines and an event handler. But in this case I can't determine the additional properties of the polyline's start point, because I can't save these props in the polyline's properties. One solution is to keep the additional properties in an array and look them up by the LatLng of the polyline's first point, but I guess that's too slow (a simpler alternative is sketched just after this list).
extend the polyline and save the additional properties in a new object, but then I can't extend the mouse events :(
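For reference, a minimal sketch of a simpler alternative (not what the code below does): a google.maps.Polyline is an ordinary JS object, so you can attach custom properties to it directly and read them back via this inside the listener, since a Maps event handler is invoked with this bound to the object the listener was added to. line_points, map, i and device_id here stand for the same variables used later in this question.
// Sketch: store extra data straight on the Polyline and read it via `this` in the handler.
var polyline = new google.maps.Polyline({path: line_points, map: map});
polyline.prop = {id: i, device_id: device_id};   // custom property attached to the polyline
google.maps.event.addListener(polyline, 'click', function (e) {
  // Inside a Maps event listener, `this` is the object the listener was added to.
  console.log(this.prop.id, this.prop.device_id);
});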
To extend the polyline I use this code:
function myPolyline(prop, opts) {
  this.prop = prop;
  this.Polyline = new google.maps.Polyline(opts);
}
myPolyline.prototype.setMap = function(map) {
  return this.Polyline.setMap(map);
}
myPolyline.prototype.getPath = function() {
  return this.Polyline.getPath();
}
myPolyline.prototype.addListener = function(eventName, handler) {
  // forward the event name and handler to the wrapped polyline
  return this.Polyline.addListener(eventName, handler);
}
myPolyline.prototype.getProp = function() {
  return this.prop;
}
myPolyline.prototype.setProp = function(prop) {
  return this.prop = prop;
}
and I create a new object in a for loop (i is the index of the current point in the array of points) like this:
var polyline_opts = {
  path: line_points,
  strokeColor: color,
  geodesic: true,
  strokeOpacity: 0.5,
  strokeWeight: 4,
  icons: [
    {
      icon: lineSymbol,
      offset: '25px',
      repeat: '50px'
    }
  ],
  map: map
};
var add_prop = {
  id: i,
  device_id: device_id
};
...
devices_properties[device_id].tracks[(i-1)] = new myPolyline(add_prop, polyline_opts);
Where:
line_points is the array of points (just two points),
i is the current point index,
devices_properties[device_id].tracks is the array of extended polylines (with the additional properties) for device_id.
After that I set the event handler like this:
var tmp = devices_properties[device_id].tracks[(i-1)];
google.maps.event.addListener(tmp.Polyline, 'click', function(e) {
  ...
  console.log(tmp.prop.id);
  ...
});
But in this case I always get the same id in the console.
When I use
google.maps.event.addListener(devices_properties[device_id].tracks[(i-1)].Polyline, 'click', function(e) {
  ...
  console.log(???); // How to get the parent of the polyline that fired the event?
  ...
});
I don't know how to get the parent wrapper of the polyline that fired the event.
I'll answer my own question: it's done, I just had some trouble using "for" instead of "$.each" :)
Before, I used:
for ( i = 1; i < devices_properties[device_id].history_points.length; i++ ) {
  ...
  // create myPolyline
  ...
}
and it didn't work - only one event handler was created.
After:
$.each(devices_properties[device_id].history_points, function(i, tmp) {
  ...
  // create myPolyline
  ...
});
and it works - it creates a lot of event handlers.
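The underlying reason, in case it helps: with var (or an implicit global) as the loop counter, every click handler closes over the same variable, which holds its final value by the time any handler runs; $.each gives each iteration its own function scope. A block-scoped let achieves the same with a plain for loop. This is only a sketch; points and opts are placeholders, not the app's real variables.
// Sketch: block-scoped `let` gives each handler its own copy of the loop state.
for (let i = 1; i < points.length; i++) {
  const polyline = new myPolyline({id: i}, opts);
  google.maps.event.addListener(polyline.Polyline, 'click', function () {
    console.log(polyline.getProp().id);   // logs the id captured for THIS iteration
  });
}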
To handle the event I use this:
google.maps.event.addListener(c_polyline.Polyline, 'mouseover', function(e) {
  var prop = c_polyline.getProp();
  ...
  console.log(prop.id, prop.device_id);
});
I use the GMAP3 plugin to render driving directions, and I would like to add a clear button so the route can be cleared, but I haven't been able to find the right syntax in GMAP3. Here is my JS code, modified from the sample at gmap3.net. Markers are already plotted, and the lat/lng values are retrieved from the plotted markers instead of from click positions on the map.
function removePath() {
  $(mapID).gmap3({
    action: 'clear',
    name: 'directionRenderer'
    // tag: 'path' // works too with tag instead of name
  });
};

function updatePath() {
  $(mapID).gmap3({
    action: 'getRoute',
    options: {
      origin: m1.getPosition(),
      destination: m2.getPosition(),
      travelMode: google.maps.DirectionsTravelMode.DRIVING
    },
    callback: function (results) {
      if (!results) return;
      $(mapID).gmap3({
        action: 'setDirections',
        directions: results
      });
    }
  });
};

function updateDirection(mm) { // Directions between m1 and m2
  var mmID = $(mm).prop('id');
  ...
  if (mmID == 'clearDirection') {
    ...
    removePath();
    return;
  };
  ...
  if (m1 && m2) { updatePath(); };
};

function initmap() {
  $(mapID).gmap3(
    {
      action: 'init',
      options: defaultMapOptions
    },
    // add direction renderer to configure options (else, automatically created with default options)
    { action: 'addDirectionsRenderer',
      preserveViewport: true,
      markerOptions: { visible: false },
      options: {draggable: true},
      tag: 'path'
    },
    // add a direction panel
    { action: 'setDirectionsPanel',
      id: 'directions'
    }
  );
};
A <div> is in place in the HTML document as the directions panel. It has a wrapper which is hidden when the route is cleared, by changing its CSS via jQuery. The wrapper div's display property is changed back to 'block' whenever a value is assigned to either m1 or m2 (a sketch of this follows the markup below).
<body>
  ...
  <div id="direction_container" class="shadowSE">
    ....
    <div id="directions"></div>
    ....
  </div>
</body>
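A rough sketch of that show/hide behaviour, assuming the #direction_container wrapper from the markup above; the exact call sites are an assumption and would live in the route-handling functions shown earlier.
// Sketch: toggle the directions wrapper with jQuery.
function hideDirectionsPanel() {
  $('#direction_container').css('display', 'none');   // called after clearing the route
}
function showDirectionsPanel() {
  $('#direction_container').css('display', 'block');  // called whenever m1 or m2 is assigned
}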
It's working absolutely fine:
$map.gmap3({ action: 'clear', name: 'directionRenderer' });
Instructions:
If you later draw the route again, you must use the code below, otherwise the directions are not displayed.
$map.gmap3({ action: 'addDirectionsRenderer', preserveViewport: true,
    markerOptions: { visible: false } },
  { action: 'setDirectionsPanel', id: 'directions' });
Thanks...
Use this:
$(mapID).gmap3({action:"clear", name:"directionRenderer"});
The chosen answer above didn't work for me. I'm unsure if it's version related, but the solution I'm using is simpler:
$(your-selector).gmap3({clear: {}});
Afterwards, you can draw a new route without reconnecting the directions renderer with the map.