Cluster multiple edges between two nodes - vis.js

I'm using vis.js for network visualization.
My idea is to develop a solution like Google Maps zoom, in the sense that it clusters edges and nodes when zoomed out.
Besides clustering nodes, I want to cluster multiple edges between the same two nodes.
Like a node cluster, when a clustered edge is zoomed in on or clicked, I want to expand it and show all the individual edges with more information.
I haven't found an answer in the vis.js clustering documentation, issues, or questions. Is this feature available?

As far as I understand, the term "clustering" is applicable to nodes only in vis.js' vocabulary. However, what you can do is to hide edges.
You have to set an on-click handler where you grab the selected edge, get its from and to nodes (you have to decide what to do if more than one edge is selected), find the edges connecting them, and hide all but one.
network.on('click', function (eventParams) {
  var edgeIds = eventParams.edges; // vis.js reports the IDs of the clicked edges, not the edge objects
  if (edgeIds.length === 0)
    return;
  // look the edge up in the edges DataSet the network was created with (called edgesDataSet here)
  var edge = edgesDataSet.get(edgeIds[0]),
      fromId = edge.from,
      toId = edge.to;
  // get the nodes by ids, find all edges connecting them, hide all but the selected one
});
To take it further, toward toggling, check whether any of the connecting edges are hidden: show all of them if at least one is hidden, or hide all but one otherwise.
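A rough sketch of that toggle, assuming edgesDataSet is the vis.DataSet of edges the network was created with:

function toggleParallelEdges(edgesDataSet, selectedEdgeId) {
  var selected = edgesDataSet.get(selectedEdgeId);
  // find every edge connecting the same pair of nodes, in either direction
  var parallel = edgesDataSet.get({
    filter: function (e) {
      return (e.from === selected.from && e.to === selected.to) ||
             (e.from === selected.to && e.to === selected.from);
    }
  });
  var anyHidden = parallel.some(function (e) { return e.hidden; });
  parallel.forEach(function (e) {
    // if any connecting edge is hidden, show them all; otherwise hide all but the selected one
    var hide = anyHidden ? false : (e.id !== selectedEdgeId);
    edgesDataSet.update({ id: e.id, hidden: hide });
  });
}

You could call this from the click handler above with the selected edge ID.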

Related

HERE map: does a marker belong to a cluster?

Maybe someone knows if it's possible to get the list of clustered markers, or to tell whether a marker is inside a cluster or not. The issue is that currently each marker on my map has an associated polyline as well, but when two or more markers get clustered I want to hide the polylines for those markers. Would that be possible somehow?
One way to implement this would be to add the markers to an H.map.Group() while rendering them; then you would be able to find out whether a marker is part of the group, as well as the proximity of two markers. Another way is to look at this marker clustering example to combine the markers when they are nearby.
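Another angle, only a rough sketch: if you use HERE's clustering with a custom theme (as in their clustering example), the theme callbacks receive the cluster and noise points, so you could hide or show the matching polylines there. polylineFor() is a hypothetical lookup from your original data item to its polyline, not part of the API:

var theme = {
  getClusterPresentation: function (cluster) {
    // every data point folded into this cluster is hidden behind the cluster marker,
    // so hide the polyline that belongs to it
    cluster.forEachDataPoint(function (dataPoint) {
      polylineFor(dataPoint.getData()).setVisibility(false);
    });
    return new H.map.Marker(cluster.getPosition(), {
      min: cluster.getMinZoom(),
      max: cluster.getMaxZoom()
    });
  },
  getNoisePresentation: function (noisePoint) {
    // a noise point is rendered on its own, so its polyline can stay visible
    polylineFor(noisePoint.getData()).setVisibility(true);
    return new H.map.Marker(noisePoint.getPosition(), {
      min: noisePoint.getMinZoom()
    });
  }
};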

How to display duplicate markers in a cluster (HERE map)

I am using HERE map and I use clustering, but I have a problem displaying markers whose coordinates are the same (duplicates). When I zoom into the cluster, unfortunately the markers are not visible but the cluster is still visible. How can I display these markers when the cluster is zoomed in?
My clustering options are as follows:
var clusteredDataProvider = new H.clustering.Provider(dataPoints, {
  clusteringOptions: {
    eps: 16,
    minWeight: 2
  },
  theme: new PusulaClusterTheme()
});
We had the same issue. If you absolutely need the markers to show at the exact same spot, down to the inch so to speak, then I don't know what you could do. We wanted to show markers for each house on a street, and sometimes we had multiple families in a house, so we could not get the multiple markers to show properly.
We opened a ticket with HERE and this was their reply:
"...when you are placing multiple markers at the same geo-point, it is
just that they are stacked one on top of the other. Since they are all
at the same coordinate only the top most one will be displayed. So
to enable multiple marker to be shown at the same coordinate, you will
need to have some logic to avoid overlapping of markers. There is no
method straight off the shelf in JS API that can do this for you, but maybe
you can use the method map.getObjectsat(X,Y) to check if there are
already any markers at the point. If there is an existing one, then use
some logic to slightly change the coordinate value of the new marker
to be added at the point.
We ended up copying a solution we found here on Stack Overflow (see this link) that was written for Google Maps but is just as relevant with HERE. It uses a function to randomly change the last digit or two of the coordinates when there are duplicates, so that all your duplicate coordinates become a little bit unique and spaced out.
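A minimal sketch of that jitter approach; the data format (plain {lat, lng} objects) and the offset size are just assumptions:

function jitterDuplicates(points) {
  var seen = {};
  return points.map(function (p) {
    var key = p.lat.toFixed(6) + ',' + p.lng.toFixed(6);
    if (seen[key]) {
      // nudge duplicates by roughly +/- 0.00005 degrees, a few metres at street level
      return {
        lat: p.lat + (Math.random() - 0.5) * 0.0001,
        lng: p.lng + (Math.random() - 0.5) * 0.0001
      };
    }
    seen[key] = true;
    return p;
  });
}

Run your coordinates through something like this before building the data points for the clustering provider.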

Create directions on a map based on custom data

So what I'm trying to do is the following:
Have a map (such as Google Maps or questMaps). It doesn't matter at all which API I need to use.
On that map have an overlay on the streets. So say (for example) a street has bad lighting at night, it will be colored red. If it has good lighting it will have a green overlay.
Based on the overlay, the map creates a custom route (for example, the user only wants to walk on the green/well-lit streets).
I have no idea how to accomplish this (especially step 3).
First, you'll have to decide what data you need. How do you categorize certain streets as lit or unlit? What if some parts of a street are well lit and some have no lights? Do you need to know the location of every streetlight in your area? What if lights burn out?
After figuring out what data you need, you need to build your dataset. I'd be VERY surprised if this data already exists, so you will probably need to gather it yourself. Either go around town and take notes, or crowdsource the project, or figure out some other way.
Once you have gathered your data, learn the drawing API of whatever mapping tool you wish to use. They all should have functions in their API for drawing colored lines (for streets) or points (for streetlights) on top of an existing map.
Finally, learn the navigational API of the mapping tool you chose. You're right, this is a hard step. I know Google Maps lets you specify certain waypoints when requesting directions; maybe your app can calculate well-lit waypoints and feed them to Google Maps' Directions service to influence the route it generates.
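If you go the waypoint route with Google Maps, a rough sketch of feeding pre-computed well-lit points to the Directions service could look like this (startAddress, endAddress, wellLitPoints and directionsRenderer are all assumed to exist already; note the API limits how many waypoints a single request may contain):

var directionsService = new google.maps.DirectionsService();
directionsService.route({
  origin: startAddress,
  destination: endAddress,
  // each well-lit point becomes a non-stopover waypoint that nudges the route
  waypoints: wellLitPoints.map(function (latLng) {
    return { location: latLng, stopover: false };
  }),
  travelMode: google.maps.TravelMode.WALKING
}, function (result, status) {
  if (status === google.maps.DirectionsStatus.OK) {
    directionsRenderer.setDirections(result); // draw the suggested route on the map
  }
});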
Good luck!
For custom routing, you need to read up on "Graph Theory". This ignores the geography of the street map, and considers it as a set of junctions (nodes or vertices in the graph theory jargon) connected by edges. You can assign weights to edges - these could be lengths, travel times, ones and zeroes etc. Anything. They can have no relation to the position on the map.
So for your application, you'd assign a large weight to unlit streets, and a small weight to lit streets, then use a standard minimum-weight algorithm to get a route from one node to another.
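To make that concrete, here is a small illustrative example; the street segments, lengths, and the penalty factor are made up:

// penalise unlit street segments so a shortest-path search avoids them
var edges = [
  { from: 'A', to: 'B', length: 120, lit: true },
  { from: 'B', to: 'C', length: 80,  lit: false },
  { from: 'A', to: 'C', length: 260, lit: true }
];
var UNLIT_PENALTY = 10; // tune: how strongly to avoid dark streets
edges.forEach(function (e) {
  e.weight = e.length * (e.lit ? 1 : UNLIT_PENALTY);
});
// feed the weighted edges to any minimum-weight path algorithm (Dijkstra, A*, ...)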

How to map 2D coordinates from store image to the actual shelves of the store?

We need to build a model of the shop floor in which we can relate pixel coordinates (x, y) from camera images to the actual objects in the 3D space of the store. The camera images, which will act as sources for generating such a model, suffer from fish-eye distortion. Hence straight lines actually appear as curves in the camera images and the walls appear to meet each other at not exactly right angles.
We are sub-dividing the region into polygons. Each polygon on the image refers to a particular region such as a shelf, display area, checkout counter etc. By mapping the pixels that fall in each polygon, we want to relate it as belonging to the shelf corresponding to that region.
Any ideas how to go about it?
Following is a sample image of the store with some polygons marked:
EDIT:
We are not looking to find out the 3D coordinates; we just need to know which shelf each polygon is mapped to, so that when the user clicks on a polygon we can say which shelf was clicked.
We are able to manage the above for big polygons like the ones shown in the image, but the shelves away from the camera can be as small as a few pixels, so we need some kind of probabilistic result: if the user clicked at (x, y), what is the probability that he was trying to click on Shelf-A, what is the probability for Shelf-B, and so on.
Basically, what we are looking for is a probability function which would return the probabilities of a click landing on nearby objects when a small polygon (or a pixel) is clicked on the 2D image.
EDIT2:
One thing which is not apparent from the sample image is that the polygons could be really small (as small as a few pixels) and could be really close to each other.
Moreover, the use case is that a customer in the store picks a product from one of the shelves. The application user then clicks on the point in the image from which he thinks the product was picked up. Since the polygons are so small and so close together, the user can only guess the exact point of pickup, so at best we know it could be any one of the 3-4 polygons close to the click. So the question is: how do we calculate probabilities for these 3-4 polygons given the click?
As suggested here, the distance of the click from the centre of a polygon and the polygon's area could be parameters in calculating this probability; what I am wondering is whether there is an algorithm for it.
We are not looking to find out the 3D coordinates; we just need to know which shelf each polygon is mapped to, so that when the user clicks on a polygon we can say which shelf was clicked.
I assume you have a mapping from polygon to shelf name, for example as a list of (polygon, shelf name) pairs. You can make it by hand once, if the cameras are fixed and don't move. Then your problem is only finding which polygon a point belongs to.
If you use OpenCV, you can use its pointPolygonTest function. Otherwise you may write a similar function yourself; see, for example, the ray casting algorithm. Then look through the list until you find a polygon within which the point lies.
To further optimize the program you may precalculate the polygons' extents. An extent (bounding box) allows you to quickly tell that a point is definitely not inside a polygon, so you only consider the remaining polygons. But with as few polygons as you have in the image, I would not bother.
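For reference, a minimal ray-casting point-in-polygon test, independent of OpenCV; the polygon format (an array of {x, y} vertices) and the shelfName field are just assumptions for the sketch:

function pointInPolygon(point, vertices) {
  var inside = false;
  for (var i = 0, j = vertices.length - 1; i < vertices.length; j = i++) {
    var xi = vertices[i].x, yi = vertices[i].y;
    var xj = vertices[j].x, yj = vertices[j].y;
    // does a horizontal ray from the point cross this polygon edge?
    var crosses = ((yi > point.y) !== (yj > point.y)) &&
        (point.x < (xj - xi) * (point.y - yi) / (yj - yi) + xi);
    if (crosses) inside = !inside;
  }
  return inside;
}

// look up the shelf: `polygons` is assumed to be a list of { vertices, shelfName } pairs
function shelfAt(point, polygons) {
  var hit = polygons.find(function (p) { return pointInPolygon(point, p.vertices); });
  return hit ? hit.shelfName : null;
}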
Basically, what we are looking for is a probability function which would return the probabilities of a click landing on nearby objects when a small polygon (or a pixel) is clicked on the 2D image.
Just run an experiment: ask the operator to click on a single highlighted pixel and accumulate some statistics on where they actually click. Once you have this, it's easy to predict the number of out-of-object clicks and how far off they are likely to be.
Without such an experiment, with exactly the same kind of person, the same usage conditions and the same pointing device you are going to use, you cannot really tell how far off the clicks are going to be. I believe that many people are sniper clickers if the mouse is good and they can see the image well. If they are forced to use a touch interface or some other pointing device, the precision may be lower.
A few comments:
fish-eye distortion can be corrected by applying some transformations to the image; see for example this page for some resources, including panotools
to get the 3D coordinates, an image from one camera alone is not enough; additional info is necessary
marking the same point on two images of the same scene from different cameras can give you full 3D info (you do need to know the position of each camera relative to the other)
if you are looking for tools to do it, see https://superuser.com/questions/30053/is-there-any-free-open-source-software-that-converts-photos-to-3d-models
EDIT
After the update to the question, assuming there already exists a set of polygons and you want to eliminate user errors (or improve precision), you might:
try to guess the desired polygon by calculating the distance from the click to the centre of weight (centroid) of the polygons close to the click
use visual cues (flash the selected polygon and require a second click)
collect statistics on errors and require validation for certain polygons
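A rough sketch of the first suggestion above, assuming each polygon already carries a precomputed centroid and a shelfName; the inverse-square weighting is only illustrative:

function clickProbabilities(click, polygons, radius) {
  // keep only polygons whose centroid is within `radius` pixels of the click
  var candidates = polygons
    .map(function (p) {
      var dx = p.centroid.x - click.x, dy = p.centroid.y - click.y;
      return { shelf: p.shelfName, dist: Math.sqrt(dx * dx + dy * dy) };
    })
    .filter(function (c) { return c.dist <= radius; });
  // closer centroids get larger weights; normalise so the probabilities sum to 1
  var weights = candidates.map(function (c) { return 1 / (1 + c.dist * c.dist); });
  var total = weights.reduce(function (a, b) { return a + b; }, 0);
  return candidates.map(function (c, i) {
    return { shelf: c.shelf, probability: weights[i] / total };
  });
}

The statistics from the click experiment mentioned above could replace the inverse-square weights with an empirically measured error distribution.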
What you want is a space-filling curve, for example a Z-curve or a Hilbert curve. A space-filling curve subdivides the plane into smaller tiles and reduces the complexity of two dimensions to one dimension in a way that gives each tile a position in a new order. What might be interesting for your problem is that the Hilbert curve traverses the plane not in binary order but using a Gray code, so that adjacent tiles differ in only one bit. That makes it easier to decide whether the user clicked on this or that object.
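For illustration, a Z-order (Morton) index can be computed by interleaving the bits of a tile's x and y coordinates; this sketch handles coordinates up to 15 bits to stay within a 32-bit integer:

function mortonIndex(x, y) {
  var index = 0;
  for (var bit = 0; bit < 15; bit++) {
    index |= ((x >> bit) & 1) << (2 * bit);       // even bit positions take x
    index |= ((y >> bit) & 1) << (2 * bit + 1);   // odd bit positions take y
  }
  return index;
}

mortonIndex(3, 5); // 39 (binary 100111: y = 101 and x = 011 interleaved)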

Custom Map With Directions

I want to make a map program that gives directions around a campus (residence halls, football field, etc), and within buildings (to offices, cafeteria, etc). Is there anything existing that would help facilitate that?
The alternative seems to be that I would have to create my own map of points and paths around campus and do path-finding for directions.
EDIT: To clarify, I want to know how to add spatial awareness to a pathfinding program, in order to generate walking directions along the path. Example: for a hallway full of offices that has two nodes where a path can enter the hallway, how do you know whether a certain office is on the left when coming from one node and on the right when coming from the other?
If I use polygons for the nodes instead of waypoints, I can create a navigation mesh that can be used for pathfinding and directions. For directions, using a rectangular node: if I number the rectangle's sides from 1 to 4 going clockwise from the top, I know that if I enter through side 2 and leave through side 1, it's a right-hand turn. Or, if I enter through side 3 (the bottom) and leave through side 4, it's a left.
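One way to sketch that side-numbering idea (sides 1 to 4 clockwise from the top; the heading tables are just one possible encoding, not a standard):

// direction of travel when entering through a side (compass degrees, north = 0)
var ENTRY_HEADING = { 1: 180, 2: 270, 3: 0, 4: 90 };
// direction of travel when leaving through a side
var EXIT_HEADING = { 1: 0, 2: 90, 3: 180, 4: 270 };

function turnDirection(entrySide, exitSide) {
  var delta = (EXIT_HEADING[exitSide] - ENTRY_HEADING[entrySide] + 360) % 360;
  if (delta === 0) return 'straight';
  if (delta === 90) return 'right';
  if (delta === 270) return 'left';
  return 'u-turn';
}

turnDirection(2, 1); // 'right' (enter the right side, leave the top)
turnDirection(3, 4); // 'left'  (enter the bottom, leave the left side)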
This is pretty hard to answer without knowing what sort of interface you want. Is it supposed to be a Google Maps-type application? Or something simpler? No matter what you're probably going to have to define paths - what things are impassable.
You could do a lot of work and define what's impassable and then use a path-finding algorithm to walk across lawns; but that'd be more work than the simple approach:
Make a map of campus with all the routes greyed out
Define the points and paths in PHP/Perl/Ruby/Python/Coldfusion/ASP.Net/Whatever
Get the Start and Destination from the user
Run Dijkstra's algorithm (see the sketch after this list)
Display the map of campus with overlays highlighting the route segments, to light up the path.
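A minimal Dijkstra sketch over a plain adjacency list; the node names and the toy campus graph in the usage example are made up:

function dijkstra(graph, start, goal) {
  var dist = {}, prev = {}, visited = {};
  Object.keys(graph).forEach(function (n) { dist[n] = Infinity; });
  dist[start] = 0;

  for (;;) {
    // pick the unvisited node with the smallest known distance
    var current = null;
    Object.keys(dist).forEach(function (n) {
      if (!visited[n] && (current === null || dist[n] < dist[current])) current = n;
    });
    if (current === null || current === goal || dist[current] === Infinity) break;
    visited[current] = true;

    // relax every edge leaving the current node
    graph[current].forEach(function (edge) {
      var alt = dist[current] + edge.weight;
      if (alt < dist[edge.to]) {
        dist[edge.to] = alt;
        prev[edge.to] = current;
      }
    });
  }

  // rebuild the path by walking predecessors backwards from the goal
  var path = [], node = goal;
  while (node !== undefined) {
    path.unshift(node);
    node = prev[node];
  }
  return path[0] === start ? path : null;
}

var campus = {
  Dorm: [{ to: 'Quad', weight: 2 }, { to: 'Library', weight: 5 }],
  Quad: [{ to: 'Library', weight: 1 }, { to: 'Stadium', weight: 4 }],
  Library: [{ to: 'Stadium', weight: 2 }],
  Stadium: []
};
dijkstra(campus, 'Dorm', 'Stadium'); // ['Dorm', 'Quad', 'Library', 'Stadium']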
