How to graphically connect nodes by drawing on the canvas and track the new connection via events in vis.js?

E.g. I may want to connect an "Audi" node with an existing "Cars" node to make a connection between the two. This should fire an event so that the underlying data source can be updated.

vis.js provides some good examples on its site. Check the link below and you should find everything you need.
Example: http://visjs.org/examples/network/other/manipulation.html
The important element is the "manipulation" configuration of the network with its callbacks for addNode, editNode, etc.
Documentation: http://visjs.org/docs/network/manipulation.html
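For instance, a minimal TypeScript sketch of that configuration, assuming the vis-network package and a container element with id "mynetwork"; the saveConnection helper is hypothetical and stands in for whatever call updates your data source:

import { DataSet, Network } from "vis-network/standalone";

// Hypothetical stand-in for whatever persists the new connection
// (REST call, store dispatch, etc.).
declare function saveConnection(from: unknown, to: unknown): void;

const nodes = new DataSet([
  { id: 1, label: "Cars" },
  { id: 2, label: "Audi" },
]);
const edges = new DataSet([]);
const container = document.getElementById("mynetwork") as HTMLElement;

new Network(container, { nodes, edges }, {
  manipulation: {
    enabled: true, // shows the edit toolbar, including "Add Edge"
    // Called when the user finishes drawing an edge on the canvas,
    // e.g. dragging from "Audi" to "Cars".
    addEdge: (edgeData: any, callback: (edge?: any) => void) => {
      callback(edgeData);                         // confirm so vis.js adds the edge
      saveConnection(edgeData.from, edgeData.to); // update the underlying data source
    },
  },
});

Calling callback(edgeData) confirms the edge so vis.js draws it; not calling it cancels the user's edit.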

How to know if a user clicked a link from their network traffic

I have large traffic files that I'm trying to analyze in order to get statistical features of users.
One of the features I would like to extract is link clicks on specific sites (for example, clicks on pop-ups and the like).
My first idea was to look in the packets' content and search for hrefs and links, save them all in some data structure with their timestamps, and then iterate over the packets again to search for requests at any time close to the time the links appeared.
Something like the following pseudocode (the packets are grouped into flows, where a flow is IP1 <=> IP2):
for each packet in each flow:
    search for "href" or "http://" or "https://"
    save the links with their timestamps
for each packet in each flow:
    if it's an HTTP request, its URL matches any URL in the list,
    and the time is close enough, record it
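A minimal TypeScript sketch of that two-pass idea, assuming the capture has already been parsed into per-flow HTTP records; the HttpRecord shape and the WINDOW_MS threshold are assumptions rather than anything from the question:

interface HttpRecord {
  timestampMs: number;   // packet timestamp
  payload: string;       // decoded packet/response text
  isRequest: boolean;
  url?: string;          // absolute URL, set for requests
}

const WINDOW_MS = 5000;  // assumed "close enough" window

function findLikelyClicks(flows: HttpRecord[][]): HttpRecord[] {
  // Pass 1: collect every link seen in payloads, with when it appeared.
  const seen: { url: string; t: number }[] = [];
  const linkPattern = /href="([^"]+)"|https?:\/\/[^\s"'<>]+/g;
  for (const flow of flows) {
    for (const pkt of flow) {
      for (const m of pkt.payload.matchAll(linkPattern)) {
        seen.push({ url: m[1] ?? m[0], t: pkt.timestampMs });
      }
    }
  }
  // Pass 2: a request whose URL appeared shortly before it counts as a click.
  return flows.flat().filter(pkt =>
    pkt.isRequest && pkt.url !== undefined &&
    seen.some(l => l.url === pkt.url &&
      pkt.timestampMs - l.t >= 0 && pkt.timestampMs - l.t <= WINDOW_MS));
}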
The problem with this approach is that some links are generated dynamically while the page is loading (by JavaScript, for instance) and cannot be found using the above method.
I have also tried checking the Referer field in the HTTP header and looking for packets that were referred by the relevant sites. This method generates a lot of false positives because of iframes and embedded objects.
It is important to mention that this is not my server, and my intention is to build a tool for statistical analysis of users' behavior (so I can't add any kind of click tracker to the sites).
Does anyone have an idea of what I can do to determine whether users clicked on links, based on their network traffic?
Any help will be appreciated!
Thank you

No custom metrics being logged?

I am logging custom metrics using TrackMetric:
var telemetry = new TelemetryClient();
telemetry.TrackMetric("Cache Size", cache.Count());
But nothing appears in the portal. The output window when debugging shows the metrics being sent, and I'm not sure how else to debug this.
There can be several things that make new items show up later than you'd like:
- latency in the AI pipeline itself, which is usually just a couple of minutes or less (you can always check http://aka.ms/aistatus to see whether there is any abnormal latency going on)
- if you added a new custom property or new custom metric, it might take time for that new field to show up in the metadata that the charts etc. use to build themselves. Depending on timing, especially if this is a brand-new app, it can take up to ~15 minutes for a new property to show up in metadata if the stars are all unaligned, but normally much less than that.
- once it is available in metadata, you might need to refresh in the portal if you already had Metrics Explorer open for that AI resource, so that it re-requests metadata and sees your field (the "refresh" command on Metrics Explorer or an Overview blade is normally enough; a full refresh in the browser works as a last resort)

Why is EC_DISPLAY_CHANGED sent even though a monitor change/switch didn't occur?

On the initial graph start, approximately after 10 video samples, I keep receiving the EC_DISPLAY_CHANGED event from the graph manager, even though I didn't physically move the video from one monitor to another; I only started it on the secondary monitor.
I tried to search for additional information regarding the causes that make CGraphManager send it, but couldn't find any.
I've additionally used the following code snippet to handle this particular event myself.
// Take over EC_DISPLAY_CHANGED instead of the filter graph manager's default handling
if (FAILED(hr = m_spMediaEventEx->CancelDefaultHandling(EC_DISPLAY_CHANGED)))
    return hr;
Thanks for the help
EC_DISPLAY_CHANGED on MSDN:
If the display mode changes, the video renderer might need to choose another format. By sending this message, the renderer signals to the filter graph manager that it needs to be reconnected. During the reconnection, the renderer can select a new format.
The typical scenario is a video renderer set up expecting to be shown on the primary monitor and then positioned onto the secondary one. The renderer generates the event in order to update itself through a filter graph transition. You see the event only after a few samples have already been streamed because the event is handled asynchronously. To work around this, use IVMRMonitorConfig::SetMonitor and friends to position the renderer on the correct monitor well in advance.
Note that under normal circumstances the event and the reconnection amount to just a small delay and should be handled transparently.
By canceling the default handling, you are canceling exactly the behavior quoted below, and you are expected to take care yourself of what the default action is trying to fix.
Default Action
The filter graph manager temporarily stops the graph, and then disconnects and reconnects the video renderer. It does not pass the event to the application.

Google Maps Pinpoint Display

My friend has a website, http://jobifly.com. When you enter the site you will see a Google map in the navigation panel showing the jobs that are available, but it pinpoints them one by one; can't it show all of them together?
The problem with the page is that all the locations are geocoded when the page loads. To avoid the OVER_QUERY_LIMIT, this is done with a delay.
To show the markers together, you must wait until all locations have been geocoded.
So your friend may either:
- use the current geocoding strategy and initialize the map only when all addresses have been geocoded (I don't think that is an option for your friend; see the sketch below), or
- store the LatLngs somewhere once they have been geocoded, so he can use them directly without a delay (note: this is only permitted for up to 30 days).
It's also possible to use a FusionTable to store these data; the geocoding would be done automatically there.
Another option for storing the LatLngs permanently: give the users that offer a job a map where they can select the location manually by clicking on it; a LatLng returned that way may be stored without restrictions.
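A minimal TypeScript sketch of the first option, assuming a plain array of address strings and the standard google.maps.Geocoder; the 250 ms pause is an assumed value for staying under OVER_QUERY_LIMIT:

const geocoder = new google.maps.Geocoder();

function geocodeAddress(address: string): Promise<google.maps.LatLng> {
  return new Promise((resolve, reject) => {
    geocoder.geocode({ address }, (results, status) => {
      if (status === "OK" && results && results[0]) {
        resolve(results[0].geometry.location);
      } else {
        reject(new Error(`Geocoding "${address}" failed: ${status}`));
      }
    });
  });
}

const pause = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));

async function showAllJobs(map: google.maps.Map, addresses: string[]) {
  // Geocode everything first, one by one, with a delay between requests.
  const positions: google.maps.LatLng[] = [];
  for (const address of addresses) {
    positions.push(await geocodeAddress(address));
    await pause(250); // assumed spacing to avoid OVER_QUERY_LIMIT
  }
  // Only now add all markers at once and fit the viewport around them.
  const bounds = new google.maps.LatLngBounds();
  for (const position of positions) {
    new google.maps.Marker({ map, position });
    bounds.extend(position);
  }
  map.fitBounds(bounds);
}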

Google Maps - Caching - Methods

OK, so I have spoken to a Google representative about this issue; however, since I am not at enterprise level, he can't route me to tech support and suggested that I use SO for answers. Here is the question...
The Google Maps Terms of Service state the following:
(b) No Pre-Fetching, Caching, or Storage of Content. You must not pre-fetch, cache, or store
any Content, except that you may store: (i) limited amounts of Content for the purpose of
improving the performance of your Maps API Implementation if you do so temporarily (and in
no event for more than 30 calendar days), securely, and in a manner that does not permit
use of the Content outside of the Service; and (ii) any content identifier or key that
the Maps APIs Documentation specifically permits you to store. For example, you must not
use the Content to create an independent database of "places" or other local listings
information.
This originally led me to believe that Google would not allow caching of any kind of information. However, I then read the following:
When to Use Client-Side Geocoding
The basic answer is "almost always." As geocoding limits are per user session, there is no risk that your application will reach a global limit as your userbase grows. Client-side geocoding will not face a quota limit unless you perform a batch of geocoding requests within a user session. Therefore, running client-side geocoding, you generally don't have to worry about your quota.
Two basic architectures for client-side geocoding exist.
Run the geocoding and display entirely in the browser. For instance, the user enters an address on your page. Your application geocodes it. Then your page uses the geocode to create a marker on the map. Or your app does some simple analysis using the geocode. No data is sent to your server. This reduces load on your server, but doesn't give you any sense of what your users are doing.
Run the geocode in the browser and then send it to the server. For instance, the user enters an address. Your application geocodes it in the browser. The app then sends the data to your server. The server responds with some data, such as nearby points of interest. This allows you to customize a response based on your own data, and also to cache the geocode if you want. This cache allows you to optimize even more. You can even query the server with the address, see if you have a recently cached geocode for it, and if you do, use that. If you don't, then return no result to the browser, and let it geocode the result and send it back to the server for caching.
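A minimal TypeScript sketch of that second architecture, assuming a hypothetical /api/geocode-cache endpoint on your own server:

async function geocodeAndReport(address: string): Promise<google.maps.LatLng> {
  const geocoder = new google.maps.Geocoder();
  // Geocode in the browser, against the per-session client-side quota.
  const position = await new Promise<google.maps.LatLng>((resolve, reject) => {
    geocoder.geocode({ address }, (results, status) =>
      status === "OK" && results && results[0]
        ? resolve(results[0].geometry.location)
        : reject(new Error(String(status))));
  });
  // Hand the result to the server so it can cache it (for up to 30 days).
  await fetch("/api/geocode-cache", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ address, lat: position.lat(), lng: position.lng() }),
  });
  return position;
}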
So one side says you cannot cache; the other says you should. Another suggestion is to always use client-side geocoding when you can, but that becomes a grey area as well, because both examples assume the user inputs the data. What if jQuery read the data from a div or span and then geocoded that information? The user wouldn't have actually triggered the geocode, yet it would still be done client-side. I'm trying to create a site with a bunch of user-generated events, and it could get pretty loaded, so I am trying to determine the best practice. Google suggested I ask here, so before you say this is "off-topic", please note that this is where they told me to post.
Any feedback would be greatly appreciated.
The first quote does not explicitly forbid caching data at all. It is ambiguous as to how much you can cache (exactly how much is "limited amounts"?), but it does not forbid caching.
You are allowed to cache the data if it helps improve the performance of your site as long as you retain the data for no longer than 30 days and do not make it available in any way to any other service except the service that originally retrieved the data.
Regarding user interaction - if your user explicitly enters a page with the expectation that they will be shown geocoded information I would assume that this would fulfill "user interaction".
As an example from a project I worked on last year I had it set up to do the following:
- Show markers on the map
- If the user clicked a marker they were shown a popup with data from the cache if available, otherwise a geocode would be performed and the returned information would be cached along with the date/time of the cache.
Another page of the site showed a history of these markers at 5-minute intervals throughout the day. If cached data was present (from a marker click as in the previous part), it was shown; otherwise a geocode was performed and the data cached as before. The user clicking to run the report was (in my opinion) enough "user interaction" for this not to count as pre-fetching, since the user had to manually select a timeframe before the report would be displayed.
A cronjob then ran every day at midnight which would go through each record with cached data over 25 days old and remove it.
As it was, I was caching much less than 10% of the marker positions being shown (20+ markers were updated every minute, but the report was run on maybe 3-5 markers each day, and only every 5th point was geocoded).
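A minimal TypeScript sketch of that cache-or-geocode lookup; the store, the geocodeNow helper, and keying by address are hypothetical, and the 25-day cutoff mirrors the cronjob above:

interface CachedGeocode { lat: number; lng: number; cachedAt: number }

const MAX_AGE_MS = 25 * 24 * 60 * 60 * 1000;            // matches the 25-day cleanup
const geocodeCache = new Map<string, CachedGeocode>();  // stand-in for a real store

// Hypothetical helper that performs the actual geocode request.
declare function geocodeNow(address: string): Promise<{ lat: number; lng: number }>;

async function lookupPosition(address: string): Promise<{ lat: number; lng: number }> {
  const hit = geocodeCache.get(address);
  if (hit && Date.now() - hit.cachedAt < MAX_AGE_MS) {
    return hit; // fresh enough, no new geocode request
  }
  const fresh = await geocodeNow(address);
  geocodeCache.set(address, { ...fresh, cachedAt: Date.now() });
  return fresh;
}

// The midnight cleanup: drop anything older than the cutoff so nothing is
// retained beyond the terms' 30-day limit.
function evictStaleEntries(): void {
  for (const [address, entry] of geocodeCache) {
    if (Date.now() - entry.cachedAt >= MAX_AGE_MS) geocodeCache.delete(address);
  }
}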
