Google Maps Pinpoint Display - WordPress

My friend has a website at http://jobifly.com. The issue is that when you enter the site you will see a Google map in the navigation panel that shows the available jobs, but it pinpoints them one by one. Can't it show all of them together?
Awaiting Response..
Regards,
Zain Sohail

The problem with the page is that all the locations are geocoded when the page loads. To avoid the OVER_QUERY_LIMIT error, this is done with a delay.
To show them together you must wait until all locations have been geocoded.
So your friend may either:
use the current geocoding strategy and initialize the map only when all addresses have been geocoded, as in the sketch below (I don't think that it is an option for your friend),
or store the LatLngs somewhere once they have been geocoded, so he can use them directly without a delay (note: this is only permitted for up to 30 days).
It's also possible to use a FusionTable to store this data; the geocoding would be done automatically there.
Another option that lets you store the LatLngs permanently: give the users who offer a job a map where they can select the location manually by clicking on it; the returned LatLng may be stored without restrictions.
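A minimal sketch of the first option, assuming the job addresses are available in an array on the page; the addresses array, the map div id and the 300 ms delay are illustrative, not taken from the site itself:

```javascript
// Sketch: geocode all addresses one by one (with a delay to stay clear of
// OVER_QUERY_LIMIT), then build the map and drop all markers at once.
var addresses = ['Karachi, Pakistan', 'Lahore, Pakistan', 'Islamabad, Pakistan']; // placeholder data

function geocodeAll(addresses, done) {
  var geocoder = new google.maps.Geocoder();
  var results = [];
  var i = 0;

  function next() {
    if (i >= addresses.length) { done(results); return; }
    geocoder.geocode({ address: addresses[i] }, function (res, status) {
      if (status === google.maps.GeocoderStatus.OK) {
        results.push(res[0].geometry.location);
      }
      i++;
      setTimeout(next, 300); // small delay between geocoding requests
    });
  }
  next();
}

geocodeAll(addresses, function (locations) {
  // Only now create the map and show every marker together.
  var map = new google.maps.Map(document.getElementById('map'), { zoom: 5, center: locations[0] });
  var bounds = new google.maps.LatLngBounds();
  locations.forEach(function (latLng) {
    new google.maps.Marker({ position: latLng, map: map });
    bounds.extend(latLng);
  });
  map.fitBounds(bounds);
});
```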

Related

How to expire Branch.io link within specific time? (Deep linking via branch metrics)

I am using Branch links for deep linking, and I am using their public API endpoint to generate the links.
Here is their endpoint: https://api.branch.io/v1/url
I append my branch key and data that I need to associate in this link. Everything is working fine but I need to expire this link within one hour.
Reading up here: https://github.com/BranchMetrics/branch-deep-linking-public-api#creating-a-deep-linking-url
I added "duration" key also, but it didnt expire the link.
It will be great if anyone could help me in figuring out how to expire branch.io link.
Alex from Branch.io here: the duration parameter is used for something different, so it's not going to be able to do what you want. We don't have a built-in feature to expire links, but you could create something close to it yourself:
Add a custom link parameter containing a timestamp for when the link was created.
Check for that timestamp when handling the link at the destination, and do something different if it is more than an hour old. I'm guessing this would be inside your app, and also on whatever fallback URL you have specified for when the app isn't installed or the user is on desktop.
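A rough sketch of step 1, assuming you keep using the public https://api.branch.io/v1/url endpoint; the created_at key, the article_id payload and YOUR_BRANCH_KEY are placeholders, not Branch-defined fields:

```javascript
// Sketch: embed a creation timestamp as a custom parameter when creating the link.
fetch('https://api.branch.io/v1/url', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    branch_key: 'YOUR_BRANCH_KEY',   // placeholder for your real Branch key
    data: {
      article_id: '1234',            // whatever data you already associate
      created_at: Date.now()         // custom timestamp parameter (illustrative name)
    }
  })
})
  .then(function (res) { return res.json(); })
  .then(function (json) { console.log('link:', json.url); });
```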
A mail from the branch.io support team suggested this answer:
If you found out about the $exp_date parameter from here then the parameters in that list are only used for iOS Spotlight Indexing but will be used by Branch in the future. A better solution than utilizing $exp_date is to code logic into your client to determine what to do with link data based on date. This way, your deep links will always work and always carry data through, and you won't have to worry about users clicking empty links.
This way, you would include date as an extra meta key/value pair, and examine this date in your client when receiving link params to determine if you want to honor the link's contents or not.
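As a sketch of that client-side check, assuming params is whatever link data your Branch SDK hands you, and showDefaultScreen/openDeepLinkedContent are stand-ins for your own handlers:

```javascript
// Sketch: honor the link data only if the custom created_at timestamp is recent enough.
var ONE_HOUR_MS = 60 * 60 * 1000;

function handleLinkParams(params) {           // `params` = data from the opened link
  var createdAt = Number(params.created_at);  // custom key added at link creation (assumption)
  if (!createdAt || Date.now() - createdAt > ONE_HOUR_MS) {
    showDefaultScreen();                      // link treated as "expired" (your fallback)
    return;
  }
  openDeepLinkedContent(params);              // link still valid (your deep-link handler)
}
```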

How can I track visitors’ paths from one page to another with full URLs?

Say I have two pages on a site called “Page 1” and “Page 10”. I'd like to be able to see the paths visitors take to get from “Page 1” to “Page 10” with full URLs intact. Many of the URLs (including those for “Page 1” and “Page 10”) will include query strings that are important.
Is this possible? If so, how?
Try using behavior flow reports. The report basically shows you how visitors click through your website. There are a lot of ways to customize the report, and you will need to play around with them to really answer your question. By default, the behavior flow focuses on entry and exit points of visitors, regardless of how many times they hit the different subpages in between. However, I'm sure you can set appropriate filters and settings to answer your question.
I use two methods for tracking where people have been on my website:
Track and store the information in my own SQL database. (details below)
Lead Forensics (paid subscription, but you can do a trial).
For tracking and storing my own data, I record unique visitors based on the IP address they're connecting from, and a separate table records all page views, linked back to the unique-visitor table.
Lead Forensics data simply allows me to link up those unique visitors with actual companies that have viewed my website.
Doing it yourself means you don't have to rely on Google for your records to work. In my experience Google Analytics tends to round numbers, so you don't get a true indication of the figures, and you can also remove bots and website trawlers from your data by checking the user agent string.
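A hedged sketch of that do-it-yourself approach, using Node/Express purely for illustration; the in-memory structures stand in for the two SQL tables described above, and the bot pattern is deliberately crude:

```javascript
// Sketch: record each page view against a visitor keyed by IP,
// skipping obvious bots by inspecting the User-Agent string.
const express = require('express');
const app = express();

const visitors = new Map();   // ip -> visitorId (stands in for the unique-visitor table)
const pageViews = [];         // { visitorId, url, referrer, time } (stands in for the page-view table)
const BOT_PATTERN = /bot|crawler|spider|slurp/i;

app.use((req, res, next) => {
  const ua = req.get('User-Agent') || '';
  if (!BOT_PATTERN.test(ua)) {
    const ip = req.ip;
    if (!visitors.has(ip)) visitors.set(ip, visitors.size + 1);
    pageViews.push({
      visitorId: visitors.get(ip),
      url: req.originalUrl,               // full URL including the query string
      referrer: req.get('Referer') || '',
      time: new Date()
    });
  }
  next();
});

app.get('/page1', (req, res) => res.send('Page 1'));
app.listen(3000);
```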
As a somewhat ugly hack you could use transaction tracking. If you use the same transaction id multiple times, subsequent products will be added to the existing data. So assign an ID at the start of the visit and on each page record a transaction with the current page URL as the product name (and the ID as the transaction id). This will give you the complete path per user. (I am frankly not too sure how this is useful - at some point you probably want aggregated data. Plus each transaction and product counts towards your quota for interaction counts, so on a large site you might run over the 10 million hits limit.)
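As an illustration of the hack, using the classic analytics.js ecommerce plugin; the way the visit id is minted via sessionStorage is just one possibility, and it assumes the standard analytics.js snippet and ga('create', ...) have already run:

```javascript
// Sketch: reuse one transaction id per visit and add each page as a "product".
var visitId = sessionStorage.getItem('visitId') || String(Date.now()); // illustrative visit id
sessionStorage.setItem('visitId', visitId);

ga('require', 'ecommerce');
ga('ecommerce:addTransaction', { id: visitId, revenue: '0' });
ga('ecommerce:addItem', {
  id: visitId,
  name: location.pathname + location.search,  // current page URL (with query string) as product name
  price: '0',
  quantity: '1'
});
ga('ecommerce:send');
```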
You can do it programmatically:
Have a MAP in the backend which stores the userId (assuming you have given each user a unique ID at login) with a list of strings (each string being a URL visited by that user).
Whenever the user hits another URL from Page 1 (and only from Page 1 - check it using JS), send a POST request to the backend with the new URL in its data section.
In the backend, check whether the URL is that of Page 10 and, if not, add it as a string into the MAP for that user.
Finally, when the user reaches the Page 10 URL, you know the URLs on the way from Page 1 to Page 10 and can use them.
If I am considering JS and have not misunderstood your question, we can also get the previous URL from the referrer information using document.referrer; a server-side sketch of this approach follows below.
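A server-side sketch of the MAP idea (Node/Express assumed; the /track endpoint, the URL checks and the shape of the { userId, url } POST the client would send are all illustrative):

```javascript
// Sketch: userId -> list of URLs visited between Page 1 and Page 10.
// The client would POST { userId, url } here when document.referrer points at Page 1.
const express = require('express');
const app = express();
app.use(express.json());

const paths = new Map();   // the MAP: userId -> [visited URLs]

app.post('/track', (req, res) => {
  const { userId, url } = req.body;
  if (!paths.has(userId)) paths.set(userId, []);
  if (url.indexOf('/page10') === -1) {
    paths.get(userId).push(url);                               // still on the way to Page 10
  } else {
    console.log('Path for', userId, ':', paths.get(userId));   // reached Page 10: full path known
  }
  res.sendStatus(204);
});

app.listen(3000);
```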
Are you trying to do it from Google Tag Manager? I am not sure whether you are trying to trace the URLs client-side or server-side.

Do I have to use queryProfiles every time to get the profile id?

I am playing with the Google Analytics API and found that when I get the web property list, I have a very useful defaultProfileId. It lets me skip the queryProfiles call, saving one request and making the whole app work faster.
But I noticed that some web properties just don't have a defaultProfileId.
For information, this mostly happens with a tracking ID like UA-XXXX-1.
Any tips?
Thanks!
You are correct: webProperty does not always return a defaultProfileId. I was also unable to find any information on the Web Properties page about how the API decides what a default profile id is. I submitted a bug report for it with the Analytics dev team; you can find it at: defaultProfileId - not always sent with a WebProperty. Let's hope they come back with a response, because you are correct that this is a very useful feature.
Yes, you are probably going to have to query the profiles every time to get the correct profile you are after.
I just found this:
https://www.googleapis.com/analytics/v3/management/accounts/~all/webproperties/~all/profiles?oauth_token={Token}
There should be a way of working with that so you don't need one request for the accounts, one to get all the web properties, and then one to get all the profiles.
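A small sketch of calling that endpoint, assuming you already have an OAuth access token (ACCESS_TOKEN is a placeholder, not something the API provides for you):

```javascript
// Sketch: one request that lists the profiles (views) of every web property
// in every account the token can access.
var url = 'https://www.googleapis.com/analytics/v3/management/accounts/~all/' +
          'webproperties/~all/profiles';

fetch(url, { headers: { Authorization: 'Bearer ' + ACCESS_TOKEN } })  // ACCESS_TOKEN: placeholder
  .then(function (res) { return res.json(); })
  .then(function (json) {
    (json.items || []).forEach(function (profile) {
      console.log(profile.webPropertyId, profile.id, profile.name);
    });
  });
```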

Google Maps - Caching - Methods

OK! So I have spoken to a Google representative about this issue; however, since I am not at enterprise level, he can't push me to tech support and suggested that I use SO for answers. Here is the question...
The Google Maps Terms state the following:
(b) No Pre-Fetching, Caching, or Storage of Content. You must not pre-fetch, cache, or store any Content, except that you may store: (i) limited amounts of Content for the purpose of improving the performance of your Maps API Implementation if you do so temporarily (and in no event for more than 30 calendar days), securely, and in a manner that does not permit use of the Content outside of the Service; and (ii) any content identifier or key that the Maps APIs Documentation specifically permits you to store. For example, you must not use the Content to create an independent database of "places" or other local listings information.
This originally led me to believe that Google would not allow caching of any type of information. However, I then read the following:
When to Use Client-Side Geocoding
The basic answer is "almost always." As geocoding limits are per user session, there is no risk that your application will reach a global limit as your userbase grows. Client-side geocoding will not face a quota limit unless you perform a batch of geocoding requests within a user session. Therefore, running client-side geocoding, you generally don't have to worry about your quota.
Two basic architectures for client-side geocoding exist.
Run the geocoding and display entirely in the browser. For instance, the user enters an address on your page. Your application geocodes it. Then your page uses the geocode to create a marker on the map. Or your app does some simple analysis using the geocode. No data is sent to your server. This reduces load on your server, but doesn't give you any sense of what your users are doing.
Run the geocode in the browser and then send it to the server. For instance, the user enters an address. Your application geocodes it in the browser. The app then sends the data to your server. The server responds with some data, such as nearby points of interest. This allows you to customize a response based on your own data, and also to cache the geocode if you want. This cache allows you to optimize even more. You can even query the server with the address, see if you have a recently cached geocode for it, and if you do, use that. If you don't, then return no result to the browser, and let it geocode the result and send it back to the server for caching.
So one side says you cannot cache, while the other tells you that you should. Another solution it states is to always use client-side geocoding when you can, but then this becomes a grey area as well, because both examples state that the user must input the data. What if jQuery read data from a div or span and then geocoded the information? The user wouldn't have actually triggered the geocode, but it would still be done client-side. I'm trying to create a site that has a bunch of events generated by users and this site could get pretty loaded, so I am trying to determine the best practice for doing this. Google suggested here, so before you go and say this is "off-topic", please note, this is where they told me to post.
Any feedback would be greatly appreciated.
The first quote does not explicitly forbid caching data at all. It is ambiguous as to how much you can cache (what number explicitly is "limited amounts"?) but it does not forbid caching.
You are allowed to cache the data if it helps improve the performance of your site as long as you retain the data for no longer than 30 days and do not make it available in any way to any other service except the service that originally retrieved the data.
Regarding user interaction - if your user explicitly enters a page with the expectation that they will be shown geocoded information I would assume that this would fulfill "user interaction".
As an example from a project I worked on last year I had it set up to do the following:
- Show markers on the map
- If the user clicked a marker they were shown a popup with data from the cache if available, otherwise a geocode would be performed and the returned information would be cached along with the date/time of the cache.
Another page of the site showed a history of these markers at 5 minute intervals throughout the day. If cached data was present (from clicking the map marker as in the previous part) this would be shown, otherwise a geocode would be performed and the data cached as before. The user clicking to run the report was (in my opinion) enough "user interaction" to not count as pre-fetching as the user had to manually select a timeframe before the report would be displayed.
A cronjob then ran every day at midnight which would go through each record with cached data over 25 days old and remove it.
As it was, I was caching much less than 10% of the marker positions being shown (20+ markers being updated every minute, but the report was only being run on maybe 3-5 markers each day, and only geocoding data for every 5th point).
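A condensed sketch of that cache-or-geocode pattern; a plain object stands in for the database table, geocodeAddress is an illustrative wrapper around whatever geocoder is actually used, and the purge function plays the role of the nightly cronjob:

```javascript
// Sketch: use the cached geocode if it is fresh, otherwise geocode and store it
// with a timestamp; purge anything older than 25 days.
const CACHE_MAX_AGE_MS = 25 * 24 * 60 * 60 * 1000;
const cache = {};   // address -> { latLng, cachedAt } (stands in for the DB table)

function getLatLng(address, geocodeAddress, done) {
  const entry = cache[address];
  if (entry && Date.now() - entry.cachedAt < CACHE_MAX_AGE_MS) {
    done(entry.latLng);                         // fresh cached geocode
    return;
  }
  geocodeAddress(address, function (latLng) {   // geocode and cache with the date/time
    cache[address] = { latLng: latLng, cachedAt: Date.now() };
    done(latLng);
  });
}

// The nightly "cronjob": drop every cached entry over 25 days old.
function purgeOldEntries() {
  Object.keys(cache).forEach(function (address) {
    if (Date.now() - cache[address].cachedAt >= CACHE_MAX_AGE_MS) {
      delete cache[address];
    }
  });
}
```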

ASP.NET/Silverlight Location Points on Google/Bing Map

I realize that the question is pretty complicated and may require much research. I hope somebody can point me to useful resources for achieving my goal.
I want to have a Google or Bing map in my ASP.NET 4 application (C#) to display all my logged-in users as points on the map.
I understand that this involves several major problems:
Get the location of the device (most likely a standard laptop with an IE9 browser) based on its unique IP address.
Integrate a Google or Bing map with the ASP.NET or Silverlight application.
Display the right portion of the map, with the right zoom, depending on the logged-in users' locations.
Finally, mark the addresses as points on the map.
Note that the location points should be dynamically updated whenever any of the locations changes.
The database is implemented using SQL Server 2005/2008 R2.
There are geolocation services that can give you the latitude and longitude for a given IP address.
As you mention you would be storing these in a database, getting all the current users would be a simple database call.
Integrating a Google map into an HTML page is very simple. You would only have to emit the necessary JavaScript from your page.
You should mark the points on the map first.
The Google Maps API has calls to fit the map to show all current points (fitBounds). I am guessing Bing has something similar.
To reflect the current points, you would have to refresh your locations from the database. I highly recommend an AJAX call that returns JSON and using that to replot the points.
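A browser-side sketch pulling those pieces together with the Google Maps JavaScript API; the /api/users endpoint and its JSON shape ({ lat, lng, name }) are assumptions about what the ASP.NET backend would expose:

```javascript
// Sketch: plot logged-in users, fit the map to them, and refresh via AJAX.
var map = new google.maps.Map(document.getElementById('map'), { zoom: 2, center: { lat: 0, lng: 0 } });
var markers = [];

function plot(users) {
  markers.forEach(function (m) { m.setMap(null); });   // clear the previous markers
  markers = [];
  var bounds = new google.maps.LatLngBounds();
  users.forEach(function (u) {
    var marker = new google.maps.Marker({ position: { lat: u.lat, lng: u.lng }, map: map, title: u.name });
    markers.push(marker);
    bounds.extend(marker.getPosition());
  });
  if (users.length) map.fitBounds(bounds);             // zoom/center to show all points
}

function refresh() {
  fetch('/api/users')                                  // hypothetical endpoint returning JSON
    .then(function (res) { return res.json(); })
    .then(plot);
}

refresh();
setInterval(refresh, 30000);   // re-plot every 30 seconds to reflect location changes
```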
