HERE geocoding API - Storing location geocodes in local DB

I would like to understand which pricing option supports storing the geocodes in a local DB and bulk geocoding.

You can review the Terms and Conditions on HERE data here: https://developer.here.com/terms-and-conditions#restrictions-on-here-materials.
Caching or storing any location data for the purpose of building a repository of location assets or scaling one request to serve multiple end users is prohibited. Customer may not use any HERE Services in a manner that pre-fetches, caches, or stores data or results, except:
As explicitly allowed by the caching headers (HTTP/1.1 standard) returned by HERE Services; or
To the extent Customer is storing or caching for no more than thirty (30) days only to the extent necessary for enabling or improving an end user's use of the HERE Services.
For more details, you can contact HERE Help via https://developer.here.com/help.

Related

How to cache an api response?

I'm using the API http://exchangeratesapi.io/ to get exchange rates.
Their site asks:
Please cache results whenever possible this will allow us to keep the service without any rate limits or api key requirements.
-source
Then I found this:
By default, the responses all of the requests to the exchangeratesapi.io API are cached. This allows for significant performance improvements and reduced bandwidth from your server.
-somebody's project on github, not sure if accurate
I've never cached something before and these two statements confuse me. When the API's site says to "please cache the results", it sounds like caching is something I can do in a fetch request, or somehow on the frontend. For example, some way to store the results in local storage or something. But I couldn't find anything about how to do this. I only found resources on how to force a response NOT to cache.
The second quote makes it sound like caching is something the API does itself on their servers, since they set the response to cache automatically.
How can I cache the results like the api site asks?
To clear up the confusion caused by the conflicting statements you're referencing:
Caching just means to store the data. Examples of where the data can be stored are in memory, in some persistence layer (like Redis), or in the browser's local storage (like you mentioned). The intent behind caching can be to serve the data faster (compared to getting it from the primary data source) for future requests/fetches, and/or to save on costs for getting the same data repeatedly, among others.
For your case, the http://exchangeratesapi.io/ API is advising consumers to cache the results on their side (as you mentioned in your question, this can be in the browser's local storage if you're calling the API from front-end code, or in memory or other caching mechanisms/structures in the server-side application code calling the API) so that they can avoid the need to introduce rate limiting.
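For illustration, here is a minimal cache-aside sketch in C# (the same idea applies to browser local storage or any server-side stack); the cache key and the one-hour lifetime are assumptions for the example, not something the API prescribes:

```csharp
using System;
using System.Net.Http;
using System.Runtime.Caching;
using System.Threading.Tasks;

static class RatesClient
{
    static readonly HttpClient Http = new HttpClient();
    static readonly MemoryCache Cache = MemoryCache.Default;

    // Cache-aside: return the cached response if we have one,
    // otherwise fetch from the API and keep the result for an hour.
    public static async Task<string> GetLatestRatesAsync()
    {
        const string key = "exchange-rates-latest"; // hypothetical cache key
        var cached = Cache.Get(key) as string;
        if (cached != null)
            return cached; // served from the cache, no API call made

        string json = await Http.GetStringAsync("http://exchangeratesapi.io/latest");
        Cache.Set(key, json, DateTimeOffset.UtcNow.AddHours(1)); // assumed lifetime
        return json;
    }
}
```

Every caller within that hour gets the stored response instead of triggering a new request, which is what the "please cache results" advice is asking for.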
The project from GitHub you're referencing, Laravel Exchange Rates, appears to be a PHP wrapper around the original API - so it's like a middleman between the API and a developer's PHP code. The intent is to make it easier to use the API from within PHP code, avoiding the need to make raw HTTP requests and process the responses manually; the Laravel Exchange Rates library handles that for the developer.
Regarding the
By default, the responses all of the requests to the exchangeratesapi.io API are cached
statement you're asking about, it seems the library follows the advice of the API, and caches the results from the source API.
So, to sum up:
http://exchangeratesapi.io/ is the source API, and it advises consumers to cache results. If your code is going to be calling this API, you can cache the results in your own code.
The Laravel Exchange Rates PHP library is a wrapper around that source API, and it caches the results from the source API for the user. If you're using this library, you don't need to add any further caching.

How should I specify the resource database via HTTP Requests

I have a REST API that will be facilitating CRUD from multiple databases. These databases all represent the same data for different locations within the organization (i.e., we have 20 or so implementations of a software package, and we want to read from all of the supporting databases via one API).
I was wondering what the "Best Practice" would be for specifying which database resources should be accessed from?
For example, right now in my request headers I have a custom "X-" header that would represent the database id. Unfortunately, this sort of thing feels a bit like a workaround.
I was thinking of a few other options:
I could bake the Database Id into the URI (/:db_id/resource/...)
I could modify the Accept Header like someone would with an API version
I could split up the API to be one service per database
Would one of the aforementioned options be considered "better" than the others, and if not what is considered the "best" option for this sort of architecture?
I am, at the moment, using ASP.NET Web API 2.
These databases all represent the same data for different locations within the organization
I think this is the key to your answer - you don't want to expose internal implementation details (like database IDs, etc.) outside your API. What if you consolidate, or change your internal implementation, one day?
However, this sentence reveals a distinction that is meaningful to the business - the location.
So - I'd make the location part of the URI:
/api/location/{locationId}/resource...
Then map the locationId internally to a database ID. LocationId could also be a name, or a code, or something unique that would be meaningful to the API client.
Then - if you later consolidate multiple locations to the same database or otherwise change your internal implementation, the clients don't have to change.
In addition, whoever is configuring the client applications can do so thinking about something meaningful to the business - the location they are interested in.
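A minimal sketch of how that mapping might look in ASP.NET Web API 2 (the route matches the answer; the controller, dictionary, and database names are illustrative assumptions):

```csharp
using System.Collections.Generic;
using System.Web.Http;

public class LocationResourceController : ApiController
{
    // Hypothetical mapping from a business-meaningful location ID to an
    // internal database name; in practice this would live in configuration.
    private static readonly Dictionary<string, string> LocationDbMap =
        new Dictionary<string, string>
        {
            { "chicago", "ChicagoDb" },
            { "berlin",  "BerlinDb"  },
        };

    // GET /api/location/{locationId}/resource
    // Requires attribute routing: config.MapHttpAttributeRoutes() in WebApiConfig.
    [Route("api/location/{locationId}/resource")]
    public IHttpActionResult GetResources(string locationId)
    {
        string dbName;
        if (!LocationDbMap.TryGetValue(locationId, out dbName))
            return NotFound(); // unknown location; no internal DB details leak out

        // Open a connection / repository against dbName here...
        return Ok(new { location = locationId });
    }
}
```

If two locations later consolidate into one database, only the dictionary changes; the URIs the clients use stay the same.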

Disadvantage of using session[""] in asp.net

In my project I use session to store user information (username, password, personal image, and gender) to be used on all pages of my project. I also use two other session variables to store small strings. Is there any disadvantage to using session? Also, is there any risk in using session to store the user's password?
Some things to take into account:
Don't store passwords. You should hash the incoming password, validate it against the hash in your DB, and not hold on to it afterwards (see the sketch after this list).
You should try to avoid using a write-access Session throughout the application, since you'll end up forcing ASP.NET to serialize incoming requests from the same session. Use a read-only Session to avoid that. This can become apparent if you initiate multiple AJAX calls simultaneously. More info here: https://connect.microsoft.com/VisualStudio/feedback/details/610820/session-based-asp-net-requests-are-serialized-and-processed-in-a-seemingly-inverse-order
Storing too much data in the Session can cause scalability issues, since all that information is held in memory on the server. If you switch over to SQL storage for sessions (common in web-farm/cloud deployments) and the session is large, every request will ship that Session data back and forth between the server and the DB.
Content that goes into the session should be Serializable, in case you decide to move to a different persistent storage (such as SQL Server)
Using Sessions to retain information may not go well with stateless REST/WebApi endpoints (if you need to create any in the future)
Excessive use of Session for storage could make unit testing slightly more difficult (you will have to mock the Session)
By "personal image" I assume you are storing a url or such, and not an actual binary image. Avoid storing binary content. Only return the binary image file when the browser requests it, and don't store it in memory, the browser can cache that content easily.
You might also find the references linked in this answer to be useful in providing additional information: https://stackoverflow.com/a/15878291/1373170
The main problem with using Session, or any machine-dependent state, is the scalability of the web site: if you deploy your web site to a farm of servers, a user's requests may be processed on different machines, so depending on per-machine state breaks down.
Hope that helps.

How to cache in spring mvc?

I use caching in Spring MVC, but since the server resets twice a day, the cached data is destroyed. How can the cached data be stored in a folder (on disk) so that this does not happen?
I hope you don't want to persist the data to secondary storage, since that would involve disk I/O and again reduce your application's performance.
All you need is to store the data in a distributed cache. A distributed cache has dedicated servers for caching, so even if your application server resets/restarts, the data remains cached.
There are a number of distributed caching solutions that provide integration with Spring MVC, Memcached being one of them. TayzGrid (an in-memory distributed data grid) also provides integration with Spring MVC. You can configure it as the caching provider, and the same application will start using the distributed cache without any code change required.

SignalR and Memcached for persistent Group data

I am using SignalR with my ASP.NET application. What my application needs is to persist the group data that is updated from various servers. According to the SignalR documentation, it's my responsibility to do this. It means that I need to use an external server/service that will collect the data from one or more servers, so I can query that data from a single place.
I first thought that Memcached was the best candidate, because it's fast and the data I need to put there is volatile. The problem is that I need to store collections, for example: collection A with 2,000 user IDs and collection B with 40,000 IDs. I need to update these collections, removing and inserting IDs very quickly. I'm afraid that because the commands will be initiated from several servers, and because I might need to read an entire collection and update it on any of the web servers, the data won't be consistent. Web server A might update the data, but server B might read the data before server A has finished updating it. There is a concurrency conflict.
I'm searching for the best way to implement this kind of strategy in my ASP.NET 4.5 application. I think an in-memory database might be an option here, but I'm not sure it would ensure data integrity.
I want to ask you what is the best solution for my problem.
Here's an example of my problem:
Memcached server - stores the collections (e.g. collections A, B, C, D); each collection stores user IDs, which can number in the thousands or far more.
Web servers - my Amazon EC2 web servers with SignalR installed, possibly behind a load balancer. These servers need to access the memcached server and fetch a complete collection's items by collection name (e.g. "Collection_23"). They need to be able to remove items (user IDs) and add items. All of this should be as fast as possible.
I hope that I explained myself right. Thanks.
Alternatively, you can use Redis; like Memcached, everything is served from memory. Redis has many other capabilities beyond a simple key-value datastore; for your specific case you might use Redis transactions, which ensure data consistency.
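As a sketch of how the collections could be modeled with the StackExchange.Redis client (the endpoint and key scheme are assumptions): each collection becomes a Redis set, and set add/remove operations are atomic on the server, so concurrent web servers cannot corrupt the membership list:

```csharp
using StackExchange.Redis;

static class GroupStore
{
    static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost"); // assumed Redis endpoint

    // Each collection is a Redis set keyed by name; SADD/SREM execute
    // atomically on the Redis server, so two web servers adding and
    // removing IDs at once cannot interleave destructively.
    public static void AddUser(string collection, string userId)
    {
        IDatabase db = Redis.GetDatabase();
        db.SetAdd("collection:" + collection, userId); // hypothetical key scheme
    }

    public static void RemoveUser(string collection, string userId)
    {
        IDatabase db = Redis.GetDatabase();
        db.SetRemove("collection:" + collection, userId);
    }

    public static RedisValue[] GetUsers(string collection)
    {
        IDatabase db = Redis.GetDatabase();
        return db.SetMembers("collection:" + collection);
    }
}
```

For multi-step updates that must be applied together, db.CreateTransaction() can queue several commands and commit them atomically.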
A comment on another post links to a Redis provider. That link is broken; the provider now appears to be integrated into the main SignalR project: https://github.com/SignalR/SignalR/tree/master/src/Microsoft.AspNet.SignalR.Redis
The Redis NuGet package is here:
http://www.nuget.org/packages/Microsoft.AspNet.SignalR.Redis
and the documentation is here:
http://www.asp.net/signalr/overview/signalr-20/performance-and-scaling/scaleout-with-redis
