Getting a Track from a Spotify ID using libspotify.dll - asp.net

I can find tracks using libspotify.dll just fine, but not when I search for a Spotify ID like 7KEYPk9d9hc1rrHVmztrUS; it makes no difference whether I put "track:" in front.
I am actually using a .NET wrapper around the DLL, and it returns no results when I give it the ID.

Well, first of all: libspotify is obsolete by now. But back to your problem:
If you have a look at their 'new' Web API, you can look up a track by its ID directly (you do not need authorization for this):
https://api.spotify.com/v1/tracks/7KEYPk9d9hc1rrHVmztrUS
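As an illustration, here is a minimal sketch of calling that endpoint from .NET; note that newer versions of the Web API require an OAuth bearer token, so treat the unauthenticated call as historical:

```csharp
// Minimal sketch: fetch a track by its Spotify ID via the Web API.
// Newer versions of the Web API require an OAuth bearer token;
// add it via client.DefaultRequestHeaders.Authorization if needed.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class SpotifyTrackLookup
{
    static async Task Main()
    {
        var trackId = "7KEYPk9d9hc1rrHVmztrUS";
        using var client = new HttpClient();
        var json = await client.GetStringAsync(
            $"https://api.spotify.com/v1/tracks/{trackId}");
        Console.WriteLine(json); // JSON with name, artists, album, duration, etc.
    }
}
```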
Related to libspotify, I just found an example:
https://developer.spotify.com/docs/libspotify/12.1.51/search_8c-example.html

Translate API - different result from the web service

When using the Translation API, I get a different (and worse) translation than if I use translate.google.com.
I am working on a project for a client, and the client was dissatisfied with the translation and noticed the difference.
Do these two services use different engines? I read that the API now uses NMT mode, and that translate.google.com already uses the same engine.
Both are set to translate from Norwegian to English.
Is there any more information that can clear this up?
Thanks!
Differences between the results from translate.google.com and from Translation API calls are considered expected behavior; they can arise from maintenance tasks and the logic used by the internal processes. However, the engines used by each service seem to be private information.
Given that, it is normal to get some variance when using the API. As an available workaround, you can use the model parameter option to specify which of the available models to use; take a look at the "Specifying a model" official documentation for detailed information about this alternative.
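To make that workaround concrete, here is a minimal sketch of passing the model parameter through the v2 REST API; YOUR_API_KEY is a placeholder, and per the v2 docs model accepts values such as "nmt" or "base":

```csharp
// Minimal sketch: request a specific translation model via the
// Cloud Translation v2 REST API. YOUR_API_KEY is a placeholder.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class TranslateWithModel
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var url = "https://translation.googleapis.com/language/translate/v2"
                + "?key=YOUR_API_KEY"
                + "&q=" + Uri.EscapeDataString("Hei verden")
                + "&source=no&target=en" // Norwegian -> English
                + "&model=nmt";          // force the neural model
        var json = await client.GetStringAsync(url);
        Console.WriteLine(json); // translations[0].translatedText
    }
}
```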
Almost three years later, and the problem still remains!
I was trying to translate a dataset with the Google Translate API, but it failed to translate some texts into the target language (in my case, Persian/Farsi). So I decided to inspect those texts to see if there was a pattern, and maybe translate them using the web version of Google Translate.
While doing so, I noticed that the web version could actually translate some of those untranslated texts, but not all. Looking for a reason for this behaviour, I found that most of them were names rather than sentences. But as we know, names can simply be written in the target language's characters as their translation. So why does the API leave those names untransformed while the web version converts them? This photo will perhaps explain everything:
verified translation
As can be seen, some translations have a badge indicating that the translation has been verified, while others don't.
So to recap: my guess is that the API is set to use only verified translations, whereas the web version allows even unverified ones, since you can edit or report them.

Import.io - Can it replace Kimonolabs?

I currently use Kimonolabs for scraping data from websites that share the same goal. To keep it simple, let's say these websites are online shops selling stuff online (they are actually job websites with online application options, but technically they look a lot like webshops).
This works great. For each website a scraper API is created that goes through the available advanced search page to crawl all product URLs. Let's call this API the 'URL list'. Then a 'product API' is created for the product detail page, which scrapes all necessary elements, e.g. the title, product text and specs like the brand, category, etc. The product API is set to crawl daily using all the URLs gathered in the 'URL list'.
The gathered information for all products is then fetched from the Kimonolabs JSON endpoint by our own service, roughly as sketched below.
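For context, this is a minimal sketch of that fetch step; the endpoint URL and the Product shape are hypothetical placeholders, not Kimonolabs' actual API:

```csharp
// Minimal sketch of consuming a scraper's JSON endpoint from our own
// service. The URL and the Product shape are hypothetical placeholders.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

record Product(string Title, string Brand, string Category);

class ScraperClient
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var json = await client.GetStringAsync(
            "https://example-scraper-service/api/products.json"); // placeholder
        var products = JsonSerializer.Deserialize<List<Product>>(
            json, new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
        foreach (var p in products!)
            Console.WriteLine($"{p.Title} ({p.Brand}, {p.Category})");
    }
}
```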
However, Kimonolabs will shut down its service at the end of February 2016 :-(. So I'm looking for an easy alternative. I've been looking at import.io, but I'm wondering:
Does it support automatic updates (letting the API scrape hourly/daily/etc.)?
Does it support fetching all product URLs from a paginated advanced search page?
I'm tinkering with the service. Basically, it seems to extract data via the same easy process as Kimonolabs. It's just unclear to me whether it supports paginating the URLs needed for the product API, and keeping the data up to date automatically.
Are there any import.io users here who can advise whether import.io is a useful alternative for this? Maybe even give some pointers in the right direction?
Look into Portia. It's an open source visual scraping tool that works like Kimono.
Portia is also available as a service and it fulfills the requirements you have for import.io:
automatic updates, by scheduling periodic jobs to crawl the pages you want, keeping your data up-to-date.
navigation through pagination links, based on URL patterns that you can define.
Full disclosure: I work at Scrapinghub, the lead maintainer of Portia.
Maybe you want to give Extracty a try. It's a free web scraping tool that allows you to create endpoints that extract any information and return it in JSON. It can easily handle paginated searches.
If you know a bit of JS, you can write CasperJS endpoints and integrate any logic you need to extract your data. Its goal is similar to Kimonolabs', and it can solve the same problems (if not more, since it's programmable).
If Extracty does not solve your needs, you can check out these other market players that aim at similar goals:
Import.io (as you already mentioned)
Mozenda
Cloudscrape
TrooclickAPI
FiveFilters
Disclaimer: I am a co-founder of the company behind Extracty.
I'm not that fond of Import.io, but it seems to me that it allows pagination through bulk input URLs. Read here.
So far there has not been much progress in getting a whole website through the API:
Chain more than one API/Dataset: It is currently not possible to fully automate the extraction of a whole website with Chain API.
For example, if I want data that is found within category pages or paginated lists, I first have to create a list of URLs, run Bulk Extract, save the result as an import data set, and then chain it to another Extractor. Once it is set up, I would like to be able to do this more automatically, in one click.
P.S. If you are somewhat familiar with JS, you might find this useful.
Regarding automatic updates:
This is a beta feature right now. I'm testing it for myself after migrating from Kimonolabs... You can enable it for your own APIs by appending &bulkSchedule=1 to your API URL. You will then see a "Schedule" tab. In the "Configure" tab, select "Bulk Extract" and add your URLs; after this, the scheduler will run daily or weekly.

Is there an API call to get a list of saved places in Google Maps?

I have a ton of saved places that appear in my Google Maps, but there is no way to manage, filter or search them. Is there a way to access these locations via an API?
I scanned the Maps API and can't find any reference to them. Is there another Google API that makes this available?
There is a REST API that can retrieve the saved places:
http://www.google.com/bookmarks/?output=xml
Visit this link for more information:
https://www.google.com/bookmarks/
There are also APIs like:
https://www.google.com/bookmarks/find?q=conf&output=xml&num=10000
https://www.google.com/bookmarks/lookup?
But it seems they have been deprecated, and most of the documentation is not available anymore. Use them at your own risk.
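If you want to experiment anyway, a minimal sketch of calling the XML endpoint could look like this; it assumes an authenticated Google session (the cookie value below is a placeholder), and since the endpoint is unofficial and deprecated it may no longer work:

```csharp
// Minimal sketch: fetch saved bookmarks as XML from the (deprecated,
// unofficial) Google Bookmarks endpoint. Requires an authenticated
// Google session; GOOGLE_SESSION_COOKIE is a placeholder.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class BookmarksFetch
{
    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Cookie", "GOOGLE_SESSION_COOKIE");
        var xml = await client.GetStringAsync(
            "https://www.google.com/bookmarks/?output=xml");
        Console.WriteLine(xml); // one <bookmark> element per saved place
    }
}
```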
Currently the list of saved places in My Maps is not available via an API. There is a feature request tracking this that you can follow at https://code.google.com/p/gmaps-api-issues/issues/detail?id=2953.
2022: I created a gist for parsing saved places from a shared list via Python. It is really unstable because it's a quick & dirty solution, but maybe it will help someone: https://gist.github.com/ByteSizedMarius/8c9df821ebb69b07f2d82de01e68387d
Edit: The above answer did not yet take pagination into consideration. Please see my answer here.

Endpoint design for a data-retrieval-oriented ASP.NET Web API

I am designing a system that uses ASP.NET Web API to serve data consumed by a number of jQuery grid controls. The grids call back for the data after the page has loaded. I have a User table and a Project table, with a Membership table in between that stores the many-to-many relationships.
User
userID
Username
Email
Project
projectID
name
code
Membership
membershipID
projectID
userID
My question is: what is the best way to expose this data and these relationships as a Web API?
I have the following routes
GET: user // gets all users
GET: user/{id} // gets a single user
GET: project
GET: project/{id}
I think one way to do it would be to have:
GET: user/{id}/projects // gets all the projects for a given user
GET: project/{id}/users // gets all the users for a given project
I'm not sure what the configuration of the routes and controllers should look like for this, or even whether this is the correct way to do it.
The modern standard for this is a very simple approach called REST. Just read up on it carefully and implement it.
Like Ph0en1x said, REST is the new trend for web services. It looks like you're already on the right track with some of your proposed routes. I've been doing some REST design at my job, and here are some things to think about:
Be consistent with your routes. You're already doing that, but watch out for when/if another developer starts writing routes. Users want consistent routes across your API.
Keep it simple. A major goal should be discoverability. What I mean is that if I'm a regular user of your system and I know there are users, projects and maybe another entity called "goal", I want to be able to guess at /goal and get a list of goals. That makes a user very happy. The less they have to consult the documentation, the better.
Avoid appending a ton of junk to the query string. We currently suffer from this at my job. Once the API gets some traction, users might want more fine-grained control. Be careful not to turn the URL into something messy, like /user?sort=asc&limit=5&filter=...&projectid=...
Keep the URL nice and simple. Again, I love this in a well-designed API. I can easily remember something like http://api.twitter.com. Something like http://www.mylongdomainnamethatishardtospell.com/api/v1/api/user_entity/user is much harder to remember and is frustrating.
Just because a REST API is on the web doesn't mean it's all that different from a normal method in client-side-only code. I've read arguments that no method should have more than 3 parameters. That idea is similar to (3): if you find yourself wanting to expand, consider adding more methods/routes, not more parameters.
I know what I want in a REST API these days: intuition, discoverability, simplicity, and not having to constantly dig through complex documentation.
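To make the nested routes from the question concrete, here is a minimal sketch using ASP.NET Web API 2 attribute routing. The in-memory Db class is a hypothetical stand-in for the real data layer, not any library's API:

```csharp
// Minimal sketch of nested resource routes with ASP.NET Web API 2
// attribute routing. Db is a hypothetical in-memory stand-in.
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class User { public int UserId; public string Username; public string Email; }
public class Project { public int ProjectId; public string Name; public string Code; }
public class Membership { public int MembershipId; public int ProjectId; public int UserId; }

public static class Db
{
    public static List<User> Users = new List<User>();
    public static List<Project> Projects = new List<Project>();
    public static List<Membership> Memberships = new List<Membership>();

    // Resolve the many-to-many relationship via the Membership table.
    public static IEnumerable<Project> ProjectsForUser(int userId) =>
        Memberships.Where(m => m.UserId == userId)
                   .Join(Projects, m => m.ProjectId, p => p.ProjectId, (m, p) => p);

    public static IEnumerable<User> UsersForProject(int projectId) =>
        Memberships.Where(m => m.ProjectId == projectId)
                   .Join(Users, m => m.UserId, u => u.UserId, (m, u) => u);
}

[RoutePrefix("api/user")]
public class UserController : ApiController
{
    [Route("")]                  // GET api/user -> all users
    public IEnumerable<User> GetAll() => Db.Users;

    [Route("{id:int}/projects")] // GET api/user/5/projects
    public IEnumerable<Project> GetProjects(int id) => Db.ProjectsForUser(id);
}

[RoutePrefix("api/project")]
public class ProjectController : ApiController
{
    [Route("{id:int}/users")]    // GET api/project/7/users
    public IEnumerable<User> GetUsers(int id) => Db.UsersForProject(id);
}
```

Note that attribute routing has to be enabled once at startup via config.MapHttpAttributeRoutes() in your WebApiConfig.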

Can anyone provide good info on the various uses of the hash (#) in URLs?

I'm developing software that is going to provide in-depth information about URLs.
While the GET params are simple, I'm having trouble with the hash.
At first it was used to mark places in the document to navigate to, but we're past that now. I've seen JS engines using it to store params, similar to the GET string.
So here's my question: is everything that comes after a hash fair game, or are there conventions about what it should look like?
Try these sites, they could help: Fragment Identifier (Wikipedia) or Pound Sign (Google).
Both have lists of examples you could use.
It all depends on what you need. Hashes are used in modern web applications that make asynchronous calls to the server using Ajax. This allows the user, for example, to copy the link and see the same content after pasting it: the actions taken are put into the hash, which changes the URL that would otherwise remain static.
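For the URL-inspection software from the question, note that the fragment is never sent to the server, and .NET exposes it directly on System.Uri; a minimal sketch:

```csharp
// Minimal sketch: extracting the fragment (everything after #) from a URL.
// The fragment is never sent to the server; its internal structure is
// entirely up to the page's scripts, so any key=value layout inside it
// is a convention, not a standard.
using System;

class FragmentDemo
{
    static void Main()
    {
        var uri = new Uri("https://example.com/app?x=1#/view/42?tab=specs");
        Console.WriteLine(uri.Query);    // "?x=1"
        Console.WriteLine(uri.Fragment); // "#/view/42?tab=specs"
    }
}
```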
You want to read http://www.jenitennison.com/blog/node/154
