I'm trying to retrieve the full topic description/summary for some Freebase articles. I have been using the Freebase topic API, which returns this type of results: http://www.freebase.com/experimental/topic/standard?id=/en/jimi_hendrix
But I notice that the description is not complete and ends with "...". Is there a way to use some Freebase API to obtain the article's full description?
Does Freebase even store the complete description, or does it just store a portion of the description from Wikipedia?
Freebase stores only a portion of the Wikipedia description, but there is usually more text available than what the topic API returns.
To get the "full" text for a Wikipedia blurb associated with a Freebase topic you first need to query the Read API for a list of related articles like this:
{
  "id": "/en/jimi_hendrix",
  "/common/topic/article": [{}]
}
Try it in the Query Editor
Then choose one of the articles that it returns and feed its ID into the /trans/raw API like this:
http://api.freebase.com/api/trans/raw/m/043dz
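Put together, the two steps might look like this (a sketch only, against the original api.freebase.com endpoints discussed here, which have long since been retired; the {"query": ...} envelope is the old mqlread request format):

// Sketch: find a topic's article ID via mqlread, then fetch the raw blurb.
var query = {
  id: '/en/jimi_hendrix',
  '/common/topic/article': [{}]
};

var mqlUrl = 'http://api.freebase.com/api/service/mqlread?query=' +
  encodeURIComponent(JSON.stringify({ query: query }));

fetch(mqlUrl)
  .then(function (res) { return res.json(); })
  .then(function (data) {
    var articleId = data.result['/common/topic/article'][0].id;
    // Feed the article ID into /trans/raw to get the blurb text.
    return fetch('http://api.freebase.com/api/trans/raw' + articleId);
  })
  .then(function (res) { return res.text(); })
  .then(function (text) { console.log(text); });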
You'll notice that the blurb of text that gets returned is a bit longer (1,200 characters) and doesn't have the "...", but it's still chopped off at the end.
When I display Freebase topic descriptions in a web page, I have some code to clean them up beforehand. I split the text into paragraphs by looking for newlines, and if the last paragraph doesn't end with a period, exclamation mark, or question mark, I throw that paragraph away. The way the Wikipedia blurbs are written, you usually only need the first paragraph anyway.
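A minimal sketch of that cleanup (the function name and shape are illustrative, not part of any Freebase library):

// Sketch: split a Freebase blurb into paragraphs and drop a truncated tail.
function cleanBlurb(rawText) {
  var paragraphs = rawText.split(/\n+/).filter(function (p) {
    return p.trim().length > 0;
  });
  // If the last paragraph doesn't end with ., ! or ?, it was chopped off.
  var last = paragraphs[paragraphs.length - 1];
  if (last && !/[.!?]\s*$/.test(last)) {
    paragraphs.pop();
  }
  return paragraphs;
}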
You can also fetch this directly from MQL with the "text" extension:
{
  "id": "/en/jimi_hendrix",
  "/common/topic/article": [{
    "text": {
      "maxlength": 16384,
      "chars": null
    }
  }]
}
Note that you'll need to turn on MQL extensions for this to work - see here for an example of this in action.
Edit August 2012: while this works for the original freebase.com hosted APIs, the MQL extension functionality has been removed from the new googleapis.com hosted APIs, so this method shouldn't be relied on any more.
I'm having issues with the implementation of Enhanced Conversions in Google Ads. I would really appreciate it if someone could give me some pointers in the right direction or any suggestions on what to try.
The error message I'm receiving is the one below:
"User address data field is incorrectly formatted
Make sure user address is correctly formatted and hashed using the SHA-256 algorithm. See instructions and double-check your setup."
I've been following the instructions specified here: https://support.google.com/google-ads/answer/10172785?hl=en#zippy=%2Cenable-enhanced-conversions-in-google-tag-manager-and-create-custom-javascript-variable
The part regarding "hashed using the SHA-256 algorithm" appears to be incorrect, since Google says in another part of their documentation that the data shouldn't be sent hashed unless you're using the Google API (which is not the case here).
The only value not available in the data layer on the conversion page is "street", so I've left that field out entirely. The other attributes should be present in the data layer.
What steps have I taken?
Step 1 - Selected "Include user-provided data from your website" in the existing Google Ads pixel/tag.
Step 2 - Configured the variable type "User-provided data" and selected "Code" as type
Step 3 - The data source contains the code below:
function () {
  return {
    "email": {{data.email}},
    "phone_number": {{Telephone}},
    "address": {
      "first_name": {{First Name}},
      "last_name": {{Last name}},
      "city": {{City}},
      "postal_code": {{zipCode}},
      "country": {{Country ID}}
    }
  };
}
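One variant I'm considering is normalizing the values before returning them (a sketch only; as I understand it, Google expects plain-text values here since the tag hashes them itself, and phone numbers should be in E.164 format; the GTM variable placeholders are the same ones used above):

function () {
  // Sketch: normalize raw values before the Google tag hashes them.
  var email = String({{data.email}} || '').trim().toLowerCase();
  // E.164 example: +46701234567 (country code included).
  var phone = String({{Telephone}} || '').replace(/[^\d+]/g, '');
  return {
    "email": email,
    "phone_number": phone,
    "address": {
      "first_name": String({{First Name}} || '').trim(),
      "last_name": String({{Last name}} || '').trim(),
      "city": {{City}},
      "postal_code": {{zipCode}},
      "country": {{Country ID}}
    }
  };
}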
Can anyone see if there is perhaps something in the code that's not correct, or if there could be any other possible reason why the enhanced conversions aren't collecting data?
Thanks in advance!
I've looked into using the CSS selector method, but the conversion page only has email data.
Also tried "Manual configuration", but this requires all values to be present (e.g. if "Address" is missing, you can't save).
I've also made test purchases and from what I can see, the conversion tag is picking up the attributes for enhanced conversions correctly.
I'm trying to get images for restaurants using the HERE Places (Search) API.
I'm using the "Browse" entry point and then following the href in the response to get a restaurant's details. In it, I keep getting this:
media: {
  images: {
    available: 0,
    items: []
  }
}
The same for reviews and ratings.
Based on other posts here, I'm confused about what the problem is: one post seemingly says it's a bug, and another seemingly says it's just the way the API is.
First of all, the "HERE Places API" is deprecated. You should migrate to the "HERE Geocoding and Search API v7". Check out the migration guide: https://developer.here.com/documentation/geocoding-search-api/migration_guide/index.html
As already explained in the question Include Review, Rating and Images in places API, the API returns the place IDs of external suppliers (TripAdvisor, Yelp, etc.). This is also true for the latest "HERE Geocoding and Search API v7". With these IDs, you can retrieve other details (such as images, reviews, etc.) from the external suppliers' APIs.
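For illustration, a minimal sketch of that flow against the v7 lookup endpoint (the references field and its shape are assumptions; verify against the HERE Geocoding and Search API v7 documentation):

// Sketch: look up a place, then collect the external supplier references.
const apiKey = 'YOUR_HERE_API_KEY'; // placeholder

async function getSupplierRefs(placeId) {
  const url = 'https://lookup.search.hereapi.com/v1/lookup' +
    '?id=' + encodeURIComponent(placeId) + '&apiKey=' + apiKey;
  const place = await (await fetch(url)).json();
  // Each reference pairs a supplier (e.g. TripAdvisor, Yelp) with that
  // supplier's own ID for the place.
  return (place.references || []).map(function (ref) {
    return { supplier: ref.supplier && ref.supplier.id, id: ref.id };
  });
}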
I'm using the Interpret and Evaluate methods from Project Academic Knowledge.
If you search for Composite(J.JN='jama') and include J.JId in the request, you'll get a response containing the journal ID (JId) 172573765:
{
  "logprob": -14.823,
  "prob": 3.651345212E-07,
  "Id": 2107832644,
  "J": {
    "JId": 172573765
  }
}
You can find more details about that journal by opening: https://academic.microsoft.com/journal/172573765
However, there doesn't seem to be a way to retrieve that same information (Number of papers, number of citations, website, about) using the API. How can we get this (other than by accessing the URL of the journal)?
Project Academic Knowledge allows you to retrieve journal entities using the Evaluate method. The query is simply Id=JId. For example, to retrieve the journal name, publication and citation counts you'd use:
https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate?subscription-key=SUBSCRIPTION_KEY&attributes=Id,JN,DJN,CC,PC&expr=Id=172573765
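For example, a small sketch that calls the Evaluate endpoint above and reads the counts from the response (SUBSCRIPTION_KEY is a placeholder for your own key; the entities array is the standard Evaluate response shape):

// Sketch: fetch journal attributes from the Evaluate method shown above.
var key = 'SUBSCRIPTION_KEY'; // placeholder

var url = 'https://api.labs.cognitive.microsoft.com/academic/v1.0/evaluate' +
  '?subscription-key=' + key +
  '&attributes=Id,JN,DJN,CC,PC&expr=Id=172573765';

fetch(url)
  .then(function (res) { return res.json(); })
  .then(function (data) {
    var journal = data.entities[0];
    // DJN = display journal name, CC = citation count, PC = paper count.
    console.log(journal.DJN, journal.CC, journal.PC);
  });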
See the journal entity documentation page for a list of the available attributes you can request.
Whenever I encounter code snippets on the web, I see something like
Meteor.subscribe('posts', 'bob-smith');
The client can then display all posts of "bob-smith".
The subscription returns several documents.
What I need, in contrast, is a single-document subscription in order to show an article's body field. I would like to filter by (article) id:
Meteor.subscribe('articles', articleId);
But I became suspicious when I searched the web for similar examples: I cannot find even one example of a single-document subscription.
What is the reason for that? Why does nobody use single-document subscriptions?
Oh but people do!
This is not against any best practice that I know of.
For example, here is a code sample from the github repository of Telescope where you can see a publication for retrieving a single user based on his or her id.
Here is another one for retrieving a single post, and here is the subscription for it.
It is actually sane to subscribe only to the data that you need at a given moment in your app. If you are writing a single post page, you should make a single post publication/subscription for it, such as:
Meteor.publish('singleArticle', function (articleId) {
  return Articles.find({_id: articleId});
});

// Then, from an iron-router route for example:
Meteor.subscribe('singleArticle', this.params.articleId);
A common pattern that uses a single document subscription is a parameterized route, ex: /posts/:_id - you'll see these in many iron:router answers here.
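For example, a sketch of such a parameterized route with iron:router, wired to the singleArticle publication above (route and collection names are illustrative):

// Sketch: a parameterized route that subscribes to exactly one article.
Router.route('/articles/:_id', {
  name: 'articlePage',
  waitOn: function () {
    // Wait on the single-document subscription defined above.
    return Meteor.subscribe('singleArticle', this.params._id);
  },
  data: function () {
    // The route's data context is the one matching document.
    return Articles.findOne({_id: this.params._id});
  }
});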
I seem to be having a problem pulling out the text content of the following query without making another call:
http://tinyurl.com/mgsewz2 via the mqlread API
{
  "id": "/en/alcatraz_island",
  "/common/topic/description": [{}],
  "/common/topic/topic_equivalent_webpage": [],
  "/common/topic/official_website": null
}
I can't retrieve the following:
description
equivalent webpage (I'm looking for the en wiki page)
However, I can obtain the official_website URL.
It looks like I can get it via the search API's output= parameter, but I can't walk through the entire set that I'm looking for without getting a "search request is too large" error.
http://markmail.org/message/hd6pcveta7bhuz25#query:+page:1+mid:u7xegxlhtmhwiqbl+state:results
Thanks!
If you want to download large subsets of Freebase data, your best bet is to use the Freebase RDF dumps. They contain all of the properties that you listed above.
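For illustration, a sketch that streams the gzipped N-Triples dump and keeps the English description triples (the predicate IRI below is an assumption based on the common.topic.description property; check it against the dump itself, and note the file is very large):

// Sketch (Node.js): stream the Freebase RDF dump and filter description triples.
var fs = require('fs');
var zlib = require('zlib');
var readline = require('readline');

var PREDICATE = '<http://rdf.freebase.com/ns/common.topic.description>';

var rl = readline.createInterface({
  input: fs.createReadStream('freebase-rdf-latest.gz').pipe(zlib.createGunzip())
});

rl.on('line', function (line) {
  // Keep only English-language description triples.
  if (line.indexOf(PREDICATE) !== -1 && line.indexOf('@en') !== -1) {
    console.log(line);
  }
});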