Google Schema.org math solver structured data for multiple fields

Trying to set up schema markup for a simple math solver action with two fields. Let's say addition:
1+1=2
Here is Google's doc and example:
{
  "@context": "https://schema.org",
  "@type": ["MathSolver", "LearningResource"],
  "name": "An awesome math solver",
  "url": "https://www.mathdomain.com/",
  "usageInfo": "https://www.mathdomain.com/privacy",
  "inLanguage": "en",
  "potentialAction": [{
    "@type": "SolveMathAction",
    "target": "https://mathdomain.com/solve?q={math_expression_string}",
    "mathExpression-input": "required name=math_expression_string",
    "eduQuestionType": ["Polynomial Equation", "Derivative"]
  }],
  "learningResourceType": "Math solver"
}
How do we add multiple variables for two numbers?
return {
  '@context': 'https://schema.org',
  '@type': ['MathSolver', 'LearningResource'],
  ...
  potentialAction: [
    {
      '@type': 'SolveMathAction',
      target: `domain.com/?num1={num1}&num2={num2}`,
      'mathExpression-input': 'required name=num1 name=num2',
      eduQuestionType: ['addition', 'sum']
    },
  ],
  learningResourceType: 'Math solver'
};
Here is what Schema.org says about mathExpression (note: mathExpression-input doesn't seem to exist on Schema.org, but it does fall under Thing > Intangible > EntryPoint):
"A mathematical expression (e.g. 'x^2-3x=0') that may be solved for a specific variable, simplified, or transformed. This can take many formats, e.g. LaTeX, Ascii-Math, or math as you would write with a keyboard."
But can this be set up so that the URL params accept multiple fields within mathExpression-input instead of a single math expression?

Related

vega not plotting the data

I am brand new to Vega and I was trying to plot some charts with Vega (the Elasticsearch/Kibana plugin). Below is the simple visualization I am trying to plot. I am following the documentation to connect to the existing data, but I am unable to get the visuals. It just shows the Y and X axes labeled from the code below, with blank plots. What am I doing wrong?
{
"$schema": "https://vega.github.io/schema/vega-lite/v2.json"
"data": {
url: {
%context%: true
index: test-data
}
format: {property: "hits.hits"}
},
"mark": {"type":"bar"}
"encoding": {
"x": {"field": "DEPT", "type": "ordinal"},
"y": {"field": "SALES", "type": "quantitative"}
}
}
The specification needs to be valid JSON. There are numerous things in your specification that make it invalid; for example:
all strings need to be enclosed in quotes (e.g. url and format)
all items need to be separated by commas (applies to nearly every line of your specification)
Finally, even if you change those syntax errors, the content of your specification doesn't follow the schema: for example, the "url" and the "format" properties of "data" both should be strings.
I would suggest beginning with the Vega-Lite tutorials and going from there, modifying what you learn to work with your own data.
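For reference, a syntactically valid Vega-Lite spec with the same mark and encodings might look like the sketch below. It uses inline sample values (made up purely for illustration) in place of the Elasticsearch data, so the structure is the point here, not the numbers:
{
  "$schema": "https://vega.github.io/schema/vega-lite/v2.json",
  "data": {
    "values": [
      {"DEPT": "A", "SALES": 28},
      {"DEPT": "B", "SALES": 55},
      {"DEPT": "C", "SALES": 43}
    ]
  },
  "mark": {"type": "bar"},
  "encoding": {
    "x": {"field": "DEPT", "type": "ordinal"},
    "y": {"field": "SALES", "type": "quantitative"}
  }
}
Note that every string is quoted and every property is separated by a comma; once the syntax is valid, the remaining work is making the data section match the schema.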

Writing specs for grammar in atom editor

I have written my own grammar in Atom. I would like to write some specs for it, but I don't understand exactly how to write one. I read the Jasmine documentation, but it's still not very clear. Can someone please explain how to write specs for testing a grammar in Atom? Thanks.
Grammars are available under atom.grammars.grammarForScopeName("source.yourlanguage").
The grammar object it returns has methods you can feed code snippets to (e.g. tokenizeLine, tokenizeLines).
These methods return arrays of tokens.
Testing is just verifying that these methods return what you expect.
E.g. (CoffeeScript alert):
grammar = atom.grammars.grammarForScopeName("source.yourlanguage")
{tokens} = grammar.tokenizeLine("# this is a comment line of some sort")
expect(tokens[0].value).toEqual "#"
expect(tokens[0].scopes).toEqual [
  "source.yourlanguage",
  "comment.line.number-sign.yourlanguage",
  "punctuation.definition.comment.yourlanguage"
]
Happy testing!
Example specs
spec for MscGen (a simple language)
spec for Haskell (more complex)
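If you want a self-contained starting point, a minimal Jasmine spec along these lines usually works (the package name language-yourlanguage and the scope names are placeholders; adjust them to your grammar):
describe "YourLanguage grammar", ->
  grammar = null

  beforeEach ->
    # activate the package so Atom registers its grammar (package name is a placeholder)
    waitsForPromise ->
      atom.packages.activatePackage("language-yourlanguage")

    runs ->
      grammar = atom.grammars.grammarForScopeName("source.yourlanguage")

  it "loads the grammar", ->
    expect(grammar).toBeDefined()
    expect(grammar.scopeName).toBe "source.yourlanguage"

  it "tokenizes comment lines", ->
    {tokens} = grammar.tokenizeLine("# this is a comment line of some sort")
    expect(tokens[0].value).toEqual "#"
    expect(tokens[0].scopes).toEqual [
      "source.yourlanguage",
      "comment.line.number-sign.yourlanguage",
      "punctuation.definition.comment.yourlanguage"
    ]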
The array returned by the grammar.tokenizeLine call above looks like this:
[
  {
    "value": "#",
    "scopes": [
      "source.yourlanguage",
      "comment.line.number-sign.yourlanguage",
      "punctuation.definition.comment.yourlanguage"
    ]
  },
  {
    "value": " this is a comment line of some sort",
    "scopes": [
      "source.yourlanguage",
      "comment.line.number-sign.yourlanguage"
    ]
  },
  {
    "value": "",
    "scopes": [
      "source.yourlanguage",
      "comment.line.number-sign.yourlanguage"
    ]
  }
]
(I kept seeing this question pop up in the search results when I was looking for an answer to the same question, so I might as well document it here.)

Freebase MQL query - Get data by a social link

I'm having a hard time trying to get data about a person from Freebase using their social link, via an MQL query.
How could this be done?
Something like:
https://www.googleapis.com/freebase/v1/mqlread?query={
  "*": [{}],
  "/common/topic/social_media_presence": [{
    "value": "http://twitter.com/JustinBieber"
  }]
}
Those links are really stored as keys, and the links are generated from templates with the key plugged in. You can see all the keys here: https://www.freebase.com/m/06w2sn5?keys=
A modified version of your query would be:
[{
  "key": [{
    "namespace": {
      "id": "/authority/twitter"
    },
    "value": "JustinBieber"
  }],
  "*": [{}]
}]
You can do the same thing with other namespaces like /authority/facebook or /authority/musicbrainz, as well as the various language Wikipedias, e.g. /wikipedia/en.
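For example, a lookup by Facebook presence has the same shape (the value here is just a placeholder username):
[{
  "key": [{
    "namespace": {
      "id": "/authority/facebook"
    },
    "value": "JustinBieber"
  }],
  "*": [{}]
}]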
I'm not sure how complete the coverage or currency of the social media info is though...

Freebase Obtain All Information On One Subject

I'm trying to find the best way to get the information displayed on a Freebase page via an MQL query.
I've tried the topic API but that includes a lot of metadata.
I've also tried using links/reflection as in:
{
  "id": "/en/samsung_electronics",
  "/type/reflect/any_master": [{
    "link": {
      "master_property": null
    },
    "name": null,
    "id": null
  }],
  "/type/reflect/any_reverse": [{
    "link": {
      "master_property": null
    },
    "name": null,
    "id": null
  }],
  "/type/reflect/any_value": [{
    "link": {
      "master_property": null
    },
    "value": null
  }]
}
But that means I'll be missing some information, such as the number of employees, because that's given as a "Dated Integer", which of course doesn't get automatically expanded, and I won't know in general what I would have to expand. My best attempts at expanding all objects by nesting that query once inside itself were met with a
"code": 503,
"message": "Backend Error"
In RDF/SPARQL (e.g. DBpedia) I'd just do select ?p ?o where {URI ?p ?o} and select ?s ?p where {?s ?p URI}. Am I missing such a simple way to do this in Freebase?
So to summarize, I'm looking for a way to get the information on a Freebase HTML page with as little overhead as possible and without missing anything.
The Topic API was designed specifically for this use case (and is what's used to construct the Freebase HTML page). It takes a filter parameter which can be used to tailor its output to include only parts of the schema which are of interest. What metadata is getting in your way? Why can't you just skip it?
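For instance, a request along these lines should narrow the response to one slice of the schema (the filter value is only an illustration, not something from the question):
https://www.googleapis.com/freebase/v1/topic/en/samsung_electronics?filter=/organization/organization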
If you'd prefer to use SPARQL, there's an RDF dump available that you could load in your own triple store and query with SPARQL.

Google Cloud Datastore runQuery returning 412 "no matching index found"

** UPDATE **
Thanks to Alfred Fuller for pointing out that I need to create a manual index for this query.
Unfortunately, using the JSON API, from a .NET application, there does not appear to be an officially supported way of doing so. In fact, there does not officially appear to be a way to do this at all from an app outside of App Engine, which is strange since the Cloud Datastore API was designed to allow access to the Datastore outside of App Engine.
The closest hack I could find was to POST the index definition using RPC to http://appengine.google.com/api/datastore/index/add. Can someone give me the raw spec for how to do this exactly (i.e. URL parameters, what exactly should the body look like, etc), perhaps using Fiddler to inspect the call made by appcfg.cmd?
** ORIGINAL QUESTION **
According to the docs, "a query can combine equality (EQUAL) filters for different properties, along with one or more inequality filters on a single property".
However, this query fails:
{
  "query": {
    "kinds": [
      {
        "name": "CodeProse.Pogo.Tests.TestPerson"
      }
    ],
    "filter": {
      "compositeFilter": {
        "operator": "and",
        "filters": [
          {
            "propertyFilter": {
              "operator": "equal",
              "property": {
                "name": "DepartmentCode"
              },
              "value": {
                "integerValue": "123"
              }
            }
          },
          {
            "propertyFilter": {
              "operator": "greaterThan",
              "property": {
                "name": "HourlyRate"
              },
              "value": {
                "doubleValue": 50
              }
            }
          },
          {
            "propertyFilter": {
              "operator": "lessThan",
              "property": {
                "name": "HourlyRate"
              },
              "value": {
                "doubleValue": 100
              }
            }
          }
        ]
      }
    }
  }
}
with the following response:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "FAILED_PRECONDITION",
        "message": "no matching index found.",
        "locationType": "header",
        "location": "If-Match"
      }
    ],
    "code": 412,
    "message": "no matching index found."
  }
}
The JSON API does not yet support local index generation, but we've documented a process that you can follow to generate the xml definition of the index at https://developers.google.com/datastore/docs/tools/indexconfig#Datastore_Manual_index_configuration
Please give this a shot and let us know if it doesn't work.
This is a temporary solution that we hope to replace with automatic local index generation as soon as we can.
The error "no matching index found." indicates that an index needs to be added for the query to work. See the auto index generation documentation.
In this case you need an index with the properties DepartmentCode and HourlyRate (in that order).
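A sketch of the corresponding index.yaml, using the kind name from the query above (directions are left at the ascending default), would be:
indexes:
- kind: CodeProse.Pogo.Tests.TestPerson
  properties:
  - name: DepartmentCode
  - name: HourlyRate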
For gcloud-node I fixed it with these 3 links:
https://github.com/GoogleCloudPlatform/gcloud-node/issues/369
https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/system-test/data/index.yaml
and, most importantly, https://cloud.google.com/appengine/docs/python/config/indexconfig#Python_About_index_yaml, which explains how to write your index.yaml file.
As explained in the last link, an index is what allows complex queries to run faster, by storing the result set of the query in an index. When you get "no matching index found", it means you tried to run a complex query involving ordering or filtering. So to make your query work, you need to create the index in Google Datastore by manually writing a config file that defines the indexes representing the query you are trying to run. Here is how to fix it:
create an index.yaml file in a folder (named, for example, indexes) in your app directory, following the directives for the Python config file: https://cloud.google.com/appengine/docs/python/config/indexconfig#Python_About_index_yaml, or get inspiration from the gcloud-node tests at https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/system-test/data/index.yaml
create the indexes from the config file with this command:
gcloud preview datastore create-indexes indexes/index.yaml
see https://cloud.google.com/sdk/gcloud/reference/preview/datastore/create-indexes
wait for the indexes to serve; in your developer console, under Cloud Datastore / Indexes, the interface should display "serving" once the index is built
once it is serving, your query should work
For example for this query:
var q = ds.createQuery('project')
  .filter('tags =', category)
  .order('-date');
index.yaml looks like:
indexes:
- kind: project
  ancestor: no
  properties:
  - name: tags
  - name: date
    direction: desc
Try not to order the result. After removing the order() call, it worked for me.
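With the gcloud-node example above, that means dropping the order() call, e.g.:
var q = ds.createQuery('project')
  .filter('tags =', category);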
