How can I use Paw as a GraphQL client?

I've already used the most popular GraphiQL client, but it lacks saving and categorizing queries.
Does Paw support making GraphQL requests, and writing queries with auto-complete and type hinting?

Paw does work as a GraphQL client, but it's not as full-featured as you'd like. As far as I can tell, while you can make any kind of GraphQL request, it doesn't offer auto-complete or type hinting.
The GraphQL spec defines a way to query and execute data. However, each website has its own way of letting you access that query interface.
For example, GitHub's API uses a POST request with a JSON payload to send almost all its GraphQL queries, but Facebook uses a more REST-like GET/POST/DELETE approach with path and query parameters.
Paw is more than capable of making and saving these types of requests.
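For example, here's a minimal sketch in Python of the POST-with-JSON style GitHub uses (requests is the third-party HTTP library; the token is a placeholder):

import requests

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": "{ viewer { login } }"},  # query goes in a JSON payload
    headers={"Authorization": "bearer <token>"},  # placeholder token
)
print(resp.json())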
Big queries
One thing to be aware of is that GraphQL requests can get large when trying to fetch multiple, nested data models (get me all the Comments on Posts that were written by User X). Having a large query inside a single field in Paw can get unwieldy. I recommend using Paw's dynamic values and inserting a file that contains your GraphQL query. This way you can write your queries without having to jam them onto a single line or use tons of escape sequences.
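Outside of Paw, the same trick looks like this: keep the query in its own file and read it at request time (the file name and variables below are made up):

import requests

with open("comments_by_user.graphql") as f:
    query = f.read()  # multi-line query, no escaping or single-line jamming

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": query, "variables": {"login": "octocat"}},  # example variables
    headers={"Authorization": "bearer <token>"},  # placeholder token
)
print(resp.json())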

As of Paw 3.2, GraphQL is supported in the Body tab of a request, with the ability to retrieve and explore the schema via introspection.
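Introspection is defined by the GraphQL spec itself, so you can also fetch the schema from any client; a minimal sketch that lists the type names:

import requests

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": "{ __schema { types { name } } }"},  # standard introspection query
    headers={"Authorization": "bearer <token>"},  # placeholder token
)
print([t["name"] for t in resp.json()["data"]["__schema"]["types"]])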

Related

AWS Lambda, Caching {proxy+}

A simple ASP.NET AWS Lambda is uploaded and functioning, with several GETs like:
{proxy+}
api/foo/bar?filter=value
api/foo/barlist?limit=value
with paths tested in Postman as:
//#####.execute-api.us-west-2.amazonaws.com/Prod/{proxy+}
Now I want to enable API caching, but when I do, only the first API call gets cached and all other calls incorrectly return that first cached value.
i.e. //#####.execute-api.us-west-2.amazonaws.com/Prod/api/foo/bar?filter=value == //#####.execute-api.us-west-2.amazonaws.com/Prod/api/foo/barlist?limit=value; as far as the cache is concerned these return the same result, but they shouldn't.
How do you set up caching in API Gateway so it correctly sees these as different requests, per both path and query?
I believe you can't use {proxy+} because that is a resource/integration itself and that is where the caching is getting applied. Or you can (because you can cache any integration), but you get the result you're getting.
Note: I'll use the word "resource" a lot because I think of each item in API Gateway as the item in question, but technically AWS documentation says "integration" because it's not just the resource but the actual integration on said resource. And said resource has an integration and parameters, or what I'll go on to call query string parameters. Apologies to the terminology police.
Put another way, if you had two resources, GET foo/bar and GET foo/barlist, then you'd be able to set caching on either or both. It is at this resource-based level that caching exists (don't think so much of the final URL path as of the actual resource configured in API Gateway). Unfortunately, it doesn't know to break {proxy+} out into an unlimited number of paths. Actually, it's method plus resource, so I believe you could have different cached results for GET /path and POST /path.
However, you can also choose the integration parameters as cache keys. This would mean that ?filter=value and ?limit=value would be two different cache keys with two different cached responses.
If foo/bar and foo/barlist have the same query string parameters (and you're still using {proxy+}), then you'll run into that duplicate issue again.
So you may wish to do foo?action=bar&filter=value and foo?action=barlist&filter=value in that case.
You'll need to configure this for each query string parameter, of course, so that may start to diminish the ease of the {proxy+} catch-all. Terraform.io is your friend.
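For illustration, a rough boto3 sketch of that per-parameter configuration (the IDs are placeholders, and I believe the patch paths look like this; Terraform's aws_api_gateway_integration resource exposes the equivalent cache_key_parameters argument):

import boto3

apigw = boto3.client("apigateway")

# Declare the query string parameter on the method so it can be referenced.
apigw.update_method(
    restApiId="abc123",   # placeholder REST API id
    resourceId="def456",  # placeholder resource id for GET foo/bar
    httpMethod="GET",
    patchOperations=[{
        "op": "add",
        "path": "/requestParameters/method.request.querystring.filter",
        "value": "false",  # declared but not required
    }],
)

# Add it to the integration's cache keys so each value caches separately.
apigw.update_integration(
    restApiId="abc123",
    resourceId="def456",
    httpMethod="GET",
    patchOperations=[{
        "op": "add",
        "path": "/cacheKeyParameters/method.request.querystring.filter",
    }],
)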
This is something I wish was a bit more automatic/smarter as well. I use {proxy+} a lot and it really creates challenges for using their caching.

How to use Solr Handlers when sending requests from Drupal

I'm using Solr 4.10.2 and Drupal 7.x. I have the Apache Solr Module Framework operating and sending the requests to Solr from Drupal. Currently, when we perform a search, Drupal builds the query and sends it to Solr. Solr just executes the query and returns the results without using its internal handlers, which can be configured through SolrConfig.xml.
I would like to know if there is a way to just send the searched terms (without building a query) from Drupal and let Solr use the internal handlers declared in SolrConfig.xml to handle the request, build the query and then return the data?
The reason for this is we have been working on trying to boost some results when we perform a search (we want exact match first & fuzzy search results after) by changing the "weight" of some fields.
We know that from Back Office we can use the "Bias" function to boost some fields but this is too limited for what we are trying to achieve.
We also know we can change the query sent from Drupal directly from code side by using hook_apachesolr_modify_query() but we prefer changing as little code as possible and using the SolrConfig.xml /handlers which we already have configured to return the results as we want.
Ok, we figured out how to do this:
In order to choose the handler that Solr uses for requests sent from Drupal, we have to implement the hook_apachesolr_query_alter function and add the following code (qt is the standard Solr parameter for selecting a request handler):
$query->addParam('qt', 'MyHandlerName');
We did some extra coding to allow us to change the Handler directly from back office in order to be able to switch handlers without touching the code.
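Under the hood this just adds the qt parameter to the request Drupal sends, which you can verify by calling Solr directly (core and handler names are placeholders; qt dispatch requires handleSelect to be enabled in SolrConfig.xml):

import requests

resp = requests.get(
    "http://localhost:8983/solr/collection1/select",  # placeholder core
    params={"q": "search terms", "qt": "MyHandlerName", "wt": "json"},
)
print(resp.json()["response"]["numFound"])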

Best UI interface/Language to query MarkLogic Data

We will be moving from Oracle to MarkLogic 8 as our datastore and will be using MarkLogic's Java API to talk to the data.
I am exploring for a UI tool (like SQL Developer for Oracle) that can be used with ML. I found that ML's Query Manager can be used for accessing data. But I see multiple options with respect to language:
SQL
SPARQL
XQuery
JavaScript
We need to perform CRUD operations and search the data, and our testing team knows SQL (from Oracle), so I am confused about which route to follow and how to decide which one or two are better to explore. We are most likely to use the JSON document type.
Any help/suggestions would be appreciated.
You already mention you will be using the MarkLogic Java Client API; that should provide most of the common needs you could have, including search, CRUD, facets, lexicon values, and also custom extensions through REST extensions, as the Client API leverages the MarkLogic REST API. It saves you from having to code inside MarkLogic to a large extent.
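To give a feel for it, here is a rough sketch of the same CRUD and search operations against the underlying REST API, using Python's requests (host, port, and credentials are assumptions):

import requests
from requests.auth import HTTPDigestAuth

base = "http://localhost:8000/v1"        # default MarkLogic REST port (assumption)
auth = HTTPDigestAuth("admin", "admin")  # placeholder credentials

# Create or update a JSON document at a chosen URI
requests.put(base + "/documents", params={"uri": "/orders/1.json"},
             json={"name": "example", "status": "active"}, auth=auth)

# Read it back
print(requests.get(base + "/documents",
                   params={"uri": "/orders/1.json"}, auth=auth).json())

# Simple string search across documents
print(requests.get(base + "/search",
                   params={"q": "example", "format": "json"}, auth=auth).json()["total"])

# Delete it
requests.delete(base + "/documents", params={"uri": "/orders/1.json"}, auth=auth)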
Apart from that you can run ad hoc commands from the Query Console, using one of the above mentioned languages. SQL will require the presence of a so-called SQL view (see also your earlier question Using SQL in Query Manager in MarkLogic). SPARQL will require enabling the triple index, and ingestion of RDF data.
That leaves XQuery and JavaScript, which have pretty much identical expressive power and performance. If you are unfamiliar with XQuery and XML languages in general, JavaScript might be more appealing.
HTH!

Web API methods with lots of parameters

I am working on a Web API service for our Web application (current project). One controller in the Web API will be responsible for retrieving a list of entities of a certain type. So far so good. The problem is that we have a request to filter the list based on search criteria. This search/filter criteria has about a dozen different parameters, some of which can be null. So I figured I'd create a custom class (let's call it "EntityFilterCriteria") and instantiate it on the Web application side with whatever filtering fields the user enters (leaving the ones the user does not enter set to null). Now how do I pass this object to my Web API method? I do not want to build a URL with all the parameters because it's going to be a huge URL, plus some parameters may be missing. I can't have a body in the GET HTTP command to serialize my EntityFilterCriteria object, so what do I use? POST? It is not really a POST because nothing is updated on the server side. It is really a GET, as in "get me all the records that match this search criteria". What is the common approach in such situations?
Thanks,
Eddie
Serialize the class to JSON and post it to your server; the JSON string will be your post body. Once it is posted, you can deserialize it into a class with the same signature. Most languages have either built-in support or free 3rd-party modules that can do this for you.
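For example, a sketch of the client side in Python (the endpoint and field names are made up; on the ASP.NET side the body would bind to your EntityFilterCriteria parameter):

import requests

# Only send the criteria the user actually entered.
criteria = {
    "name": "smith",
    "createdAfter": "2015-01-01",
    "status": None,  # left empty by the user
}
payload = {k: v for k, v in criteria.items() if v is not None}

resp = requests.post("https://example.com/api/entities/search", json=payload)
entities = resp.json()  # the matching entities, however large the criteria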
Well, the initial danger of using GET for such a request is that the longest URL guaranteed to work across all web browsers and servers is about 1400 bytes in length.
Personally, I would use JSON to encode all of your filter parameters and submit them to the server in the body of a POST command, due to the lack of a size limit on the sent data.
While there is a semantic difference between GET and POST commands which was intended when the HTTP RFCs were written decades ago, those decades of practical use have shifted things rather significantly (in the same way that barely anybody uses PUT or DELETE).
I don't see any drawbacks to using a POST method to execute your query and get the result back. For example, ElasticSearch uses this approach to execute queries against the database. See this link for example: http://exploringelasticsearch.com/searching_data.html. In REST, POST isn't necessarily used to update data.
Hope it helps.
Thierry

HTTP Verbs and Content Negotiation or GET Strings for REST service?

I am designing a REST service and am trying to weigh the pros and cons of using the full array of HTTP verbs and content negotiation vs. GET string variables. Does my choice affect cacheability? Neither solution may be right for every area.
Which is best for the crud and queries (e.g. ?action=PUT)?
Which is best for api version picking (e.g. ?version=1.0)?
Which is best for return data type(e.g. ?type=json)?
CRUD/queries are best represented with HTTP verbs. A create or update is usually a PUT or POST. A retrieve would be a GET. Deletes would be a DELETE. That's the general mapping. The main point is that a GET doesn't cause side effects, and that the verbs do what you'd expect them to do.
Putting the action in the URI is OK if that's the -only- way to pass it (e.g., the HTTP client library doesn't allow you to send non-GET/POST requests). Most libraries do, though, so it's strongly advised not to pass the verb via the URL.
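For instance (placeholder resource URLs):

import requests

base = "https://api.example.com/widgets"

requests.post(base, json={"name": "sprocket"})    # create
requests.get(base + "/42")                        # retrieve, no side effects
requests.put(base + "/42", json={"name": "cog"})  # update/replace
requests.delete(base + "/42")                     # delete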
The "best" way to version the API would be using HTTP headers on a per-request basis; this lets clients upgrade/downgrade specific requests instead of every single one. Of course, that granularity of versioning needs to be baked in at the start and could severely complicate the server-side code. Most people just use the URL used the access the servers. A longer explanation is in a blog post by Peter Williams, "Versioning Rest Web Services"
There is no best return data type; it depends on your app. JSON might be easier for Ajax websites, whereas XML might be easier for complicated structures you want to query with XPath. Protocol Buffers are a third option. It's also debated whether the return format is best specified in the URL or in the HTTP headers.
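As a sketch of the header-based approach, with a made-up vendor media type carrying both the version and the desired format:

import requests

resp = requests.get(
    "https://api.example.com/widgets/42",
    headers={"Accept": "application/vnd.example.v1+json"},  # version + format in one header
)
print(resp.headers.get("Content-Type"))
print(resp.json())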
For the most part, headers will have the largest effect on caching, since proxies are supposed to respect them when told, as are user agents (obviously UAs behave differently, though). Caching based on URL alone is very dependent on the layers. Some user agents don't cache anything with a query string (Safari, iirc), and proxies are free to cache or not cache as they see fit.
