How to document a Hasura GraphQL API - hasura

My googling is surely failing me, but how would one document the fields and types in a GraphQL API produced by Hasura?

Since fields and types are auto-generated, documentation for them is taken from comments added to tables and columns. You can add them from the console as well as in the metadata YAML. These comments will then appear in the generated GraphQL schema.
See the comments key in the example here.
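Once the comments are in place, they appear as descriptions in the generated schema, which GraphiQL and other tooling surface as documentation. As a quick sanity check, you can run an introspection query against the endpoint; here is a minimal sketch (the endpoint URL, admin secret, and the "users" table name are placeholders):

```typescript
// Query the generated schema and print the descriptions that come from
// table/column comments. Replace the placeholders with your own values.
const introspectionQuery = `
  query {
    __type(name: "users") {
      description            # table comment
      fields {
        name
        description          # column comments
      }
    }
  }
`;

async function checkDescriptions(): Promise<void> {
  const res = await fetch("https://my-hasura.example.com/v1/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-hasura-admin-secret": "<admin-secret>", // placeholder
    },
    body: JSON.stringify({ query: introspectionQuery }),
  });
  const json = await res.json();
  console.log(JSON.stringify(json.data, null, 2));
}

checkDescriptions().catch(console.error);
```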

Related

How do I store related firebase data?

I'm making a firebase app where there's the concept of posts and authors.
I need a field called postedBy to give information about the post author. I'm currently confused about how best to implement this. Here's what I've thought about...
Store a postedBy field with the post author's ID as its value. My issue with this is that I then have to send additional requests for the user information, like name, profile picture, etc.
Store a postedBy field with an exact clone of the author's data (name, profile URL, etc.). My issue with this is: what if the user changes their profile information? Do I have to loop through all the posts' data to apply the changes?
What is the best way to solve an issue like this?
For your case, I would say that the best option would probably be to use only the ID as the value. This way, you can perform queries to return the author's details using only the ID.
This approach should be the simplest and easiest way to build your application. Since the ID is what connects your data and drives the queries, it should make your work easier.
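As a rough sketch of what this normalized approach can look like with the Firebase modular SDK (the "posts" and "users" paths, the field names, and the config are assumptions for illustration, not your actual schema):

```typescript
import { initializeApp } from "firebase/app";
import { getDatabase, ref, get } from "firebase/database";

// Placeholder config - replace with your project's Firebase config.
const app = initializeApp({ /* apiKey, databaseURL, ... */ });
const db = getDatabase(app);

interface Post {
  title: string;
  postedBy: string; // only the author's user ID is stored on the post
}

interface Author {
  name: string;
  profilePicture: string;
}

// Load a post, then resolve its author with a second lookup by ID.
async function getPostWithAuthor(postId: string): Promise<{ post: Post; author: Author }> {
  const postSnap = await get(ref(db, `posts/${postId}`));
  const post = postSnap.val() as Post;

  const authorSnap = await get(ref(db, `users/${post.postedBy}`));
  const author = authorSnap.val() as Author;

  return { post, author };
}
```

Because the author data lives in exactly one place, a profile change only needs to be written once, and every post picks it up on the next read.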
The article below has an example of a blog application that you can take a look at to get some insight into how to structure your application as well - mainly the part about authors and posts. :)
Learning Firebase: Creating a blog article object
Let me know if the information helped you!

Concrete5 Stacks & Blocks

I am working on an existing concrete5 website and I don't have sufficient knowledge of the concrete5 database structure.
I am creating an API to fetch all the products that are added in "Stacks and Blocks".
I have the cID and stID values and have fetched the info from the Stacks table. Can anyone please let me know how to fetch the complete details, including images, from the database?
I tried to build the relation between the tables based on cID and stID.
Thanks in advance.
The images are stored in the application/files directory, although you'll need to retrieve the path from the database to find them. As #1stthomas noted, without more details of the information you are working with, it's difficult to provide any specific advice.
FWIW, the API documentation for concrete5 can be found here.

aikau - implement export search results functionality

I have recently started developing with Aikau in Alfresco Share.
I want to implement functionality to export search results to a CSV file.
For that, I can change the back-end repository web script to return CSV data.
Now, at the Alfresco Share end, I was able to show the export link by adding a new widget to FCTSRCH_TOP_MENU_BAR. I used alfresco/renderers/PropertyLink to display this link. The missing part for me is: how can I invoke the search web script, passing an additional param format=csv along with all the query parameters used to retrieve the results?
I am stuck with that. If I use ALF_CRUD_GET_ALL as the publishTopic and provide the URL there, it invokes the sample web script (which I created to return a sample CSV response) and returns the response. However, the CSV doesn't come back as a downloadable response. I am stuck on how to achieve the export-to-CSV functionality for search results.
It would be great if any of you can help me here and provide your guidance/suggestions.
This blog post provides an example of how you can customize the search page in Share. Although it specifically addresses changing the search queries, the basic extension approach is more or less the same, in that you will want to change the data that is used to send an XHR request. I think the major difference here is that you may need to make more in-depth updates to the service - in particular with regard to the switch statement that is used to build the advanced search query object.
If you have extended or replaced the default search REST API then I would expect that you will need to call the same URL, but if you have provided an entirely new REST API to return the CSV data then you'll also need to change the URL that is used by the service.
In terms of providing a link for downloading the content, we have previously implemented something in the DragAndDropModelCreationService (see the generateDownload function), but this only works in Chrome due to security limitations around generating files to download.
Your best bet may be to temporarily store the CSV content on the repository in a hidden location and then use the standard download links to allow it to be downloaded - this would be more complex but would provide better cross-browser support. Something similar is done for the "Download as ZIP" action.
OK, with the extra information provided I would do the following...
The information on the process of adding widgets to the search page is detailed quite well here (although you're not adding a view, you can follow the same approach to add a new PropertyLink after the widget with the id "FCTSRCH_RESULTS_COUNT_LABEL").
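As a very rough sketch, the extra entry in the page model might look something like the following (Aikau models are plain object structures; the id, label, and "DOWNLOAD_CSV" topic are placeholders I've made up, so check the PropertyLink documentation for the exact config options your version supports):

```typescript
// Hypothetical widget entry to add after FCTSRCH_RESULTS_COUNT_LABEL.
const exportCsvLink = {
  id: "CUSTOM_EXPORT_CSV_LINK",
  name: "alfresco/renderers/PropertyLink",
  config: {
    currentItem: { label: "Export results as CSV" }, // value the link renders
    propertyToRender: "label",
    publishTopic: "DOWNLOAD_CSV", // handled by the custom service described below
    publishGlobal: true,
  },
};
```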
The approach I would take would be to include an additional custom service on the page that subscribes to the "ALF_RETRIEVE_DOCUMENTS_REQUEST_SUCCESS" topic (which is published on a completed search). It should save the search response in a variable in preparation for users clicking on the PropertyLink.
This custom service should also subscribe to a topic that is published by the PropertyLink (called, say, "DOWNLOAD_CSV"). This custom service could then generate a file download using the approach described in my previous answer, with the CSV data that will have been provided in the payload. As I said though, this may only work with some browsers due to security reasons.
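For the download step itself, a minimal browser-side sketch of turning CSV text into a file download looks like this (the function name and file name are just illustrative; the same browser caveats mentioned above apply):

```typescript
// Turn a CSV string into a client-side file download via a temporary Blob URL.
function downloadCsv(csvText: string, fileName: string): void {
  const blob = new Blob([csvText], { type: "text/csv;charset=utf-8" });
  const url = URL.createObjectURL(blob);

  const link = document.createElement("a");
  link.href = url;
  link.download = fileName; // hints the browser to save rather than navigate
  document.body.appendChild(link);
  link.click();

  document.body.removeChild(link);
  URL.revokeObjectURL(url);
}

// e.g. downloadCsv("name,modified\nreport.docx,2016-01-01", "search-results.csv");
```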
If your custom search WebScript were able to store the CSV data as a node on the Repository then you could just provide the NodeRef of the CSV data in the search response, and the PropertyLink could just publish the "ALF_DOWNLOAD" topic for the DocumentService to handle the download.
Trying to generate a file to download on the client side is going to be an issue for most browsers I think.

How to specify the fields to the WordPress API v2

I'm using the WordPress REST API in my project and sending a GET request to:
http://myblog/wp-json/wp/v2/posts
and it's working quite alright, but I want to specify the fields, though I don't know how. I have looked at the documentation and still don't know how to go about it. For example, using the public API:
https://public-api.wordpress.com/rest/v1.1/sites/www.mysite.com/posts?number=100&fields=title,excerpt,featured_image
returns only the specified fields. How do I do this with the v2 API?
Here's how to access a list of titles and excerpts using REST API v2:
https://www.example.com/wp-json/wp/v2/posts?_fields[]=title&_fields[]=excerpt&per_page=100&offset=100
https://developer.wordpress.org/rest-api/extending-the-rest-api/modifying-responses/
As it states there, the REST API v2 returns a certain set of default fields, and if you want different ones, then you have to implement this as described in that document.
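For example, a minimal client-side sketch of requesting only titles and excerpts with the _fields parameter (the site URL is a placeholder):

```typescript
// Fetch posts from the WP REST API v2, limited to the title and excerpt fields.
async function getPostTitlesAndExcerpts(): Promise<void> {
  const url =
    "https://www.example.com/wp-json/wp/v2/posts" +
    "?_fields[]=title&_fields[]=excerpt&per_page=100";

  const res = await fetch(url);
  const posts: Array<{
    title: { rendered: string };
    excerpt: { rendered: string };
  }> = await res.json();

  for (const post of posts) {
    console.log(post.title.rendered, "-", post.excerpt.rendered);
  }
}

getPostTitlesAndExcerpts().catch(console.error);
```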
The easy solution to this issue would be to use the ACF to REST API plugin, or an equivalent plugin that can extend the REST API for you. I have used this plugin successfully on many sites.
If this is not possible then you will need to modify the response as has been outlined by other answers. You can read more about that here.
You can use ?_fields[]=title&_fields[]=excerpt

Drupal 6 - Views2 - How to build a view of non-nodes

I need to build views in Drupal of non-nodes - actually, objects external to Drupal. The API that I am calling passes me back a stdClass object.
Anyone have ideas on how to get Views2 to display non-node objects?
My understanding of Views 2 is that it is meant to work with information stored in a database.
If you don't have access to the database against which the API was written, then consider writing the objects the API returns into a table. The easiest thing would probably be to create nodes from the objects. Then you could access them with Views 2.
This is similar to the approach taken by the Activity Stream module (http://drupal.org/project/activitystream). It creates nodes from the data returned by various APIs. Check out the module's code for examples of how to create the nodes:
http://cvs.drupal.org/viewvc.py/drupal/contributions/modules/activitystream/activitystream.module?view=markup
On the other hand, if you have access to the source database, you might consider exposing the tables of that database to Views directly. This is the approach taken in the latest Views 2 integration code included with CiviCRM v2.2.3, which you can review here:
http://svn.civicrm.org/civicrm/trunk/drupal/modules/views/
CiviCRM is a Drupal module that writes data to tables outside of the Drupal database -- not into nodes. The views integration code exposes most of those tables to Drupal.
Hope this helps.
-- Andrew B.
According to the Views 3 roadmap, Views will eventually work with non-SQL data sources. In the meantime, some very preliminary work has been done in this area, using the Flickr API as a proof-of-concept.
Fixed in latest 6.x-1.x-dev branch. VBO now supports users and comments in addition to nodes. A special hook_object_info can be used to support any other type of object. Please try it and let me know!
You have to expose your custom data to Views as described here:
http://www.darrenmothersele.com/drupal-blog/drupal-views2-handlers
http://views-help.doc.logrus.com/help/views/api-tables
Views is built for working with nodes + CCK exclusively. If you want to create views for custom pages, you'll need to code some additional module + theme pages.

Resources