Get Dynamics 365 Finance & Operations data entity via HTTP GET as CSV output

Given a Finance & Operations environment, the environmentdomain.com/data/EntityName URL allows OData data retrieval in JSON format.
Is there a way to give extra query string parameters to download the results as CSV using HTTP GET only?
A workaround is described here; however, it has more overhead than is ideal for ad hoc situations.

Unfortunately, the OData features that D365FO supports do not include the system query option $format.
So no, as far as I can tell, there is no query string parameter that would return the HTTP GET response in CSV format.
Additional workarounds
Since the question mentions a workaround that has some overhead for ad hoc situations, here are two more suggestions for converting the response to CSV format with less overhead.
Postman
Postman is often used for ad hoc testing of the D365FO OData API. Convert a JSON response to CSV describes how a JavaScript test can be added to a Postman request to convert the JSON response to CSV format and write it to the console.
PowerShell
The Invoke-RestMethod cmdlet can be used to send HTTP GET requests to the D365FO API. The result can then be used with the Export-Csv cmdlet to create a CSV file.
I strongly recommend using the d365fo.integrations PowerShell module written by @splaxi specifically to interact with the D365FO OData API instead of Invoke-RestMethod. Its Get-D365ODataEntityData cmdlet can be used to send an HTTP GET request.
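Whichever client you use, the workaround boils down to the same two steps: issue the HTTP GET, then flatten the JSON value array to CSV. As a minimal sketch of those steps in Python (the environment URL, the CustomersV3 entity name, and the bearer token are placeholders; acquiring the token is elided):

import csv
import requests

# Placeholders: environment URL, entity name, and a valid bearer token.
url = "https://environmentdomain.com/data/CustomersV3"
headers = {"Authorization": "Bearer <token>"}

# OData wraps the records in a "value" array.
rows = requests.get(url, headers=headers).json()["value"]

with open("entity.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())  # assumes at least one record
    writer.writeheader()
    writer.writerows(rows)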

Related

How to transform web server SOAP XML data into CSV

Can anyone help with how to transform SOAP XML data into CSV? I have searched Google but have not found anything on this topic; most of the results cover how to transform an XML file into CSV. I don't need XML file transformation; I just need to know how to call a SOAP web service directly and transfer its data into CSV.
You can get the data using a SOAP client. After that, you can search for how to convert a C# object or object list into CSV, which should be easy to find.
Check out this post for a SOAP client example.
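The answer above takes the C# route. Purely as a sketch of the same flow in Python (call the SOAP service, then flatten the result to CSV) using the zeep library, where the WSDL URL and the GetOrders operation are hypothetical stand-ins for your service:

import csv
from zeep import Client
from zeep.helpers import serialize_object

client = Client("https://example.com/service?wsdl")  # hypothetical WSDL
# serialize_object turns zeep's response objects into plain dicts.
records = [serialize_object(r) for r in client.service.GetOrders()]

with open("orders.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)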

How to make an HTTP GET request to GitHub (for example, Get Issues) using R?

So I want to gather some information about an institute where students upload projects and other work to GitHub.
I want to gather all of that and analyse it, but in order to do that I have to request the data from GitHub. I want to do that using R, or alternatively Python.
But I don't quite understand how I could make a GET request in R.
If anyone could show me an example, I would appreciate it.
Thanks!
Requests is an HTTP library for Python that is very robust and easy to use. JSON and XML responses can be parsed easily.
Use it as below:
import requests
r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
Use your GitHub credentials in place of 'user' and 'pass' for authentication.
To get a list of repositories for a user or organisation, you can use GitHub's REST API, which is well documented. An HTTP GET request like the one below fetches the repositories of the specified org.
GET /orgs/:org/repos
You can find it here: https://developer.github.com/v3/repos/#list-organization-repositories
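Putting that together with Requests, the organisation call might look like the following minimal sketch (my-org is a placeholder for the organisation's name):

import requests

# List the repositories of an organisation via GitHub's REST API.
r = requests.get('https://api.github.com/orgs/my-org/repos', auth=('user', 'pass'))
for repo in r.json():
    print(repo['name'], repo['html_url'])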

How can I use Paw as a GraphQL client

I've already used the most popular GraphiQL client, but it lacks support for saving and categorizing queries.
Does Paw support making GraphQL requests and writing queries with auto-complete and type-hinting?
Paw does work as a GraphQL client, but it's not as full-featured as you'd like. As far as I can tell, while you can make any kind of GraphQL request, it doesn't support any kind of auto-complete or type-hinting.
The GraphQL spec defines a way to query and execute data. However, each website has its own way of letting you access that query interface.
For example, GitHub's API uses a POST request with a JSON payload to send almost all its GraphQL queries, but Facebook uses a more REST-like GET/POST/DELETE approach with path and query parameters.
Paw is more than capable of making and saving these types of requests.
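To make that concrete: a GraphQL call in the GitHub style is nothing more than an HTTP POST whose JSON payload carries the query as a string, which is exactly the kind of request Paw can save and replay. A minimal sketch with Python's Requests, assuming a GitHub personal access token in place of <token>:

import requests

query = "query { viewer { login } }"
r = requests.post(
    "https://api.github.com/graphql",
    json={"query": query},  # the query travels inside the JSON payload
    headers={"Authorization": "Bearer <token>"},
)
print(r.json())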
Big queries
One thing to be aware of is that GraphQL requests can get large when trying to fetch multiple, nested data models (get me all the Comments on Posts that were written by User X). Having a large query inside a single field in Paw can get unwieldy. I recommend using Paw's dynamic values and inserting a file that contains your GraphQL query. This way you can write your queries without having to jam them onto a single line or use tons of escape sequences.
As of Paw 3.2, GraphQL is supported in the Body tab of a request, with the ability to retrieve and explore the schema via introspection.

Web API methods with lots of parameters

I am working on a Web API service for our Web application (current project). One controller in the Web API will be responsible for retrieving a list of entities of a certain type. So far so good.

The problem is that we have a request to filter the list based on search criteria. This search/filter criteria has about a dozen different parameters, some of which can be null. So I figured I would create a custom class (let's call it "EntityFilterCriteria") and instantiate it on the Web application side with whatever filtering fields the user enters (leaving the ones the user does not enter set to null).

Now how do I pass this object to my Web API method? I do not want to build a URL with all the parameters because it would be huge, plus some parameters may be missing. I can't have a body in the GET HTTP command to serialize my EntityFilterCriteria object, so what do I use? POST? It is not really a POST because nothing is updated on the server side. It is really a GET, as in "get me all the records that match this search criteria". What is the common approach in such situations?
Thanks,
Eddie
Serialize the class to JSON and post it to your server; the JSON string will be your POST body. Once it is posted, you can deserialize it into a class with the same signature. Most languages have either built-in support or free third-party modules that can do this for you.
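As an illustration of that round trip from the client side (the /api/entities/search route and the filter fields are hypothetical), the body simply carries whichever criteria the user filled in, and the omitted fields stay null on the server:

import requests

# Send only the criteria the user entered; the server deserializes the
# JSON body into its EntityFilterCriteria class.
criteria = {"name": "Smith", "status": "active", "createdAfter": "2015-01-01"}
r = requests.post("https://myapp.example.com/api/entities/search", json=criteria)
entities = r.json()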
Well, the initial danger of using GET for such a request is that the longest URL you can send that is guaranteed to work across all web browsers and servers is about 1400 bytes in length.
Personally, I would use JSON to encode all of your filter parameters and submit them to the server in the body of a POST request, due to the lack of a size limit on the sent data.
While there is a semantic difference between GET and POST requests, which was intended when the HTTP RFCs were written decades ago, those decades of practical use have shifted things rather significantly (in the same way that barely anybody uses PUT or DELETE).
I don't see any drawbacks to using a POST method to execute your query and get the result back. For example, Elasticsearch uses this approach to execute queries against the database. See this link for an example: http://exploringelasticsearch.com/searching_data.html. In REST, POST isn't necessarily used only to update data.
Hope it helps.
Thierry

Excel issuing multiple requests to OData service

I've built an OData endpoint using a generic .ashx handler to run some SQL against a SQL Server database and format the payload using ODataLib 5.6. It appears to work as expected and I can view the results in a browser and can import the data into Excel 2013 successfully using the From OData Data Feed option on the Data ribbon.
However, I've noticed that Excel is actually issuing two GET requests when inspecting the HTTP traffic in Fiddler. This is causing some performance concerns since the SQL statement is being issued twice and the XML feed is being sent across the wire twice. The request headers look identical in both requests. The data is not duplicated in Excel. Is there a way to prevent the endpoint from being called multiple times by Excel? I can provide a code snippet or the Fiddler trace if needed.
My suggestion would be to use Power Query for this instead of ADO.NET.
The reason for the "duplicated" calls is that ADO.NET cannot identify the shape of the data on the first pass. It gets the schema back first, so that it knows the details of the data, and then it can get and recognize the real data with the second call. The first call goes through the ADO.NET provider's GetSchema call, but that particular provider determines the schema by looking at the data.
