I've built an OData endpoint using a generic .ashx handler to run some SQL against a SQL Server database and format the payload using ODataLib 5.6. It appears to work as expected: I can view the results in a browser and can import the data into Excel 2013 successfully using the From OData Data Feed option on the Data ribbon.
However, inspecting the HTTP traffic in Fiddler, I've noticed that Excel actually issues two GET requests. This raises performance concerns, since the SQL statement is executed twice and the XML feed is sent across the wire twice. The request headers look identical in both requests, and the data is not duplicated in Excel. Is there a way to prevent Excel from calling the endpoint multiple times? I can provide a code snippet or the Fiddler trace if needed.
My suggestion would be to use Power Query for this instead of ADO.NET.
The reason for the "duplicated" calls is that ADO.NET cannot identify the shape of the data on the first pass. The first request comes from the ADO.NET provider's GetSchema call, and because this particular provider determines the schema by looking at the data, it fetches the feed once just to learn the schema. Knowing the details about the data, it then retrieves and recognizes the real data with the second call.
In a Dynamics 365 Finance & Operations environment, the environmentdomain.com/data/EntityName URL allows OData data retrieval in JSON format.
Is there a way to add extra query string parameters to download the results as CSV, using only an HTTP GET?
A workaround is described here; however, it has more overhead for ad hoc situations.
Unfortunately, the D365FO OData implementation does not support the system query option $format from the OData specification.
So no, as far as I can tell, there is no query string parameter that would return the HTTP GET request response in CSV format.
Additional workarounds
Since the question mentions a workaround that has some overhead for ad hoc situations, here are two more suggestions for how the response can be converted to CSV format with less overhead.
Postman
Postman is often used for ad hoc testing of the D365FO OData API. Convert a JSON response to CSV describes how a JavaScript test can be added to a Postman request to convert the JSON response to CSV format and write it to the console.
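The core of that conversion (take the `value` array of the OData JSON response and emit CSV) can be sketched outside Postman as well. Here is a minimal Python illustration; the entity fields and payload are made up for the example, only the `value` wrapper is standard OData:

```python
import csv
import io
import json

# A stand-in for a D365FO-style OData JSON response (field names are hypothetical).
response_body = json.dumps({
    "value": [
        {"CustomerAccount": "C0001", "Name": "Contoso"},
        {"CustomerAccount": "C0002", "Name": "Fabrikam"},
    ]
})

def odata_json_to_csv(body: str) -> str:
    """Convert the 'value' array of an OData JSON response to CSV text."""
    rows = json.loads(body)["value"]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

print(odata_json_to_csv(response_body))
```

The Postman JavaScript test does the same thing in principle: iterate the `value` array, write a header row from the keys, then one CSV line per record.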
PowerShell
The Invoke-RestMethod cmdlet can be used to send HTTP GET requests to the D365FO API. The result can then be used with the Export-Csv cmdlet to create a CSV file.
I strongly recommend you use the d365fo.integrations PowerShell module written by @splaxi, built specifically to interact with the D365FO OData API, instead of Invoke-RestMethod. The Get-D365ODataEntityData cmdlet can be used to send an HTTP GET request.
How can I use a data source that is just a plain HTTP data source? I.e. https://cnlohr.com/data_sources/ccu_test where it's just a number?
I could potentially wrap it in JSON, but I can't find any basic JSON, REST, or raw HTTP data source for Grafana Connect.
Ah! Apparently the CSV Plugin here DOES work. I just had to re-create it a few times to get around the internal server error: https://grafana.com/grafana/plugins/marcusolsson-csv-datasource/
Once added to your system, add it as a new integration/connection. Be sure to make each query only output one number (you will need multiple queries, one for each column). Then you can save each as a recorded query.
I am planning to create a SQLite table in my Android app. The data comes from the server via a web service.
I would like to know the best way to do this.
Should I transfer the data from the web service as a SQLite db file and merge it, fetch all the data in a SOAP request and parse it into the table, or use a REST call?
The general size of the data is 2MB with 100 columns.
Please advise on the best approach to get this data quickly, with the least load on the device.
My Workflow is:
Download a set of 20,000 addresses and save them to the device's SQLite database. This operation happens only once, when the app runs for the first time or when you want to refresh the whole app's data.
Update these records whenever there is a change on the server.
Now I can get this data from the server either as JSON, XML, or a plain SQLite file. I want to know the fastest way to store this data in the Android database.
I tried all the above methods and found that getting the database file from the server and copying its data into the database is faster than getting the data as XML or JSON and parsing it. Please advise if I am right or wrong.
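Copying the rows from a downloaded database file into the app's database avoids parsing entirely: SQLite can ATTACH the downloaded file and copy the rows with a single INSERT…SELECT. A minimal sketch using Python's sqlite3 module (the `addresses` table and its columns are made up for illustration; on Android the same SQL can be run through SQLiteDatabase):

```python
import os
import sqlite3
import tempfile

# Build a stand-in for the file downloaded from the server (hypothetical schema).
downloaded_path = os.path.join(tempfile.mkdtemp(), "server.db")
src = sqlite3.connect(downloaded_path)
src.execute("CREATE TABLE addresses (id INTEGER PRIMARY KEY, street TEXT)")
src.executemany("INSERT INTO addresses VALUES (?, ?)",
                [(1, "1 Main St"), (2, "2 High St")])
src.commit()
src.close()

# The app's own database (in-memory here for the example).
app_db = sqlite3.connect(":memory:")
app_db.execute("CREATE TABLE addresses (id INTEGER PRIMARY KEY, street TEXT)")

# ATTACH the downloaded file and copy its rows across in one statement.
app_db.execute("ATTACH DATABASE ? AS server", (downloaded_path,))
app_db.execute("INSERT OR REPLACE INTO addresses SELECT * FROM server.addresses")
app_db.commit()
app_db.execute("DETACH DATABASE server")

count = app_db.execute("SELECT COUNT(*) FROM addresses").fetchone()[0]
```

INSERT OR REPLACE also covers the refresh case, since re-downloaded rows overwrite existing ones by primary key.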
If you are planning to use sync adapters, then you will need to implement a content provider (or at least a stub) and an authenticator. Here is a good example that you can follow.
Also, you have not explained the use case of the web service, which makes it hard to decide what architecture to suggest. But REST is a good style for writing your services, and using JSON over XML is advisable due to data format efficiency (or better yet, give Protocol Buffers a shot).
And yes, sync adapters are better to use, as they already provide a great set of features that you would otherwise have to implement yourself in a background service (e.g., periodic sync, auto sync, exponential backoff, etc.).
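The exponential backoff mentioned above is a simple schedule: the retry delay doubles after each failed sync attempt, up to a cap. A minimal language-agnostic sketch in Python (the base delay and cap values are arbitrary):

```python
def backoff_delays(base=1.0, cap=64.0, retries=8):
    """Delays (seconds) for successive sync retries: doubles each attempt, capped."""
    return [min(base * (2 ** attempt), cap) for attempt in range(retries)]

delays = backoff_delays()
# [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 64.0]
```

The sync adapter framework handles this schedule for you; the sketch just shows what "exponential backoff" means in practice.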
To have less load on the device you can implement a sync adapter backed by a content provider. You serialize/deserialize data when you upload/download data from the server. When you need to persist data from the server, you can use the bulkInsert() method of the content provider and persist all your data in a single transaction.
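The speedup from bulkInsert() comes from wrapping all the inserts in one transaction instead of committing after each row. The idea can be sketched with Python's sqlite3 (the `addresses` table is hypothetical; in a ContentProvider the equivalent is beginTransaction()/setTransactionSuccessful()/endTransaction() around the inserts):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE addresses (id INTEGER PRIMARY KEY, street TEXT)")

rows = [(i, f"{i} Main St") for i in range(1000)]

# One transaction for all rows: this is what an overridden bulkInsert()
# should do internally, rather than one commit per insert() call.
with conn:  # the context manager commits once at the end of the block
    conn.executemany("INSERT INTO addresses VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM addresses").fetchone()[0]
```

One commit for a batch of rows avoids paying the per-transaction disk sync cost a thousand times over, which is where most of the insert time goes on a device.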
I am having an issue, both locally and in production: if I submit the request in two different browsers (e.g. Chrome and Firefox), data from the first request is returned in the second. I don't use threading or static variables. I have a solution that consists of:
Data
Domain
Service
Web
The Web project calls a service function that uses EF4 via an EDMX file. For some reason, multiple requests hitting the server at the same time return data from the first request. We currently have six developers looking at this. The parameters are unique, and we are not passing the wrong ones. I am looking for any insight into whether it's something inherent in the framework that we don't fully understand. For example, we are using:
Ninject
AutoMapper
EF4
This is a very strange issue that is baffling us and, if you have read this far, is hard to explain.
We are receiving HL7 2.x using the BTAHL7 accelerator. I want to dump the raw HL7 message to a SQL table, along with some discrete data including the control ID and others. My receive location uses the BTAHL72XReceivePipeline component. Is it possible to subscribe to the raw message instead of the parsed XML format?
You'll have to use a custom pipeline component, something like this: http://codebetter.com/jefflynch/2006/04/08/biztalk-server-2006-archive-pipeline-component/
You can retrieve the raw message as the first step in the pipeline.
The UltraPort MS SQL Schema Engine does exactly what you're looking for. That's all it does; it's very fast, very good at it, and has a free, fully functional trial. It sets up in literally minutes, and the vendor has really good customer service. If you call in, they'll walk you through a 10-15 minute example of importing HL7 messages (and they actually encourage you to use your own HL7 data if you have any). Those 10-15 minutes, including downloading and installing the software, will answer 90% of any questions you might ever have.
Home Page: http://www.hermetechnz.com/EasyHL7/prod_sql.asp
Online Help: http://www.hermetechnz.com/Documentation/UltraPort/MSSQL/index.html
It stores the unparsed HL7 message, breaks it into parsed data tables, and (optionally) stores the unparsed segments as individual rows.
Hope this helps.