Why is Dataverse taking so long to respond to simple requests?

I'm using Microsoft.PowerPlatform.Dataverse.Client to make requests to my Dynamics 365 environment, and every request takes a long time to finish, even though the requests are processed almost instantly on the Dataverse side. My project is .NET 5.
For example, in this picture you can see that it takes 17 seconds to create a Contact on my project's side (from sending the request to receiving the response):
Here is the code with logging:
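(The original code screenshot is not reproduced here; the following is only a hypothetical sketch of a timed create along the lines described, where serviceClient, logger, accountId, and the attribute values are illustrative placeholders, not the asker's actual code.)

var stopwatch = System.Diagnostics.Stopwatch.StartNew();
var contact = new Entity("contact")
{
    ["firstname"] = "John",
    ["lastname"] = "Doe",
    // entity reference resolved as part of the create:
    ["parentcustomerid"] = new EntityReference("account", accountId)
};
logger.LogInformation("Sending create request...");
Guid id = serviceClient.Create(contact);  // blocks until Dataverse responds
stopwatch.Stop();
logger.LogInformation($"Create returned after {stopwatch.ElapsedMilliseconds} ms");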
However, on the Dataverse side I can see that the creation completes in under a second:
Why does it take so long to return a response when the contact is created almost instantly?
Would it be faster to create the contact without the entity references and then update it? If so, why?

Related

ASP.Net API App - continual HTTP 502.3 errors

My team and I have been at this for four full days now, analyzing every log available to us (Azure Application Insights, you name it, we've analyzed it), and we cannot get to the root cause of this issue.
We have a customer who is integrated with our API to make search calls and they are complaining of intermittent but continual 502.3 Bad Gateway errors.
Here is the flow of our architecture:
All resources are in Azure. The endpoint our customers call is a .NET Framework 4.7 Web App Service in Azure that acts as the stateless handler for all the API calls and responses.
This API app sends the calls to an Azure Service Fabric cluster. That cluster load-balances on the way in and distributes the API calls to our Search Service application. The Search Service application then generates an Elasticsearch query from the API call and sends that query to our Elasticsearch cluster.
ElasticSearch then sends the results back to Service Fabric, and the process reverses from there until the results are sent back to the customer from the API endpoint.
What may separate our process from a typical API is that our response payload can be relatively large, depending on the search. On average, over these last several days, the payload of a single response has been anywhere from 6 MB to 12 MB. Our searches simply return a lot of data from Elasticsearch. In any case, a normal search typically executes and returns in 15 seconds or less. For now, we have already increased our timeout window to 5 minutes just to try to handle what is happening and reduce timeout errors while their searches are taking so long. We increased the timeout via the following code in Startup.cs:
services.AddSingleton<HttpClient>(s =>
{
    return new HttpClient { Timeout = TimeSpan.FromSeconds(300) };
});
I've read in some places that you actually have to set this in web.config instead of (or at least in addition to) here. I'm not sure if that's true?
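For what it's worth, if the app is actually ASP.NET Core 2.1+ running on .NET Framework 4.7, a hedged alternative sketch is to register the client through IHttpClientFactory (Microsoft.Extensions.Http) so every client the factory hands out carries the timeout; the client name here is purely illustrative:

services.AddHttpClient("search", client =>
{
    // assumption: Microsoft.Extensions.Http is referenced and the app uses IHttpClientFactory
    client.Timeout = TimeSpan.FromSeconds(300);
});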
So the customer who is getting the 502.3 errors has significantly increased the volume they send us over the last week, but we believe we are fully scaled to handle it. They are still trying to put the issue on us, but after many days of research, I'm starting to wonder if the problem is actually on their side. Could it be that they are not equipped to take the increased payload on their side? Could it be that their integration architecture is not scaled enough to take the return payload from the increased volumes? When we observe our resource usage (CPU/RAM/IO) on all of the above applications, it is all normal: everything below 50%. This also makes me wonder if this is on their side.
I know it's a bit of a subjective question, but I'm hoping for some insight from someone who may have experienced this before, and even more importantly, from someone who has experience with a .NET API app in Azure that returns large datasets in its responses.
Any code blocks of our API app, or screenshots from Application Insights are available to post upon request - just not sure what exactly anyone would want to see yet as I type this.

Understanding App Insights end to end for occasional long response times

Background: I have an ASP.NET Core app with an API method that takes the file name of a blob the frontend has uploaded to Azure Blob Storage. It then needs to create a thumbnail version of the blob and return the name of the newly uploaded thumbnail blob. Sometimes, for exactly the same file size, it can take up to 40 seconds to complete; mostly it's around 400 ms.
Below is the end to end from App Insights, I have a few things I don't understand:
1) The request duration is 37.5 s, yet the other operations add up to nowhere near that time.
2) Why are there calls to the master db? We are using EF6 with multiple contexts.
3) The app is using an Azure App Service and SQL Azure. I don't understand why the response time is so inconsistent.
Any help would be much appreciated!
I've noticed multiple times that the first request after an application is deployed to Azure, or after a long period in which no requests were made to the application, takes significantly longer to get a response.
As far as I remember, this is related to the start-up time of the site (if you're using an App Service on a Windows-based underlying VM, it still uses IIS as a reverse proxy).
I solved the issue by configuring health checks that occasionally perform requests to the app.
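A minimal sketch of that setup, assuming ASP.NET Core 2.2+ with the built-in health checks middleware; the /health path is an arbitrary choice, and Azure App Service's Health check feature (or any external pinger) can then hit it periodically to keep the site warm:

public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks();
}

public void Configure(IApplicationBuilder app)
{
    // exposes a liveness endpoint at /health for the periodic pinger to call
    app.UseHealthChecks("/health");
}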
Also, in addition to Application Insights (which logs information only after the application has started), you can try the tools listed here to see more information.
Hope it helps!
1.
The request timeline only shows you the time span for the whole request (37.5 s) and the individual time spans for each dependency.
A dependency here is another call that reports its run time to Application Insights.
In your example, each call to the database is automatically tracked as a dependency. The code that runs after each database call is not.
So, for example, requesting a database entry that takes 200 ms, then issuing a Thread.Sleep of 2 seconds, then requesting another database entry that takes 300 ms, would show up as a 2-second gap between the two database-call dependencies, which are listed at 200 ms and 300 ms respectively.
You can use TelemetryClient.TrackDependency to wrap parts of your own code in a dependency of its own. That way your own code appears as an entry on the request timeline.
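A minimal sketch of that, assuming an injected TelemetryClient (Microsoft.ApplicationInsights) and a hypothetical GenerateThumbnail method standing in for the code you want to see on the timeline:

var startTime = DateTimeOffset.UtcNow;
var timer = System.Diagnostics.Stopwatch.StartNew();
var success = true;
try
{
    GenerateThumbnail(blobName); // hypothetical: the slow code to surface on the timeline
}
catch
{
    success = false;
    throw;
}
finally
{
    timer.Stop();
    telemetryClient.TrackDependency(
        "InProcess",          // dependency type shown in App Insights
        "GenerateThumbnail",  // dependency name on the request timeline
        blobName,             // data/command text
        startTime,
        timer.Elapsed,
        success);
}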
2.
Depending on your Entity Framework database initializer, EF will connect to the master database on context creation (e.g. to create the database if it does not exist).
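A one-line sketch of avoiding that, assuming EF6 and that you manage the schema yourself; MyContext is a placeholder for your own DbContext type:

// e.g. in Application_Start or at web app startup:
System.Data.Entity.Database.SetInitializer<MyContext>(null); // no initializer, no master-db check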
3.
Try tracking your own code to find out which parts of it are slow. EF has a few performance issues to consider; try to understand the performance caveats of the libraries you use. If your calls are inconsistently slow, it might be an issue with resources being over-utilized or caches being emptied too early (like EF's warm vs. cold queries).

How to efficiently stream video over HTTP directly from SQL Server?

I'm trying to implement a video-streaming service. I use ASP.NET Web API, and from what I've researched, PushStreamContent is exactly what I want. It works very well, sending HTTP response 206 (Partial Content) to the client, keeping the connection alive, and pushing (writing) streams of bytes to the output.
However, I can't scale, because I can't retrieve partial binary data from the database. For example, consider a 300 MB video in my SQL Server table (a varbinary field): I use Entity Framework to get the record, then push it to the client using PushStreamContent.
However, this hugely impacts RAM: for each seek the client performs, RAM usage grows by another 600 MB. Look at it in action:
1) First request for video
2) Second request (seeking to the middle of the video)
3) Third request (seeking into the last quarter of the video)
This cannot scale at all: ten users watching this movie, and our server is down.
What should I do? How can I stream video directly from a SQL Server table, without Entity Framework loading the entire video into RAM and then pushing it to the client via PushStreamContent?
You could combine the SUBSTRING function with your VARBINARY field to return portions of the data, as in the sketch below. But I suspect you'd prefer a solution that doesn't require jumping from one chunk to the next.
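For illustration, a minimal sketch assuming a Videos table with Id and VideoData (varbinary(max)) columns; outputStream stands for the stream handed to PushStreamContent's callback, and rangeStart/rangeEnd come from the client's Range header:

const int chunkSize = 1024 * 1024; // read 1 MB at a time
long offset = rangeStart;
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    while (offset < rangeEnd)
    {
        using (var cmd = new SqlCommand(
            "SELECT SUBSTRING(VideoData, @offset + 1, @size) FROM Videos WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@offset", offset); // SUBSTRING is 1-based
            cmd.Parameters.AddWithValue("@size", chunkSize);
            cmd.Parameters.AddWithValue("@id", videoId);
            var chunk = cmd.ExecuteScalar() as byte[];
            if (chunk == null || chunk.Length == 0) break;
            outputStream.Write(chunk, 0, chunk.Length); // only one chunk in memory at a time
            offset += chunk.Length;
        }
    }
}

A SqlDataReader opened with CommandBehavior.SequentialAccess and read via GetBytes is another way to stream a varbinary column without materializing the whole value.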
You may also want to review this similar question.

Background task in ASP.NET

I am writing a web application using ASP.NET (not MVC), with .NET v4 (not v4.5).
I fetch some of the data which I must display from a 3rd-party web service, one of whose methods takes a long time (several seconds) to complete. The information to be fetched/prefetched varies depending on the users' initial requests (because different users ask for details about different objects).
In a single-user desktop application, I might:
Display my UI as quickly as possible
Have a non-UI background task to fetch the information in advance
Therefore hope to have an already-fetched/cached version of the data by the time the user drills down into the UI to request it
To do something similar using ASP.NET, I guessed I could:
Use a BackgroundWorker, passing the Session instance as a parameter to the worker
On completion of the worker's task, write fetched data to the Session
If the user's request for the data arrives before the task is complete, block until it has completed (see the sketch below)
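For what it's worth, a minimal sketch of that plan on .NET 4, using a Task in place of BackgroundWorker (the names are illustrative, and it assumes in-process session state, since a Task cannot be serialized to an out-of-process session store):

// At the initial request: kick off the prefetch and stash the Task in Session.
var task = Task.Factory.StartNew(() => FetchDetailsFromWebService(objectId));
Session["prefetch:" + objectId] = task;

// Later, when the user drills down: blocks only if the fetch hasn't finished yet.
var pending = (Task<ObjectDetails>)Session["prefetch:" + objectId];
ObjectDetails details = pending.Result;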
Do you foresee any problems? Can you suggest improvements?
[There are other questions on StackOverflow about ASP.NET and background tasks, but these all seem to be about fetching and updating global application data, not session-specific data.]
Why not use the same discipline as in a desktop application:
Load the page without the data from the service (= display my UI as quickly as possible)
Fetch the service data using an AJAX call (= have a non-UI background task to fetch the information in advance)
This is actually the same, although you can show an animated GIF indicating that work is still in progress (= therefore hope to have an already-fetched/cached version of the data by the time the user drills down into the UI to request it)
To post example code, it would be helpful to know: are you using jQuery? Plain JavaScript? Something else? No JavaScript?
Edit
I am not sure if this was your plan, but another idea is to fetch the data on the server side as well, and cache it there for future requests.
In this case the stages will be:
1. Get a request.
2. Is the service data cached?
2.a. Yes? Serve the page with the full data.
2.b. No? Serve the page without the service data.
2.b.i. On the server side, fetch the service data and cache it for future requests (a sketch follows below).
2.b.ii. On the client side, fetch the service data and cache it for the current session.
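A minimal sketch of step 2.b.i on .NET 4, using the application-level cache; FetchFromThirdPartyService, ServiceData, and the cache key are illustrative placeholders:

var key = "serviceData:" + objectId;
var data = HttpRuntime.Cache[key] as ServiceData;
if (data == null)
{
    data = FetchFromThirdPartyService(objectId); // the slow 3rd-party call
    HttpRuntime.Cache.Insert(
        key,
        data,
        null,                                  // no cache dependency
        DateTime.UtcNow.AddMinutes(10),        // absolute expiration; tune to taste
        System.Web.Caching.Cache.NoSlidingExpiration);
}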
Edit 2:
Bear in mind that the downside of this discipline is that if the way you fetch the data changes, you will have to remember to modify it on both the server and the client side.

Grab website content that's not in the source code

I want to grab some financial data from sites like http://www.fxstreet.com/rates-charts/currency-rates/
Up to now I'm using liburl to grab the source code and some regexp searches to extract the data, which I afterwards store in a file.
Yet there is a little problem:
On the page as I see it in the browser, the data updates almost every second. When I open the source code, however, the data I'm looking for changes only every two minutes.
So my program only gets the data with a much lower time-resolution than possible.
I have two questions:
(i) How is it possible that source code which remains static for two minutes produces a table that changes every second? What is the mechanism?
(ii) How do I get the data with one-second resolution, i.e. how do I read out such a changing table that's not shown in the source code?
Thanks in advance,
David
You can use the network panel in FireBug to examine the HTTP requests being sent out (typically to fetch data) while the page is open. This particular page you've referenced appears to be sending POST requests to http://ttpush.fxstreet.com/http_push/, then receiving and parsing a JSON response.
Try sending a POST request to http://ttpush.fxstreet.com/http_push/connect and see what you get; it will continuously load new data.
EDIT:
You can use liburl or Python; it doesn't really matter. Under HTTP, when you browse the web, you send GET or POST requests.
Go to the website, open the Developer Tools (Chrome) or Firebug (Firefox plugin), and you will see that after all the data has loaded, there's one request that doesn't close: it stays open.
When you have a website and you want to fetch data continuously, there are a few techniques you can use:
Make separate requests (using AJAX) every few seconds. This opens a new connection for each request, and if you want frequent data updates, it's wasteful.
Use long polling or server push: make one request that fetches the data; it stays open, and the server flushes data down the socket (to your browser) whenever it needs to. The TCP connection remains open, and when it times out you can reopen it. This is normally more effective than the above, but the connection stays open.
Use XMPP or some other non-HTTP protocol. These are used mainly for chat, like Facebook's and MSN's, I think, and probably Google's and some others.
The website you posted uses the second method: when it detects a POST request to that page, it keeps the connection open and pushes data continuously.
What you need to do is make a POST request to that page, and work out which parameters (if any) have to be sent. It doesn't matter how you make the request, as long as you send the right parameters.
You then need to read the response using a delimiter: most likely, every time they want to push data, they send \n or some other delimiter.
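A hedged C# sketch of reading such a push stream: POST to the endpoint named above and consume the delimited updates as they arrive. Any required form parameters (discoverable in the network panel) are omitted here, and ProcessUpdate is a hypothetical handler; using System.Net and System.IO are assumed.

var request = (HttpWebRequest)WebRequest.Create("http://ttpush.fxstreet.com/http_push/connect");
request.Method = "POST";
request.ContentLength = 0;                                    // no body in this bare-bones sketch
request.ReadWriteTimeout = System.Threading.Timeout.Infinite; // the connection stays open

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string line;
    while ((line = reader.ReadLine()) != null) // server flushes '\n'-delimited chunks
    {
        ProcessUpdate(line); // hypothetical: parse and store each pushed update
    }
}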
Hope this helps. If you find you still can't get around this, let me know and I'll go into more technical detail.
