In a Symfony 1.4 project (monitored using New Relic), some pages are slow due to long sfController::forward execution times.
I have inspected the sfController::forward method to see what could increase its execution time for specific pages, but I could not find anything. I also checked the Symfony Internals documentation; no clue there either.
So the question is: what increases the execution time of sfController::forward, and how can it be reduced?
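One way to narrow this down (a sketch, assuming the New Relic PHP agent is loaded; the traced filter-chain method is only a guessed candidate) is to register custom tracers so the time inside forward() shows up itemized in transaction traces:
// a sketch: ask the New Relic PHP agent to time specific methods,
// so transaction traces show where sfController::forward spends its time
if (extension_loaded('newrelic')) {
    newrelic_add_custom_tracer('sfController::forward');
    newrelic_add_custom_tracer('sfFilterChain::execute'); // guessed callee worth timing
}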
Related
I need a lucid explanation of why XDMP-EXTIME happens in MarkLogic. In my case it's happening during a search (a read operation). In the exception message, a line from the code is printed:
XDMP-EXTIME: wsssearch:options($request, $req-config) -- Time limit exceeded
This gives me the impression that the execution does not go beyond that line. But it seems to be a pretty harmless line of code; it does not fetch any data from the DB, it just sets certain search options. How can I pinpoint which part of the code is causing this? I have heard that increasing the max time limit of the task server solves such problems, but that's not an option for me. Please let me know how such problems are tackled. It would be very hard for me to show you the code base. Still hoping to hear something helpful from you guys.
The error message can sometimes put you on the wrong foot because of lazy evaluation. The execution can actually be further down the road than the error message seems to indicate. Could be one line, could be several. Look for where the returned value is being used.
Profiling can sometimes help you get a clearer picture of where most of the time is spent, but lazy evaluation can throw things off here as well.
The bottom-line meaning of the message is pretty simple: the execution of your code takes too long. The actual search in which the options are being used is the most likely candidate for where it goes wrong.
If you are using cts:search or search:search under the covers, that should normally perform well. A search typically gets slow when you end up returning many results, e.g. when you don't apply pagination. search:search does apply pagination by default, however.
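For illustration, a minimal sketch of paginating a cts:search yourself (the word query and page bounds are placeholders):
xquery version "1.0-ml";
(: a sketch: return one page of results instead of the full sequence :)
let $results := cts:search(fn:collection(), cts:word-query("example"))
let $start := 1
let $page-size := 10
return fn:subsequence($results, $start, $page-size)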
A search can also get slow if you are running your search in update mode: you could end up with MarkLogic trying to apply many (unnecessary) read locks. Put the following declaration in your search endpoint code, or in the XQuery main module that does the search:
declare option xdmp:update "false";
HTH!
You could try profiling the code to see what specifically is taking so long. This might require temporarily increasing the session time limit to prevent the timeout from occurring while profiling. Note that unless this is being executed on the Task Server via xdmp:spawn or xdmp:spawn-function, you will need to increase the value on the App Server hosting the script.
If your code is in a module, the easiest thing to do is call the function that times out from Query Console using the Profile tab. Alternatively, you could begin the function with prof:enable(xdmp:request()) and later output the contents of prof:report(xdmp:request()) to a file on the filesystem, or insert it somewhere in the database.
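A minimal sketch of that second approach (assuming profiling is allowed on the App Server; the profiled function body and output path are placeholders):
xquery version "1.0-ml";
(: a sketch: profile the current request and save the report to disk :)
declare function local:work() {
  (: placeholder for the code that times out :)
  fn:count(fn:collection())
};
prof:enable(xdmp:request()),
local:work(),
xdmp:save("/tmp/profile-report.xml", document { prof:report(xdmp:request()) })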
I have written around 5k lines in 3 days for my new website. There are a lot of places where leaks or database queries could be the reason my page is slow, but the fact is that a single page request takes around 2 full seconds, which I think is very long.
1) How can I measure the exact time my page needs to load? (To compare, after I disable or change a query, whether it worked.)
2) How do I find the leak / the thing that is slowing down my ASP.NET site the most?
Use this in Page_Load:
Trace.IsEnabled = true;
It will show the time taken by every page life-cycle event.
You can see where time is being lost and then proceed accordingly.
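For context, a minimal sketch of where that line goes in a Web Forms code-behind (the custom trace message is just an example):
// a sketch: enable ASP.NET tracing for this request and add a custom marker
protected void Page_Load(object sender, EventArgs e)
{
    Trace.IsEnabled = true;                      // turns on the trace output for this page
    Trace.Write("Timing", "Page_Load entered");  // appears in the Trace Information table
}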
I use MiniProfiler on the applications I work on. If you have SQL Server as the data store, then use SQL Server Profiler to see what queries are being executed. Other than that, it's mostly grunt work when it comes to tracking down performance bottlenecks.
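For illustration, a minimal MiniProfiler sketch (assuming the MiniProfiler NuGet package; the step name is arbitrary):
// a sketch: time a specific block of code with MiniProfiler
var profiler = StackExchange.Profiling.MiniProfiler.Current; // null if profiling is not started
using (profiler.Step("Load products")) // Step() is null-safe
{
    // ... the code being measured, e.g. a database query ...
}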
You need to run a profiler to check the execution time each of your methods on a page is taking; many tools, both free and paid, are available for this. You can check out Glimpse, which is a nice free tool, available on NuGet and widely used.
On a new website, I have a huge form (really big: it needs at least 15-20 minutes to finish) that configures the whole website for one client for the next year.
It's split across several tabs (it's a wizard). Every time we go to the next tab, it makes a regular (non-AJAX) call to the server that generates the next "page". The previously entered information is stored in the session (an object with a custom binder).
Everything was working fine until we tested it today with real data. Real data requires thought and work to find the correct elements, and that takes time.
The problem we have is that the View receives a partially empty Model. The session duration is set to 1440 minutes (in IIS too). For now, what I know is that I get a NullReferenceException the first time I try to access the Model in my view.
I have been checking the controller for about an hour, but it just seems impossible for it to return a null model. If I enter all the data very fast, I don't have any problem (but that is with random data).
So far I have only managed to reproduce this problem on the IIS server, and I'm checking the ELMAH logs to debug it, so it's not easy to reproduce.
Do you have any idea how I should debug this? I'm a little lost here.
I think you should assume the session does not offer reliable persistence. I am not sure about the details, but I guess it starts freeing some elements when it exceeds its memory limit.
You will be safer if you use a database to store that information, or you could introduce your own implementation for persisting state.
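For example, moving session state out of the worker process survives application-pool recycles; a sketch of the web.config change (assuming SQL Server session state; the connection string is a placeholder):
<system.web>
  <!-- a sketch: store session state in SQL Server instead of in-process memory -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=MyServer;Integrated Security=True"
                timeout="1440" />
</system.web>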
In addition to the answer provided by @Ufuk:
You can easily send an AJAX request every minute that does nothing on the server; by doing this the session won't expire and the site will continue to work over extended periods.
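A minimal sketch of that keep-alive (assuming jQuery and a hypothetical /Home/KeepAlive action that simply returns 200):
// a sketch: ping the server once a minute so the session timer is reset
setInterval(function () {
    $.get('/Home/KeepAlive'); // the response is ignored; the request touches the session
}, 60 * 1000);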
The problem was that the session didn't have enough space, I think. I temporarily resolved the problem by restarting the application pool. I'm still searching for a solution that won't imply changing all this code. Maybe another session-state mode, but then I need to make my models serializable.
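For reference, the out-of-process session-state modes (StateServer, SQLServer) require the stored types to be serializable; a sketch (the class and its fields are hypothetical):
// a sketch: mark the session-stored model as serializable for out-of-process session state
using System;
using System.Collections.Generic;

[Serializable]
public class WizardState
{
    public string ClientName { get; set; }           // hypothetical wizard fields
    public List<int> SelectedOptionIds { get; set; }
}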
I'm currently working on a project that uses EventStore, CommonDomain, and NServiceBus. When I have NumberOfWorkerThreads set to 1, all of our services (NServiceBus; we have 6 of them, each with its own event store) run perfectly, but when I set NumberOfWorkerThreads to more than one, I start seeing a ton of deadlocks, I mean at least 50 per minute. All of the deadlocks are on the Commits table. From what I've found, it looks like I'm updating the same aggregate in multiple threads, which could easily happen during an import of a catalog, say: I update the quantity in one thread while updating the price in another, so both threads end up trying to update the same aggregate.
Has anyone else had this issue, and how have you gotten around it?
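One common mitigation (a sketch, not from the question; the Catalog aggregate and wrapper class are hypothetical) is to rely on optimistic concurrency and retry the load-mutate-save cycle when two threads hit the same aggregate:
// a sketch, assuming CommonDomain's IRepository and EventStore's ConcurrencyException
using System;
using CommonDomain.Persistence;
using EventStore;

public class CatalogUpdater
{
    private readonly IRepository repository;
    public CatalogUpdater(IRepository repository) { this.repository = repository; }

    public void UpdateWithRetry(Guid aggregateId, Action<Catalog> mutate, int maxAttempts = 5)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                var aggregate = repository.GetById<Catalog>(aggregateId); // reload latest revision
                mutate(aggregate);
                repository.Save(aggregate, Guid.NewGuid(), h => { });     // new commit id per attempt
                return;
            }
            catch (ConcurrencyException)
            {
                if (attempt >= maxAttempts) throw;
                // another thread committed first; loop and retry against the new revision
            }
        }
    }
}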
Given the chart here, what should I be looking at to identify the bottleneck? As you can see, requests are averaging nearly 14 seconds under load and the bulk of that time is attributed to the CLR in New Relic's profiling data. In the performance breakdown for a particular page, it attributes the bulk of the time to the WebTransaction/.aspx page.
I can see that the database is also being read (the orange), and it seems that one page delays all the other pages because of the lock that the session places on each request.
You can also read:
Replacing ASP.Net's session entirely
My suggestion is to remove the session calls entirely, and if that is not possible, find another way to save the data somewhere in the database yourself.
Actually, in my pages I have implemented all three possible options: 1. I call the page without session. 2. I have made a totally custom session whose values are connected to the user cookie. 3. I have made threads that run outside the session and do the calculations in the background; when they finish, I show the results.
In some cases the calculations are done in an iframe that calls a page without session, and I show the results there later.
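Disabling the session per page is what releases that lock; a sketch of the Web Forms page directive ("ReadOnly" also avoids the exclusive lock while still allowing reads):
<%@ Page Language="C#" EnableSessionState="False" %>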
In the Pro version, you can use Transaction Traces, which help pinpoint exactly where the issue is happening.