When there are a large number of items in the Tridion Publishing Queue, clicking the Show Tasks button again and again makes the DB slow, because the DB query behind it is asynchronous. We can't stop someone from clicking the button repeatedly, but it is making the CMS slow. Is there any patch or other solution?
Some of this Publishing Queue slowness can be avoided by purging your publishing queues on a regular basis (as a scheduled task); keeping database statistics up to date and rebuilding indexes will also help improve performance.
Purging Publication Queue documentation:
http://sdllivecontent.sdl.com/LiveContent/content/en-US/SDL_Tridion_2011_SPONE/idheading-259307200
Even if you implement a custom GUI extension or another solution that refreshes the queue at regular intervals, you could run into the same problems if you don't have maintenance tasks such as queue purging and database optimization in place.
Which version of Tridion are you running? There is a big difference in the Publishing Queue between Tridion 2011 GA and SP1 in terms of the user filter. In the GA release the user filter is not selected automatically, so when a user checks the tasks, the Publishing Queue returns results for all users. This works correctly in SP1.
I am planning a project to schedule scripts on multiple Windows and Linux servers. I'm going down the path of doing this all from scratch because I have requirements that existing software doesn't seem to meet (such as running tasks on the completion or failure of other tasks, and being able to schedule at non-standard intervals).
I was thinking of having a web interface that allows users to add, modify, and delete schedules for each machine in a database.
A Windows service will then check the database for any jobs that need to run at that point and connect over SSH for Linux or PowerShell for Windows. All the scripts will write their progress back to the database so that the user can check on them.
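Roughly, the polling loop I had in mind looks something like the sketch below. It is only a sketch under my own assumptions: a hypothetical Jobs table, the SSH.NET library (Renci.SshNet) for the Linux side, and shelling out to powershell.exe with Invoke-Command for Windows; none of these names are fixed yet.

// Rough sketch only. Assumes a "Jobs" table (TargetHost, OsType, Command, NextRunTime)
// and the third-party SSH.NET library for the Linux side; all names are placeholders.
using System;
using System.Data.SqlClient;
using System.Diagnostics;
using Renci.SshNet;

class JobPoller
{
    const string ConnStr = "Server=.;Database=Scheduler;Integrated Security=true";

    // Called on a timer from the Windows service
    public static void PollOnce()
    {
        using (var conn = new SqlConnection(ConnStr))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "SELECT TargetHost, OsType, Command FROM Jobs WHERE NextRunTime <= GETDATE()", conn);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    string host = reader.GetString(0);
                    string os = reader.GetString(1);
                    string command = reader.GetString(2);

                    if (os == "Linux")
                        RunOverSsh(host, command);
                    else
                        RunPowerShell(host, command);
                    // ...update the job's status and next run time here...
                }
            }
        }
    }

    static void RunOverSsh(string host, string command)
    {
        // SSH.NET with placeholder credentials; the remote script reports its own progress to the DB
        using (var client = new SshClient(host, "scheduler", "password"))
        {
            client.Connect();
            var result = client.RunCommand(command);
            Console.WriteLine("Exit status: " + result.ExitStatus);
        }
    }

    static void RunPowerShell(string host, string command)
    {
        // Simplest option: shell out to powershell.exe and use remoting via Invoke-Command
        var psi = new ProcessStartInfo("powershell.exe",
            "-Command \"Invoke-Command -ComputerName " + host + " -ScriptBlock { " + command + " }\"")
        {
            UseShellExecute = false
        };
        Process.Start(psi).WaitForExit();
    }
}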
Basically I just wanted some advice from people who know better ways of doing this, or things I may need to look out for that could cause problems, because I don't have much experience.
Thanks.
Oracle Scheduler has all the options you are looking for, and probably more. See Overview of Oracle Scheduler for some general info. It comes down to having a central scheduler database that submits jobs to remote job agents, which do the work pretty much independently of the central scheduler repository. Status is reported back once the repository is reachable again after a job has finished.
It's a very powerful tool, and it takes a lot of complex work off your hands by giving you a framework that you can start using right out of the box.
We are upgrading to Tridion 2011 SP1, and as part of our Tridion search implementation we are using FS4SP (FAST Search for SharePoint 2010).
In the proposed implementation, the search environment consists of the following servers:
FS4SP
FISE
Can someone guide us on how to push content from Tridion to FAST and how to retrieve it again?
(For various reasons, we are not considering having FAST crawl the website.)
Which APIs can be used for this implementation?
If you don't want to use the crawling approach, you will need to create a custom deployer. Please take a look at this other article:
How can we integrate Microsoft FAST with SDL Tridion 2011 SP1?
Alternatively, if you don't have a development team that is familiar with Java, you might consider creating a .NET application which updates your FAST index based on either a file system or database trigger when your pages or components are published, updated, or deleted from your broker repository.
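For example, here is a minimal sketch of the file-system-trigger idea. The watched folder is just an example path, and the SubmitToFast/RemoveFromFast calls are placeholders for whatever FS4SP content API you end up using:

// Minimal sketch: watch the deployer's output folder and re-index pages as they
// are published, updated or removed. SubmitToFast/RemoveFromFast are placeholders.
using System;
using System.IO;

class PublishWatcher
{
    static void Main()
    {
        var watcher = new FileSystemWatcher(@"D:\inetpub\website\content", "*.html")
        {
            IncludeSubdirectories = true,
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite
        };

        watcher.Created += (s, e) => SubmitToFast(e.FullPath);
        watcher.Changed += (s, e) => SubmitToFast(e.FullPath);
        watcher.Deleted += (s, e) => RemoveFromFast(e.FullPath);

        watcher.EnableRaisingEvents = true;
        Console.WriteLine("Watching for published pages. Press Enter to stop.");
        Console.ReadLine();
    }

    static void SubmitToFast(string path)
    {
        // Build the FAST XML for this page and push it to the index here.
        Console.WriteLine("Index: " + path);
    }

    static void RemoveFromFast(string path)
    {
        // Remove the corresponding document from the index here.
        Console.WriteLine("Remove: " + path);
    }
}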
You will probably want to create XML for FAST and have the Custom Deployer (or Event System) send the content to FAST.
First, create FAST XML that works, and write a sample app so you can insert it into the FAST index from either a .NET or Java application. This does not yet involve Tridion.
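As a starting point, the XML itself could be built along these lines; note that the element and field names below are made up for illustration, and the real schema (and how you submit it) should come from your FS4SP documentation:

// Illustrative only: builds a simple XML document for one published page.
// The "document"/"field" names are placeholders, not the actual FAST schema.
using System.Xml.Linq;

public static class FastXmlBuilder
{
    public static XDocument Build(string id, string url, string title, string body)
    {
        return new XDocument(
            new XElement("document",
                new XAttribute("id", id),
                new XElement("field", new XAttribute("name", "url"), url),
                new XElement("field", new XAttribute("name", "title"), title),
                new XElement("field", new XAttribute("name", "body"), body)));
    }
}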
Then write your Custom Deployer or Event System and pass the XML to FAST.
If you are using a Custom Deployer approach, I would suggest contacting Tridion Professional Services if you have not done this before or are not a Java programmer. The new Tridion 2011 Storage API provides new opportunities for the Custom Deployer. In the meantime, I would suggest appending the FAST XML to the end of the normal page content, surrounded by some markers, and having your Custom Deployer pull it out of the page output, send it to FAST, and then remove it from the output before continuing.
This is a fairly difficult challenge for those who do not have serious Content Delivery / Deployer / Java skills. However, if you want to go for it yourself, I would suggest taking at least two weeks to research existing solutions and experiment with the API.
Using the Event System might be a little easier, but your success or failure message will not appear in the Publishing Queue, and if the search index fails to update you can only log the failure, not pass the information back to users.
I have an ASP.NET application that is consistently using 75%-100% of the CPU on a production server. How can I profile the application to figure out which part of the code is using the most CPU? I have looked at a few different tools (Xte Profiler, EQATEC, dotTrace), but they all seem to want you to load and run the application within the tool and run tests locally (not in production). I want to profile the application while it is running in production, with people hitting it, to see what is actually going on. Is this possible?
I am a newbie to application profiling so forgive me if I have missed something obvious or am not thinking about this correctly.
Thanks,
Corey
Sam Saffron (one of the Stack Overflow creators) wrote a great command-line tool a while ago, but unfortunately has since abandoned it.
A friend of mine forked the code to make it work in 2015:
https://github.com/jitbit/cpu-analyzer
(the page has a link to Sam's post explaining how to use it)
The great thing about this tool (besides its no-install-required portability, command-line interface, etc.) is that APM packages like New Relic only monitor HTTP requests; if your app has background threads, they won't help much.
You should consider taking a memory dump on the production server while it's experiencing high CPU. Check out ADPlus and take a hang dump of the ASP.NET worker process. This can then be analyzed with WinDbg or other tools.
I just went through a similar experience where our production servers were experiencing excessive CPU load - a scenario we could not recreate locally or in test/staging environments. It had nothing to do with the database (database CPU was normal). Analyzing the dump file is what clued us in on what was causing the problem (excessive compilation of regex objects by some library we were using).
This answer would be incomplete without Tess' blog, so here's the link.
My guess is that it has to do with long-running database queries rather than the ASP.NET application itself. In my experience this is the cause nine times out of ten, and it slows the application server to a crawl as resources are consumed and the app has to wait for each query to finish before moving on. Take a look at SQL Profiler on the DB server and see whether any queries are taking a long time to execute.
The fix could be as simple as adding an index to a column or some other minor optimization. Once you know the query, you can also go back to your code and tweak that section as well.
For those who still stumble upon this question, it really depends on what you are trying to accomplish.
If a server is running that high on CPU, odds are a standard profiler will bring it to a grinding halt due to its additional overhead.
There are actually three different types of profilers. Standard profilers, lightweight transaction profilers, and APM tools. You can read more about this in my blog post that discusses all 3:
.NET Profilers: 3 types and why you need all of them
It's certainly possible to profile ASP.NET with the EQATEC Profiler. See:
Profiling ASP.NET websites with EQATEC Profiler
EQATEC Profiler instruments your app in a separate step that enables the app itself to collect its own profiling info; the profiler then merely displays that timing data afterwards.
That means that you can run your instrumented ASP.NET app completely independent of the profiler itself.
You could, for example, instrument your app, mail it to your test site in India, have them run it on their server for a few days where it will generate timing reports all on its own, and have them mail those reports back to you, which you can then view in the profiler. Pretty neat.
Note: to have the profiled app generate timing snapshots "on its own" it must know when to generate them. By default this happens when the Application_End method is called in an ASP.NET app. You can programmatically dump snapshots whenever it suits you by using the EQATEC Profiler API. See the user guide or check out this thread.
You can read about this on the Microsoft Developer Network.
You can select the documentation according to your version of Visual Studio. You should verify that profiling functionality is provided for your Visual Studio edition.
How to: Profile a Web Site or Web Application Using the Performance Wizard
Your best bet is to profile your code on your own machine to identify where it is spending time.
Grab a ten day free trial of this:
http://www.jetbrains.com/profiler/
Here are some links to get you going:
http://msdn.microsoft.com/en-us/library/ms178643(v=VS.100).aspx
http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx
I have a website that's running on a Windows server and I'd like to add some scheduled background tasks that perform various duties. For example, the client would like users to receive emails that summarize recent activity on the site.
If sending out emails was the only task that needed to be performed, I would probably just set up a scheduled task that ran a script to send out those emails. However, for this particular site, the client would like a variety of different scheduled tasks to take place, some of them always running and some of them only running if certain conditions are met. Right now, they've given me an initial set of things they'd like to see implemented, but I know that in the future there will be more.
What I am wondering is whether there's a simple solution for Windows that would allow me to define the tasks that need to be run and then have one scheduled task that runs daily and executes each of the tasks that have been defined. Is a batch file the easiest way to do this, or is there some other solution I could use?
To keep life simple, I would avoid building one big monolithic EXE; break the work into individual tasks and have a Windows scheduled task for each one. That way you can maintain the codebase more easily and change functionality at a more granular level.
Later down the line, you could build a Windows service that dynamically loads plugins for each different task based on a schedule. This may be more reusable for future projects.
But to be honest, if you're on a deadline I'd apply the KISS principle and go with one scheduled task per task.
I would go with a Windows service right out of the gate. This is going to be the most extensible method for your requirements; creating the service isn't going to add much to your development time, and it will probably save you time not too far down the road.
We use the Windows scheduler, which launches a small console application that just passes parameters to a web service.
For example, if a user has scheduled reports #388 and #88, a scheduled task is created with a command line that looks like this:
c:\launcher\app.exe report:388 report:88
When the scheduler fires, this app just executes a web method on the web service, for example InternalService.SendReport(int id).
Usually you already have all the required business logic available in your web application. This approach lets you reuse it with minimal effort, so there is no need to create any complex EXE or Windows service with pluggable modules, etc.
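The launcher itself is trivial; a sketch along these lines (it assumes InternalService is a generated proxy for the web service, as in the example above):

// Sketch of the launcher: parse "report:388"-style arguments and call the
// web service for each one. InternalService is assumed to be a generated proxy.
using System;

class Program
{
    static void Main(string[] args)
    {
        var service = new InternalService();

        foreach (string arg in args)
        {
            // Arguments look like "report:388"
            string[] parts = arg.Split(':');
            if (parts.Length == 2 && parts[0] == "report")
            {
                int id = int.Parse(parts[1]);
                service.SendReport(id);
                Console.WriteLine("Requested report " + id);
            }
        }
    }
}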
The problem with doing the operations from the scheduled EXE, rather than from inside a web page, is that the operations may benefit from, or even outright require, resources that the web page would have -- IIS cache and an ORM cache are two things that come to mind. In the case of ORM, making database changes outside the web app context may even be fatal. My preference is to schedule curl.exe to request the web page from localhost.
Use the Windows Scheduled Tasks or create a Windows Service that does the scheduling itself.
Jeff has previously blogged about using the cache to perform "out of band" processing on his websites; however, I was wondering what other techniques people are using to process these sorts of tasks.
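As I understand it, the cache-based trick boils down to something like the following sketch: insert a dummy cache item that expires after a few minutes and do the background work in the removal callback (Start() would be called from Application_Start):

// Simplified sketch of the cache-expiration approach: the removal callback does
// the work and re-inserts the item so the cycle repeats every five minutes.
using System;
using System.Web;
using System.Web.Caching;

public static class BackgroundWork
{
    private const string Key = "BackgroundTaskTrigger";

    public static void Start()
    {
        HttpRuntime.Cache.Insert(Key, DateTime.Now, null,
            DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable, OnRemoved);
    }

    private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        // Do the out-of-band work here (send emails, clean up, etc.)

        // Re-register so we get called again in another five minutes.
        Start();
    }
}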
Years ago, I saw Rob Howard describe a way to use an HttpModule to process tasks in the background. It doesn't seem as slick as using the Cache, but it might be better for certain circumstances.
This blog post has the details, and there are many others that capture the same information if you look around.
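From what I remember, the module version looks roughly like the sketch below: an IHttpModule that starts a single static timer on first init and does the work on each tick (the five-minute interval is just an example):

// Rough sketch of the HttpModule variant: one static timer for the whole
// application, with the background work done on each tick.
using System;
using System.Threading;
using System.Web;

public class BackgroundTaskModule : IHttpModule
{
    private static Timer _timer;
    private static readonly object Sync = new object();

    public void Init(HttpApplication context)
    {
        lock (Sync)
        {
            if (_timer == null)
            {
                // Fire every five minutes, starting five minutes from now.
                _timer = new Timer(DoWork, null,
                    TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5));
            }
        }
    }

    private static void DoWork(object state)
    {
        // Out-of-band work goes here.
    }

    public void Dispose() { }
}

The module still has to be registered in web.config, and like the cache approach it only runs while the application is alive.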
Windows Service
You may want to look at how DotNetNuke does it. I know it is written in VB.NET, but I retrofitted the code into C#. While perusing the source I noticed they had a feature in their admin area to set up scheduled tasks. These tasks are set up through the admin interface and stored in the database. When the site starts, the Global.asax file creates another thread that runs this service, which then runs the scheduled tasks at their scheduled times. I can't remember the exact logic (it's been a while), but it is definitely a good resource on how other people have done out-of-band processing for ASP.NET applications. This technique still keeps the logic within the ASP.NET application, but it runs out of band, in my opinion.
If it's primarily data-processing tasks and you're using MSSQL, how about scheduled SSIS tasks?
Scheduled tasks using http://www.codeproject.com/KB/cs/tsnewlib.aspx or schtasks.exe.
Quartz.NET (see the sketch below)
MSMQ
SQL Server jobs
Windows service
System.Threading.Timer or System.Timers.Timer
System.ComponentModel.BackgroundWorker
Asynchronous calls and callbacks
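To make the Quartz.NET option concrete, here is a minimal sketch using the 2.x-era fluent API; the job, the identities, and the 15-minute interval are all just examples, and Start() would typically be called from Application_Start:

// Minimal Quartz.NET sketch (2.x-style API): run a job every 15 minutes
// inside the ASP.NET process.
using Quartz;
using Quartz.Impl;

public class CleanupJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Out-of-band work goes here (clean caches, send digests, etc.)
    }
}

public static class QuartzBootstrap
{
    public static void Start()
    {
        IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<CleanupJob>()
            .WithIdentity("cleanupJob")
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("cleanupTrigger")
            .StartNow()
            .WithSimpleSchedule(s => s.WithIntervalInMinutes(15).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}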
Scheduled tasks, or cron jobs.
The problem with scheduled tasks or cron jobs is that they don't share memory space with the web server. You could set up a scheduled task that requests pages from the web server, but that might create problems with long-running tasks. It would be nice to have some low-priority threads running on the actual ASP.NET application stack to do simple utility tasks like cleaning up caches, monitoring resources, and dealing with general housekeeping.
Simple queue files along with a separate agent. For each type of out-of-band process, write a separate agent EXE which watches a directory for queue files that include whatever data is needed to perform the specified process.
This may seem dirty, but in the real world I find it gives a lot of flexibility: you aren't doing a lot of processing in the ASP.NET process space, and you can easily adapt this style to farm processing out to cheap Linux servers running the agent process on Mono when you start needing more RAM/CPU/disk.
If you are most comfortable with ASP.NET pages, you can write a small app to handle your job and then "ping" it with an outside service that monitors your web site. This will keep the app alive.