When I start the Purge Tool from the Content Manager server, nothing happens except that I get a process stuck at 100% CPU which kills my server instantly. Any suggestions?
It's the first time I've run this tool, so I'm not sure what's supposed to happen; does it connect to the DB immediately? If so, the 190 GB database I'm lumbered with might not be helping.
Thanks in advance.
I have also seen this in the past; you could try running it publication by publication, or with a limited number of folders/Structure Groups.
Yes, the Purge tool uses the database immediately.
You should get a dialog box. It can take a while to load, but in the past I have seen this be a permissions problem. Try right-clicking the program and choosing "Run as Administrator" if you are on Windows Server 2008.
The Purge tool starts out by trying to "consume" your blueprint structure. It does this in a pretty cumbersome way; however, it will get there in the end. Just leave it to run.
The Purge tool doesn't connect to the db directly. It uses the Tridion API. (Of course, Tridion then goes to the database.)
On starting, the Purge tool will load all your publications, which it then displays for you. If you have a lot of publications, and a slow system, this can take a while.
If there is a permissions problem, I'd expect it to be one of Tridion permissions, so ensure that the account you are running as is a Tridion administrator.
I have an ASP.NET 2.0 site which is compiled and has been deployed to the IIS virtual directory.
However, the new changes can't be seen by some users, while other users can see them on the web site. I have tried deleting the cache and cleaning internet settings, and trying with Mozilla and Chrome, and those users still can't see the new changes.
We also tried cleaning the connection pool on IIS.
Is there anything more I can possibly do?
Have you confirmed that the machines are hitting the right server? Ping the URL from the "bad" machines (for example, run ping or nslookup against your site's hostname from a command prompt) and see what it resolves to.
The JavaScript changes may require browsers to download the newer script that was published (cached JavaScript might be older); appending a version query string to the script URL, e.g. script.js?v=2, is one way to force a re-download.
For session-based changes, if users retained an older session, the changes may not show up until they start a new session.
I found the answer: the site was actually being served by two machines, and when the server admin updated the files he only did so on one of the servers, not both. Now both are updated and running OK. Thanks, everyone, for the suggestions.
Two things to try (if Bill Gregg's suggestion doesn't work):
1) Bounce the web server services.
2) Download a freeware app called CCLEANER (formerly called Crap Cleaner....it was seriously called that). This app (when configured correctly) will clean all the junk web files off client machines.
I have been waiting about half an hour for nopCommerce to install (onto a free-tier AWS server) and I would love to see whether the install is still going or whether it has crashed at some point.
Does nopCommerce output a log file that I could view with a tool like baretail?
First of all, nopCommerce's installation is a pain in the neck: it takes a really long time and gives no indication of whether it is doing anything. However, in my experience with nopCommerce, I have installed it more than 10 times and it has never failed, so it is just a matter of waiting.
To answer your question about the log: there is a SQL table named "Log" that stores all errors, and there is also an option in the admin area to view a report of this table. It is really good.
However, because nopCommerce is still being installed, that table may not exist yet; in the meantime you can also check the Windows event log using Event Viewer.
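Once the install has created the database, you can also look at recent entries directly with a query along these lines (a sketch; the column names are assumed from a default nopCommerce install, so adjust to your schema):

-- most recent log entries first (assumed default nopCommerce "Log" table)
SELECT TOP 20 Id, ShortMessage, FullMessage, CreatedOnUtc
FROM [Log]
ORDER BY CreatedOnUtc DESC;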
In order to help out one of our departments here I've cloned a machine that was on its last legs and is actually hosting some important data. The plan is to migrate to a VMware VM as soon as possible, but in the meantime I cloned the machine onto another PC for testing among other things.
I've renamed the original machine's hostname, so the new machine is running ASP.NET and SQL Server 2005 (Express), and the database-driven website (intranet-only) is now back up and running, but we're getting an ODBC error 4200 about a failed login.
Checking the event viewer shows a corresponding "login failed" message. It seems that SQL Server still sees the local accounts as belonging to the old hostname.
If I try to add the user "CURRENTHOSTNAME\ASPNET" to the database in question, it only sees the old hostname. Is this what's causing my problem? I'm not sure.
I do know that to get IIS working in the first place I had to run "aspnet_regiis.exe -ga CURRENTHOSTNAME\ASPNET", and that fixed my first problem, but this second one has me stumped.
The third-party company that supplies the software is being very slow to help with this. I usually do PHP & MySQL, so I have little to no knowledge of Microsoft SQL Server. If anybody can help, you'd really be saving my bacon, as this has been dragging on for a couple of days now.
Thanks in advance!
Renaming a machine with a SQL server instance on it can be tricky. My guess is that you get the old machine name when you run this query:
SELECT @@SERVERNAME
If you do, the old computer name is still stored in the sys.servers table. You might be able to simply follow Microsoft's instructions for renaming, which is just running a couple of stored procedures:
sp_dropserver 'old_host_name'
GO
sp_addserver 'new_host_name', local
GO
Note that @@SERVERNAME only picks up the change after the SQL Server service is restarted.
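To confirm, you can compare what SQL Server has stored against the actual Windows machine name (both are standard SQL Server 2005 features):

-- if these two values differ, the rename hasn't taken effect yet
SELECT @@SERVERNAME AS StoredServerName,
       SERVERPROPERTY('MachineName') AS ActualMachineName;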
By cloning a machine you're opening a big can of worms. My suggestion would be to back up the data only and restore it to a new computer with fresh installs of everything. But in your case, there are two things I'd try:
after cloning you must reset the machine SID (tools like Sysinternals NewSID, or sysprep, were the usual way to do this)
SQL Server identifies users by their internal ID/SID rather than by name, so delete the affected users from the SQL Server instance/database and add them back from scratch (or remap them; see the sketch below)
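For SQL Server logins in particular (Windows accounts such as HOSTNAME\ASPNET get a brand-new SID on the new machine, so those genuinely need recreating), here is a sketch of finding and remapping orphaned database users on SQL Server 2005; "web_user" is a placeholder name:

-- list database users whose SID no longer matches any server login
EXEC sp_change_users_login 'Report';

-- remap an orphaned user to the server login of the same name (placeholder)
EXEC sp_change_users_login 'Update_One', 'web_user', 'web_user';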
Okay, it seems to be fixed now. I re-added NEWHOSTNAME\ASPNET to Security\Logins in Microsoft SQL Server Management Studio Express, and it then appeared under Databases\MyDatabase\Security\Users (or I was able to add it there; I can't remember which). I'm not sure whether renaming the server that shows up under SELECT @@SERVERNAME makes a difference, but the error is fixed now anyway.
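For reference, a rough T-SQL equivalent of those Management Studio clicks, using the placeholder names from above:

-- create a server login for the machine's ASPNET account
CREATE LOGIN [NEWHOSTNAME\ASPNET] FROM WINDOWS;
GO
-- create a matching user in the application database
USE MyDatabase;
GO
CREATE USER [NEWHOSTNAME\ASPNET] FOR LOGIN [NEWHOSTNAME\ASPNET];
GO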
This may seem like a silly question, but we are having an issue debugging IIS in a shared test environment and I'm hoping that someone out there can give us an answer.
We have a Windows Server 2003 machine running IIS 6 and SharePoint 2007. We are debugging locally on the server with Visual Studio 2008.
When someone attaches the debugger and steps through the code, we find that all users are affected. In essence the web server stops handling all requests from all users.
Our question is whether this is typical and to be expected, or is there some configuration we can change that would allow one user's session to be debugged while leaving the others unaffected?
Kev's on the right track. You need to make sure that the project you want to debug separately from the others is in its own application pool. This will isolate it in its own process and allow that process to be stopped/debugged without affecting the other applications, which can remain in a different pool.
Setup
Start -> Run -> inetmgr
Right Click on Application Pools
Click New -> Application Pool
Name the new pool
Right Click on the application you want to isolate
Click Properties
Click on the Home Directory tab
In the application pool drop-down list select your new pool
Click OK
If there are any requests queued in the old process, they may take a few minutes to terminate before all requests are being diverted to the new process.
Debugging
To figure out which instance of w3wp.exe you need to attach the debugger to:
Start -> Run -> cmd
Type iisapp
You may be prompted to register CScript; if so, click Yes and run it again
The only gotcha you may still find is that if multiple applications use the aspnet_state service, you may run into blocking issues if you need to debug that process as well.
Links
MSDN
Developer.com
"When someone attaches the debugger
and steps through the code, we find
that all users are affected. In
essence the web server stops handling
all requests from all users."
This is normal: once you attach a debugger to a process such as inetinfo.exe or w3wp.exe and set a break-point, every request/thread will be blocked when the break-point is hit, until you allow the debugger to continue on to the next break-point.
I've never found a way around it. Is there some reason you can't debug on each developer's workstation?
Set up a parallel project on the server and try using that. You could use debug.mydomain.com and then just use that for testing. The only reason I personally can think of to debug on your live servers is a significant difference in the functioning of your app due to hardware or software configuration.
Ideally you want a separate server/instance of your system in as similar an environment as possible so that you don't have to debug on your live machine. Also, you might want to consider writing all errors to the event log, or at least checking it, since ASP.NET errors usually get logged there. This way you can see where your errors are and use that to help you solve the problem in the development environment.
I believe in Visual Studio you can set the debugger to break only the process being debugged, and not all processes (if I remember correctly, it's the "Break all processes when one process breaks" option under Tools > Options > Debugging). Depending on how your system is set up, YMMV with this.
It can't be changed, AFAIK. But it's normal practice to set up a separate web node or web application for development/debugging purposes. If you need to know the exact values of some variables in certain situations, you can always use debug logging.
I have deployed an application written in ASP.NET 2.0 into production and it's experiencing some latency issues. Pages are taking about 4-5 seconds to load, and GridView refreshes are taking around the same time.
The app runs fine on the development box. I did the following investigation on the server:
Checked the available memory ... 80% used.
Checked the processor ... 1%.
Checked disk I/O from perfmon ... less than 15%.
The server config is
Windows Server 2003 Sp2
Dual 2.0 GHz
2GB RAM
Running SQL Server 2005 and IIS only
Is there anything else I can troubleshoot? I also checked the event log for errors, it's clean.
EDITED ~ The only difference I just picked up is that on the DEV box I am using IE7 while the clients are using IE6 - could this be an issue?
UPDATE ~ I updated all clients to IE8 and noticed a 30% improvement in performance. I then found out I had left debug="true" in the web.config file. Setting that to false got the app back to stable performance... I still can't believe I did that.
First thing I would do is enable tracing. (see: https://web.archive.org/web/20210324184141/http://www.4guysfromrolla.com/webtech/081501-1.shtml)
then add tracing points to your page generation code to give you an idea of how long each part of the page build takes:
System.Diagnostics.Trace.Write("Starting Page init", "TraceCheck");
//Init page
System.Diagnostics.Trace.Write("End Page init", "TraceCheck");

System.Diagnostics.Trace.Write("Starting Data Fetch", "TraceCheck");
//Get Data
System.Diagnostics.Trace.Write("End Data Fetch", "TraceCheck");

etc.
this way you can see exactly how long each stage is taking and then target that area.
Double-check that your application is not running in debug mode: in your web.config file, make sure the debug attribute on the <compilation> element under <system.web> is set to "false".
Besides making the application run slower and use more system memory, you will also experience slow page loading, since nothing is cached when in debug mode.
Also check your page size. A developer friend of mine once loaded an entire table into ViewState; a 12-megabyte page will slip by when you're developing on your local machine, but it becomes immediately noticeable in production.
Are you running against the same SQL Server as in your tests or a different one?
In order to find out where the time's coming from you could add some trace statements to your page load, and then load the page with tracing turned on. That might help point to the problem.
Also, what are the specs of your development box? The same?
Depending on the version of visual studio you have, Team Developer has a Performance Wizard you might want to investigate.
Also, if you use IE 8, it has a Profiler which will let you see how long the site takes to load in the browser itself. One of the first things to determine is whether the time is client side or server side.
If client side, start looking at what javascript you have and optimize / get rid of it.
If server side, you need to look at all of the performance counters (perfmon). For example, we had an app that crawled on the production servers due to a tremendous amount of JIT going on.
You also need to look at the communication between the web and database server. How long are queries taking? Are the boxes thrashing the disk drives? etc.
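If you suspect the database side, here is a sketch for SQL Server 2005: the plan-cache DMVs can list the most expensive queries (the numbers are cumulative since the last service restart):

-- rough list of the slowest queries by average elapsed time
SELECT TOP 10
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
    qs.execution_count,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_time DESC;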