We are getting a strange error when re-publishing the same page. The page was published successfully the first time and we can see it on the presentation server. It failed with the error below when we tried to publish it again (with no change to the page). The page rendered fine in Template Builder and produced the correct HTML output; it failed only in the final Committing Deployment step (Prepare Transport, Transporting, Preparing Deployment and Deploying all succeed). Once it fails to publish the second time, it always fails, and we can't un-publish it either. If we make a copy of the failed page as a new page, we can publish the new page the first time, but it then fails to publish the second time with the same error.
Does anyone know what would cause this error?
Here is the error msg:
Committing Deployment Failed Phase: Deployment Prepare Commit Phase failed, Unable to prepare transaction: tcm:0-4210-66560, For input string: "", For input string: "", Unable to prepare transaction: tcm:0-4210-66560, For input string: "", For input string: ""
Do you have multiple deployers? Sometimes this happens when multiple deployers are configured and one of them is not set up as it should be. The first time your page may be picked up by the right deployer, and afterwards by the wrong one.
As Frank suggested, share the transport and deployer log files so that I or someone else can point you in the right direction.
The fact that you can publish successfully the first time and not on subsequent attempts (including un-publishes) might point to locking issues. Of course I'm speculating, but this would be consistent with your symptoms.
One thing that is known to cause file locking problems is anti-virus software. Usually it's recommended to exclude your site and data directories from coverage by anti-virus scanners.
Similarly, there can be locking issues for the deployed resources that are stored in a database. Is your database server giving errors or warnings? For example, are there issues with enlisting the resources in a transaction?
I'm coming across this error when I run my web app. The error appears only when the code runs on my web server; I can run the exact same code on my local machine and it works just fine. The only way I see the error when running the app on the web server is by pressing F12 when I try to run a given page. The page is trying to SFTP a file to another server, but as I said, the exact same code runs on my local machine with no errors, so I know the code works. There are no message or error boxes that pop up. I've gone over the code again and again, and compared the configuration and installed programs on my local machine against the web server; I don't see anything that is different. Here is the whole error message I see:
Sys.WebForms.PageRequestManagerServerErrorException: The source was
not found, but some or all event logs could not be searched. To
create the source, you need permission to read all event logs to make
sure that the new source name is unique. Inaccessible logs: Security.
I've found quite a few questions about this error; most of them talk about giving access to all users in this registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Security
or adding a user called Network Service to the above key and giving it full access, or granting read permissions for the Network Service user to the whole EventLog branch. Another path I've explored is changing the identity of the app pool in IIS and then changing it back. I've tried just about everything listed on SO and elsewhere; as I said, most of the suggestions involve writing new keys or changing permissions on keys in the registry. Another one I've tried is creating a registry key named HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application\#MY APP# and then creating a string value inside it called EventMessageFile with the value C:\Windows\Microsoft.NET\Framework\v4.0.30319\EventLogMessages.dll. Another suggestion was to run the app as an Administrator; that didn't work either. I could go on and on, but for the sake of not turning this into a novel I won't. I hope I've shown enough to make clear that I've done due diligence in trying other solutions before asking my question. Will someone please help me out with this very frustrating error?
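For what it's worth, the message itself points at the underlying mechanism: when .NET needs to verify an event source, EventLog.SourceExists scans every log, including Security, to make sure the name is unique, and that scan is exactly what an unprivileged app pool identity is not allowed to do. One common way to sidestep it is to register the source once, elevated, so the web app never has to search at runtime. Below is a minimal sketch of that one-time step; the source name is a hypothetical placeholder.

using System;
using System.Diagnostics;

// One-time, elevated setup: run once as Administrator on the web server so the
// web application never has to search the event logs itself.
class RegisterEventSource
{
    static void Main()
    {
        const string source = "MyWebApp";      // hypothetical source name - use your app's
        const string logName = "Application";

        // SourceExists reads every event log (including Security) to guarantee the
        // name is unique - the step that fails with "Inaccessible logs: Security"
        // when called from the unprivileged web process.
        if (!EventLog.SourceExists(source))
        {
            EventLog.CreateEventSource(source, logName);
            Console.WriteLine("Created event source '{0}' in the {1} log.", source, logName);
        }
        else
        {
            Console.WriteLine("Event source '{0}' already exists.", source);
        }
    }
}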
I have been facing this issue for some days. I have a Jenkins server configured to use MSTest.exe for unit test execution in an ASP.NET application. This is weird, as it happens only for one particular SVN link; every other SVN source location executes fine.
Below is the console log:
TestResults\UnitTestReport.trx D:\Sonar\Tools\mstest-to-junit.xsl -o TestResults\UnitTestsJUnitReport.xml
Error occurred while executing stylesheet 'D:\Sonar\Tools\mstest-to-junit.xsl'.
Code: 0x8007000e
Not enough storage is available to complete this operation.
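For what it's worth, 0x8007000e is the E_OUTOFMEMORY HRESULT, so "Not enough storage" here refers to memory (quite possibly the 32-bit address space of what looks like an msxsl-style invocation in that log) rather than disk space. One rough way to test that theory is to run the same stylesheet in-process from a 64-bit .NET step; a sketch using the paths shown in the log:

using System.Xml.Xsl;

class TrxToJUnit
{
    static void Main()
    {
        // Same stylesheet and files as the failing step in the console log.
        var xslt = new XslCompiledTransform();
        xslt.Load(@"D:\Sonar\Tools\mstest-to-junit.xsl");

        // If this succeeds when run as a 64-bit process while the original step
        // fails, the error is a 32-bit memory ceiling on that particular .trx.
        xslt.Transform(
            @"TestResults\UnitTestReport.trx",
            @"TestResults\UnitTestsJUnitReport.xml");
    }
}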
I am using Visual Studio 2013 to manage a .sqlproj file containing our database schema. The schema has been deployed successfully dozens of times.
When attempting to publish to one specific target database, the "Creating publish preview" step appears to fail, but no error is given. The output from the preview includes some expected warnings:
The column {...} is being dropped, data loss could occur
If this deployment is executed, changes to {...} might introduce run-time errors in {...}
This deployment may encounter errors during execution because changes to {...} are blocked by {...}'s dependency in the target database
I have unchecked "Block incremental deployment if data loss might occur".
The Preview just stops, and no script is generated.
This happens when the target database contains a stored procedure (or view, constraint, or other object) that isn't included in your sqlproj but references a table that would be altered by deploying your sqlproj. SSDT apparently can't determine whether the change is safe unless the referring object is included in your sqlproj, so it errs on the safe side and blocks the deployment.
Disabling the "Block incremental deployment if data loss might occur" option only relaxes the data-loss checks. There isn't a "Block incremental deployment if run-time errors might occur" option.
You have three options:
add the referring stored procedures, views, or other objects from the target database to your sqlproj
uncheck the "Verify Deployment" option in the SSDT publish options (this is dangerous unless you're aware of the other referring sprocs and know that they aren't going to break)
if you're certain that everything that should exist in the target database is contained in your sqlproj, you can enable the "Drop objects in target but not in source" option (see the sketch after this list)
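For reference, a minimal sketch of the same switches set programmatically through the DacFx API (Microsoft.SqlServer.Dac), in case the publish runs from a build script rather than the Visual Studio dialog. The package path, connection string, and database name are placeholders, and which flags you actually flip depends on the caveats above.

using Microsoft.SqlServer.Dac;

class PublishDacpac
{
    static void Main()
    {
        // Placeholder package path and connection details.
        var package = DacPackage.Load(@"C:\build\MyDatabase.dacpac");
        var services = new DacServices("Server=.;Integrated Security=true");

        var options = new DacDeployOptions
        {
            // The checkbox discussed above: only relaxes the data-loss checks.
            BlockOnPossibleDataLoss = false,

            // Option 2: skip deployment verification (dangerous, per the caveat above).
            VerifyDeployment = false,

            // Option 3: drop target objects that are not in the project
            // (only if the project really is the complete picture).
            DropObjectsNotInSource = true
        };

        // Deploy the package to the target database, upgrading the existing schema.
        services.Deploy(package, "MyTargetDatabase", true, options);
    }
}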
The issue may also be caused by prefixing a database object with the wrong schema, for instance a table referenced within a stored procedure's SQL statement being qualified with an incorrect schema name.
Additionally, we had permissions granted to a specific security group that, once removed, allowed the solution to build again. To troubleshoot the error, perform a schema compare of the project code and the target database, then remove differences from the database until publishing works. The last item you removed from the database is your culprit.
The last warning pattern turned out to be more than a warning:
This deployment may encounter errors during execution because changes
to {...} are blocked by {...}'s dependency in the target database
It appears to have been the culprit behind stopping the rest of the preview and the generation of the script.
Interestingly, the schema change being introduced would not have broken the triggers referenced in the preview output.
Removing schemabinding from the view allows the publish to succeed with only warnings.
I'm stuck on this one. I hope someone here has some experience with this. Here is the situation. I have set up a web page that allows users to upload flat files to be loaded into SQL Server 2005 using SSIS. There are two different SSIS processes depending on the file type. The decision of which SSIS process to use is made by the user on the website.
Once the file is uploaded by the user, the process is started by a .NET Process object. The command line is the normal one you'd expect for starting dtexec with a specific SSIS package file while setting a couple of variables. For example:
dtexec /f /De /set value
The ASP.NET Anonymous User is running as a domain user account. All SSIS package files for both SSIS processes are in the same directory. The domain user account has full privileges on that directory. The same method in ASP.NET starts either of the processes. The only difference is the WebMethod called by the website. One WebMethod for each type. It is in these WebMethods where the unique arguments are assigned to the command line text for SSIS.
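For context, a rough sketch of the kind of launch code described above: starting dtexec from a .NET Process object and capturing its standard output (which is where the text quoted below comes from). The package path and /SET argument are hypothetical placeholders, not the actual arguments.

using System.Diagnostics;

static class PackageRunner
{
    // Launches dtexec for one package and returns whatever it prints to the console.
    // packagePath and setArgument are placeholders supplied by the calling WebMethod.
    public static string Run(string packagePath, string setArgument)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "dtexec",
            Arguments = "/F \"" + packagePath + "\" /SET " + setArgument,
            UseShellExecute = false,        // required for output redirection
            RedirectStandardOutput = true,  // capture what dtexec prints
            CreateNoWindow = true
        };

        using (var process = Process.Start(psi))
        {
            string output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            // A non-zero ExitCode means dtexec reported a failure.
            return output;
        }
    }
}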
Here is where I have run into the problem. When the website runs process "1", it works fine, but process "2" fails with the error mentioned above. When I capture the Standard Output I receive this:
Microsoft (R) SQL Server Execute Package Utility Version 9.00.4035.00 for 32-bit
Copyright (C) Microsoft Corp 1984-2005. All rights reserved.
Started: 10:34:14 AM
Could not create DTS.Application because of error 0x800401F3
Started: 10:34:14 AM
Finished: 10:34:14 AM
Elapsed: 0.016 seconds
I don't understand how everything can be nearly identical yet only one will run. One final thing: both methods work fine when I test directly from Visual Studio. I figure it must be something with the Anonymous User account used, but I can't figure out why one process would work and the other not when they are so similar.
Any help will be greatly appreciated.
Rob
Found the problem. The error code was a phantom. What happened was that a connection component was being fed by a variable holding a path to a folder the new account could not access. Even though the path would be replaced with a good target during execution, it was failing in validation. This is why there were no logs: I didn't have the logging level high enough to see it, and it behaved like a security issue, which, in a way, it was.
I'm being asked to look into a problem that occurs intermittently on a WebServer running my team's application.
Essentially, we have a webservice that does a lookup between codes: if you have Code Type A, you can use it to look up the corresponding Code Type B. Periodically, when memory is running low and this webservice is called, a null reference exception is thrown. The service loads a lookup file into cache with a dependency on the file, so if the file changes, the cache is reloaded with the new file. The priority on the cache object is set to default. I'm guessing that somewhere in the code it isn't verified that the cache object is still there, and when memory on the server gets low, that object is dumped, causing the error. I'd like to be able to recreate the error and verify this before I start digging into the code.
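For reference, if eviction is the cause, the cache read needs something like the defensive pattern in this minimal sketch using System.Web.Caching with a file dependency and Default priority; the cache key, file path, and loader below are hypothetical stand-ins for whatever the service actually uses.

using System.Collections.Generic;
using System.IO;
using System.Web;
using System.Web.Caching;

static class CodeLookup
{
    private const string CacheKey = "CodeTypeAtoB";                 // hypothetical cache key
    private const string LookupFile = @"D:\Data\CodeLookup.txt";    // hypothetical file path

    public static IDictionary<string, string> GetTable()
    {
        // Under memory pressure, a Default-priority item can be evicted at any time,
        // so every read has to tolerate a null and rebuild instead of assuming it's there.
        var table = HttpRuntime.Cache[CacheKey] as IDictionary<string, string>;
        if (table == null)
        {
            table = LoadLookup(LookupFile);
            HttpRuntime.Cache.Insert(
                CacheKey,
                table,
                new CacheDependency(LookupFile),     // reload if the file changes
                Cache.NoAbsoluteExpiration,
                Cache.NoSlidingExpiration,
                CacheItemPriority.Default,
                null);
        }
        return table;
    }

    // Stand-in loader: one "CodeA,CodeB" pair per line.
    private static IDictionary<string, string> LoadLookup(string path)
    {
        var map = new Dictionary<string, string>();
        foreach (var line in File.ReadAllLines(path))
        {
            var parts = line.Split(',');
            if (parts.Length == 2) map[parts[0].Trim()] = parts[1].Trim();
        }
        return map;
    }
}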
Is there a way in IIS Manager (or from the command prompt) to force a running web app to dump its cache? I would think that this should recreate the condition and therefore reproduce the bug. Not to mention, seeing the detailed error should lead to the right section of code.
Thanks,
Steve Brouillard
My gut reaction would be to set the WebMethod's CacheDuration to zero, then back to whatever you want on an ongoing basis. I haven't tried this, but I think this would dump the cache and then let it start building up again...
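To make that concrete, the property being referred to is WebMethodAttribute.CacheDuration, which controls output caching of the method's responses; a hypothetical method with it set to zero would look like the sketch below. Whether that also flushes the application Cache described in the question is, as noted, untried.

using System.Web.Services;

public class CodeLookupService : WebService
{
    // CacheDuration = 0 disables output caching of this method's responses;
    // set it back to the usual value once the test is done.
    [WebMethod(CacheDuration = 0)]
    public string LookupCodeB(string codeA)
    {
        // Hypothetical body; the real method consults the cached lookup table.
        return codeA;
    }
}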
I found a utility that can be added to ASP.NET apps that will allow you to dynamically manage the cache as a whole or individual cache objects. Thanks to .NET Rocks! and dnrtv.
Here's the tool that I used: the ASP Alliance Cache Manager. It allowed me to clear just the specific objects in question, on the fly, and prove the error.
Thanks to everyone for your help.
Steve
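For anyone who prefers not to add a tool, a rough sketch of the same idea done by hand: walking HttpRuntime.Cache and removing entries that match a key prefix. The prefix filter is hypothetical; pass an empty string to dump everything.

using System.Collections;
using System.Collections.Generic;
using System.Web;

static class CacheFlusher
{
    // Removes cached items whose keys start with the given prefix and
    // returns how many were removed.
    public static int Flush(string keyPrefix)
    {
        var keys = new List<string>();
        foreach (DictionaryEntry entry in HttpRuntime.Cache)
        {
            var key = (string)entry.Key;
            if (key.StartsWith(keyPrefix))
            {
                keys.Add(key);   // collect first; don't remove while enumerating
            }
        }
        foreach (var key in keys)
        {
            HttpRuntime.Cache.Remove(key);
        }
        return keys.Count;
    }
}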