We are having a problem on a live production system.
One of the nodes stopped working properly (because of problems with the network file system on which it is hosted), and that happened while the channel staging process was in progress.
Because of that really bad timing, the staging process remained unfinished and all locked resources stayed locked, which prevented editing products or catalogs on the live system.
The first thing we tried was restarting the servers node by node; that didn't help.
Second, we tried executing the SQL statements mentioned in this support article:
https://support.intershop.com/kb/index.php/Display/2350K6
The exact statements we executed are below. The first one deletes from the RESOURCELOCK table:
DELETE FROM RESOURCELOCK rl WHERE rl.LOCKID IN (
    SELECT resourcelock.lockid
    FROM isresource,
         domaininformation resourcedomain,
         process,
         basiccredentials,
         domaininformation userdomain,
         resourcelock,
         isresource_av
    WHERE (isresource.domainid = resourcedomain.domainid)
      AND (isresource.resourceownerid = process.uuid)
      AND (resourcelock.lockid = isresource.uuid)
      AND (process.userid = basiccredentials.basicprofileid(+))
      AND (basiccredentials.domainid = userdomain.domainid(+))
      AND (isresource_av.ownerid(+) = isresource.uuid)
      AND (isresource.resourceownerid IS NOT NULL)
      AND (isresource_av.name(+) = 'locknestinglevel')
      AND (process.name = 'StagingProcess')
);
And the second one clears the lock fields in the ISRESOURCE table:
UPDATE isresource
SET resourceownerid = null,
    lockexpirationdate = null,
    lockcreationdate = null,
    lockingthreadid = null
WHERE resourceownerid = 'QigK85q6scAAAAF9Pf9fHEwf'; -- UUID of the StagingProcess
This helped somewhat, as it allowed single products to be staged, but there are still two problems remaining:
Products can't be manually locked on the live system for editing. When the lock icon is clicked, the page refreshes but the product still appears to be unlocked. However, a record is created in the ISRESOURCE table for each product that is clicked, although the records are incomplete (there is no RESOURCEOWNERID, lock creation date, or lock expiration date), as can be seen below:
Also, processes are created for the product locking attempts, but they are all failing or running without an end date, as can be seen here:
Now for the second problem:
Channel staging cannot be started and fails with this message:
ERROR - Could not lock resources for process 'StagingProcess': Error finding resource lock with lockid: .0kK_SlyFFUAAAFlhGJujvESnull
That resource is the MARKETING_Promotion resource:
Both problems started occurring after running the above-mentioned SQL statements, and they seem to be related. Any advice on how to resolve this situation would be helpful.
The first SQL statement that I posted shouldn't have been run:
DELETE FROM RESOURCELOCK rl WHERE rl.LOCKID IN....
The fix was to restore the deleted resource locks and only set the lock fields in the ISRESOURCE table to null with the second statement:
UPDATE isresource
SET resourceownerid = null,
    lockexpirationdate = null,
    lockcreationdate = null,
    lockingthreadid = null
WHERE resourceownerid = 'QigK85q6scAAAAF9Pf9fHEwf'; -- UUID of the StagingProcess
Just this morning, when trying to view the Data Explorer UI for an Azure Cosmos DB table, the window is totally blank and I see no rows (the table should not be empty). The only connection to this table is a Python script that pushes in simple rows with only a few variables; however, this also stopped working this morning.
I am still able to connect to the table service properly, and I've even been able to create a new table through my Python script. However, as soon as I call table_service.insert_or_replace_entity('traps', task) ('traps' is the name of my table and task is the row I'm trying to push up), I receive back an HTTP Error 400: The request URL is invalid.
For reference, my connection in Python is as follows, where Account_Name is my personal account name and Account_Key is my personal account key.
# Requires a TableService client; with a Cosmos DB table endpoint this is
# typically the azure-cosmosdb-table package (import path shown below)
from azure.cosmosdb.table.tableservice import TableService

table_service = TableService(connection_string="DefaultEndpointsProtocol=https;AccountName=Account_Name;AccountKey=Account_Key;TableEndpoint=https://Account_Name.table.cosmosdb.azure.com:443/;")

for i in range(len(times)):
    # One entity per reading; PartitionKey and RowKey are the required properties
    task = {'PartitionKey': '1',
            'RowKey': '{}'.format(tags[i]),
            'Date_Time': '{}'.format(times[i]),
            'Location': '{}'.format(locations[i])}
    table_service.insert_or_replace_entity('traps', task)
UPDATE
In reference to the HTTP Error 400, I discovered that I was trying to push a \n at the end of each tag string (i.e. tags[0] = 'ab123\n'). Stripping out the \n resolved the HTTP 400 error, but I am now receiving a The specified resource does not exist. message when I attempt to upload, which makes more sense as to why my Data Explorer is blank. I have tried uploading to a new table, but it's the same thing.
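In case it helps anyone, here is the cleanup step as a minimal sketch, reusing the tags list from the snippet above (the times and locations lists may need the same treatment):

# Strip trailing line feeds/whitespace before building entities; a '\n'
# inside a RowKey is what produced the HTTP 400 "request URL is invalid" here.
tags = [t.strip() for t in tags]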
Second Update
Silly mistake on the resource-not-found error: my table is called "Traps", not "traps". Data appears to be uploading correctly now on the API side. However, the table is still not displaying at all in the Data Explorer page of the Azure portal. If anyone has insight on this, it would be appreciated, because the explorer is super helpful while we are still in development.
Third Update
I am able to connect to the table/database through Python and query data effectively. It all seems to be in there and up to date. The only thing I'm left unsure about is why the Data Explorer is not displaying properly. Aside from that, my recommendation is obviously to check your capital letters (my usual mistake, haha) and DO NOT try to push up line feeds (\n) in the task/payload.
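For reference, this is roughly the query I used to verify the data is there (the filter string is just an example; query_entities comes from the same TableService client as above):

# Pull back a few entities to confirm the upload really landed in "Traps"
entities = table_service.query_entities('Traps', filter="PartitionKey eq '1'")
for entity in entities:
    print(entity.RowKey, entity.Location)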
I want to provide an official update and response to your issue. This issue is being hotfixed, with a rollout ETA of Monday (09/24/2018).
I have an ASP.Net application that accesses user data from a SQL database.
Visual Studio Version 2012
Windows Server 2012 Standard 6.2
Sql Server 2012
Program in service since 11/2007 (the problem has never happened previously)
Problem:
The problem was first reported by two of my customers, but I was not experiencing it until after a recent MS update.
I am unsure of the particulars of those updates or whether it was only a coincidence.
I log into the application and go to a few pages, and all seems OK. Then I select a new Active Company (list screens are auto-filtered by the Active Company ID held in a session variable; changing the active company changes the ID stored in that variable). Everything works fine for a while (1-4 minutes), switching between screens and even different active companies. Then at one point I go to a page that I've been to several times (and that worked fine) and it shows everything from the last time I accessed it (literally the identical page from a few minutes ago). I change to another page and it appears to be updated; I go back to the screen that did not update, and no matter what, it will not update again. I query the database and it indicates the correct active company ID, and I query the session variable and that too is correct.
** The strange thing is that I can wait 4-5 minutes (I just stop doing anything), then try to access the page again, and now it updates.
I have been beating at this for almost two weeks now and have not been able to determine the source of the problem.
I have literally tried every setting for session caching I could read up on, with no (or minimal) effect.
Since our software uses session variables to hold per-user settings that control the environment (like the active company selection), I even went as far as removing the session variables and converting them to profile variables (which requires SQL session management), with minimal effect.
It seems to work fine for a few minutes (or page accesses); then, once a page stops updating, it will no longer update under any circumstance.
It will occur on pretty much any combination of page changes (after changing the active company, since that actually changes the data displayed).
This design has been out in the field for over 8 years now (and is routinely brought up to date with the latest .NET compiler, .NET Framework, and IronSpeed Designer engine updates). This error has never occurred before now. No update to the development tools took place prior to the appearance of this issue.
I tried various tests.
Test 1:
I added JavaScript code to reset each page.
<script type="text/javascript">
function RefreshPage()
{
window.location.reload()
}
</script>
Result: No change
Test 2:
I stopped as soon as the page did not refresh and started timing how long it took the page to update (1-2 minutes, or going back and forth between the change-active-company screen and the reports screen several times).
Result:
After 60-90 seconds, the current page seemed to do an update (the activity icon would appear, then go away), so I would then check the page that was not refreshing, and it was now correct.
Since I was using the report page for my tests, I would run a report when the screen update failed to see which active company it thought it was on (since the report was also reliant on the session variable, it brought up the correct report data, even though the page was not indicating the correct active company). Note: every one of our screens indicates the current user and active company name at the top, so it is easy to see when a page is not updating.
Any direction as to where to look from here would be greatly appreciated; I'm at a loss as to what to check now.
P.S. I installed MS Message Analyzer and had it monitor up to the point where I get a failure. I have never used MS MA before, so I don't have much of an idea as to what to look for, other than that the operation status was indicating Found (302) for the GET and POST, and OK (200) for the page I received the problem on.
Thanks in advance!
John R
I propose checking the caching options: caching of the page, controls, JavaScript, and the browser. As a workaround, I propose adding a dummy parameter to your page requests and AJAX calls. For example, instead of opening "default.aspx", open "default.aspx?id=someNewGuid". Also consider adding some random parameters to your AJAX calls.
Try the following code for the refresh:
<script type="text/javascript">
function S4() {
    return (((1 + Math.random()) * 0x10000) | 0).toString(16).substring(1);
}
function guid() {
    // Build a GUID-shaped random string from four-hex-digit chunks
    var guid = (S4() + S4() + "-" + S4() + "-4" + S4().substr(0, 3) + "-" + S4() + "-" + S4() + S4() + S4()).toLowerCase();
    return guid;
}
function RefreshPage() {
    var url = window.location.href; // use the href string, not the Location object
    if (url.indexOf("?") > -1) {
        url = url.substr(0, url.indexOf("?")); // cut off any existing query parameters
    }
    // Assigning the new URL triggers a fresh request; no separate reload() needed
    window.location = url + "?id=" + guid();
}
</script>
When starting my program for the first time since the associated table was deleted, I get this error:
An exception of type 'Microsoft.WindowsAzure.Storage.StorageException' occurred in Microsoft.WindowsAzure.Storage.dll but was not handled in user code
Additional information: The remote server returned an error: (409) Conflict.
However, if I refresh the crashed page, the table is created successfully.
Here is the code just in case:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("tableTesting");
table.CreateIfNotExists();
I don't really understand how or why I'd be getting a conflict error if there's nothing there.
These errors appear elsewhere in my code as well when I'm working with blob containers, but I can't reproduce them as easily.
If you look at the status codes here: http://msdn.microsoft.com/en-us/library/azure/dd179438.aspx, you will notice that you get a 409 error code in two scenarios:
Table already exists
Table is being deleted
If I understand correctly, table.CreateIfNotExists() only handles the first situation, not the second one. Please check whether that is what is happening in your case. One way to check is to look at the details of the StorageException; somewhere you should find an error code that matches the list in the link above.
Also, one important thing to understand is that when you delete a table, it is only marked for deletion and is actually removed later by a background process (much like garbage collection). If you try to create the table between those two steps, you will get the second error.
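If it turns out to be the deletion case, waiting and retrying is usually enough. A rough sketch of the idea, shown in Python with the legacy TableService client just to illustrate the shape (the attempt count, sleep interval, and credentials are placeholders):

from time import sleep
from azure.common import AzureConflictHttpError
from azure.storage.table import TableService

table_service = TableService(account_name='myaccount', account_key='mykey')  # placeholder credentials

# The service answers 409 Conflict while the old table is still being
# deleted in the background, so retry until the deletion completes.
for attempt in range(10):
    try:
        table_service.create_table('tableTesting', fail_on_exist=True)
        break
    except AzureConflictHttpError:
        sleep(5)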
I have a classic ASP CRM that was built by a third party company. Currently, I have access to the source code and am able to make any changes required.
Randomly throughout the day, usually after some prolonged usage by users, most of my pages start getting an Out of Memory error.
The way the application is built, all the pages and scripts pull core functions from a Global.asp file. That file includes other global files as well, but the error presented shows:
Out Of Memory
WhateverScriptYouTriedToRun.asp Line 0
Line 0 is the include for the global.asp file. Once the error occurs, the occurrences subside after an unspecified amount of time, but then begin to recur. Given how the application is written, the functions it uses, and the "diagnostics" I've already done, it seems to be a commonly used function that is holding on to data, such as a recordset or something of that nature, and then not releasing it properly. Other users then try to use the same function, and eventually it just fills up, causing the error. The only way for me to effectively clear the error is to restart IIS, recycle the app pool, and restart the SQL Server services.
Needless to say, my users and I are getting annoyed...
I can't pinpoint the error because the error message points at Line 0, and from there I have no idea where in the 20K lines of code it could be hanging up. Any thoughts or ideas on how to isolate this, or at least a pointer in the right direction to begin clearing this up? Is there a way for me to increase the "memory" size for VBScript? I know there is a limitation, but is it set at, say, 512K, and can you increase it to 1GB?
Here are things I have tried:
Moving inline SQL statements into views.
Going through several hundred scripts and ensuring that every OpenConnection and OpenRecordSet is followed by an appropriate Close.
Going through the global file and commenting out any large SQL statements, such as ApplicationLog (a function that writes the executed query into a table).
Some smaller script edits.
Common Memory Leak
You say you are closing all recordsets and connections which is good.
But are you deleting objects?
For example:
Set adoCon = Server.CreateObject("ADODB.Connection")
Set rsCommon = Server.CreateObject("ADODB.Recordset")
'Do query stuff
'You do this:
rsCommon.close
adocon.close
'But do you do this?
Set adoCon = nothing
Set rsCommon = nothing
There is no garbage collection in classic ASP, so any objects not destroyed will remain in memory.
Also, ensure your closes/nothings are run in every branch. For example:
adoCon.Open
rsCommon.Open strSQL, adoCon
'Sql query
myData = rscommon("condition")
if(myData) then
response.write("ok")
else
response.redirect("error.asp")
end if
'close
rsCommon.close
adocon.close
Set adoCon = nothing
Set rsCommon = nothing
Nothing is closed or destroyed before the redirect, so memory is only freed some of the time; not all branches of the logic lead to proper cleanup.
Better Design
Also, unfortunately, it sounds like the website wasn't designed well. I always structure my classic ASP like this:
<%
Option Explicit
'Declare all vars
Dim this
Dim that
'Open connections
Set adoCon = Server.CreateObject("ADODB.Connection")
adoCon.Open strConnection
'Fetch required data
Set rsCommon = Server.CreateObject("ADODB.Recordset")
rsCommon.Open strSQL, adoCon
this = rsCommon.getRows()
rsCommon.close
'Fetch something else
rscommon.open strSQL, adoCon
that = rsCommon.getRows()
rsCommon.close
'Close connections and drop objects
adoCon.close
set adoCon = nothing
set rscommon = nothing
'Process redirects
if(condition) then
response.redirect(url)
end if
%>
<html>
<body>
<%
'Use data
For i = 0 To UBound(this, 2)
    response.Write(this(0, i) & " " & this(1, i) & "<br />")
Next
%>
</body>
</html>
Hope some of this helped.
Have you looked at using a memory monitoring tool to see how much memory fragmentation is happening? My guess at a possible cause is that an object of some size is being created, but there isn't enough room in memory to store it as one contiguous chunk. Imagine needing room to store an object that would take 100 MB: while there may be several hundred megabytes free, if the largest contiguous chunk is 90 MB, then it doesn't fit.
Debug Diagnostic Tool v1.1 is one such tool, and Bernard's articles may help in understanding how to use it.
Another thought: how much string concatenation is there in the code? A place where I used to work had problems with lots of string concatenation operations sucking up memory; that may be another idea to consider.
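If it is concatenation, the usual mitigation (in classic ASP it would be collecting the pieces in an array and using Join) is to build the string once instead of growing it. A tiny Python sketch just to illustrate the difference:

pieces = ['chunk'] * 10000

# Growing a string: each += may reallocate and copy everything built so far
s = ''
for piece in pieces:
    s += piece

# Building once: collect the pieces and allocate the final string a single time
s = ''.join(pieces)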
Yeah, I could see some shock at that kind of number the first few times you see it, but if you understand what the code is doing, it may make sense why so much space gets reserved right off the bat at times.
I haven't used that debug tool specifically, but I did have a tool that took a snapshot of memory when pages hung, so I can't tell whether the tool has a performance impact or not. Of course, in my case I used a similar tool back in 2004, so it has been a few years since I've had to research this kind of issue.
Just going to throw this in here, but this problem has taken a long time to solve. Here's a breakdown of what we did:
1. We took all the inline SQL and made SQL views; every SELECT statement is now handled through a view first.
2. I took every single SQL INSERT and UPDATE (as much as I could without breaking the system) and put them into stored procedures.
3. We went through several thousand scripts and ensured that variables were properly disposed of, that every DB open connection was followed correctly by a close connection, and the same with open/close recordsets.
#2 was the one item that really made the biggest difference.
One of the slow killers was doing something like:
ID = Request.QueryString("ID")
at the top of the page. Before redirecting, or closing a page, there is always a:
Set ID = Nothing
or the complete removal of the reference.
I'm working on a content dripper custom plugin in WordPress that my client asked me to build. He says he wants it to catch a page view event, and if it's the right time of day (24 hours since last post), to pull from a resource file and output another post. He needed it to also raise a flag and prevent other sessions from firing that same snippet of code. So, raise some kind of flag saying, "I'm posting that post, go away other process," and then it makes that post and releases the flag again.
However, the strangest thing occurs when the site is placed under load, with multiple sessions hitting it with page views. Instead of firing one post, it randomly makes 1, 2, or 3 extra posts, each one thinking it was the right time to post because 24 hours had passed since the last post. Because it's somewhat random, I'm guessing the problem is some kind of write caching where the other sessions don't see the raised flag until a couple of microseconds have passed.
The plugin was raising the "flag" by simply writing to the wp_options table with the update_option() API in WordPress. The other user sessions were supposed to read that value with get_option() and see the flag, and then not run that piece of code that creates the post because a given session was already doing it. Then, when done, I lower the flag and the other sessions continue as normal.
But what it's doing is letting those other sessions in.
To make this work, I was using add_action('loop_start','checkToAddContent'). The odd thing about that hook, though, is that it's called more than once on a page, and in fact some plugins may call it too. I don't know if there's a better event to hook. Even if I find an event that only fires once per page view, I still have multiple sessions to contend with (different users who may view the page at the same time), and I want only one session to trigger the content post when a post is due on the schedule.
I'm wondering if there are any WordPress plugin devs out there who could suggest another event hook to latch on to, and another way to raise a flag that all sessions would see. I could use the shared memory API in PHP, but many hosting plans have that disabled. I can't use a cookie or session variable because those are per-session. About the only thing that might work across hosting plans would be to drop a file as a flag instead: if the file is present, one session has the flag; if the file is not present, other sessions can attempt to grab it. Sure, I could go the file route (sketched below), but it feels kind of immature in my opinion, and I was wondering if there's something in WordPress I could do.
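For what it's worth, the file idea can at least be made atomic if the flag file is created with an exclusive-create flag, so checking and claiming happen in one step. A rough sketch of the pattern (in Python rather than PHP, purely to illustrate the idea; the path is made up):

import os

LOCK_PATH = '/tmp/drip.lock'  # hypothetical flag location

def try_claim_flag():
    # O_CREAT | O_EXCL fails if the file already exists, so exactly one
    # process can create it: the check and the claim are a single atomic step.
    try:
        fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False  # another session already holds the flag

def release_flag():
    os.unlink(LOCK_PATH)

if try_claim_flag():
    try:
        pass  # create the scheduled post here
    finally:
        release_flag()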
The key may be to create a semaphore record in the database for the "drip" event.
Warning - consider the following pseudocode - I'm not looking up the functions.
When the post is queried, use a SQL statement like:
$ts = get_time_now(); // or whatever the function is
$sid = session_id();
INSERT INTO table (postcategory, timestamp, sessionid)
SELECT "$category", $ts, "$sid"
WHERE NOT EXISTS (SELECT 1 FROM table
                  WHERE postcategory = "$category"
                  AND timestamp > $ts - 86400) -- i.e. no drip within the last 24 hours
Database integrity will make this atomic, so only one record can be inserted, and the insertion will only take place if the timespan has been exceeded.
Then immediately check to see whether the current session_id() and timestamp are yours. If they are, drip.
SELECT sessionid FROM table
WHERE postcategory = "$postcategory"
AND timestamp = $ts
AND sessionid = "$sid"
The problem occurs with page requests even from the same session (same visitor), but it can also occur with page requests from separate visitors. It works like this:
If you are doing content dripping, then a page request is probably what you intercept with add_action('wp','myPageRequest'). From there, if a scheduled post is due, then you create the new post.
The post takes a little bit of time to write to the database. In that time, a query via get_posts() may not see the new record yet, so it may actually trigger your piece of code to create a new post when one has already been placed.
The fix, which appears to force WordPress to flush the write cache, is this:
try {
    $asPosts = array();
    $asPosts = wp_get_recent_posts(1);
    foreach ($asPosts as $asPost) { break; }
    // Touch a hidden custom field on the latest post to force a write
    delete_post_meta($asPost['ID'], '_thwart');
    add_post_meta($asPost['ID'], '_thwart', '' . date('Y-m-d H:i:s'));
} catch (Exception $e) {}

$asPosts = array();
$asPosts = wp_get_recent_posts(1);
foreach ($asPosts as $asPost) { break; }
$sLastPostDate = '';
$sLastPostDate = $asPost['post_date'];
$sLastPostDate = substr($sLastPostDate, 0, strpos($sLastPostDate, ' ')); // date part only
$sNow = date('Y-m-d H:i:s');
$sNow = substr($sNow, 0, strpos($sNow, ' ')); // date part only
if ($sLastPostDate != $sNow) {
    // No post today, so go ahead and post your new blog post.
    // Place that code here.
}
The first thing we do is get the most recent post. We don't really care whether it's actually the most recent one or not; all we're getting it for is a single post ID. Then we add a hidden custom field (hence the underscore it begins with) called
_thwart
...as in, thwart the write cache by posting some data to the database that's not too CPU heavy.
Once that is in place, we use wp_get_recent_posts(1) again to see whether the most recent post is from today. If not, we are clear to drip some content in. (Or, if you want to drip only every 72 hours, etc., you can adjust this a little here.)