I'm using Zeos and SQLite3 DB in Delphi
ZQuery2.Close;
ZQuery2.SQL.Clear;
ZQuery2.SQL.Add('SELECT * FROM users WHERE un = ' + QuotedStr( UserName ) );
ZQuery2.Open;
OutputDebugString(PWideChar( ZQuery2.FieldDefList.CommaText )); // log : id,un,pw
OutputDebugString(PWideChar(ZQuery2.FieldByName('pw').AsString)); //causes error sometimes
the code is working but sometimes I get the following error message
Exception class EDatabaseError with message 'ZQuery2: Field 'pw' not found'.
This is odd because a field of a dataset shouldn't just disappear while the app is in the middle of running, especially if other fields are still operating normally. So, I would suspect something like a memory overwrite being the cause.
Memory overwrites usually happen when something is written to the wrong place in memory, overwriting what is there, usually because of an incorrect pointer value or a so-called "buffer overrun", where the writing operation carries on beyond where it should stop. Usually the pointer value is so wildly wrong that the OS can detect it and raise an access violation (AV), but sometimes it is less obvious.
Delphi's memory manager, FastMM, has a 'full debug mode' which adds special checks for this condition.
I suggest you enable full debug mode and wait for the exception to occur.
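For reference, a minimal sketch of wiring that up with the open-source FastMM4 package (the project and unit names here are placeholders): define FullDebugMode in FastMM4Options.inc, put FastMM_FullDebugMode.dll next to your executable, and list FastMM4 as the very first unit in the project file so every allocation is tracked:

program MyApp; // hypothetical project name

uses
  FastMM4,      // must be listed first so all allocations go through it
  Vcl.Forms,
  MainForm in 'MainForm.pas' {FormMain};

{$R *.res}

begin
  Application.Initialize;
  Application.CreateForm(TFormMain, FormMain);
  Application.Run;
end.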
Hello all and thanks for viewing this question,
I have a program that users get access to via a login screen. Once the user's credentials have been validated on the login screen, the main program is called (from the login screen) and the login screen disappears. All good. However, if the session crashes (or I press CTRL-PAUSE), the main program is terminated and I end up at the initial login screen. I'd have assumed that after a session crash, Progress (11.4) should take me back to the OS (Windows Server 2012), but not back to the initial screen. I have tried placing QUIT in different areas of the program, but Progress still takes me back to the initial screen, while I need it to quit completely. Any thoughts would be greatly appreciated. Thanks!
It's the AVM's default behavior to rerun the startup procedure after a STOP condition has occurred that was not handled.
You can add an
ON STOP UNDO, RETURN "stopped" .
option to a DO, FOR or REPEAT block close to where your "crash" happens. The calling procedure can then check for a RETURN-VALUE of "stopped".
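A minimal sketch of that pattern (wrapper.p and risky.p are placeholder names):

/* wrapper.p */
DO ON STOP UNDO, RETURN "stopped":
    RUN risky.p. /* the code where the "crash" happens */
END.
RETURN "ok".

/* in the calling procedure */
RUN wrapper.p.
IF RETURN-VALUE = "stopped" THEN
    QUIT.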
Assuming you are on a recent version (OpenEdge 12.x), you can also use CATCH Blocks for Progress.Lang.Stop:
CATCH stopcon AS Progress.Lang.Stop:
QUIT.
END CATCH.
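For example, placed at the end of the startup procedure's main block (the procedure names are placeholders):

/* startup.p */
RUN login.p.
RUN stuff.p.

CATCH stopcon AS Progress.Lang.Stop:
    QUIT.
END CATCH.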
I think that your use of the word "crashed" is very, very confusing. If your session actually "crashes" in the usual sense that _progres (or prowin if this is Windows) terminates, then you would not have any locked records remaining. You would also have a protrace file that would help you to identify where the issue occurs.
Incidentally, you could add error logging to the client startup to determine where the errors that QXtend cannot find are occurring:
_progres dbname -p startup.p -clientlog logname.log
You have not shared any code so I can only guess but, presumably, you are running your login program via the -p startup parameter.
Correct me if I am wrong but something along these lines:
_progres dbname -p startup.p
The startup program then runs whatever it runs to get you logged in and run the application. Maybe something like this:
/* startup.p
*/
message "(re)starting!".
pause.
run value( "login.p" ).
run value( "stuff.p" ).
message "all done".
pause.
quit.
And:
/* login.p
*/
message "hello, logging in!".
pause.
return.
Along with:
/* stuff.p
*/
message "hello, doing stuff!".
pause.
run value( "notthere.p" ).
message "hello, doing more stuff!".
pause.
return.
At some point an error occurs (you seem to want to call this a "crash"). I have arranged for a serious error to occur when stuff.p tries to "run notthere.p". So if you run my example you will see the behavior that you have described - your session "crashes", the startup procedure re-runs, and you get to the login screen again.
To change that and trap the error, simply wrap a "DO ON STOP" block around the RUN statements. Like this:
/* startup.p
*/
message "(re)starting!".
pause.
do on error undo, leave
on endkey undo, leave
on stop undo, leave
on quit undo, leave: /* "leave", exits this block when one of the named conditions arises */
run value( "login.p" ).
run value( "stuff.p" ).
/* we just leave because we finished normally */
end.
message "all done".
pause.
quit.
You mention QXtend so I am guessing that MFG/Pro is involved. If you cannot directly modify the MFG/Pro startup procedure (as I recall that would be "-p mfg.p") just adapt the code above to be a "shim" that runs mfg.p from within the "DO ON STOP..." block.
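A sketch of what such a shim might look like (shim.p is a placeholder name; start the session with -p shim.p instead of -p mfg.p):

/* shim.p
*/
do on error undo, leave
   on endkey undo, leave
   on stop undo, leave
   on quit undo, leave:
    run mfg.p.
end.
quit.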
I believe I have found a way to quit the initial login screen when it appears as the result of a session crash, by using the ETIME function. Thanks again, Mike, for your response.
I connected to clickhouse with tableau.
A query like this
select * from table_name limit 1
returns the fields of the table, even though it should return rows.
If I try
select subs_key from table_name limit 1
and click preview results, I get the error from above (except cnt is replaced with subs_key or whatever field I try to select).
How can I actually view table data?
EDIT
There is a connection to the db, but no table is shown in available schemas.
EDIT 2
I managed to connect and get data from an Oracle and a MySQL database, but while connected to ClickHouse, I can't see any data.
Don't quote me on this, but I believe Tableau has no official support for ClickHouse; at least I could not find anything to contradict this, just tons of people asking for it and nothing concrete.
There might be some sort of beta integration that's not yet stable, hence your problem, but this is just blind guessing.
What I can recommend, if you really need a UI and can't just use the command-line client, is tabix:
https://github.com/smi2/tabix.ui
It's fully open source for now and should be pretty easy and straightforward to learn. There might be the odd bit of Russian here and there, but I believe it's getting debugged and translated at quite a good pace.
I get the same error message when I use DBeaver.
SQL Error [47]: ClickHouse exception, Code: 47, e.displayText() =
DB::Exception: Unknown identifier: default_type, e.what() = DB::Exception
If it's not a coincidence, then it's a JDBC driver bug.
When starting my program for the first time since the associated table has been deleted I get this error:
An exception of type 'Microsoft.WindowsAzure.Storage.StorageException' occurred in Microsoft.WindowsAzure.Storage.dll but was not handled in user code
Additional information: The remote server returned an error: (409) Conflict.
However, if I refresh the crashed page, the table is created successfully.
Here is the code just in case:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("tableTesting");
table.CreateIfNotExists();
I don't really understand how or why I'd be getting a conflict error if there's nothing there.
These errors appear elsewhere in my code as well when I'm working with blob containers, but I can't reproduce them as easily.
If you look at the status codes here: http://msdn.microsoft.com/en-us/library/azure/dd179438.aspx, you will notice that you get 409 error code in two scenarios:
Table already exists
Table is being deleted
If I understand correctly, table.CreateIfNotExists() only handles the 1st situation but not the 2nd one. Please check whether that is what is happening in your case. One way to check is to look at the details of the StorageException: somewhere in there you should find an error code that you can match against the link I mentioned above.
Also, one important thing to understand is that when you delete a table, it is only marked for deletion and is actually removed later by a background process (much like garbage collection). If you try to create the table between these two steps, you will get the 2nd error.
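A minimal sketch of telling those two cases apart, assuming the "TableBeingDeleted" error-code string from the REST documentation and an arbitrary 30-second wait:

try
{
    table.CreateIfNotExists();
}
catch (StorageException ex)
{
    var info = ex.RequestInformation;
    // 409 plus "TableBeingDeleted" means the old table is still being
    // removed in the background; wait and retry once.
    if (info.HttpStatusCode == 409 &&
        info.ExtendedErrorInformation != null &&
        info.ExtendedErrorInformation.ErrorCode == "TableBeingDeleted")
    {
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(30)); // assumption: pick a delay that suits you
        table.CreateIfNotExists();
    }
    else
    {
        throw;
    }
}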
I have a classic ASP CRM that was built by a third party company. Currently, I have access to the source code and am able to make any changes required.
Randomly throughout the day, usually after some prolonged usage by users, most of my pages start getting an Out of Memory error.
The way the application is built, all the pages and scripts pull core functions from a Global.asp file. That file includes other global files as well, but the error presented shows
Out Of Memory
WhateverScriptYouTriedToRun.asp Line 0
Line 0 is the include for the global.asp file. Once the error occurs, it subsides after an unspecified amount of time, but then begins to recur. Given how the application is written, the functions it uses, and the "diagnostics" I've already done, it seems that a commonly used function is holding on to data, such as a recordset, and not releasing it properly. Other users then try to use the same function and eventually memory just fills up, causing the error. The only way for me to effectively clear the error is to restart IIS, recycle the App Pool, and restart the SQL Server services.
Needless to say, myself and my users are getting annoyed....
I can't pinpoint the error because the error message points at Line 0, and from there I have no idea where in the 20K lines of code it could be hanging up. Any thoughts or ideas on how to isolate this, or at least point me in the right direction to begin clearing it up? Is there a way for me to increase the "memory" size for VBScript? I know there is a limit, but is it set at, say, 512K, and can it be increased to 1GB?
Here are things I have tried:
Removing SQL Inline statements into Views
Going through several hundred scripts and ensuring that every OpenConnection & OpenRecordSet is followed by an appropriate Close.
Going through the Global File and commenting out any large SQL statements such as ApplicationLog (A function that writes the executed query into a table).
Some smaller script edits.
Common Memory Leak
You say you are closing all recordsets and connections which is good.
But are you deleting objects?
For example:
Set adoCon = Server.CreateObject("ADODB.Connection")
Set rsCommon = Server.CreateObject("ADODB.Recordset")
'Do query stuff
'You do this:
rsCommon.Close
adoCon.Close
'But do you do this?
Set adoCon = Nothing
Set rsCommon = Nothing
No garbage collection in classic ASP, so any objects not destroyed will remain in memory.
Also, ensure your closes/nothings are run in every branch. For example:
adoCon.Open
rsCommon.Open strSQL, adoCon

'Sql query
myData = rsCommon("condition")
if (myData) then
    response.write("ok")
else
    response.redirect("error.asp")
end if

'close
rsCommon.Close
adoCon.Close
Set adoCon = Nothing
Set rsCommon = Nothing
Nothing is closed or destroyed before the redirect, so memory is only freed some of the time: not all branches of the logic lead to the proper cleanup.
Better Design
Also, unfortunately, it sounds like the website wasn't designed well. I always structure my classic ASP like this:
<%
Option Explicit
'Declare all vars
Dim this
Dim that
Dim i 'loop counter used below
'Open connections
Set adoCon...
adocon.open()
'Fetch required data
rscommon.open strSQL, adoCon
this = rsCommon.getRows()
rsCommon.close
'Fetch something else
rscommon.open strSQL, adoCon
that = rsCommon.getRows()
rsCommon.close
'Close connections and drop objects
adoCon.close
set adoCon = nothing
set rscommon = nothing
'Process redirects
if(condition) then
response.redirect(url)
end if
%>
<html>
<body>
<%
'Use data
for i = 0 to ubound(this, 2)
response.write(this(0, i) & " " & this(1, i) & "<br />")
next
%>
</body>
</html>
Hope some of this helped.
Have you looked at using a memory monitoring tool to see how much memory fragmentation is happening? My guess at a possible cause is that a large object is being created, but there isn't enough room in memory to store it as one contiguous chunk. Imagine needing room for an object that takes 100 MB: while there may be several hundred megabytes free, if the largest contiguous chunk is 90 MB, it doesn't fit.
Debug Diagnostic Tool v1.1 is one such tool, and Bernard's articles may help in understanding how to use it.
Another thought is the question of how much string concatenation there is in the code. I remember that where I used to work we had problems with lots of string concatenation operations sucking up memory; that may be another thing to consider.
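If concatenation turns out to be the culprit, a common classic ASP mitigation is to collect the pieces in an array and Join them once instead of growing one string in a loop; a minimal sketch (the loop body is made up for illustration):

<%
Dim parts(9999), i
For i = 0 To 9999
    parts(i) = "<tr><td>" & i & "</td></tr>" 'each piece stays small
Next
Response.Write Join(parts, vbCrLf) 'one big allocation instead of thousands
%>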
Yeah, I could see some shock at that kind of number the first few times you see it but then if you understand what the code is doing it may make sense for why so much space gets reserved right off the bat at times.
I haven't used that debug tool specifically, but I did have a tool that took a snapshot of memory when pages hung, so I can't say whether the tool has a performance impact or not. Of course, in my case I used a similar tool back in 2004, so it has been a few years since I've had to research this kind of issue.
Just going to throw this in here, but this problem has taken a long time to solve. Here's a breakdown of what we did:
1. We took all the inline SQL and made SQL Views; every SELECT statement is now handled with a VIEW first.
2. I took every single SQL INSERT and UPDATE (as much as I could without breaking the system) and put them into Stored Procedures.
3. Went through several thousand scripts and ensured that variables were properly disposed of, that every DB Open Connection was followed correctly by a Close Connection, and the same with Open/Close RecordSet.
#2 was the one item that really made the biggest difference.
One of the slow killers was doing something like:
ID = Request.QueryString("ID")
at the top of the page. Before redirecting, or closing a page, there is always a:
Set ID = Nothing
or the complete removal of the reference.
I have an ASP.NET site (VB.NET) that I'm trying to clean up. When it was originally created it was written with no error handling, and I'm trying to add it in to improve the User Experience.
Try
If Not String.IsNullOrEmpty(strMfgName) And Not String.IsNullOrEmpty(strSortType) Then
If Integer.TryParse(Request.QueryString("CategoryID"), i) And String.IsNullOrEmpty(Request.QueryString("CategoryID")) Then
MyDataGrid.DataSource = ProductCategoryDB.GetMfgItems(strMfgName, strSortType, i)
Else
MyDataGrid.DataSource = ProductCategoryDB.GetMfgItems(strMfgName, strSortType)
End If
MyDataGrid.DataBind()
If CType(MyDataGrid.DataSource, DataSet).Tables("Data").Rows.Count > 0 Then
lblCatName.Text = CType(MyDataGrid.DataSource, DataSet).Tables("Data").Rows(0).Item("mfgName")
End If
If MyDataGrid.Items.Count < 2 Then
cboSortTypes.Visible = False
table_search.Visible = False
End If
If MyDataGrid.PageCount < 2 Then
MyDataGrid.PagerStyle.Visible = False
End If
Else
lblCatName.Text &= "<br /><span style=""fontf-size: 12px;"">There are no items for this manufacturer</span>"
MyDataGrid.Visible = False
table_search.Visible = False
End If
Catch
lblCatName.Text &= "<br /><span style=""font-size: 12px;"">There are no items for this manufacturer</span>"
MyDataGrid.Visible = False
table_search.Visible = False
End Try
Now, this is trying to avoid generating a 500 error by catching exceptions. There can be three items on the query string, but only two matter here. In my test environment and in Visual Studio when I run this site, it doesn't matter if that item is on the query string. In production, it does matter. If that third item isn't present (SubCategoryID) on the query string, then the "There are no items for this manufacturer" displays instead of the data from the database.
In the two different environments I am seeing two different code execution paths, despite the same URLs and the same code base.
The site is running on Server 2003 with IIS 6.
Thoughts?
EDIT:
In response to the answer below: I doubt it's a connection error (though I see what you're getting at), as when I add the SubCategoryID to the query string, the site works correctly (displaying data from the database).
Also, please let me know if you have any suggestions for how to test this scenario without deploying the code back to production (it's been rolled back).
I think you should try to print out the exception details in your catch block to see what the problem is. It could be anything, for example a connection error to your database.
The error could be anything, and you should definitely consider printing this out or logging it somewhere, rather than making the assumption that there's no data. You're also outputting the same error message to the UI for two different code paths, which makes things harder to debug, especially without knowing if an exception occurred, and if so, what it was.
Generally, it's also better not to have a catch for all exceptions in cases like this, especially without logging the error. Instead, you should catch specific exceptions and handle these appropriately, and any real exceptions can get passed up the stack, ideally to a global error handler which can log it and/or send out some kind of error notification.
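As a sketch of that shape (BindGrid is a hypothetical helper standing in for the databinding code above, and Trace logging is just one option):

Try
    BindGrid() ' hypothetical helper wrapping the databinding code above
Catch ex As System.Data.SqlClient.SqlException
    ' Expected failure mode (bad query/connection): log it, show a friendly message.
    System.Diagnostics.Trace.TraceError("GetMfgItems failed: " & ex.ToString())
    lblCatName.Text &= "<br />There was a problem loading this manufacturer."
Catch ex As Exception
    ' Unexpected: log it and rethrow so a global error handler can deal with it.
    System.Diagnostics.Trace.TraceError("Unexpected error: " & ex.ToString())
    Throw
End Try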
I discovered the reason yesterday. In short, when I copied my files from my computer into my dev-test environment, I missed a file, which ironically caused it to work rather than not. So in the end it would have functioned the same in both environments.