Is there any way to check which query is so CPU-intensive in the _sqlsrv2 process?
Something that gives me information about the query being executed in that process at that moment.
Is there any way to terminate that query without killing the _sqlsrv2 process?
I cannot find any official materials on the subject.
Thank you for any help.
You could look into client database-request caching.
The code examples below assume you have ABL access to the environment. If not, you will have to use SQL instead, but it shouldn't be too hard to "translate" the code below.
I haven't used this a lot myself, but I wouldn't be surprised if it has some impact on performance.
You need to start caching in the active connection. This can be done in the connection itself or remotely via the VST tables (as long as your remote session is connected to the same database), so you need to be able to identify your connection. This can be done via the process ID.
Generally, this is how to enable the caching:
/* "_myconnection" is your current connection. You shouldn't do this */
FIND _myconnection NO-LOCK.
FIND _connect WHERE _connect-usr = _myconnection._MyConn-userid.
/* Start caching */
_connect._Connect-CachingType = 3.
DISPLAY _connect WITH FRAME x1 SIDE-LABELS WIDTH 100 1 COLUMN.
/* End caching */
_connect._Connect-CachingType = 0.
You need to identify your process first, via top or another program.
Then you can do something like:
/* Assuming pid 21966 */
FIND FIRST _connect NO-LOCK WHERE _Connect._Connect-Pid = 21966 NO-ERROR.
IF AVAILABLE _Connect THEN
DISPLAY _connect.
You could also look at the _Connect-Type. It should be 'SQLC' for SQL connections.
FOR EACH _Connect NO-LOCK WHERE _Connect._connect-type = "SQLC":
DISPLAY _connect._connect-type.
END.
Best of all would be to do this in a separate environment. If you can't, at least try it in a test environment first.
Here's a good guide.
You can use a Select like this:
select
    c."_Connect-type",
    c."_Connect-PID"        as 'PID',
    c."_connect-ipaddress"  as 'IP',
    c."_Connect-CacheInfo"
from
    pub."_connect" c
where
    c."_Connect-CacheInfo" is not null
But first you need to enable the connection cache; follow the ABL example above.
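For reference, a minimal ABL sketch that just combines the snippets from the answer above (21966 is only the example PID used earlier) and switches caching on for the connection you identified:
FIND FIRST _Connect WHERE _Connect._Connect-Pid = 21966 NO-ERROR.
IF AVAILABLE _Connect THEN
    _Connect._Connect-CachingType = 3. /* start caching; set back to 0 to stop */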
I've only recently started to work with KaaIoT, and I am wondering if there is another way to store a log bucket to the server.
/* some headers */
static void main_callback(void *context)
{
kaa_user_log_record_t *log_record = kaa_logging_time_collection_create();
log_record->test_time = kaa_string_copy_create("some_time");
kaa_logging_add_record(kaa_client_get_context(context)->log_collector, log_record, NULL);
}
/* some other configuration */
error = kaa_client_start(kaa_client, main_callback, kaa_client, 5);
When I execute this code, the string "some_time" will be stored to the server every 5 seconds.
I was wondering if there is another way to do this, such as uploading the log to the server when I press the Enter key? But I can't seem to find a command for this.
To my understanding, kaa_logging_add_record just adds the record to the storage bucket, where it waits to be sent according to the logging strategy you have defined (https://kaaproject.github.io/kaa/autogen-docs/client-c/v0.10.0/kaa__logging_8h.html#af0fadc09a50f5e38603271a08c581417). The 5-second parameter in kaa_client_start is only the delay used to cycle the callback function. If you want to register an event, you first have to store it in the log bucket, along with the timestamp if you want to record at what time it happened. If you want to be notified at the moment it happens, then I think you should use Notifications or Events. I am also scratching my head at something similar, and I wonder if there is a better way.
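If you only want the record to be added when you press Enter, something like this sketch might do it; it reuses the callback from your question and simply blocks on getchar() until a newline arrives (that blocking read is my assumption, and it will stall the callback while it waits). Whether the bucket is then uploaded immediately still depends on the log upload strategy you have configured.
#include <stdio.h> /* for getchar() */
/* Sketch: only add a log record after the user presses Enter.
   The Kaa types and calls are the same ones used in the question. */
static void main_callback(void *context)
{
    if (getchar() == '\n') { /* blocks until Enter is pressed */
        kaa_user_log_record_t *log_record = kaa_logging_time_collection_create();
        log_record->test_time = kaa_string_copy_create("some_time");
        kaa_logging_add_record(kaa_client_get_context(context)->log_collector, log_record, NULL);
    }
}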
I connected to clickhouse with tableau.
A query like this
select * from table_name limit 1
returns the fields of the table, even though it should return rows.
If I try
select subs_key from table_name limit 1
And click preview results
I get the error from above (except that cnt is replaced with subs_key or whatever field I try to select).
How can I actually view table data?
Edit
There is a connection to the db, but no table is shown in available schemas.
EDIT 2
I managed to connect to and get data from an Oracle and a MySQL database, but while I am connected to ClickHouse, I can't see any data.
Don't quote me on this, but I believe Tableau has no official support for ClickHouse; at least I could not find anything to contradict this. There are tons of people asking for it but nothing concrete.
There might be some sort of beta integration that's not yet stable, hence your problem, but this is just blind guessing.
What I can recommend, if you really need a UI and can't just use the command-line client, is using Tabix:
https://github.com/smi2/tabix.ui
It's fully open source for now and should be pretty easy and straightforward to learn. There might be the odd bit of Russian here and there, but I believe it's getting debugged and translated at quite a good pace.
I get the same error message when I use DBeaver.
SQL Error [47]: ClickHouse exception, Code: 47, e.displayText() =
DB::Exception: Unknown identifier: default_type, e.what() = DB::Exception
If it's not a coincidence, then it's a JDBC driver bug.
I have a simple BizTalk 2013 R2 application that imports a file into a table and then executes a long-running post-import process (via stored procedures).
Symptoms when importing 2 files:
The import of the first file has no issues.
Then the post-processing starts (slow, as expected, due to the long-running stored procedure).
Then, if you drop a second file, the first file's post-processing disappears and the second import takes place.
Then they start alternating back and forth (you can see the post-processing field being populated as expected).
Both send ports are active; sometimes you see a third one dehydrated.
Since there are no errors reported, this must be a setting, or do I need to move the post-processing out of the long-running transaction?
Details:
Orchestration Transaction Type is long running
The timeout for the post-processing send port is 59 minutes.
The post processing stored procedure invokes child stored procedures.
No errors are reported anywhere
Both send ports have ordered delivery checked
Post Processing Stored Procedures:
CREATE PROCEDURE [sync].[MPostProcessing]
    @Code NVARCHAR(2)
AS
    ----
    ---- 2. Normalize Address
    ----
    IF @Code = '99'
        EXEC sync.AElBatch @Code = @Code
GO
CREATE PROCEDURE [sync].[AElBatch]
    @Code VARCHAR(2)
AS
    DECLARE @ID AS INT
    WHILE EXISTS ( SELECT ID
                   FROM sync.[mtable]
                   WHERE Code = @Code
                     AND PostProcessingDone = 0 )
    BEGIN
        SELECT TOP 1
            @ID = ID
        FROM sync.[mtable]
        WHERE Code = @Code
          AND PostProcessingDone = 0
        EXEC sync.PParse @ID = @ID
        UPDATE sync.[mtable]
        SET PostProcessingDone = 1
        WHERE Code = @Code
          AND ID = @ID
    END
And then the PParse stored procedure does more (all working, no errors reported).
Image of BizTalk Server Administration Console
This is too long for a comment, but I'm still not 100% sure of your problem. Either way:
It seems like you likely have some issues with your SPs. Refactor them to use set-based queries instead of WHILE loops (or cursors, if you have any). Forcing SQL Server to process each individual scalar variable as a separate call prevents it from fully optimizing whatever it's doing in sync.PParse; pass it a table variable or something if you need to, so that it can parallelize properly and stop holding things up so badly (see the sketch below).
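As a rough illustration only (the table type sync.IdList and a table-valued @Ids parameter on sync.PParse are assumptions, not your existing objects), a set-based AElBatch could look something like this:
CREATE TYPE sync.IdList AS TABLE (ID INT PRIMARY KEY)
GO
CREATE PROCEDURE [sync].[AElBatch]
    @Code VARCHAR(2)
AS
BEGIN
    DECLARE @Ids sync.IdList
    -- Collect every pending row once instead of looping one ID at a time
    INSERT INTO @Ids (ID)
    SELECT ID
    FROM sync.[mtable]
    WHERE Code = @Code
      AND PostProcessingDone = 0
    -- One call that processes the whole set (requires PParse to accept @Ids READONLY)
    EXEC sync.PParse @Ids = @Ids
    -- Flag everything that was just processed in a single update
    UPDATE m
    SET PostProcessingDone = 1
    FROM sync.[mtable] AS m
    JOIN @Ids AS i ON i.ID = m.ID
    WHERE m.Code = @Code
END
GO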
It's quite possible that sync.PParse has a bug in it that is reading data it shouldn't. These lines in particular from AElBatch are troubling:
SELECT TOP 1
    @ID = ID
FROM sync.[mtable]
WHERE Code = @Code
  AND PostProcessingDone = 0
You probably want to add a batch identifier in there of some sort so that PostProcessing#2 doesn't start picking up what was really meant for PostProcessing#1.
Double check what's going on with sp_who2, see if things are getting blocked. It's likely that something is going on there, even if no errors are surfacing properly.
In the end, if none of that works, you might have to make them into a single SP that BizTalk calls, so that Ordered Delivery keeps both jobs in the same queue rather than allowing file load #2 to complete before post-processing job #1 is done.
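Purely as a sketch of that last idea (sync.MImport here stands in for whatever your import step calls today; it is not from your post):
CREATE PROCEDURE [sync].[ImportAndPostProcess]
    @Code VARCHAR(2)
AS
BEGIN
    -- Step 1: the existing import (hypothetical wrapper target)
    EXEC sync.MImport @Code = @Code
    -- Step 2: the existing post-processing, now forced to finish before
    -- the next file's call can be dequeued by Ordered Delivery
    EXEC sync.MPostProcessing @Code = @Code
END
With a single entry point like that, file load #2 cannot overtake post-processing job #1.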
I currently have an application using Flask-SQLAlchemy. My model is connected to a PostgreSQL database, and now I would like to write unit tests (using nose). I was told to use SQLite to create a new database for testing, and after a lot of searching (and looking at the testing section on the Flask-SQLAlchemy website) I'm still confused as to how to do it. Each class in my model.py looks something like the following:
db = SQLAlchemy(app)

class Prod(db.Model):
    __tablename__ = 'prod'

    id = db.Column(db.Integer, primary_key=True)
    desc = db.Column(db.String)

    def __init__(self, id, desc):
        self.id = id
        self.desc = desc
My config.py:
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres://name:pass@server/db'
and I would like to test my insert functions in a new file by setting up and tearing down a new database for each test. If anyone can give me some example code that would be great. Thanks!
I can't answer your specific question, but I will provide some general advice:
You will find that setting up and tearing down the complete database for each test will be too slow. Imagine the future, when you might have hundreds of tests, or even thousands.
The approach we take is:
For testing purposes we have a database populated with test data. We have a script which creates a fresh database and populates it with this test data.
We run this script prior to running our test suite. All tests can assume this data exists.
Each test may create additional records if necessary, but it is its responsibility to undo any changes it makes (delete new records, undo changes); in other words, to leave the database in the same state it was in before the test began. This prevents tests from interfering with each other (a minimal sketch of this pattern follows below).
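Here is a minimal sketch of that pattern with unittest (nose will pick it up), assuming the app, db and Prod objects from your config.py and model.py, and a test database that the load script has already populated; the import paths and the id 9999 are placeholders rather than anything from your code:
import unittest

from config import app          # assumed import paths
from model import db, Prod


class ProdInsertTest(unittest.TestCase):
    def setUp(self):
        self.ctx = app.app_context()
        self.ctx.push()
        self.created_ids = []               # remember what this test adds

    def tearDown(self):
        # Undo this test's changes so the shared test data is left untouched
        for prod_id in self.created_ids:
            prod = Prod.query.get(prod_id)
            if prod is not None:
                db.session.delete(prod)
        db.session.commit()
        self.ctx.pop()

    def test_insert_prod(self):
        prod = Prod(9999, 'test product')   # id chosen not to clash with the test data
        db.session.add(prod)
        db.session.commit()
        self.created_ids.append(prod.id)

        self.assertIsNotNone(Prod.query.get(9999))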
In a project I manage we have a test suite of 1070 tests which runs in about 5 minutes using this approach.
What if we had taken your approach? Let's assume that 50% of these tests actually exercise the database (and need a fresh reload) and that each reload takes 20 seconds. That's 1070 * 0.50 * 20 seconds / 3600 = 2.97 hours. Oops - that's far too slow to be useful.
Even at a much smaller scale though, you'll be much happier if your test suite runs in 1 minute instead of 20 minutes.
I have a classic ASP CRM that was built by a third party company. Currently, I have access to the source code and am able to make any changes required.
Randomly throughout the day, usually after some prolonged usage by users, most of my pages start getting an Out of Memory error.
The way the application is built, all the pages and scripts pull core functions from a Global.asp file. That file includes other global files as well, but the error presented shows:
Out Of Memory
WhateverScriptYouTriedToRun.asp Line 0
Line 0 is the include for the Global.asp file. Once the error occurs, it subsides after an unspecified amount of time but then begins to recur. Given how the application is written, the functions it uses, and the "diagnostics" I've already done, it seems that a commonly used function is holding on to data such as a recordset and not releasing it properly. Other users then try to use the same function, and eventually it just fills up, causing the error. The only way for me to effectively clear the error is to restart IIS, recycle the app pool, and restart the SQL Server services.
Needless to say, myself and my users are getting annoyed....
I can't pinpoint the error because the message only points to Line 0, and from there I have no idea where in the 20K lines of code it could be hanging up. Any thoughts or ideas on how to isolate this, or at least point me in the right direction to begin clearing it up? Is there a way for me to increase the "memory" size for VBScript? I know there is a limitation, but is it set at, say, 512K, and can you increase it to 1GB?
Here are things I have tried:
Moving inline SQL statements into views.
Going through several hundred scripts and ensuring that every OpenConnection & OpenRecordSet is followed by an appropriate Close.
Going through the Global File and commenting out any large SQL statements such as ApplicationLog (A function that writes the executed query into a table).
Some smaller script edits.
Common Memory Leak
You say you are closing all recordsets and connections which is good.
But are you deleting objects?
For example:
Set adoCon = Server.CreateObject("ADODB.Connection")
Set rsCommon = Server.CreateObject("ADODB.Recordset")
'Do query stuff
'You do this:
rsCommon.Close
adoCon.Close
'But do you do this?
Set adoCon = Nothing
Set rsCommon = Nothing
No garbage collection in classic ASP, so any objects not destroyed will remain in memory.
Also, ensure your closes/nothings are run in every branch. For example:
adoCon.Open
rsCommon.Open 'etc
'Sql query
myData = rsCommon("condition")
If (myData) Then
    Response.Write("ok")
Else
    Response.Redirect("error.asp")
End If
'close
rsCommon.Close
adoCon.Close
Set adoCon = Nothing
Set rsCommon = Nothing
Nothing is closed or destroyed before the redirect, so memory is only freed some of the time; not all branches of the logic lead to the proper memory clearance.
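One way to handle it, reusing the same names from the snippet above (just a sketch of the point being made): do all the closing and destroying first, then branch.
'Clean up before branching so every code path releases its objects
myData = rsCommon("condition")
rsCommon.Close
adoCon.Close
Set rsCommon = Nothing
Set adoCon = Nothing

If (myData) Then
    Response.Write("ok")
Else
    Response.Redirect("error.asp")
End If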
Better Design
Also, unfortunately, it sounds like the website wasn't designed well. I always structure my classic ASP like this:
<%
Option Explicit
'Declare all vars
Dim this
Dim that
Dim i
Dim strSQL
Dim adoCon
Dim rsCommon
'Open connections
Set adoCon...
adocon.open()
'Fetch required data
rscommon.open strSQL, adoCon
this = rsCommon.getRows()
rsCommon.close
'Fetch something else
rscommon.open strSQL, adoCon
that = rsCommon.getRows()
rsCommon.close
'Close connections and drop objects
adoCon.close
set adoCon = nothing
set rscommon = nothing
'Process redirects
if(condition) then
response.redirect(url)
end if
%>
<html>
<body>
<%
'Use data
For i = 0 To UBound(this, 2)
    Response.Write(this(0, i) & " " & this(1, i) & "<br />")
Next
%>
</body>
</html>
Hope some of this helped.
Have you looked at using a memory-monitoring tool to see how much memory fragmentation is happening? My guess at a possible cause is that some object is trying to be created but there isn't enough room in memory to store it as one contiguous chunk. Imagine needing room to store an object that would take 100 MB: while there may be several hundred megabytes free, if the largest contiguous chunk is 90 MB then it doesn't fit.
Debug Diagnostic Tool v1.1 would be one such tool, and Bernard's articles may help in understanding how to use it.
Another thought: how much string concatenation is there in the code? I remember that where I used to work we had problems with lots of string concatenation operations that sucked up memory, so that may be another idea to consider.
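To illustrate the concatenation point (an illustrative sketch only, not code from the CRM): repeated "&" on a growing string re-allocates the whole string on every pass, whereas collecting the pieces in an array and calling Join once is far gentler on memory.
Dim i, html
html = ""
For i = 1 To 10000
    html = html & "<tr><td>" & i & "</td></tr>" 'copies the whole string each time
Next

'Lighter alternative: build the pieces, then Join once
Dim parts(9999)
For i = 0 To 9999
    parts(i) = "<tr><td>" & (i + 1) & "</td></tr>"
Next
html = Join(parts, "")
Response.Write html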
Yeah, I could see some shock at that kind of number the first few times you see it, but once you understand what the code is doing it may make sense why so much space gets reserved right off the bat at times.
I haven't used that debug tool specifically, but I did have a tool that took a snapshot of memory when pages were hung, so I couldn't tell whether the tool had a performance impact or not. Of course, in my case I used a similar tool back in 2004, so it has been a few years since I've had to research this kind of issue.
Just going to throw this in here, but this problem has taken a long time to solve. Here's a breakdown of what we did:
We took all the inline SQL and made SQL views; every SELECT statement is now handled with a VIEW first.
I took every single SQL INSERT and UPDATE (as many as I could without breaking the system) and put them into stored procedures.
Item #2 was the one that really made the biggest difference.
Went through several thousand scripts and ensured that variables were properly disposed of, and that every DB open connection was followed by a matching close, and the same with open/close on each RecordSet.
One of the slow killers was doing something like:
ID = Request.QueryString("ID")
at the top of the page. Before redirecting, or closing a page, there is always a:
Set ID = Nothing
or the complete removal of the reference.