Is my query getting cached? - railo

I'm used to Adobe ColdFusion and have been caching queries using cachedafter with a "simple" date:
<cfparam name="application.icons_last_changed" default="#now()#">
<cfquery name="get_icons" cachedafter="02/02/1978" datasource="#application.datasources.main#">
SELECT icon_id, icon_name
FROM REF_Icon
WHERE #application.icons_last_changed#=#application.icons_last_changed#
ORDER BY sort_order
</cfquery>
I transitioned my project from ColdFusion/MSSQL to Railo/PostgreSQL. Today, something in the Railo diagnostics caught my eye.
I'm used to seeing "get_ref_icon (Datasource=Workstream, Time=cached, Records=39) in /path/qry_get_ref_icon.cfm"
but in Railo I see "get_ref_icon (Datasource=Workstream, Time=0.974 ms, Records=39) in /path/qry_get_ref_icon.cfm".
Thinking that perhaps the simple date value ("02/02/1978") for cachedafter isn't supported by Railo, I tried setting the date with createodbcdatetime('1978-02-02 16:37:00'), but that didn't seem to make a difference.
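For reference, that attempt looked roughly like this (my reconstruction; same query as above, with only the cachedafter value changed):
<cfquery name="get_icons" cachedafter="#createODBCDateTime('1978-02-02 16:37:00')#" datasource="#application.datasources.main#">
SELECT icon_id, icon_name
FROM REF_Icon
WHERE #application.icons_last_changed#=#application.icons_last_changed#
ORDER BY sort_order
</cfquery>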
Of course, 0.974 ms is such a small time that perhaps the query is cached, and Railo just isn't as explicit as ColdFusion.
Is my query getting cached, or am I going about it the wrong way?

No, the query is not getting cached.
I followed Busches's suggestion and reviewed the results from <cfdump var="#get_icons#" />:
Query
Template:/super/double/secret/path/qry_get_ref_icon.cfm
Execution Time (ms):0.624
Recordcount:39
Cached:No <--UH OH, SPAGHETTI-O's
Lazy:No
SQL:
SELECT icon_id, icon_name
FROM REF_Icon
WHERE active_ind=1 /*{ts '1978-02-02 16:37:00'}*/
AND {ts '2013-02-20 22:25:14'}={ts '2013-02-20 22:25:14'}
ORDER BY sort_order
Because I'm a distrustful type, I also followed Adam Cameron's suggestion and changed some data then reran the query. Query results updated, so no caching.
I've reported the issue to the fine Railo folks: https://issues.jboss.org/browse/RAILO-2318

How can I filter meetings which are going to start in a given time range

I am working on a specific requirement to filter out any meetings that are going to start in the next 15 minutes for a given calendar.
I can see that there is a timeMax query option which will give events starting before a given time, but the problem I am facing is that I am also getting older events (which finished in the past). Is there any way to get records only from now to the next 15 minutes?
I tried querying with syncToken, but I guess that doesn't work with timeMax, so I am not able to get just the delta and instead get all the events.
Calendar Event List API
As suggested in the comments, you could use timeMin and timeMax together. It should be something similar to:
timeMin = 2022-12-27T15:30:00+01:00
timeMax = 2022-12-27T15:45:00+01:00
Notes:
Use the query parameters from above and make sure each value is an RFC3339 timestamp.
This might be the only available option when utilizing events.list: filter by the 15-minute window and check each event's status in a loop, which could potentially hit a quota. A status-check sketch follows the code below.
// Define the 15-minute window described above
var startDate = new Date();
var maxDate = new Date(startDate.getTime() + 15 * 60 * 1000);
var request = Calendar.events.list({
    'calendarId': calendar_id,
    'singleEvents': true,
    'orderBy': 'startTime',
    'timeMin': startDate.toISOString(),
    'timeMax': maxDate.toISOString()
});
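To then keep only meetings that are genuinely upcoming, you can loop over the response and check each event's status. A minimal sketch, assuming a gapi-style client where the request object above exposes execute() (adjust to whichever client library you use):
request.execute(function(resp) {
    // resp.items follows the Events: list response shape
    (resp.items || []).forEach(function(ev) {
        if (ev.status === 'confirmed') {
            console.log(ev.summary, ev.start.dateTime || ev.start.date);
        }
    });
});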
Calendar API limitation
If these suggestions or options are not enough, or can be considered workarounds due to the limitations, you could always request a feature in the Issue Tracker.
References
Events: list
Issue Tracker

HTTP status code: 404 Received error: Code: 47, e.displayText() = DB::Exception: Unknown identifier: TableauSQL.cnt, e.what() = DB::Exception

I connected to ClickHouse with Tableau.
A query like this
select * from table_name limit 1
returns the fields of the table, even though it should return rows.
If I try
select subs_key from table_name limit 1
And click preview results
I get the error from above (except cnt is replaced with subs_key or whatever field I try to select).
How can I actually view table data?
Edit
There is a connection to the db, but no table is shown in available schemas.
EDIT 2
I managed to connect and get data from an Oracle and a MySQL database, but while I am connected to ClickHouse, I can't see any data.
Don't quote me on this, but I believe Tableau has no official support for ClickHouse; at least I could not find anything to contradict this, just tons of people asking for it and nothing concrete.
There might be some sort of beta integration that's not yet stable, hence your problem, but this is just blind guessing.
What I can recommend, if you really need a UI and can't just use the command-line client, is tabix:
https://github.com/smi2/tabix.ui
It's fully open source for now and should be pretty easy and straightforward to learn. There might be the odd bit of Russian here and there, but I believe it's getting debugged and translated at quite a good pace.
I get the same error message when I use DBeaver.
SQL Error [47]: ClickHouse exception, Code: 47, e.displayText() =
DB::Exception: Unknown identifier: default_type, e.what() = DB::Exception
If it's not a coincidence, then it's a JDBC driver bug.

How to set local timezone in Sails.js or Express.js

When I create or update a record in Sails, it writes this as updatedAt:
updatedAt: 2014-07-06T15:00:00.000Z
but I'm in GMT+2 (in this season), and the update was performed at 16:00.
I have the same problem with all datetime fields declared in my models.
How can I set the right timezone in Sails (or, failing that, in Express)?
The way I handled the problem after hours of research:
Put
process.env.TZ = 'UTC'; //whatever timezone you want
in config/bootstrap.js
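For context, a minimal sketch of what that file could look like (the bootstrap signature matches the Sails 0.x convention; everything else is placeholder):
// config/bootstrap.js
module.exports.bootstrap = function(cb) {
    process.env.TZ = 'UTC'; // whatever timezone you want
    // ...any other one-time app setup...
    cb(); // tell Sails it can finish lifting
};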
I solved the problem by setting the MySQL connection options to change the timezone to UTC in config/connections.js, like this:
devMysqlServer: {
    adapter: 'sails-mysql',
    host: '127.0.0.1',
    user: 'root',
    password: '***',
    database: '**',
    timezone: 'utc'
},
Trying to solve your problem by setting the timezone on your server is a bit short-sighted. What if you move? Or someone in a different country accesses your application? The important thing is that the timestamps in your database have a timezone encoded in them, so that you can translate to the correct time on the front end. That is, if you do:
new Date('2014-07-06T15:00:00.000Z')
in your browser console, you should see it display the correct date and time for wherever you are. Sails automatically encodes this timestamp for you with the default updatedAt and createdAt fields; just make sure you always use a timezone when saving any custom timestamps to the database, and you should be fine!
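For instance, a quick sketch you can paste into the browser console (the exact output depends on your machine's timezone):
var d = new Date('2014-07-06T15:00:00.000Z');
d.toString(); // e.g. "Sun Jul 06 2014 17:00:00 GMT+0200" on a GMT+2 machine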
The best architecture planning here, IMO, is to continue using Sails.js ISO date formatting. When your users load your website/app, the ISO date will be converted to their client/browser timezone, which is usually set at the OS level.
Here's an example you can test this out with. Open a browser console and run new Date().toISOString() and look at the time it sets. It's going to be based off the ISO 8601 spec (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toISOString).
Now, change your system timezone, or simply change the hour on the clock, and save (you shouldn't have to reload if you're using the Chrome console). Run new Date().toISOString() in the console again and you'll get a time adjusted to the change you just made.
If you'd like to keep proving to yourself that the Sails.js time is appropriate to use, run Moment.js on an ISO date stored in your database (created by the Waterline ORM), like so: moment("2016-02-05T22:36:48.800Z").fromNow(). You'll notice the time is relative to your system time.
I've come to grips with not setting a timezone at the app level (I see why the Sails authors did it that way), but I've been having a rough time performing a simple date-match query. I'd assume that if you create a record using the default blueprint methods (this one containing an extra datetime field over the defaults), passing in a date, you'd be able to pass the same date into a get query and retrieve the same record.
For example, let's say the datetime field is called "specialdate". If I create a new record through the API with "specialdate" equaling "06-09-2014" (ignoring time), I haven't been able to run a find query in which I can pass in "06-09-2014" and get that record back. Greater-than queries work fine (if I do a find for a date greater than that). I'm sure it's a timezone offset thing, but I haven't been able to come up with a solution.
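One hedged workaround, building on the observation above that greater-than queries work: match the whole day as a range instead of an exact datetime. A minimal sketch (the model name Booking and the field specialdate are placeholders from the example):
// Find records whose specialdate falls anywhere on 2014-06-09 (UTC)
var start = new Date('2014-06-09T00:00:00.000Z');
var end = new Date('2014-06-10T00:00:00.000Z');
Booking.find({ specialdate: { '>=': start, '<': end } })
    .exec(function(err, records) {
        if (err) { return console.error(err); }
        console.log(records);
    });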

VBScript Out Of Memory Error

I have a classic ASP CRM that was built by a third party company. Currently, I have access to the source code and am able to make any changes required.
Randomly throughout the day, usually after some prolonged usage by users, most of my pages start getting an Out of Memory error.
The way the application is built, all the pages and scripts pull core functions from a Global.asp file. That file includes other global files as well, but the error presented shows
Out Of Memory
WhateverScriptYouTriedToRun.asp Line 0
Line 0 is the include for the global.asp file. Once the error occurs, the occurrences subside after an unspecified amount of time, but then they begin to recur. Given how the application is written, the functions it uses, and the "diagnostics" I've already done, it seems a commonly used function is holding on to data such as a recordset and not releasing it properly. Other users then try to use the same function, and eventually memory fills up, causing the error. The only way for me to effectively clear the error is to restart IIS, recycle the app pool, and restart the SQL Server services.
Needless to say, myself and my users are getting annoyed....
I can't pinpoint the error because the message points at Line 0, and from there I have no idea where in the 20K lines of code it could be hanging up. Any thoughts or ideas on how to isolate it, or at least a pointer in the right direction to begin clearing this up? Is there a way for me to increase the "memory" size for VBScript? I know there is a limit, but is it set at, say, 512K, and can you increase it to 1GB?
Here are things I have tried:
Removing SQL Inline statements into Views
Going through several hundred scripts and ensuring that every OpenConnection & OpenRecordSet is followed by an appropriate Close.
Going through the Global File and commenting out any large SQL statements such as ApplicationLog (A function that writes the executed query into a table).
Some smaller script edits.
Common Memory Leak
You say you are closing all recordsets and connections which is good.
But are you deleting objects?
For example:
Set adoCon = Server.CreateObject("ADODB.Connection")
Set rsCommon = Server.CreateObject("ADODB.Recordset")
'Do query stuff
'You do this:
rsCommon.close
adoCon.close
'But do you do this?
Set adoCon = nothing
Set rsCommon = nothing
No garbage collection in classic ASP, so any objects not destroyed will remain in memory.
Also, ensure your closes/nothings are run in every branch. For example:
adoCon.open
rsCommon.open strSQL, adoCon
'Sql query
myData = rsCommon("condition")
if(myData) then
    response.write("ok")
else
    response.redirect("error.asp")
end if
'close
rsCommon.close
adoCon.close
Set adoCon = nothing
Set rsCommon = nothing
Nothing is closed or destroyed before the redirect, so memory is only freed some of the time: not all branches of the logic lead to the proper cleanup.
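A sketch of the same branch with the cleanup done before the redirect (same objects as above):
if(myData) then
    response.write("ok")
else
    'clean up before leaving the page
    rsCommon.close
    adoCon.close
    Set rsCommon = nothing
    Set adoCon = nothing
    response.redirect("error.asp")
end if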
Better Design
Also unfortunately it sounds like the website wasn't designed well. I always structure my classic ASP as:
<%
Option Explicit
'Declare all vars
Dim this
Dim that
Dim i
'Open connections
Set adoCon...
adocon.open()
'Fetch required data
rscommon.open strSQL, adoCon
this = rsCommon.getRows()
rsCommon.close
'Fetch something else
rscommon.open strSQL, adoCon
that = rsCommon.getRows()
rsCommon.close
'Close connections and drop objects
adoCon.close
set adoCon = nothing
set rscommon = nothing
'Process redirects
if(condition) then
response.redirect(url)
end if
%>
<html>
<body>
<%
'Use data
For i = 0 To UBound(this, 2)
response.write(this(0, i) & " " & this(1, i) & "<br />")
next
%>
</body>
</html>
Hope some of this helped.
Have you looked at using a memory monitoring tool to see how much memory fragmentation is happening? My guess at a possible cause is that some object is trying to be created but there isn't enough room in memory to store it as one contiguous chunk. Imagine needing room to store an object that would take 100 MB: while there may be several hundred megabytes free, if the largest contiguous chunk is 90 MB, then it doesn't fit.
Debug Diagnostic Tool v1.1 is one such tool, and Bernard's articles may help in understanding how to use it.
Another thought is the question of how much string concatenation there is in the code. I remember that where I used to work, we had problems with lots of string concatenation operations sucking up memory; that may be another idea to consider.
Yeah, I could see some shock at that kind of number the first few times you see it, but if you understand what the code is doing, it may make sense why so much space gets reserved right off the bat at times.
I haven't used that debug tool specifically, but I did have a tool that took a snapshot of memory when pages were hung, so I couldn't tell whether the tool itself had a performance impact. Of course, in my case I used a similar tool in 2004, so it has been a few years since I've had to research this kind of issue.
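On the string-concatenation point above, a hedged illustration (the field name and sizes are made up): building a big string piece by piece reallocates and copies it on every pass, while collecting the pieces in an array and joining once is far gentler on memory.
'Heavy: html is reallocated and copied on every iteration
Dim html, parts, i
html = ""
Do While Not rs.EOF
    html = html & "<tr><td>" & rs("name") & "</td></tr>"
    rs.MoveNext
Loop

'Lighter: collect the pieces, then Join once
ReDim parts(4999)   'generous upper bound for this sketch; assumes under 5000 rows
i = 0
rs.MoveFirst        'rewind for the second pass (needs a scrollable cursor)
Do While Not rs.EOF
    parts(i) = "<tr><td>" & rs("name") & "</td></tr>"
    i = i + 1
    rs.MoveNext
Loop
If i > 0 Then ReDim Preserve parts(i - 1)
html = Join(parts, vbCrLf)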
Just going to throw this in here, but this problem has taken a long time to solve. Here's a breakdown of what we did:
We took all the inline SQL and made SQL Views, every SELECT statement is now handled with a VIEW first.
I took every single SQL INSERT and UPDATE (as much as I could without breaking the system) and put them into Stored Procedures.
The second item, moving the INSERTs and UPDATEs into stored procedures, was the one that really made the biggest difference.
Went through several thousand scripts, and ensured that variables were properly disposed of, and all the DB Open Connections were followed correctly with a Close Connection and same with Open/Close RecordSet.
One of the slow killers was doing something like:
ID = Request.QueryString("ID")
at the top of the page. Now, before redirecting or closing a page, there is always a:
Set ID = Nothing
or the reference is removed entirely.

Response Buffer Limit Exceeded

I am running a simple query to get data out of my database & display them. I'm getting an error that says Response Buffer Limit Exceeded.
Error is : Response object error 'ASP 0251 : 80004005'
Response Buffer Limit Exceeded
/abc/test_maintenanceDetail.asp, line 0
Execution of the ASP page caused the Response Buffer to exceed its configured limit.
I have also tried Response.Flush in my loop and response.buffer = false at the top of the page, but still I am not getting any data.
My database contains 5,600 records for that query. Please give me some steps or code to solve the issue.
I know this is way late, but for anyone else who encounters this problem: If you are using a loop of some kind (in my case, a Do-While) to display the data, make sure that you are moving to the next record (in my case, a rs.MoveNext).
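A minimal sketch of what that looks like (rs is an open ADO recordset; the field name is a placeholder). Without the MoveNext, the loop writes the same record forever and the response buffer fills up:
Do While Not rs.EOF
    Response.Write rs("some_field") & "<br />"
    rs.MoveNext   'without this line the loop never advances
Loop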
Here is what a Microsoft support page says about this:
https://support.microsoft.com/en-us/help/944886/error-message-when-you-use-the-response-binarywrite-method-in-iis-6-an.
But it’s easier in the GUI:
In Internet Information Services (IIS) Manager, click on ASP.
Change Behavior > Limits Properties > Response Buffering Limit from 4 MB to 64 MB.
Apply and restart.
The reason this is happening is because buffering is turned on by default, and IIS 6 cannot handle the large response.
In Classic ASP, at the top of your page, after <%@ Language="VBScript" %> add:
<%Response.Buffer = False%>
In ASP.NET, you would add Buffer="false" to your Page directive.
For example:
<%@ Page Language="C#" Buffer="false" %>
I faced the same kind of issue; my IIS version is 8.5. Increasing the Response Buffering Limit under ASP -> Limits Properties solved the issue.
In IIS 8.5, select your project; you can see the options on the right-hand side, and under IIS you can see the ASP option.
In the options window, increase the Response Buffering Limit to 40194304 (approximately 40 MB).
Navigate away from the option; at the top right you can see the Actions menu. Select Apply. That solved my problem.
If you are not allowed to change the buffer limit at the server level, you will need to use the <%Response.Buffer = False%> method.
HOWEVER, if you are still getting this error and have a large table on the page, the culprit may be the table itself. By design, some versions of Internet Explorer buffer the entire content between the table tags before rendering it to the page. So even if you are telling the page not to buffer the content, the table element may be buffered, causing this error.
Some alternate solutions may be to paginate the table results, but if you must display the entire table and it has thousands of rows, throw this line of code into the middle of the table-generation loop: <% Response.Flush %>. For speed considerations, you may also want to add a basic counter so that the flush only happens every 25 or 100 rows or so.
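A minimal sketch of that counter (the row-writing line and field name are placeholders):
rowCount = 0
Do While Not rs.EOF
    Response.Write "<tr><td>" & rs("some_field") & "</td></tr>"
    rowCount = rowCount + 1
    If rowCount Mod 100 = 0 Then Response.Flush   'flush every 100 rows
    rs.MoveNext
Loop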
Drawbacks of not buffering the output:
slowdown of overall page load
tables and columns will adjust their widths as content is populated (table appears to wiggle)
Users will be able to click on links and interact with the page before it is fully loaded. So if you have some javascript at the bottom of the page, you may want to move it to the top to ensure it is loaded before some of your faster moving users click on things.
See this KB article for more information http://support.microsoft.com/kb/925764
Hope that helps.
Thank you so much!
<%Response.Buffer = False%> worked like a charm!
My ASP/HTML table was returning a blank page at about 2,700 records. The following debugging lines helped expose the buffering problem: I replaced the Do While loop as follows and played with the limit numbers to see what was happening.
Replace
Do While Not rs.EOF
    'etc .... your block of code that writes the table rows
    rs.MoveNext
Loop
with
recCount = 0
Do While recCount < 2500 And Not rs.EOF
    'etc .... your block of code that writes the table rows
    recCount = recCount + 1
    rs.MoveNext
Loop
response.write "recCount = " & recCount
Raise or lower the 2500 to see whether it is a buffer problem. For my recordset, I could see that the blank page (blank table) was happening at about 2,700 records. Good luck to all, and thank you again for solving this problem! Such a simple, great solution!
You can increase the limit as follows:
Stop IIS.
Locate the file %WinDir%\System32\Inetsrv\Metabase.xml
Modify the AspBufferingLimit value. The default value is 4194304, which is about 4 MB.
To change it to 20 MB, for example, use 20971520.
Restart IIS.
One other answer to the same error message (this just fixed my problem) is that the system drive was low on disk space, meaning about 700 KB free. Deleting a lot of unused stuff on this really old server and then restarting IIS and the website (probably only IIS was necessary) caused the problem to disappear for me.
I'm sure the other answers are more useful for most people, but for a quick fix, just make sure that the System drive has some free space.
I rectified the error 'ASP 0251 : 80004005' Response Buffer Limit as follows:
To increase the buffering limit in IIS 6, follow these steps:
Click Start, click Run, type cmd, and then click OK.
Type the following command, and then press ENTER:
cd /d %systemdrive%\inetpub\adminscripts
Type the following command, and then press ENTER:
cscript.exe adsutil.vbs SET w3svc/aspbufferinglimit LimitSize
Note LimitSize represents the buffering limit size in bytes. For example, the number 67108864 sets the buffering limit size to 64 MB.
To confirm that the buffer limit is set correctly, follow these steps:
Click Start, click Run, type cmd, and then click OK.
Type the following command, and then press ENTER:
cd /d %systemdrive%\inetpub\adminscripts
Type the following command, and then press ENTER:
cscript.exe adsutil.vbs GET w3svc/aspbufferinglimit
This refers to https://support.microsoft.com/en-us/kb/944886
If you are looking for the reason and don't want to fight the system settings, these are the two major situations I faced:
You may have an infinite loop that is missing its Next or recordset.MoveNext.
Your text data is very large, but you think it is not! A common cause is copy-pasting an image from Microsoft Word directly into the editor, so the server translates the image into data objects and saves them in your text field. This can easily occupy database resources and cause the buffer problem when you call the data again.
In my case, I just had to write this line before rs.Open:
Response.flush
rs.Open query, conn
It can be due to the CursorTypeEnum as well. In my scenario the initial value was CursorTypeEnum.adOpenStatic (3). After changing it back to the default, CursorTypeEnum.adOpenForwardOnly (0), things went back to normal.
