Documentum DQL command "exec count_sessions" - what do the columns "hot_list_size", "warm_list_size", "cold_list_size" mean? - dql

I found an undocumented DQL command in Documentum (tested on D6.5):
exec count_sessions
What do the columns "hot_list_size", "warm_list_size", and "cold_list_size" in the output of this query mean?

hot_list_size is the number of active Content Server sessions, warm_list_size is the number of sessions that have timed out on the server side, and cold_list_size is the number of free sessions. Reference: a post on the EMC Community site.

Related

Various difficulties creating ASP.NET Session tables via aspnet_regsql.exe

We're trying to move ASP.NET session state for one of our Azure web apps into a database, and it seems like the aspnet_regsql.exe tool is the way to go. Unfortunately, I'm getting stuck on a few issues below. It's an Azure SQL database, and I'm connecting using the server's admin account.
I initially wanted to add the session tables to our existing database, so I ran .\aspnet_regsql.exe -U adminusername -P adminpassword -S servername.database.windows.net -d databasename -ssadd -sstype c, which throws the exception "Database 'databasename' already exists. Choose a different database name".
Omitting the database name and running it again throws the exception "Execution Timeout Expired" after about 30 seconds, which is just the default for SqlCommand.CommandTimeout. This occurs while executing the "CREATE DATABASE" command. I tried creating a database manually, and it takes about 50 seconds for some reason. This database is S0 tier and is not under any load.
Running aspnet_regsql again on the already-created database (because it's idempotent, right?) leads to the "Database already exists" error, as does pre-creating an empty database for it to start from.
There's no flag that lets me increase the timeout, and I can't set a command timeout using the -C (connection string) flag.
Adding the -sqlexportonly flag to generate a script and just running that directly doesn't work either (yes, I know I'm not supposed to run InstallSqlState.sql directly). It throws a whole load of error messages saying things like:
Reference to database and/or server name in 'msdb.dbo.sp_add_job' is not supported in this version of SQL Server.
USE statement is not supported to switch between databases.
This makes me think the script has some issues with an Azure SQL database...
Does anyone have any ideas?
Update:
It looks like all the errors involving 'msdb' are related to removing and re-adding a database job called 'Job_DeleteExpiredSessions'. Azure SQL doesn't support database jobs, so the only options I can see are:
Run SQL on a VM instead (vastly more expensive, and I'd rather stick with the platform services than have to manage VMs)
Implement one of those "Elastic Job Agents"
Perhaps move the same functionality elsewhere (e.g. a stored proc)?
Turns out Microsoft has an article about how to do exactly what I need, which I somehow missed during my searching yesterday. Hopefully this answer saves someone else a few hours of frustration. All the info you need is at https://azure.microsoft.com/en-au/blog/using-sql-azure-for-session-state/
Note that YMMV since it's from 2010 and also says in scary red letters
"Microsoft does not support SQL Session State Management using SQL Azure databases for ASP.net applications"
Nevertheless, they provide a working script that seems to do exactly what I need.

Apache sentry - Get sentry groups to which a given database/tables has been assigned to

For a given database/table, I want to get the list of groups that have been granted access to it in Sentry.
There does not appear to be a Sentry SHOW command for this purpose in the documentation.
This blog post suggests that you can instead query the Sentry database directly (assuming you are using the Sentry service, not policy files).
However, at present there is no command to show the group to role
mapping. The only way to do this is by connecting to the Sentry
database and deriving this information from the tables in the
database.
If you're using CDH you can determine which node in the cluster is
running the Sentry database using Cloudera Manager, navigating to
Clusters > Sentry, then clicking Sentry Server and then
Configuration. Here you will find the type of database being used
(e.g. MySQL, PostgreSQL, Oracle), the server the database is running
on, its port, the database name, and the user.
You will need the Sentry database password - the blog post gives a suggestion for retrieving it if you do not know it.
An example query for a PostgreSQL database is given:
SELECT "SENTRY_ROLE"."ROLE_NAME","SENTRY_GROUP"."GROUP_NAME"
FROM "SENTRY_ROLE_GROUP_MAP"
JOIN "SENTRY_ROLE" ON "SENTRY_ROLE"."ROLE_ID"="SENTRY_ROLE_GROUP_MAP"."ROLE_ID"
JOIN "SENTRY_GROUP" ON "SENTRY_GROUP"."GROUP_ID"="SENTRY_ROLE_GROUP_MAP"."GROUP_ID";
However, I have not tried this query myself.
This should work for MySQL:
SELECT R.ROLE_NAME, G.GROUP_NAME
FROM SENTRY_ROLE_GROUP_MAP RGM
JOIN SENTRY_ROLE R ON R.ROLE_ID=RGM.ROLE_ID
JOIN SENTRY_GROUP G ON G.GROUP_ID=RGM.GROUP_ID;

join CDR records from 2 asterisk server

I have 2 Asterisk servers (server_A and server_B); on both I store CDR records in MySQL.
My question:
Is there any way to join CDR records from both servers when a user on server_A calls a user on server_B?
In this case, you can automatically append a system identifier to the front of the unique IDs by setting systemname in the asterisk.conf file (on each of your boxes):
[options]
systemname=server_A
Further info:
http://ofps.oreilly.com/titles/9780596517342/asterisk-DB.html#setting_systemname
For each SIP device, either ensure that you define:
accountcode=the_user_id+the_server_id
... or ...
setvar=user_server=the_user_id+the_server_id
... where you would naturally replace "the_user_id+the_server_id" with meaningful data.
You can then tweak your MySQL CDRs so that you are storing either "accountcode" or "user_server" as a field. If you want to get very clever -- and it's likely a good idea for data fault tolerance anyway -- set up MySQL replication between the two servers so you're actually writing to the same Database/Table for your CDR data.
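With replication in place, the two legs of one call can be paired in SQL. The sketch below is only an illustration: it assumes both servers write into a single replicated cdr table, that accountcode was populated as suggested above, and that a 30-second window is close enough to correlate legs; only the standard Asterisk CDR columns (calldate, src, dst, uniqueid, accountcode) are real.

```sql
-- Sketch: pair the originating leg (server_A) with the terminating leg
-- (server_B) of the same call by destination and a small time window.
SELECT a.uniqueid AS leg_a,
       b.uniqueid AS leg_b,
       a.src, b.dst, a.calldate, a.duration
FROM cdr AS a
JOIN cdr AS b
  ON  a.accountcode LIKE '%server_A'
  AND b.accountcode LIKE '%server_B'
  AND a.dst = b.dst
  AND b.calldate BETWEEN a.calldate
                     AND a.calldate + INTERVAL 30 SECOND;
```

Tighten or loosen the window (and the join keys) to match how your dialplan actually routes calls between the boxes.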
Further reading:
http://www.voip-info.org/wiki/view/Asterisk+variables
http://onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html

Automatic fix for tempdb error related to 'ASPStateTempSessions'

As per this how-to, I've successfully configured IIS on my XP-SP3 dev box for SQL Server 2008 Express to save ASP.NET session state information. I'm just using SQL Server because otherwise on every recompile, I was losing the session state which was obnoxious (having to re-login). But, I'm facing an annoying issue in that every time I restart SQL there's this error, and sometimes one or two other very similar friends:
The SELECT permission was denied on the object 'ASPStateTempSessions',
database 'tempdb', schema 'dbo'.
To fix the error, I just open Management Studio and edit the User Mapping for the login/dbo I'm using on the ASPState db, and re-add tempdb to that user with all but deny permissions. Apparently, once the right permissions are there, ASP.NET is able to automatically create the tables it uses. It just can't run that CreateTempTables sproc until the right security is there.
THE QUESTION...
Is there a way to not have to re-do this on every restart of the SQL Server?
I don't really care right now about keeping the temp data across restarts, but I would like to not have to go through this manual step just to get my web app working on localhost, which uses session state variables throughout. I suppose one could resort to some kind of stored procedure within SQL Server to accomplish the task for this machine when the service starts, to not have to do it manually. I'd accept such an answer as a quick fix. But, I'm also assuming there's a better recommended configuration or something. Not seeing an answer to this on the how-to guide or elsewhere here on StackOverflow.
Both answers seem valid; but with most things Microsoft, it's all in the setup...
First uninstall the ASPState database by using the command:
aspnet_regsql -ssremove -E -S .
Note:
-E is to indicate you want to use integrated security connection.
-S informs what SQL server and SQL instance to use, and the "." (dot) specifies default local instance
Then re-install using the command:
aspnet_regsql -ssadd -sstype p -E -S .
Note:
The sstype flag has three options: t | p | c. The first, "t", tells the installer to host all stored procedures in the ASPState database and all data in tempdb. The second, "p", tells the installer to persist data in the ASPState database itself. The last, "c", lets you specify a different 'custom' database to persist the session state data.
If you reinstall using the "-sstype p" you then need only to supply datareader/datawriter to the ASPState database for the user that's making the connection (in most cases, the application pool's identity in IIS).
The added benefit of persisting the data is that session state is retained even after a restart of the service. The only drawback is that you need to ensure the agent cleanup job is pruning old sessions regularly (it does this by default, every minute).
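For reference, the cleanup the installer sets up is a SQL Server Agent job (in a default install it is typically named 'ASPState_Job_DeleteExpiredSessions') that calls a stored procedure in the ASPState database. If you suspect expired sessions are piling up, you can invoke the procedure by hand:

```sql
-- run the session-expiry cleanup manually; the Agent job normally
-- executes this every minute
EXEC ASPState.dbo.DeleteExpiredSessions;
```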
Important:
If you are running a cluster, you must persist session data. Your only option is to use sstype 'p' or 'c'.
Hope this sheds light on the issue!
For the record, I did find a way to do this.
The issue is that the tempdb is recreated from the model db each time the service restarts. The gist of the solution is to create a stored procedure that does the job, and then make that procedure run at startup.
Source code (credit to the link above) is as follows:
use master
go
-- remove an old version
drop proc AddAppTempDBOwner
go
-- the sp
create proc AddAppTempDBOwner as
declare @sql varchar(200)
select @sql = 'use tempdb' + char(13) + 'exec sp_addrolemember ''db_owner'', ''app'''
exec (@sql)
go
-- add it to the startup
exec sp_procoption 'AddAppTempDBOwner', 'startup', 'true'
go
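If you go this route, you can confirm the procedure (created in master by the script above) is actually flagged for automatic execution with OBJECTPROPERTY:

```sql
-- list all procedures in master marked to run at service startup
SELECT name
FROM master.sys.procedures
WHERE OBJECTPROPERTY(object_id, 'ExecIsStartup') = 1;
```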
Well done for finding the strangest way possible to do this.
The correct answer is as follows:
use master
go
EXEC sp_configure 'Cross DB Ownership Chaining', '1'
go
RECONFIGURE
go
EXEC sp_dboption 'ASPState', 'db chaining', 'true'
go
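Note that sp_dboption was removed in SQL Server 2012 and later; on those versions the equivalent of the last step is ALTER DATABASE:

```sql
-- same effect as: EXEC sp_dboption 'ASPState', 'db chaining', 'true'
ALTER DATABASE ASPState SET DB_CHAINING ON;
```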

Aggregating multiple distributed MySQL databases

I have a project that requires us to maintain several MySQL databases on multiple computers. They will have identical schemas.
Periodically, each of those databases must send their contents to a master server, which will aggregate all of the incoming data. The contents should be dumped to a file that can be carried via flash drive to an internet-enabled computer to send.
Keys will be namespaced, so there shouldn't be any conflicts, but I'm not totally sure of an elegant way to design this. I'm thinking of timestamping every row and running the query "SELECT * FROM [table] WHERE timestamp > last_backup_time" on each table, then dumping this to a file and bulk-loading it at the master server.
The distributed computers will NOT have internet access. We're in a very rural part of a 3rd-world country.
Any suggestions?
Your
SELECT * FROM [table] WHERE timestamp > last_backup_time
will miss DELETEd rows.
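One common workaround, if you stay with the timestamp approach, is to stop physically deleting rows and use soft deletes instead. A sketch (table and column names here are placeholders, not anything from your schema):

```sql
-- mark rows instead of deleting them, and bump the timestamp so the
-- incremental SELECT picks the deletion up
ALTER TABLE my_table ADD COLUMN deleted_at TIMESTAMP NULL DEFAULT NULL;

-- instead of: DELETE FROM my_table WHERE id = 42;
UPDATE my_table
SET deleted_at = NOW(), `timestamp` = NOW()
WHERE id = 42;
```

The master then treats incoming rows where deleted_at IS NOT NULL as deletions.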
What you probably want to do is use MySQL replication via USB stick. That is, enable the binlog on your source servers and make sure the binlog is not thrown away automatically. Copy the binlog files to USB stick, then PURGE MASTER LOGS TO ... to erase them on the source server.
On the aggregation server, turn the binlog into an executable script using the mysqlbinlog command, then import that data as an SQL script.
The aggregation server must have a copy of each source server's database, but it can live under a different schema name as long as your SQL uses only unqualified table names (never schema.table syntax) to refer to tables. The import of the mysqlbinlog-generated script (with a proper USE command prefixed) will then mirror the source server's changes on the aggregation server.
Aggregation across all databases can then be done using fully qualified table names (i.e. using schema.table syntax in JOINs or INSERT ... SELECT statements).
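A sketch of that last step, assuming each source server's copy lives under its own schema (server_a, server_b here) and a hypothetical sales table:

```sql
-- combine rows from both per-server schemas into one report table,
-- tagging each row with its origin
INSERT INTO reports.all_sales (origin, id, amount, created_at)
SELECT 'server_a', s.id, s.amount, s.created_at FROM server_a.sales AS s
UNION ALL
SELECT 'server_b', s.id, s.amount, s.created_at FROM server_b.sales AS s;
```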
