(Note: I haven't used MariaDB. I'm only doing research at this point.)
I read MariaDB's documentation on TDE / data-at-rest encryption. Its Limitations section states, "The MariaDB error log is not encrypted. The error log can contain query text and data in some cases..."
I'm assuming that statement implies query errors that might contain a date of birth (DOB) will be written in plain text to the error log. If so, what are the workarounds to keep the error logs secure? If the solution is handling encryption in syslog, would you please explain the process?
In addition, are there any other points listed in the Limitations that a new user like me should be aware of? I am not familiar with MariaDB's intricacies.
Thanks.
The slow query log, general log and error log are not encrypted.
While the slow query log and general log (see MDEV-9639) are usually not used in a production environment, the error log can in certain cases (like a server crash) contain SQL statements which might include confidential data.
A solution would be to redirect the error log to syslog and enable rsyslogd encryption. More information can be found here:
Writing the Error Log to Syslog on Unix
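As a rough sketch of that setup (assuming the server is started via mysqld_safe and that rsyslog with its GnuTLS driver forwards logs to a central host; hostnames, ports and certificate paths below are placeholders):
# my.cnf -- send the MariaDB error log to syslog instead of a file
[mysqld_safe]
syslog
# /etc/rsyslog.d/50-forward-tls.conf -- forward syslog over TLS (needs the rsyslog-gnutls package)
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/ca.pem
$ActionSendStreamDriverMode 1                # require TLS for this action
$ActionSendStreamDriverAuthMode anon         # or x509/name for certificate-based peer authentication
*.* @@logserver.example.com:6514             # @@ forwards over TCP
The receiving rsyslogd needs a matching TLS listener and certificates; the linked article covers the MariaDB side, and the rsyslog documentation covers the TLS side.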
We're trying to move ASP.NET session state for one of our Azure web apps into a database, and it seems like the aspnet_regsql.exe tool is the way to go. Unfortunately, I'm getting stuck on a few issues below. It's an Azure SQL database, and I'm connecting using the server's admin account.
I initially wanted to add the session tables to our existing database, so I ran .\aspnet_regsql.exe -U adminusername -P adminpassword -S servername.database.windows.net -d databasename -ssadd -sstype c, which throws the exception "Database 'databasename' already exists. Choose a different database name".
Omitting the database name and running it again throws the exception "Execution Timeout Expired" after about 30 seconds, which is just the default for SqlCommand.CommandTimeout. This occurs while executing the "CREATE DATABASE" command. I tried creating a database manually, and it takes about 50 seconds for some reason. The database is S0 tier and is not under any load.
Running aspnet_regsql again on the already-created database (because it's idempotent, right?) leads to the "Database already exists" error, as does pre-creating an empty database for it to start from.
There's no flag that lets me increase the timeout, and I can't set the command timeout using the -C (connection string) flag.
Adding the -sqlexportonly flag to generate a script and just running that directly doesn't work either (yes, I know I'm not supposed to run InstallSqlState.sql directly). It throws a whole load of error messages saying things like:
Reference to database and/or server name in 'msdb.dbo.sp_add_job' is not supported in this version of SQL Server.
USE statement is not supported to switch between databases.
This makes me think the script might have some issues with an Azure SQL database...
Does anyone have any ideas?
Update:
It looks like all the errors involving 'msdb' are related to removing and re-adding a database job called 'Job_DeleteExpiredSessions'. Azure SQL doesn't support database jobs, so the only options I can see are:
Run SQL on a VM instead (vastly more expensive, and I'd rather stick with the platform services than have to manage VMs)
Implement one of those "Elastic Job Agents"
Perhaps move the same functionality elsewhere (e.g. a stored proc, sketched below)?
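If the stored-proc route is viable, the job's actual work appears to boil down to deleting expired rows, something like this (table and column names follow the standard ASPState schema that aspnet_regsql generates; verify them against the generated script):
-- Delete expired session rows; run periodically, e.g. from an Elastic Job or another scheduler
DELETE FROM dbo.ASPStateTempSessions
WHERE Expires < GETUTCDATE();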
Turns out Microsoft has an article about how to do exactly what I need, which I somehow missed during my searching yesterday. Hopefully this answer saves someone else a few hours of frustration. All the info you need is at https://azure.microsoft.com/en-au/blog/using-sql-azure-for-session-state/.
Note that YMMV since it's from 2010 and also says in scary red letters
"Microsoft does not support SQL Session State Management using SQL Azure databases for ASP.net applications"
Nevertheless, they provide a working script that seems to do exactly what I need.
I am having a recurring problem with Lasso 9.2.6 where the instance slows to a crawl performance-wise and throws these errors to the log:
Failure in sqlite_session_driver active_tick: Error from SQLite
database "lasso_session": 19 constraint failed
Restarting the instance solves the performance problem temporarily, but errors continue to appear.
Any recommendations for cleaning this up or resetting the session database to clear out invalid data?
Depending on traffic volume or logging volume you may be overloading the sqlite tables. It's hard to say exactly what the cause is without checking, but I'd look at the settings for sessions. Consider setting the sessions to use either memory or the MySQL driver (I recommend a Memory table if using MySQL).
Have a look at the size of the tables and check if any are excessively large. You can just run ls -l /var/lasso/instances/default/SQLiteDBs/ or use a sqlite tool. The logbook and email tables are also likely suspects.
I have an ASP.NET MVC 5 website using the Entity Framework code-first approach on a shared hosting plan. The host uses the open-source WebsitePanel as its control panel, and its SQL Server panel is somewhat limited. Today, when I wanted to edit the database, I encountered this error:
The transaction log for database 'db_name' is full due to 'LOG_BACKUP'
I searched around and found a lot of related answers, like this and this or this, but the problem is that they suggest running a query on the database. I tried running
db.Database.ExecuteSqlCommand("ALTER DATABASE db_name SET RECOVERY SIMPLE;");
from Visual Studio (in the HomeController), but I get the following error:
System.Data.SqlClient.SqlException: ALTER DATABASE statement not allowed within multi-statement transaction.
How can I solve my problem? Should I contact the support team (which is a little poor for my host) or can I solve this myself?
In addition to Ben's answer, you can try the queries below as needed:
USE {database-name};
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE {database-name}
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE ({database-file-name}, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE {database-name}
SET RECOVERY FULL;
GO
Update (credit to #cema-sp):
To find the database file names, use the query below:
select * from sys.database_files;
Call your hosting company and either have them set up regular log backups or set the recovery model to simple. I'm sure you know what informs the choice, but I'll be explicit anyway. Set the recovery model to full if you need the ability to restore to an arbitrary point in time. Either way the database is misconfigured as is.
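If you end up running it yourself rather than through the host, both options are one-liners (the database name and backup path are placeholders):
-- Option 1: stay in FULL recovery and take regular log backups so log space can be reused
BACKUP LOG [db_name] TO DISK = N'E:\Backups\db_name_log.trn';
-- Option 2: if point-in-time restore is not needed, switch to SIMPLE recovery
ALTER DATABASE [db_name] SET RECOVERY SIMPLE;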
Occasionally, when a disk runs out of space, an update SQL statement will fail with the message "transaction log for database XXXXXXXXXX is full due to 'LOG_BACKUP'".
Check your disk space :)
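If you can't see the server's drives directly, one way to check free space from inside SQL Server (available from SQL Server 2008 R2 SP1 and needing VIEW SERVER STATE permission):
SELECT DISTINCT vs.volume_mount_point,
       vs.available_bytes / 1048576 AS free_space_mb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;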
This error occurs because the transaction log is full and waiting for a log backup (LOG_BACKUP). While it is in this state you can't perform actions that write to the database, and the SQL Server Database Engine raises error 9002.
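To confirm that a missing log backup really is what is blocking log truncation, check log_reuse_wait_desc; it reports LOG_BACKUP in this scenario (replace the database name):
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'db_name';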
To solve this issue, do the following (a T-SQL sketch follows the list):
Take a full database backup.
Shrink the log file to reduce the physical file size.
Take a transaction log backup (LOG_BACKUP).
Create a maintenance plan to take log backups frequently.
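A minimal T-SQL sketch of those steps (database name, logical log file name and backup paths are placeholders; note that the shrink can only release space that a log backup has freed, so you may need to shrink after the log backup or run it twice):
BACKUP DATABASE [db_name] TO DISK = N'E:\Backups\db_name_full.bak';
BACKUP LOG [db_name] TO DISK = N'E:\Backups\db_name_log.trn';
DBCC SHRINKFILE (db_name_log, 1);
-- A maintenance plan or Agent job then repeats the BACKUP LOG step on a schedule.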
I wrote an article with all the details regarding this error and how to solve it: The transaction log for database 'SharePoint_Config' is full due to LOG_BACKUP.
This can also happen when the log file is restricted in size.
Right-click the database in Object Explorer
Select Properties
Select Files
On the log line, click the ellipsis in the Autogrowth / Maxsize column
Change/verify Maximum File Size is Unlimited.
After changing it to Unlimited, the database came back to life.
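The same check can be done with a query instead of the dialog; size and max_size are reported in 8 KB pages, and a max_size of -1 means unrestricted growth:
SELECT name, type_desc, size, max_size, growth
FROM sys.database_files;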
I got the same error, but from a backend job (an SSIS package). Checking the database's log file growth settings showed that the log file's growth was limited to 1 GB. So when the job ran and asked SQL Server to allocate more log space, the growth limit caused the job to fail. I changed the log file to grow by 50 MB with unrestricted (Unlimited) growth and the error went away. A T-SQL equivalent is sketched below.
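The equivalent change in T-SQL, if you prefer it to the dialog (the database and logical log file names are placeholders; check sys.database_files for the real names):
ALTER DATABASE [db_name]
MODIFY FILE (NAME = db_name_log, FILEGROWTH = 50MB, MAXSIZE = UNLIMITED);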
I am connecting to a Teradata database through ODBC with Stata on an Ubuntu server (12.04 LTS). Everything works fine, except that I have my TD userid and password stored in the .odbc.ini file, which seems like a terrible idea. The alternative is to enter them in Stata, which seems even worse and is awkward. Is there a way to do this more securely? The login info that I use to ssh into the server is synced with the TD database. It seems that it should be possible to pass that information along.
In ODBC terms, you do not need to store usernames/passwords in any of your ODBC ini files. Both the ODBC SQLConnect and SQLDriverConnect functions support passing in the username/password at the time they are called.
SQLDriverConnect would need something in your InConnectionString like "DSN=YourDataSourceName;UID=username;PWD=password".
You could go one step further and pass the whole connection string in as a command-line argument, meaning you would not need an ODBC data source in an ini file at all. I'm sure one of the forum readers can post a Teradata sample for you.
As for passing in the user name and password from your SSH login: your application would need to capture those credentials and pass them to ODBC.
If you want to establish finer-grained security around your odbc.ini file, or other files on your Ubuntu server that may contain user credentials, I would strongly suggest using Access Control Lists (ACLs). Beyond the typical owner/group/world permissions, you can grant or deny permissions on a given file down to a specific user.
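For example (the file paths and user name here are purely illustrative):
chmod 600 ~/.odbc.ini                      # restrict a per-user ini file to its owner
setfacl -m u:appuser:r /etc/odbc.ini       # or grant a single extra user read access via an ACL
getfacl /etc/odbc.ini                      # review the resulting ACL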
Other options regarding security on Teradata include the use of LDAP authentication if your environment supports it. Configuring LDAP on Teradata is beyond the scope of SO and in many cases a billable, professional services engagement with Teradata's Information Security CoE.
We were experiencing errors with one of our web methods on a test web server we have. The main error was:
"Access to the path 'E:\websites\Discovery\ProfileService\App_Data' is denied"
Looking further down the stack trace gives a little more info:
"at System.Web.DataAccess.SqlConnectionHelper.CreateMdfFile..."
"at System.Web.DataAccess.SqlConnectionHelper.EnsureSqlExpressDBFile..."
"at System.Web.DataAccess.SqlConnectionHelper.GetConnection..."
"at System.Web.Security.SqlMembershipProvider.GetUser..."
"at System.Web.Security.Membership.GetUser..."
"at System.Web.Security.Membership.GetUser..."
It appeared that the membership provider was trying to find a connection string for a membership call and, failing to find this entry, tried to create a new local membership database, which then failed with a permissions error.
We double-checked the connection strings and they seemed OK, though they were encrypted. We then saved the config with the connection strings section decrypted, and the call worked!
We know that the connection strings were correct because other service methods were working fine. What is even stranger is that some aspects of membership seemed to work with encryption in place.
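For reference, decrypting and re-encrypting the connectionStrings section is normally done with aspnet_regiis against the site's physical path, along these lines (the path here is the one from the error above; run the tool from the matching .NET Framework directory):
aspnet_regiis.exe -pdf "connectionStrings" "E:\websites\Discovery\ProfileService"
aspnet_regiis.exe -pef "connectionStrings" "E:\websites\Discovery\ProfileService"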
Has anyone seen this before or know how to make this work with encrypted connection strings?
In your code, are you decrypting the connection strings before you make a SQL call?
The error in the stack trace likely means that your app doesn't have write permission in that directory.