What is the maximum memory usage in Oracle XE 11g Release 2? I read in the Oracle documentation that the maximum memory allowed for the database is 1GB of RAM.
Is that per database or per user? And if it is per database, how can I modify it per user?
1GB is right, and it's for the database (not per user).
You may want to post your question on StackExchange - dba.stackexchange.com; people there may be able to help more.
As far as I know, you can't limit memory usage per user - you can limit memory usage PER SESSION (only in a shared server architecture; I don't know whether Express Edition uses that). This is set in a user's PROFILE: https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_6010.htm#i2066025 - see the "size clause" for PRIVATE_SGA.
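If you do run shared server, a minimal sketch of capping per-session memory through a profile might look like the following (the profile and user names are made up for illustration; PRIVATE_SGA only applies to shared server sessions):
-- Profile resource limits are only enforced when RESOURCE_LIMIT is enabled
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
-- Create a profile that caps the private SGA space a session may use (shared server only)
CREATE PROFILE mem_limited LIMIT PRIVATE_SGA 50M;
-- Assign the profile to a (hypothetical) user
ALTER USER some_user PROFILE mem_limited;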
I am using an Amazon EC2 instance and a MariaDB database.
Suppose that I have 20GB database storage and 10 user accounts. I want each account to have 2GB database storage.
How can I achieve this?
I don't think there is any built-in way to monitor or report disk usage by user.
If you limit each user to one database:
GRANT ALL PRIVILEGES ON user1_db.* TO 'user1'@'...' ...;
and periodically run a query like
SELECT table_schema AS db_name,
SUM(data_length+index_length) / 1073741824 DB_Gigabyte
FROM information_schema.tables
GROUP BY table_schema;
to get a list of how many GB are in each database. From there, it is up to you to deal with any over-eating users.
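For example, a variation of that query that only flags schemas exceeding the 2GB quota from the question (the threshold is just an illustration):
SELECT table_schema AS db_name,
       SUM(data_length + index_length) / 1073741824 AS db_gigabytes
FROM information_schema.tables
GROUP BY table_schema
HAVING SUM(data_length + index_length) > 2 * 1073741824;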
Caution: Because of temp tables, free space, etc., it is quite possible for a user to temporarily exceed the limit. Also, there are cases where inserting one small row will grow the allocation by a megabyte or more. So, be lenient, else both you and the user will be puzzled at what is going on.
Be sure to have innodb_file_per_table = ON so that you can more easily deal with bloated users.
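A quick sketch of how to check and change that setting at runtime (requires the SUPER privilege; also put innodb_file_per_table = ON in my.cnf so it survives a restart):
SHOW VARIABLES LIKE 'innodb_file_per_table';
SET GLOBAL innodb_file_per_table = ON;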
I use ASP.NET MVC and SQL Server. The query is in my repository class. Sometimes the query executes in 10 seconds, sometimes in 3 minutes!! Why? I used SQL Server Profiler, but I really don't understand what the cause could be or how I can find it.
Query:
SELECT
[Extent1].[Id] AS [Id],
[Extent1].[FirstAddressId] AS [FirstAddressId],
[Extent1].[SecondAddressId] AS [SecondAddressId],
[Extent1].[Distance] AS [Distance],
[Extent1].[JsonRoute] AS [JsonRoute]
FROM [dbo].[AddressXAddressDistances] AS [Extent1]
Check your query plan. Just run your SELECT statement in SQL Server Management Studio to obtain the actual query plan. More info is here: Query plan.
If the plans are the same but the response time differs significantly between calls, then the issue is probably locks at the DB level (or a heavy concurrent workload). I mean, for instance, an incorrect transaction isolation level, or reports running in the meantime that take too many resources (or generate locks "because of something" to ensure some data consistency enforced by some developer).
Many factors influence performance (including the memory available at the moment of query execution).
You can also run a few queries to analyze the quality of your statistics (or just update all of them using EXEC sp_updatestats), and analyze the fragmentation of your indexes. My guess is that locks, outdated statistics, or fragmented indexes can force SQL Server to choose a very inefficient query plan.
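A rough sketch of how you might check index fragmentation and refresh statistics (the table name is taken from the question; adjust it to your schema):
-- Fragmentation of the indexes on the table from the question
SELECT i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.AddressXAddressDistances'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON ips.object_id = i.object_id
 AND ips.index_id = i.index_id;
-- Refresh out-of-date statistics for the whole database
EXEC sp_updatestats;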
Some info on active locks: Active locks on table
Additional info 1:
If you are the only user of this DB and it's on your local machine (you use SQL Server Express), the issue with locks is less likely than other problems. Try opening the SQL Server error log. It's available in SQL Server Management Studio on the left side (tree), under your engine instance, at Management/SQL Server Logs/Current. Do you see any unusual info there? Review the system log as well (using the Event Viewer app); in case of hardware problems you should also see some info there. By the way: how many rows do you have in the table? Also review the behavior of your disks in Process Explorer or Performance Monitor. If the disk queue length is too big, it can be the main source of the problem (in that case, look at which apps are stressing the disk)...
More info on locks:
SELECT
    [spid] = session_Id
    , ecid
    , [blockedBy] = blocking_session_id
    , [database] = DB_NAME(sp.dbid)
    , [user] = nt_username
    , [status] = er.status
    , [wait] = wait_type
    , [current stmt] =
        SUBSTRING (
            qt.text,
            er.statement_start_offset/2,
            (CASE
                WHEN er.statement_end_offset = -1 THEN DATALENGTH(qt.text)
                ELSE er.statement_end_offset
             END - er.statement_start_offset)/2)
    , [current batch] = qt.text
    , reads
    , logical_reads
    , cpu
    , [time elapsed (min)] = DATEDIFF(mi, start_time, getdate())
    , program = program_name
    , hostname
    --, nt_domain
    , start_time
    , qt.objectid
FROM sys.dm_exec_requests er
INNER JOIN sys.sysprocesses sp ON er.session_id = sp.spid
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS qt
WHERE session_Id > 50              -- Ignore system spids.
  AND session_Id NOT IN (@@SPID)   -- Ignore this current statement.
ORDER BY 1, 2
GO
Before you waste any more time on this, you should realize that something like the time a query takes in development is essentially meaningless. In development, you're running a single-threaded web server in IIS Express, which means that you've also got VS running, sitting on roughly 2-4 GB of RAM. Together with that, you're running a SQL Server instance, that's fighting the system for both RAM and hard drive time. You haven't given any specs of your system, but if you also happen to be sporting a consumer-class 5400 or 7200 RPM platter-style drive rather than an SSD, that's going to severely impact performance as well. Then, we haven't even got into what else might be running on this system. Photoshop? Outlook? Your favorite playlists of MP3s decoding in the background? What's Windows doing? It might be downloading/applying updates, indexing your drive for search, etc. None of that applies any more when you move into production (or at least shouldn't). In production, you should have a dedicated server with 4-8 GB of RAM and an SSD or enterprise-class 15,000+ RPM platter drive devoted just to SQL Server, so it can spit out query results at lightning speeds.
Long and short, if you want to gauge the website/query performance of your application, you need to deploy it to a facsimile of what you'll be running in production. There, you can pound the hell out of it and get some real data you can actually do something with. Trying to profile your app in development is just a total waste of time.
I have an ASP.NET MVC 5 website with an Entity Framework code-first approach on a shared hosting plan. It uses the open source WebsitePanel for its control panel, and its SQL Server panel is somewhat limited. Today when I wanted to edit the database, I encountered this error:
The transaction log for database 'db_name' is full due to 'LOG_BACKUP'
I searched around and found a lot of related answers like this and this or this but the problem is they suggest running a query on the database. I tried running
db.Database.ExecuteSqlCommand("ALTER DATABASE db_name SET RECOVERY SIMPLE;");
with the visual studio (on the HomeController) but I get the following error:
System.Data.SqlClient.SqlException: ALTER DATABASE statement not allowed within multi-statement transaction.
How can I solve my problem? Should I contact the support team (which is a little poor for my host) or can I solve this myself?
In addition to Ben's answer, you can try the queries below as per your need.
USE {database-name};
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE {database-name}
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE ({database-file-name}, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE {database-name}
SET RECOVERY FULL;
GO
Update: credit @cema-sp
To find the database file names, use the query below:
select * from sys.database_files;
Call your hosting company and either have them set up regular log backups or set the recovery model to simple. I'm sure you know what informs the choice, but I'll be explicit anyway. Set the recovery model to full if you need the ability to restore to an arbitrary point in time. Either way the database is misconfigured as is.
Occasionally, when a disk runs out of space, the message "The transaction log for database 'XXXXXXXXXX' is full due to 'LOG_BACKUP'" will be returned when an UPDATE SQL statement fails.
Check your disk space :)
This error occurs because the transaction log becomes full due to LOG_BACKUP. Therefore, you can't perform any action on this database, and in this case the SQL Server Database Engine raises a 9002 error.
To solve this issue you should do the following:
Take a Full database backup.
Shrink the log file to reduce the physical file size.
Create a LOG_BACKUP.
Create a LOG_BACKUP Maintenance Plan to take backup logs frequently.
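A rough T-SQL sketch of steps 1 and 3 (the database name and backup paths are placeholders; the shrink in step 2 is shown in the answer above):
-- 1. Take a full database backup
BACKUP DATABASE {database-name}
TO DISK = N'{backup-path}\full.bak';
GO
-- 3. Take a transaction log backup so the log space can be reused
BACKUP LOG {database-name}
TO DISK = N'{backup-path}\log.trn';
GO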
I wrote an article with all details regarding this error and how to solve it at The transaction log for database ‘SharePoint_Config’ is full due to LOG_BACKUP
This can also happen when the log file is restricted in size.
Right click database in Object Explorer
Select Properties
Select Files
On the log line, click the ellipsis in the Autogrowth / Maxsize column
Change/verify Maximum File Size is Unlimited.
After changing it to unlimited, the database came back to life.
I got the same error, but from a back-end job (an SSIS job). On checking the database's log file growth setting, the log file was limited to 1GB of growth. So when the job ran and asked SQL Server to allocate more log space, the growth limit of the log caused the job to fail. I modified the log growth so it grows by 50MB with unrestricted maximum size, and the error went away.
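The same change can be scripted instead of using the Properties dialog described above; a sketch (the database name and logical log file name are placeholders - you can look the logical name up in sys.database_files, as suggested earlier in this thread):
ALTER DATABASE {database-name}
MODIFY FILE (NAME = {log-file-logical-name}, FILEGROWTH = 50MB, MAXSIZE = UNLIMITED);
GO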
I am currently testing Tridion 2011 and am having problems creating multimedia components with uploaded content (as opposed to external).
I fill out the title, schema, and multimedia type, select a file from my system, then click save. I get a "Saving item..." information message, then approximately 30 seconds later I receive a "The wait operation timed out" message.
There don't appear to be any error messages in the C:\Program Files (x86)\Tridion\log directory. Looking at the Event Viewer, I see the following information relating to the save action:
Unable to save Component (tcm:4-738361).
The wait operation timed out
Error Code:
0x8004033F (-2147220673)
Call stack:
System.Data.SqlClient.SqlConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject,Boolean,Boolean)
System.Data.SqlClient.TdsParser.TryRun(RunBehavior,SqlCommand,SqlDataReader,BulkCopySimpleResultSet,TdsParserStateObject,Boolean&)
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader,RunBehavior,String)
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior,RunBehavior,Boolean,Boolean,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior,RunBehavior,Boolean,String,TaskCompletionSource`1,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1,String,Boolean,Int32,Boolean)
System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.SetBinaryContent(Int32,Stream)
Tridion.ContentManager.Data.AdoNet.ContentManagement.ItemDataMapper.Tridion.ContentManager.Data.ContentManagement.IItemDataMapper.SetBinaryContent(Stream,TcmUri)
Tridion.ContentManager.ContentManagement.RepositoryLocalObject.SetBinaryContent(BinaryContent)
Tridion.ContentManager.ContentManagement.Component.OnSaved(SaveEventArgs)
Tridion.ContentManager.IdentifiableObject.Save(SaveEventArgs)
Tridion.ContentManager.ContentManagement.VersionedItem.Save(Boolean)
Tridion.ContentManager.ContentManagement.VersionedItem.Save()
Tridion.ContentManager.BLFacade.ContentManagement.VersionedItemFacade.UpdateAndCheckIn(UserContext,String,Boolean,Boolean)
XMLState.Save
Component.Save
I already have my timeout settings in the Content Manager Snap-In set to high values (more than 10 minutes) due to another issue.
The BINARIES table in the Content Manager Database is 25GB, if that helps.
Any ideas? Thanks.
Edit 1
Following suggestions from Bart Koopman, my DBA has rebuilt the indexes but does not reckon the Transaction log has any impact on performance. The problem persists.
Edit 2
I have just found more details of the error
Unable to save Component (tcm:0-0-0).
Timeout expired.
The timeout period elapsed prior to completion of the operation or the server is not responding.
A database error occurred while executing Stored Procedure "EDA_ITEMS_UPDATEBINARYCONTENT".EDA_ITEMS_UPDATEBINARYCONTENT
After taking a look at this procedure it looks like the following statement could be the root cause
SELECT 1 FROM BINARIES WHERE ID = @iBINARY_ID AND CONTENT IS NULL
I executed it manually with @iBINARY_ID as -1, and after 2 minutes it still hadn't completed. I assume that when I insert a new multimedia component the query will be something similar (i.e. the ID will not exist in the table).
The BINARIES table currently has a NON-CLUSTERED Primary Key. Maybe the solution would be to change this to a CLUSTERED Primary Key? However, I assume it is NON-CLUSTERED for a reason.
Just had a response from SDL customer support. Apparently this is a known issue related to statistics and the chosen query plan.
Running the following statement manually from SQL Server Management Studio fixes the problem (it didn't even need to complete for me)
SELECT 1 FROM BINARIES WHERE ID = -1 AND CONTENT IS NULL
Hope this helps someone else out!
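Since SDL attributed this to statistics and the chosen query plan, it may also be worth refreshing the statistics on the table yourself (just a suggestion, not part of SDL's official fix; the table name comes from the question):
UPDATE STATISTICS dbo.BINARIES WITH FULLSCAN;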
Timeouts on database operations are usually an indication of a misconfiguration or a lack of maintenance. By increasing the timeout you are just working around the problem rather than solving it.
With a binaries table that big you will want to make sure you have proper database setup with data files that are separated from your log files (separated on different physical partitions/disks) and possibly even multiple data files on multiple physical partitions to take advantage of performance gains.
Next to that, you will want to ensure that standard database maintenance is performed daily/hourly. Things like backing up and truncating the transaction log every hour will greatly improve your database performance (on MS SQL Server a transaction log of more than 1GB slows the database down drastically; you should always try to keep it below that size through timely backup/truncate). Updating statistics and rebuilding indexes are also things you should not forget to do on a regular basis.
Currently I'm working on a project with OpenDS. I have to upload more than 200k entries into OpenDS, but unfortunately it fails at random times once the upload exceeds roughly 10k-15k entries.
When I google that particular error (alert ID 9896233: JE Database Environment corresponding to backend id userRoot is corrupt. Restart the Directory Server to reopen the Environment), it seems like the OpenDS backend DB [Berkeley DB] is not that reliable when adding a massive number of entries. How can I plug a new, reliable commercial or open source relational DB [Oracle / H2] into OpenDS? Is there any configuration for that, or do I have to change the OpenDS code?
First, you should be aware that Oracle has pulled the plug on the OpenDS project and it is now completely stalled. Development continues as open source under the OpenDJ project: http://opendj.forgerock.org.
That said, I believe there is a problem with your environment. When I was still working on OpenDS, our basic stress test was importing and running very high load against 10 million users. 200K entries is not a massive number. My daily OpenDJ tests on my laptop are done with 100K to 1M entries. We have customers running OpenDJ in production with more than 20M entries, growing 40% every 6 months!
Berkeley DB has proved to be very scalable and reliable.
Things you might want to check: what is the maximum number of files that can be opened by a single process on your machine? Linux defaults to 1024, and that limit can be easy to hit with OpenDS or OpenDJ. Are you using a local filesystem? Berkeley DB is not supported on networked filesystems such as NFS or other NAS.
Finally, check the logs/errors file and your system logs. Chances are that one of them will have a message containing the root cause of the problem (most likely logs/errors).
Kind regards,
Ludovic Poitou
ForgeRock - Product Manager for OpenDJ