Cosmos DB local emulator - what's the simulated RU?

Today I'm hitting the throttling limit on my local Cosmos DB emulator: a Too Many Requests response wrapped in a StorageException. I'm actually pleased about that, since it's best to hit these errors in dev.
I can find the option to turn rate limiting off (/DisableRateLimiting), but nothing to control what the limit is; there's no /ThroughPut-like switch.
Does anyone know what the RU limit on the emulator is?

You can change the RU throughput setting for a database in the emulator via the Data Explorer.
Open Data Explorer from the emulator's tray icon (on Windows); it typically serves at https://localhost:8081/_explorer/index.html.
Select or create a database, then configure the throughput under Scale.

Related

Simple query executes very slowly on 100 rows

I use ASP.NET MVC with SQL Server; the query lives in my repository class. Sometimes the query executes in 10 seconds, sometimes in 3 minutes!! Why? I used SQL Server Profiler, but I really don't understand what the cause could be or how to find it.
Query:
SELECT
[Extent1].[Id] AS [Id],
[Extent1].[FirstAddressId] AS [FirstAddressId],
[Extent1].[SecondAddressId] AS [SecondAddressId],
[Extent1].[Distance] AS [Distance],
[Extent1].[JsonRoute] AS [JsonRoute]
FROM [dbo].[AddressXAddressDistances] AS [Extent1]
Check your query plan. Just run your SELECT statement in SQL Server Management Studio to obtain the real query plan. More info here: Query plan.
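To see where the time goes, here is a minimal sketch for SSMS (the table name is taken from the question; enable "Include Actual Execution Plan", Ctrl+M, before running):
SET STATISTICS IO ON;    -- per-table logical/physical reads
SET STATISTICS TIME ON;  -- parse/compile and execution times
SELECT [Extent1].[Id], [Extent1].[FirstAddressId], [Extent1].[SecondAddressId],
       [Extent1].[Distance], [Extent1].[JsonRoute]
FROM [dbo].[AddressXAddressDistances] AS [Extent1];
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
Run it once when the query is fast and once when it is slow, and compare the two plans and the reads.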
If the plans are the same but response time differs significantly between calls, then the issue is probably locks at the db level (or a huge concurrent workload). I mean an incorrect transaction isolation level, for instance, or some reports running in the meantime that grab too many resources (or generate locks "because of something", to ensure some data consistency enforced by some developer).
Many factors influence performance (including the memory available at the moment of query execution).
You can also run a few queries to analyze the quality of your statistics (or just update all of them using EXEC sp_updatestats), and analyze the fragmentation of your indexes; see the sketch below. My guess is that locks, outdated statistics or fragmented indexes can force SqlServer to choose a very inefficient query plan.
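A minimal sketch of both checks (object names taken from the question; the thresholds in the comment are common rules of thumb, not hard limits):
-- Refresh statistics for every table in the current database.
EXEC sp_updatestats;
-- Check index fragmentation for the table from the question.
SELECT i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.AddressXAddressDistances'), NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id;
-- Rule of thumb: REORGANIZE above ~10% fragmentation, REBUILD above ~30%.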
Some info on active locks: Active locks on table
Additional info 1:
If you are the only user of this db and it's on your local machine (you use SQL Server Express), the issue is less likely to be locks than other problems. Try to open the Event Log of SqlServer. It's available in SqlServer Management Studio on the left side (tree) under your engine instance, here: Management/SQL Server Logs/Current. Do you see any unusual info there? Try to review the system log also (using the Event Viewer app). In case of hardware problems you should see some info there too. Btw: how many rows do you have in the table? (The sketch below shows a quick way to check.) Try to review also the behavior of your disks in Process Explorer or Performance Monitor. If the disk queue length is too big, it can be the main source of the problem (in that case look at which apps stress the disk)...
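A quick way to check the table size (sp_spaceused also reports reserved and used space):
-- Row count and storage footprint of the table from the question.
EXEC sp_spaceused 'dbo.AddressXAddressDistances';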
More info on locks:
SELECT
[spid] = session_Id
, ecid
, [blockedBy] = blocking_session_id
, [database] = DB_NAME(sp.dbid)
, [user] = nt_username
, [status] = er.status
, [wait] = wait_type
, [current stmt] =
SUBSTRING (
qt.text,
er.statement_start_offset/2 + 1, -- offsets are in bytes; SUBSTRING is 1-based
(CASE
WHEN er.statement_end_offset = -1 THEN DATALENGTH(qt.text)
ELSE er.statement_end_offset
END - er.statement_start_offset)/2 + 1)
,[current batch] = qt.text
, reads
, logical_reads
, cpu
, [time elapsed (min)] = DATEDIFF(mi, start_time, GETDATE())
, program = program_name
, hostname
--, nt_domain
, start_time
, qt.objectid
FROM sys.dm_exec_requests er
INNER JOIN sys.sysprocesses sp ON er.session_id = sp.spid
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS qt
WHERE session_Id > 50 -- Ignore system spids.
AND session_Id NOT IN (@@SPID) -- Ignore this current statement.
ORDER BY 1, 2
GO
Before you waste any more time on this, you should realize that something like the time a query takes in development is essentially meaningless. In development, you're running a single-threaded web server in IIS Express, which means that you've also got VS running, sitting on roughly 2-4 GB of RAM. Alongside that, you're running a SQL Server instance that's fighting the system for both RAM and hard drive time. You haven't given any specs of your system, but if you also happen to be sporting a consumer-class 5400 or 7200 RPM platter-style drive rather than an SSD, that's going to severely impact performance as well. And we haven't even got into what else might be running on this system. Photoshop? Outlook? Your favorite playlists of MP3s decoding in the background? What's Windows doing? It might be downloading/applying updates, indexing your drive for search, etc.
None of that applies any more when you move into production (or at least it shouldn't). In production, you should have a dedicated server with 4-8 GB of RAM and an SSD or enterprise-class 15,000+ RPM platter drive devoted just to SQL Server, so it can spit out query results at lightning speed.
Long and short, if you want to gauge website/query performance of your application, you need to deploy it to a facsimile of what you'll be running in production. There, you can pound the hell out of it and get some real data you can actually do something with. Trying to profile your app in development is just a total waste of time.

Meteor: Delay on direct db inserts (with external MongoDB)

I have a C application that inserts data directly into the database of my Meteor application. The app works fine (without delays) when I run it in development mode (with "meteor"). However, if I run the app as a node app (bundled) and with an external MongoDB, there's an annoying delay in screen updates (5-10 s).
I have noticed some previous discussions about this:
Meteor: server-side db insert delays
Using node ddp-client to insert into a meteor collection from Node
Questions:
Is there any other way than building a server-side API for doing the db inserts through Meteor?
Why does the delay occur only when using an external MongoDB?
Is there a way in Meteor to shorten this database polling interval?
You need to enable oplog tailing. Without oplog tailing, when your C program makes a database write, the Meteor server doesn't realise anything has changed until it polls MongoDB again. With oplog tailing, it can pick up the changes much more quickly and efficiently. In development mode, oplog tailing is enabled automatically, but for production it needs some additional setup.
Your MongoDB must be set up as a replica set (a replica set of one node does work).
You have to pass in a mongo URL for the replica set's local database with the environment variable MONGO_OPLOG_URL; see the example below.
For more information, see this article.
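For example, a minimal setup for a bundled app might look like this (host and app db name are placeholders; the oplog always lives in the replica set's local database):
MONGO_URL=mongodb://localhost:27017/myapp
MONGO_OPLOG_URL=mongodb://localhost:27017/local
Start the bundled app as usual (node main.js) with those variables set, and writes made by your C program should be picked up almost immediately, instead of waiting for Meteor's polling fallback (which re-runs queries roughly every 10 seconds and matches the 5-10 s delay you're seeing).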

Plugging another relational DB to OpenDS

Currently I'm working on a project with OpenDS. I have to upload more than 200k entries into OpenDS, but unfortunately it fails at random times, once the upload gets past 10k - 15k entries.
When I google that particular error (alert ID 9896233: JE Database Environment corresponding to backend id userRoot is corrupt. Restart the Directory Server to reopen the Environment), it seems like the OpenDS backend DB [Berkeley DB] is not that reliable when adding massive numbers of entries. How can I plug a reliable commercial or open source relational DB [Oracle / H2] into OpenDS? Any configuration? Or do I have to change the OpenDS code?
First, you should be aware that Oracle has pulled the plug on the OpenDS project and it is now completely stalled. Development continues as open source under the OpenDJ project: http://opendj.forgerock.org.
This said, I believe there is a problem with your environment. When I was still working on OpenDS, our basic stress test was importing and running very high load against 10 million users. 200K entries is not a massive number. My daily OpenDJ tests on my laptop are done with 100K to 1M entries. We have customers running OpenDJ in production with more than 20M entries, growing 40% every 6 months!
Berkeley DB has proven to be very scalable and reliable.
Things you might want to check: what is the maximum number of files that can be opened by a single process on your machine? Linux defaults to 1024 (ulimit -n) and that limit is easy to hit with OpenDS or OpenDJ. Are you using a local filesystem? Berkeley DB is not supported on networked filesystems such as NFS or other NAS.
Finally, check the logs/errors file and your systems log. Chances are that one of them will have a message containing the root cause of the problem (most likely logs/errors).
Kind regards,
Ludovic Poitou
ForgeRock - Product Manager for OpenDJ

How to Share write access, SQLite database between 2 applications?

Okay, so here is what I am trying to do.
I have a C program that connects to Google Chrome's "Web Data" SQLite db, and it can read and write to it when Chrome is not launched. But the minute Chrome is launched, I only have read access to the db.
Is there any way I can make my program perform write operations on the db while Chrome is open?
Like temporarily shutting down Chrome's access to the db for a few milliseconds, inserting one row, and then letting Chrome take charge again?
Willing to put a bounty on this... please help.
I'm not sure whether Google's Web Data db behaves exactly like a plain SQLite file even though it is the same SQL engine, but it is possible to open the same SQLite file, take a lock when you want to write, and release the lock when you have finished writing. Below are links to the SQLite docs.
http://www.sqlite.org/lockingv3.html - concurrency
http://www.sqlite.org/c3ref/busy_handler.html - locking handler
I presume that the only problem you will have is: does Google Chrome ever release this lock? :)
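A minimal sketch in SQLite SQL of the write-with-lock pattern (the table and column are hypothetical placeholders, not Chrome's actual schema):
PRAGMA busy_timeout = 200;  -- busy handler: retry for up to 200 ms if Chrome holds the lock
BEGIN IMMEDIATE;            -- take the RESERVED (write) lock up front
INSERT INTO my_table (col1) VALUES ('value');  -- hypothetical table/column
COMMIT;                     -- release the lock so Chrome can write again
In your C program the PRAGMA is equivalent to calling sqlite3_busy_timeout() on the connection.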

Create SQL Server Deployment script using database publishing wizard

I'm getting the following error when using the Database Publishing Wizard to script a SQL Server Express database for deployment. I have googled for hours unsuccessfully. Has anyone had this issue, or does anyone know how to solve it?
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Is the database held remotely? Have you checked firewall settings...?
The error is what it says it is...it's timed out trying to connect.
Make sure you can connect via other means (SQL Management Studio, the app itself). Check the connection string, even try copying the database locally and scripting it that way.
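If the connection itself is fine and the pool is simply being exhausted while scripting, raising the limits in the connection string may also help (a sketch; server and database names are placeholders, and the last two keywords are standard SqlClient connection-string settings with illustrative values):
Data Source=.\SQLEXPRESS;Initial Catalog=MyDatabase;Integrated Security=True;Connect Timeout=60;Max Pool Size=200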
So I downloaded the latest Database Publishing Wizard and it seems to work :-)
