We are currently using SQL Server 2008 Express Edition, but would like to upgrade to Standard Edition. Does it mean that we need a license with 20 seats, if we have 20 Active Directory users that are using the DB from a C# application?
If yes, does it make sense to switch from Windows Forms to Web Applications in order to decrease the amount of licenses needed?
Switching to a web app won't change the licensing needs of your application. If you have 20 users connecting to your SQL Server then you need 20 CALs for Standard Edition: even though the web application may connect to the database as a single "user", you are still servicing 20 users. The MS licensing docs cover this in some detail.
The alternative approach is to go with per-processor licensing. You obviously need to do the maths to work out which option is more cost-effective for your user growth estimates.
Given that you're starting at 20 users the per user (CAL) route will probably be the cheapest option.
You have two types of licenses available to you, each with their own set of rules and scenarios where they make sense.
Per Processor license. Here you license each physical processor (or virtual processor, if you are using virtualization and depending on the SQL Server edition).
Server/CAL license. Here you would buy a license for each server running SQL Server, plus a Client Access License (CAL) for each user or device. Note that a CAL allows that user or device to connect to any number of SQL Servers, without the need to buy additional CALs if you add additional servers. Also, any type of software or hardware that reduces the number of devices or users that directly access SQL Server (for example, a web application that funnels users through connection pooling) does NOT reduce the number of CALs you need. You will still need one for each user using the web application.
The following Microsoft link provides pricing for SQL Server 2008 and also includes a SQL Server 2008 R2 Quick Reference, which contains all the information that you might need. Based on that link we can see that:
Per Processor would cost you $7,171.00
Server/CAL would end up being $4,178.00, based on the calculations below:
Server: $898.00
CAL: $164.00 x 20 = $3,280.00
Total: $898.00 + $3,280.00 = $4,178.00
Of course this is an estimate that doesn't include tax, discounts, or software assurance.
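If it helps to see the arithmetic in one place, here is a minimal C# sketch comparing the two models at different user counts, using the list prices quoted above; the loop range and the break-even note are illustrative assumptions only.

```csharp
using System;

class LicenseCostSketch
{
    static void Main()
    {
        // List prices quoted above; excludes tax, discounts, and Software Assurance.
        const decimal perProcessor = 7171.00m;  // per-processor license
        const decimal serverLicense = 898.00m;  // server license (Server/CAL model)
        const decimal calPrice = 164.00m;       // one Client Access License

        for (int users = 10; users <= 50; users += 10)
        {
            decimal serverCal = serverLicense + calPrice * users;
            Console.WriteLine("{0} users: Server/CAL = {1:C}, per processor = {2:C}",
                              users, serverCal, perProcessor);
        }
        // At 20 users: 898 + 164 * 20 = 4,178 -- cheaper than per-processor.
        // Server/CAL stays cheaper until roughly (7171 - 898) / 164 ≈ 38 users.
    }
}
```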
If you want more information, I would recommend asking on Server Fault.
I want to monitor Oracle EBS (11i) & Oracle DB (11g) simultaneously during a load test through Dynatrace.
Oracle EBS architecture
I know we can monitor the Oracle DB using Dynatrace, but I did not find how to identify which areas or modules (e.g. Order Management, Sales, Finance, Shipping) a particular workflow/user request touches during the load test.
I found that using DC RUM we can capture metrics for the Forms server. Apart from this, I also want to monitor the Concurrent Processing server. Is that possible using Dynatrace or not?
With Dynatrace DC RUM you may choose one of two approaches to monitoring EBS performance.
First: DC RUM, using agentless technology, captures network traffic between all servers and clients, and as a result provides you with performance, usage, and availability details. Additionally, for the most popular network protocols, including those used when communicating with the Oracle database, Oracle Forms servers, and web servers, it's possible to use analyzers that provide deeper performance insights. For example, with the Oracle Forms analyzer applied to EBS monitoring, DC RUM decodes all interactions between the user and Oracle Forms, reporting user names, form names, and control names, and identifying the EBS module name. For Oracle Database traffic it reports performance down to a single query execution, including the SQL, database schema, and user name. So, answering your question, it allows monitoring of Oracle EBS and the Oracle DB simultaneously.
Second: Enterprise Synthetic allows you to create synthetic tests for key transactions in EBS. This way, for example, you may track the performance of the whole "create sales order" transaction.
DC RUM is intended for constant, systematic application performance monitoring. However, if you already have it in your company, it's also a perfect tool to evaluate the results of load tests performed on EBS.
I saw this question:
How many users on one azure instance before I hit performance issues?
which discusses how many users an Azure instance could support for a web page. I'm wondering if this would be any different for a web page vs. a web server that client applications (such as mobile phones) call into to get data. For example, if you have a single Azure web role running that exposes a REST endpoint, how many devices could call into the service before it starts to buckle under the pressure?
How long is a string? :-)
If your app computes one million digits of pi on each web request, it will probably handle fewer concurrent web requests than an app that replies to each web request with "hello world."
(This is another, blunter, version of David's answer.)
A Web Role instance is merely a Windows Server 2008 R2 (or SP2) virtual machine of a given size (1-8 cores, 1.75-14 GB usable RAM, 100-800 Mbps network). You can run websites, different web servers (Tomcat, for example), WCF services (through IIS or standalone ServiceHosts), etc.
Scaling is going to depend heavily on the app itself: Is it CPU-constrained? Network-constrained? Do you have a queue-based workload with a growing backlog?
Sometimes it's critical to scale up to larger VMs, just to handle one of the constraints mentioned. It's always wise to pick the smallest VM size to run in a baseline mode (e.g. 1 or 2 users), then scale out to more instances as needed.
It's important to identify the key performance indicators (KPIs) for your app. You can then automate your scaling with something like the Autoscaling Application Block (WASABi).
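As a purely conceptual illustration (this is not the WASABi rule syntax), here is a small C# sketch of the kind of KPI-driven decision an autoscaler makes; the queue-backlog KPI and the threshold are hypothetical values chosen for the example.

```csharp
using System;

// Conceptual sketch only -- NOT the WASABi rule format.
// The KPI (queue backlog) and the threshold below are assumptions for illustration.
class AutoscaleSketch
{
    static int DesiredInstances(int queueBacklog)
    {
        const int backlogPerInstance = 500;  // assumed: one instance comfortably drains 500 items
        return Math.Max(1, (int)Math.Ceiling(queueBacklog / (double)backlogPerInstance));
    }

    static void Main()
    {
        int current = 2;
        int desired = DesiredInstances(queueBacklog: 1800);  // -> 4
        if (desired != current)
            Console.WriteLine("Scale from {0} to {1} instances", current, desired);
    }
}
```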
Here's a reference page with all VM sizes, with details about CPU, local disk, network bandwidth, and RAM.
I can't buy the SQL Server full/express plan on my hosting environment, and I was thinking of using SQL CE with EF 4.0.
Expected user load is 1000-2000 per day.
"user load is 1000-2000 per day"
This isn't a particularly good measure of what load your database will be under.
You need to measure things like:
The number and complexity of your queries.
What kind of writes (insert/update/delete) will need to be performed.
How many of those a user might perform.
The amount of data being dealt with in the above queries.
Whether you can cache any of the results of queries.
For instance, I know of systems where having 1000 users required a cluster of high end servers to deal with the load.
If you can model what the performance is like for 50, 100, and 500 users - that could give you an idea of whether you can deal with this load.
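To make that modelling concrete, here is a rough back-of-envelope C# sketch that converts a daily user count into a peak request rate; every figure in it is an assumption you would replace with real measurements from your own app.

```csharp
using System;

class LoadEstimateSketch
{
    static void Main()
    {
        // All of these numbers are assumptions for illustration only.
        int usersPerDay = 2000;        // upper end of the stated range
        int requestsPerUser = 30;      // assumed pages/queries per visit
        double peakHourShare = 0.20;   // assume 20% of daily traffic hits the busiest hour

        double peakRequestsPerSecond = usersPerDay * requestsPerUser * peakHourShare / 3600.0;
        Console.WriteLine("~{0:F1} requests/sec at peak", peakRequestsPerSecond);  // ~3.3 req/s
    }
}
```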
FWIW: SQL Server Express Edition is free for commercial usage.
The SQL Server CE DLL (as well as the similar SQLite DLL) contains 32-bit native code, so there might be some 32/64-bit issues when running on a 64-bit system.
I am not sure if there is a 64-bit SQL Server CE DLL yet.
SQL Server Compact will run under medium trust under ASP.NET 4, and supports both x64 and x86 platforms. It is limited to max 256 concurrent connections. It is file based, and not quite as robust as SQL Server, and does not support recovery to a point in time.
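For reference, here is a minimal sketch of opening a SQL Server Compact (.sdf) file from C# with the System.Data.SqlServerCe provider; the file name and the Contacts table are placeholders for illustration.

```csharp
using System.Data.SqlServerCe;  // SQL Server Compact 4.0 provider

class SqlCeSketch
{
    static void Main()
    {
        // |DataDirectory| resolves to App_Data in an ASP.NET application.
        using (var conn = new SqlCeConnection(@"Data Source=|DataDirectory|\Site.sdf"))
        {
            conn.Open();
            using (var cmd = new SqlCeCommand("SELECT COUNT(*) FROM Contacts", conn))
            {
                int rowCount = (int)cmd.ExecuteScalar();  // hypothetical Contacts table
            }
        }
    }
}
```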
I have a client who is interested in hiring my company to do a small, custom, multi-user contact database/crm. They don't care about the technology I use, as long as the product is hosted inside their organization (no "cloud" hosting). Now the problem is that their IT department refuses to host any application developed by an outside company on their servers, additionally they will not allow any server not serviced by them inside of their network.
The only means of sharing data that IT would allow is a windows network share...
I was thinking of building the application as a fat client in Adobe AIR and letting all users access a shared SQLite database, but then I read a lot of negative things about this approach.
So I'm asking you - Are there people out there who have actually tried this ?
What are your experiences ?
You can use an MS-Access 2007+ (accdb) file.
Of course there are many database engines with much more features and much better SQL syntax, but if you are looking for a file-based database system that can be accessed simultaneously by multiple processes on a shared Windows drive, then an accdb file is as good as you're going to get I think.
Also note that another popular embedded database, SQL Server Compact Edition, cannot be used on shared drives (at least not by multiple processes from different machines).
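For the accdb approach, a minimal sketch of connecting to a shared .accdb file over a UNC path with the ACE OLE DB provider might look like the following; the share path and the Contacts table are placeholders, and the Access Database Engine must be installed on each client.

```csharp
using System.Data.OleDb;

class SharedAccdbSketch
{
    static void Main()
    {
        // Placeholder UNC path; each client needs the ACE (Access Database Engine) provider installed.
        var connStr = @"Provider=Microsoft.ACE.OLEDB.12.0;" +
                      @"Data Source=\\fileserver\crm\contacts.accdb;";
        using (var conn = new OleDbConnection(connStr))
        {
            conn.Open();
            using (var cmd = new OleDbCommand("SELECT COUNT(*) FROM Contacts", conn))
            {
                int contactCount = (int)cmd.ExecuteScalar();  // hypothetical Contacts table
            }
        }
    }
}
```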
References:
Share Access Database on a Network Drive:
http://office.microsoft.com/en-us/access-help/ways-to-share-an-access-database-HA010279159.aspx#BM3
SQL Server CE Cannot be used on a shared drive:
SQLCE 4 - EF4.1 Internal error: Cannot open the shared memory region
The way SQLite locks databases means you have to be careful if there's a chance you'll have multiple sources trying to access the database. You either have to develop a waiting/retry mechanism, or use a timeout, or something similar.
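One common mitigation, sketched below with the System.Data.SQLite provider, is to set SQLite's busy timeout so connections wait for a lock instead of failing immediately; the file path and the 5-second value are just example assumptions.

```csharp
using System.Data.SQLite;  // System.Data.SQLite provider; other wrappers differ

class SqliteBusyTimeoutSketch
{
    static void Main()
    {
        using (var conn = new SQLiteConnection(@"Data Source=\\fileserver\crm\contacts.db"))
        {
            conn.Open();
            // Wait up to 5 seconds for a lock instead of failing immediately.
            using (var cmd = new SQLiteCommand("PRAGMA busy_timeout = 5000;", conn))
            {
                cmd.ExecuteNonQuery();
            }
            // ... normal reads/writes here; SQLite now retries internally for up to 5s on a locked DB.
        }
    }
}
```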
We are looking at creating a custom ASP.NET application for a client, however they are a nonprofit and thus budget is limited.
We typically develop ASP.NET web and desktop apps that connect to a central SQL Server 200X database, i.e. a full version of SQL Server running on a networked Windows Server. In this case we won't have a full version available.
Are there any issues with using SQL Server Express in this sort of arrangement? IIS and SQL Server Express would be running on the same physical server, serving up pages over the local Intranet to users.
Are there any real differences to be aware of with regard to development of the app itself or deployment? This will be a fairly standard app, with SQL Server mainly being used as a data store with tables and SPs, nothing really SQL Server-specific beyond that.
SQL Server Express edition should be fine for this scenario. It has all the core features of the full product, but as you said you are only really using it for data storage and some SPs, you will not need any of the additional functionality available in the other versions (i.e. Reporting and Analysis Services). There are some limitations to the Express version (the biggest being that the maximum database size is 4 GB), but they should not really affect you unless you are building a very busy ASP.NET application.
Some public-facing websites use SQL Server Express as the database server (DotNetKicks being the only one I can remember at the moment) without issue.
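Connecting to a local Express instance is no different from connecting to any other edition; a minimal sketch (the database name and stored procedure are placeholders) might look like this:

```csharp
using System.Data.SqlClient;

class SqlExpressSketch
{
    static void Main()
    {
        // ".\SQLEXPRESS" is the default named instance for SQL Server Express.
        var connStr = @"Server=.\SQLEXPRESS;Database=NonprofitApp;Integrated Security=True;";
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (var cmd = new SqlCommand("dbo.GetActiveDonors", conn))  // hypothetical SP
            {
                cmd.CommandType = System.Data.CommandType.StoredProcedure;
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read()) { /* map rows */ }
                }
            }
        }
    }
}
```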
The exact list of unsupported features in Express is at SQL Server Express Features:
Database mirroring
SQL Mail
Online restore
Fail-over clustering
Database snapshot
Distributed partitioned views
Parallel index operations
VIA protocol support
Mirrored media sets
Log shipping
Partitioning
Parallel DBCC
Address Windowing Extensions (AWE)
Parallel Create Index
Hot-add memory
Enhanced Read Ahead and Scan
Native HTTP SOAP access
Indexed views (materialized views)
SQL Mail and Database Mail
Partitioned views
Online Index Operations
SQL Server Agent and SQL Server Agent Service
SSIS, SSAS, OLAP/Data Mining
The SQL Server Express with Advanced Services Features supports a "subset of Reporting Services features".
In addition, there are the operational restrictions:
Express will use only one CPU core
Express will not grow the buffer pool over 1 GB no matter how much RAM you have
Express will not allow any database to grow over 4GB and will not put online (restore, attach) databases that are already over 4 GB.
The key problems you may run into are the operational restrictions (one core, 1 GB ram, 4GB each database) and the lack of SQL Agent, preventing any sort of job scheduling.
You should not really run into anything; MS SQL Express is actually a full-featured product.
Here's a really basic comparison from Microsoft.