What's the ASP.NET Connection String Format for a Linked Server? - asp.net

I've got a database server that I am unable to connect to using the credentials I've been provided. However, on the staging version of the same server, there's a linked server that points to the production database. Both the staging server and the linked server have the same schema.
I've been reassured that I should expect to be able to connect to the live server before we go live. Unfortunately, I've reached a point in my development where I need more than the token sample records that are currently in the staging database. So, I was hoping to connect to the linked server.
Thus far, my development against this schema has been against the staging server itself, using Subsonic objects. That all works fine.
I can connect via SQL Server Management Studio to that linked server and execute my queries directly. I can also execute "manual" queries in C# against the linked server by having my connection string hook up to the staging server and running my queries as
SELECT * FROM OpenQuery([LINKEDSERVER],'QUERY')
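In full, one of those manual queries looks something like this in C# (the connection string and table name here are illustrative, not the real ones):

using System.Data.SqlClient;

using (var conn = new SqlConnection("Data Source=STAGINGSERVER;Initial Catalog=StagingDb;Integrated Security=True"))
using (var cmd = new SqlCommand("SELECT * FROM OPENQUERY([LINKEDSERVER], 'SELECT * FROM dbo.Customers')", conn))
{
    // the connection is to the staging server; OPENQUERY forwards the inner query to the linked production server
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process each production row
        }
    }
}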
However, the Subsonic objects are what's enabling me to bring this project in on time and under budget, so I'm not looking to do straight queries in my code.
What I'm looking for is whether there's a way to write a connection string that points at the linked server itself. I've looked at lots of forum entries, etc. on the topic and most of the answers seem to completely gloss over the "linked server" portion of the question, focusing on basic connection string syntax.

I don't believe that you can access a linked server directly from an application without the OpenQuery syntax. Depending on the complexity of your schema, it might make sense to write a routine or sproc to populate your staging database with data from your live database.
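For instance, a minimal sketch of such a population routine run from C# against the staging server (stagingConnectionString and the table name are hypothetical):

using System.Data.SqlClient;

using (var conn = new SqlConnection(stagingConnectionString)) // hypothetical staging connection string
using (var cmd = new SqlCommand(
    "INSERT INTO dbo.Customers " +
    "SELECT * FROM OPENQUERY([LINKEDSERVER], 'SELECT * FROM dbo.Customers')", conn))
{
    conn.Open();
    cmd.ExecuteNonQuery(); // copies the production rows into the staging table
}

For anything beyond small tables you'd probably bump cmd.CommandTimeout and copy table by table.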
You might also consider looking at Redgate's SQL Data Generator or any other data generation tool. Redgate's is pretty easy to use.
One other idea - can you get a backup of the live database that you can install in development to do your testing? If it's just data for development and testing that you seek, you probably want to stay away from connecting to your production database at all.

Create testing stored procedures on server B that reference the data on server A via the linked server. For example, if your regular sproc references a table on server B, say:
databaseA.dbo.tableName
then use the linked servername to reference the same database/table on server A:
linkedServerName.databaseA.dbo.tableName
If server A is identical in its database/table/column names, then you will be able to do this with some quick find/replace work.

Creating a linked server from .NET doesn't make any sense, since a linked server is nothing but a connection from one SQL Server to another server (SQL, file, Excel, Sybase, etc.); in essence it is just a connection string (you can impersonate and do some other stuff when creating a linked server).
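For illustration only: a linked server is defined on the server itself, typically via the system procedure sp_addlinkedserver, so "creating one from .NET" just means executing that T-SQL over an ordinary connection (the server names below are placeholders):

using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString)) // hypothetical connection to the staging server
using (var cmd = new SqlCommand(
    "EXEC sp_addlinkedserver " +
    "@server = N'LINKEDSERVER', @srvproduct = N'', " +
    "@provider = N'SQLNCLI', @datasrc = N'ProductionHost';", conn))
{
    conn.Open();
    cmd.ExecuteNonQuery(); // registers the remote server; under the hood it is little more than a stored connection string
}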

One way is to create two connection strings and access the appropriate database when required.
The second option is to create a connection for database A only, and create a linked server for database B within it.
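For the first option, the two connection strings in web.config would look something like this (server, database, and entry names are placeholders):

<connectionStrings>
  <add name="DatabaseA"
       connectionString="Data Source=ServerA;Initial Catalog=DatabaseA;Integrated Security=True"
       providerName="System.Data.SqlClient" />
  <add name="DatabaseB"
       connectionString="Data Source=ServerB;Initial Catalog=DatabaseB;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>

You then pick the appropriate one at runtime with ConfigurationManager.ConnectionStrings["DatabaseA"].ConnectionString (from System.Configuration).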

Related

Wrong dependency to IIS restart for getting changed data in SQL Server

I am working on an ASP.NET WebForms application with Entity Framework. For some reports it also uses a DLL in which we run explicit queries against SQL Server (plain ADO.NET).
The problem is that when I change a column value such as ParentID in SQL Server, I have to restart the website in IIS to see the change, and that solves the problem. This dependency is not logical and I want to know why it happens. Is it related to caching, because of the method calls in the DLL?
How can I solve this problem?
When you run a query against SQL Server (or any database, really), the result that you see is not the data "in the database", so to speak. The query returns a copy of that data that belongs only to you. The copy of the data gets sent over the network to the client - in your case, an ASP.NET web application - and the application does whatever it needs to do, such as show it to a user.
Once the query which retrieved the data is complete, there is no longer any link between the data in the client, and the data in the database. There is no continuous, "live" connection between the two, even if your actual database connection is still open. The database connection is merely a way to send queries to the server, and for it to send copies of the data back.
It's like taking a copy of a file from a different machine. If you copy a file from my machine, and then I update my copy, your copy doesn't instantly get updated.
If you want data in some user interface to stay perfectly up to date with the data that actually exists in the database, you have a difficult problem to solve. There is no "easy" way to do this. Or perhaps more accurately, there is no simple or efficient way to do this.
This might seem odd to you. You're thinking "well, why not? Why doesn't it just show me the values as they actually exist?". The reason is that these systems need to be able to support many users - often thousands at once - who are all both reading the database and writing to it. Imagine someone was in the middle of updating data in the database, but then they rollback their transaction. Should you see the data as it was being modified, but not committed? What if two users are trying to update "the same" data at once? All sorts of concurrency questions come into play, which basically boils down to questions about locking.
What you are encountering here is a basic principle of multi-threaded environments, which translates to systems with multiple clients: Data can't be accessed directly by multiple people at the same time. Instead, you give each person their own immutable copy.
In a web application things are even more disconnected. When the browser requests the web page, the server side of the web application gets a copy of the data from the database, and then transmits that to the browser. Once the page is loaded there is no longer any link between the web server and the database server, or any link between the web server and the web browser at the client, and certainly no link between the web browser and the database.
Ultimately, this is one of the "hard problems" in computer science. You want to know how to tell the client to invalidate their "cache" and refresh their local data. There are a few mechanisms provided by .NET to do this with SQL Server, but they are quite technical. One of them is query notifications.
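A minimal sketch of query notifications via SqlDependency (the connection string and query are illustrative; the feature also requires Service Broker to be enabled on the database, and the query must follow the notification rules, e.g. two-part table names and no SELECT *):

using System.Data.SqlClient;

SqlDependency.Start(connectionString); // call once per application, e.g. in Application_Start

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT ItemID, ParentID FROM dbo.Items", conn))
{
    var dependency = new SqlDependency(cmd); // ties the notification to this command
    dependency.OnChange += (sender, e) =>
    {
        // fires once when the underlying data changes; re-run the query and re-subscribe here
    };
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        // read the current snapshot as usual
    }
}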

How to start Apache Jena Fuseki as a read-only service (but also initially populate it with data)

I have been running an Apache Jena Fuseki server with a closed port for a while. At present my other apps can access it via localhost.
Following their instructions I start this service as follows:
./fuseki-server --update --mem /ds
This creates an updatable in-memory database.
The only way I currently know how to add data to this database is using the built-in http request tools:
./s-post http://localhost:3030/ds/data
This works great, except now I want to expose this port so that other people can query the dataset. However, I don't want to allow people to update or change the database; I just want them to be able to use and query the information I originally loaded into the database.
According to the documentation (http://jena.apache.org/documentation/serving_data/), I can make the database read-only by starting it without the update option.
Data can be updated without access control if the server is started
with the --update argument. If started without that argument, data is
read-only.
But when I start the database this way, I am no longer able to populate with the initial dataset.
So, MY QUESTION: How do I start an in-memory Fuseki database which I can populate with my original dataset, but then disallow further HTTP updates?
(My guess is that I need another method of populating the Fuseki database that does not use the HTTP protocol. But I'm not sure.)
Here are some options:
1/ Use the TDB command-line tools to build a database offline and then start the server read-only on that TDB database (see the commands sketched below).
2/ Like (1) but use --update to build a persistent database, then stop the server, and restart without --update. The database is now read only. --update affects the services available and does not affect the data in any other way.
Having a persistent database has the huge advantage that you can start and stop the server without needing to reload data.
3/ Use a web server to pass query requests through to the Fuseki server, and limit the Fuseki server to talk only to localhost. You can update from the local machine; external people can't.
4/ Use Fuseki2 and adjust the security settings to allow update only from localhost but query from anywhere.
What you can't do is update a TDB database currently being served by Fuseki.
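For option 1, the offline build looks something like this (paths and file names are illustrative; tdbloader is the bulk loader that ships with Apache Jena):

# build a persistent TDB database offline from a data file
tdbloader --loc=/path/to/DB mydata.ttl
# serve it; with no --update flag the dataset is read-only
./fuseki-server --loc=/path/to/DB /ds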

How do I see the SQL commands issued against ASPNETDB and watch the dataflow?

Just about everything I've seen relating to ASP.Net's Login control treats it like a black box. I'm interested in seeing the SQL commands issued against ASPNETDB and watching the dataflow.
For example, the Login control uses ASPNETDB and the stored procedure dbo.aspnet_Membership_FindUsersByName. I'm not clear on how to call the procedure because it expects @PageIndex and @PageSize parameters (@ApplicationName and @UserNameToMatch make sense to me). I would like to read about the procedure or trace it.
Would anyone know of good reading on the topic, or suggest a path to explore the control?
What you are looking for is called a SQL Server Trace. The Graphical User Interface for SQL Traces is SQL Server Profiler. This only ships with certain versions of SQL Server (for instance, if you have SQL Server Express Edition then you will not have SQL Server Profiler, but you will still be able to utilize the Trace stored procedures and database objects).
Using Profiler (or the Trace db objects), you'll be able to filter out certain events and data depending on what you are specifically looking to capture. This will give you all the information you need to see the data being transmitted between the server and the client application (in this case, the ASP.NET application).
The events and data that a Trace produces can be extremely daunting, especially if you are new to this (which it sounds like you are) and there are a lot of hits to the database. Learn about the Profiler Templates you can utilize, and the individual Events you can analyze.
If you have access to SQL Server, then fire up the profiler and you can see in real-time the sql statements executed against the db.
Just for good measure, a brief step-by-step guide for starting up Profiler:
Starting up SQL profiler
If you're using SQL Express you may not have Profiler; however, here's an open-source alternative (note: I've never used it):
free profiler
If you set it up to use SQL Server (using aspnet_regsql.exe), you can see the stored procedures it uses.
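And if you want to poke at one of those procedures directly, a hedged sketch of calling aspnet_Membership_FindUsersByName from C# (the connection string is hypothetical; @PageIndex is zero-based and @PageSize caps how many rows come back per page):

using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(aspnetdbConnectionString)) // hypothetical ASPNETDB connection string
using (var cmd = new SqlCommand("dbo.aspnet_Membership_FindUsersByName", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@ApplicationName", "/");
    cmd.Parameters.AddWithValue("@UserNameToMatch", "smith%");
    cmd.Parameters.AddWithValue("@PageIndex", 0);  // first page of results
    cmd.Parameters.AddWithValue("@PageSize", 20);  // rows per page
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        // each row describes one matching user
    }
}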

How can I handle a web application that connects to many SQL Server databases?

I am building an ASP.NET web application that will use SQL Server for data storage. I am inheriting an existing structure and I am not able to modify it very much. The people who use this application are individual companies who have paid to use the application. Each company has about 5 or 10 people who will use the application. There are about 1000 companies. The way that the system is currently structured, every company has their own unique database in the SQL Server instance. The structure of each database is the same. I don't think that this is a good database design but there is nothing I can do about it. There are other applications that hit this database and it would be quite an undertaking to rewrite the DB interfaces for all of those apps.
So my question is how to design the architecture for the new web app. There are times of the month when the site will get a lot of traffic. My feeling is that the site will not perform well at those times: when we have 500 people from different companies accessing the site simultaneously, each will have their own unique database connection, because they are accessing different SQL Server databases with different connection strings. Connection pooling will not help much here, since pools are per connection string. My impression is that this is bad.
What happens if they were to double their number of customers? How many unique database connections can SQL Server handle? Is this a situation where I should tell the client that they must redesign this if they want to remain scalable?
Thanks,
Corey
You don't have to create separate connections for every DB
I have an app that uses multiple DBs on the same server. I prefix each query with a "USE dbName; "
I've even run queries on two separate DB's in the same call.
As for calling stored procs, it's a slightly different process, since you can't do:
Use myDB; spBlahBlah
Instead, you have to explicitly change the DB on the connection object. In .NET it looks something like this:
myConnection.ChangeDatabase("otherDBName");
then call your stored procedure.
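In full, something like this (the database and procedure names are illustrative):

using System.Data;
using System.Data.SqlClient;

using (var myConnection = new SqlConnection(connectionString)) // hypothetical base connection string
{
    myConnection.Open();
    myConnection.ChangeDatabase("otherDBName"); // switch the open connection to the other database
    using (var cmd = new SqlCommand("dbo.spBlahBlah", myConnection))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.ExecuteNonQuery();
    }
}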
Hopefully, you have a single database for common items. Here, I hope you have a Clients table with IsEnabled, Logo, PersonToCallWhenTheyDontPayBills, etc. Add a column for Database (i.e. catalog) and, while you're at it, Server. Your web application will point to the common database when starting up and build the list of database connections per client. Programmatically build your database connection strings from the Server and Database columns in the table.
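A sketch of that programmatic build, assuming a Clients row with Server and Database columns as described above (the client variable and values are hypothetical):

using System.Data.SqlClient;

// values read from the common database's Clients table for the current client
var builder = new SqlConnectionStringBuilder
{
    DataSource = client.Server,       // e.g. "SQLBOX2"
    InitialCatalog = client.Database, // e.g. "AcmeCorp"
    IntegratedSecurity = true
};
string clientConnectionString = builder.ConnectionString;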
UPDATE:
After my discussion with @Neil, I want to point out that my method assumes a singleton database connection. If you don't do this then it would be silly to follow my advice.
Scaling is a complex issue. However, why are you not scaling the web aspect as well? Then the connection pooling is limited to the web application.
edit:
I'm talking about the general case here. I know that pooling occurs at many levels, not just the IDbConnection (http://stackoverflow.com/questions/3526617/are-ado-net-2-0-connection-pools-pre-application-domain-or-per-process). I was wondering whether the questioner had considered scaling at the web application level.

Setting up a backup DB server in ASP.NET web.config file

I currently have an asp.net website hosted on two web servers that sit behind a Cisco load balancer. The two web servers reference a single MSSQL database server.
Since this database server is a single point of failure, I'm adding an additional MSSQL server for backup. I would like to set up my web.config files to write everything to both MSSQL servers, but only read from the "primary" database server unless it is unreachable for some reason, at which point the backup MSSQL server would be used.
Is this possible via a web.config file setting, or must this be done in code? Thanks in advance for any help.
New Information:
I just wanted to add further information on this topic after researching it for the past several days - Microsoft TechNet has a good article titled "Implementing Application Failover with Database Mirroring" (http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/implappfailover.mspx#EMD).
This specifically covers the database mirroring feature in Microsoft SQL Server 2005 and the new "Failover Partner" connection string keyword that allows you to specify two server/db instances in a single connection string.
The article is well worth a read if you're interested in implementing this type of feature.
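For reference, the resulting connection string looks something like this (the server and database names are placeholders):

Data Source=PrimaryServer;Failover Partner=MirrorServer;Initial Catalog=MyDatabase;Integrated Security=True;

If the primary is unreachable, the SqlClient provider transparently retries against the mirror named in Failover Partner.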
What you want is called "failover": if one database fails, your queries are automatically redirected to the other. This is achieved at the database level, not the application level. There are a lot of walkthroughs for setting up failover clusters: here's one for SQL 2000, and another for SQL 2005. Basically, once you set it up, the primary database communicates all activity to the secondary one. If the primary fails, the secondary is (almost) up to date and takes over.
The servers form a cluster, and look like a single unit - similar to the way your load-balanced web servers look to the outside world. The backup monitors the primary, and if the primary stops responding, the backup takes over receiving queries. If you're Googling, try also adding the keywords "database mirroring" and "quorum".
It's a bit more complex than that. Does your web page write to the databases, or just read?
If it writes, then you'll have to worry about keeping the 2 databases synchronized, probably using mirroring or log shipping.
But what you are (in essence) talking about doing is setting up a SQL cluster.
I've written a blog entry that shows how to setup MSSQL Database mirroring as well as how to actually utilize it from a managed code perspective:
http://www.improve.dk/blog/2008/03/23/sql-server-mirroring-a-practical-approach
Nice answer from "Rick", but I just wanted to add my 2 cents of information. Normally, for a failover setup without a lot of expensive equipment, I would set it up like this:
You can have your 2 SQL Server boxes waiting for requests and a third low-end box running SQL Server 2005 Express doing the "health monitoring" (acting as the witness). What that saves you is $10K for the box and one SQL Server licence. SQL Server Express (as in free) can do the health monitoring between the 2 database servers without any issues.
That is my setup :)
