I am developing an ASP.NET web application with real-time functionality using ASP.NET SignalR.
The problem I'm facing is with the SqlNotificationType.
If I use SqlNotificationType.Change, I never receive change notifications from my database, even though Service Broker is enabled for it.
private void dependency_OnChange(object sender, SqlNotificationEventArgs e)
{
if (e.Type == SqlNotificationType.Change)
{
NotificationHub nHub = new NotificationHub();
nHub.SendNotifications();
}
}
But if I use SqlNotificationType.Subscribe, it does start notifying me of database changes, but the database size grows with every change made in the database.
private void dependency_OnChange(object sender, SqlNotificationEventArgs e)
{
if (e.Type == SqlNotificationType.Subscribe)
{
NotificationHub nHub = new NotificationHub();
nHub.SendNotifications();
}
}
Whenever a change is made in the database table, a new subscription must be created by re-executing the query after each notification is processed, and this keeps increasing the database size.
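Before filtering on the type, it can help to log everything the notification carries. This is a small diagnostic sketch (Debug.WriteLine is just a placeholder for whatever logging is available) that shows whether SQL Server is reporting a real data change or a subscription problem:
private void dependency_OnChange(object sender, SqlNotificationEventArgs e)
{
    // Diagnostic sketch: log Type, Info and Source for every notification;
    // an Info value of Invalid usually means the query is not notifiable.
    System.Diagnostics.Debug.WriteLine(string.Format(
        "Notification: Type={0}, Info={1}, Source={2}", e.Type, e.Info, e.Source));

    if (e.Type == SqlNotificationType.Change)
    {
        NotificationHub nHub = new NotificationHub();
        nHub.SendNotifications();
    }
}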
Given below is the function that sends notifications to all connected clients.
public string SendNotifications()
{
    string json = string.Empty;
    using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["MCNServerconnectionString"].ConnectionString))
    {
        const string query = "SELECT ID, AgentID, DiallerID, Call_Direction, Extension, Call_ID FROM [MCNServer].[dbo].[CallsDataRecords] WHERE ID = 915";
        connection.Open();
        using (SqlCommand command = new SqlCommand(query, connection))
        {
            // Clear any existing notification object and register a new dependency
            // before the command is executed, so the subscription covers this query.
            command.Notification = null;
            SqlDependency dependency = new SqlDependency(command);
            dependency.OnChange += new OnChangeEventHandler(dependency_OnChange);

            DataTable dt = new DataTable();
            using (var reader = command.ExecuteReader())
            {
                dt.Load(reader);
            }
            if (dt.Rows.Count > 0)
            {
                json = JsonConvert.SerializeObject(dt, Formatting.Indented);
            }
        }
    }
    IHubContext context = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
    return context.Clients.All.RecieveNotification(json).ToString();
}
The workaround I found is to decrease the query notification timeout so that the notifications expire.
How do I invalidate the cache entry in order to eliminate the query notifications?
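For reference, that timeout idea maps to the SqlDependency constructor overload that takes an expiry in seconds (a sketch only; 60 is an arbitrary value, and command is the one built in SendNotifications above):
// Sketch: the third argument is the notification timeout in seconds;
// after it elapses the subscription expires on the server by itself.
SqlDependency dependency = new SqlDependency(command, null, 60);
dependency.OnChange += new OnChangeEventHandler(dependency_OnChange);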
After a lot of searching and debugging my code, I figured out the problem and resolved it.
SqlNotificationType.Change wasn't working for me; when I used SqlNotificationType.Subscribe it did work, but the database size kept growing because the query notification subscriptions were never expiring, due to an authorization problem on the database.
So the reason the database size kept increasing while I was using SqlDependency with Service Broker enabled was that the database was not owned by the sa user. Because of this, the query notifications were not expiring, and a new one was added to the database with every change. That's why the database size was growing.
To solve this I altered my database and set its authorization to the sa user, and it is working fine for me.
ALTER AUTHORIZATION ON DATABASE::[MCNServer] TO [sa];
Now no query notifications are left pending in the database, since sa now owns the database, and SqlNotificationType.Change is working.
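To verify that, the server's pending subscriptions can be counted through sys.dm_qn_subscriptions (a sketch; it reuses the connection string from the code above and needs VIEW SERVER STATE permission):
// Sketch: count the query notification subscriptions SQL Server is still holding.
using (var connection = new SqlConnection(
    ConfigurationManager.ConnectionStrings["MCNServerconnectionString"].ConnectionString))
using (var command = new SqlCommand("SELECT COUNT(*) FROM sys.dm_qn_subscriptions", connection))
{
    connection.Open();
    int pending = (int)command.ExecuteScalar();
    System.Diagnostics.Debug.WriteLine("Pending query notification subscriptions: " + pending);
}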
I am writing an SDK method that uses a transaction over NpgsqlConnection, for others to use.
When they call my method, they use a SqlConnection with another transaction to wrap their own DB work together with my SDK's DB work.
If I set up my SDK method without a transaction, the outer code is fine and my SDK method can still be rolled back (which is odd too; I'm still figuring out why).
If I set up my SDK method with a transaction, though, the outer code crashes with a TransactionAbortedException:
System.Transactions.TransactionAbortedException : The transaction has aborted.
---- Npgsql.PostgresException : 55000: prepared transactions are disabled
Currently we're using enlist=false in the SDK's connection string to prevent the inner transaction from joining the outer one, but I'd like to know the reason behind this behavior.
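For completeness, the opt-out is just the Enlist keyword in the Npgsql connection string (a sketch with placeholder values):
// Sketch: Enlist=false keeps this connection out of the ambient TransactionScope,
// so it never participates in (or escalates to) a distributed transaction.
// Host, database and credentials below are placeholders.
var sdkConnString = "Host=localhost;Database=sdk;Username=postgres;Password=secret;Enlist=false";
await using (var conn = new NpgsqlConnection(sdkConnString))
{
    await conn.OpenAsync();
    // ... SDK commands ...
}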
Here's the code I'm reproducing the problem with:
using (var scope = new TransactionScope(
TransactionScopeOption.Required,
new TransactionOptions
{
IsolationLevel = IsolationLevel.ReadCommitted,
},
TransactionScopeAsyncFlowOption.Enabled))
{
await using (var conn = new SqlConnection(@"Server=(localdb)\mssqllocaldb;Database=Test;ConnectRetryCount=0"))
using (var cmd = new SqlCommand("insert into [Test].[dbo].[Test] (Id, \"Name\") values (1, 'A')", conn))
{
await conn.OpenAsync();
var result = await cmd.ExecuteNonQueryAsync();
await SdkMethodToDoStuffWithNpgsql(1);
scope.Complete();
}
}
I have SdkMethodToDoStuffWithNpgsql() to mock a method in a repository with a Postgres context injected:
public async Task SdkMethodToDoStuffWithNpgsql(long id)
{
var sqlScript = @"UPDATE test SET is_removal = TRUE WHERE is_removal = FALSE AND id = @id;
INSERT INTO log(id, data) SELECT id, data FROM log WHERE id = @id";
using (var scope = new TransactionScope(
TransactionScopeOption.RequiresNew,
new TransactionOptions
{
IsolationLevel = IsolationLevel.ReadCommitted,
},
TransactionScopeAsyncFlowOption.Enabled))
{
await using (var conn = new NpgsqlConnection(this._context.ConnectionString))
{
await conn.OpenAsync();
using (var cmd = new NpgsqlCommand(sqlScript, conn))
{
cmd.Parameters.Add(new NpgsqlParameter("id", NpgsqlDbType.Bigint) { Value = id });
await cmd.PrepareAsync();
var result = await cmd.ExecuteNonQueryAsync();
if (result != 2)
{
throw new InvalidOperationException("failed");
}
scope.Complete();
}
}
}
}
The above is the expected behavior: enlisting two connections in the same TransactionScope triggers a "distributed transaction"; this is known in PostgreSQL terminology as a "prepared transaction", and you must enable it in the server configuration (this is the cause of the error you're seeing above). If the intention is to have two separate transactions (one for SQL Server, one for PostgreSQL) which commit separately, then opting out of enlisting is the right thing to do. You should also be able to use TransactionScopeOption.Suppress.
Note that distributed transactions aren't currently supported in .NET Core, only in .NET Framework (see this issue). So unless you're on .NET Framework, this won't work even if you enable prepared transactions in PostgreSQL.
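If the goal is simply to keep the SDK work outside the caller's transaction, a hedged sketch of the Suppress option looks like this (the body is the same as SdkMethodToDoStuffWithNpgsql above, abbreviated):
// Sketch: Suppress means no ambient transaction exists inside this scope,
// so the Npgsql connection neither enlists nor triggers a prepared transaction.
using (var scope = new TransactionScope(
    TransactionScopeOption.Suppress,
    TransactionScopeAsyncFlowOption.Enabled))
{
    await using (var conn = new NpgsqlConnection(this._context.ConnectionString))
    {
        await conn.OpenAsync();
        // ... execute the SDK commands as before ...
        scope.Complete();
    }
}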
Setup
•Visual Studio 2010
•IIS 8.5
•.NET Framework 4.6
•Microsoft SQL Server 2014
•AppPool Account on IIS is domain\web
I have a web page that monitors changes in a database table. I am using dependency_OnChange to monitor the database and pass the data to the user via SignalR. I set a breakpoint in the dependency_OnChange method and it is only getting hit a few times out of thousands of database updates.
In web.config... I am using Integrated Security=True.
My user is a sysadmin on the SQL box. (This is just for proof of concept.)
In Global.asax... specifying a queue name and stopping and starting SqlDependency:
void Application_Start(object sender, EventArgs e)
{
var queuename = "Q_Name";
var sConn = ConfigurationManager.ConnectionStrings["singalR_ConnString"].ConnectionString;
SqlDependency.Stop(sConn, queuename);
SqlDependency.Start(sConn, queuename);
}
void Application_End(object sender, EventArgs e)
{
var queuename = "Q_Name";
var sConn = ConfigurationManager.ConnectionStrings["singalR_ConnString"].ConnectionString;
SqlDependency.Stop(sConn, queuename);
}
In code behind...
public void SendNotifications()
{
//Identify Current User and Row No
string CurrentUser = GetNTName();
string message = string.Empty;
string conStr = ConfigurationManager.ConnectionStrings["singalR_ConnString"].ConnectionString;
using (SqlConnection connection = new SqlConnection(conStr))
{
string query = "SELECT [RowNo] FROM [dbo].[Test] WHERE [User] = @User";
string SERVICE_NAME = "Serv_Name";
using (SqlCommand command = new SqlCommand(query, connection))
{
// Add parameters and set values.
command.Parameters.Add("@User", SqlDbType.VarChar).Value = CurrentUser;
//Need to clear notification object
command.Notification = null;
//Create new instance of sql dependency eventlistener (re-register for change events)
SqlDependency dependency = new SqlDependency(command, "Service=" + SERVICE_NAME + ";", 0);
//SqlDependency dependency = new SqlDependency(command);
//Attach the change event handler which is responsible for calling the same SendNotifications() method once a change occurs.
dependency.OnChange += new OnChangeEventHandler(dependency_OnChange);
connection.Open();
SqlDataReader reader = command.ExecuteReader();
if (reader.HasRows)
{
reader.Read();
message = reader[0].ToString();
}
}
}
//If query returns rows, read the first result and pass that to hub method - NotifyAllClients.
NotificationsHub nHub = new NotificationsHub();
nHub.NotifyAllClients(message);
}
private void dependency_OnChange(object sender, SqlNotificationEventArgs e)
{
//Check type to make sure a data change is occurring
if (e.Type == SqlNotificationType.Change)
{
// Re-register for query notification SqlDependency Change events.
SendNotifications();
}
}
NotificationsHub.cs page...
//Create the Hub
//To create a Hub, create a class that derives from Microsoft.Aspnet.Signalr.Hub.
//Alias that can call class from javascript. - i.e. var hub = con.createHubProxy('DisplayMessage');
[HubName("DisplayMessage")]
public class NotificationsHub : Hub //Adding [:Hub] let c# know that this is a Hub
{
//In this example, a connected client can call the NotifyAllClients method, and when it does, the data received is broadcasted to all connected clients.
//Create NotifyAllClients Method
//public means accessible to other classes
//void means its not returning any data
public void NotifyAllClients(string msg)
{
IHubContext context = GlobalHost.ConnectionManager.GetHubContext<NotificationsHub>();
//When this method gets called, every single client has a function displayNotification() that is going to be executed
//msg is the data that is going to be displayed to all clients.
context.Clients.All.displayNotification(msg);
}
}
The first thing I would do here is refactor the SqlDependency setup out into a standalone method and call it from your SendNotifications method (separation of concerns and DRY), because if you are creating other SqlDependencies elsewhere they are going to trip each other up. Secondly, you are creating a new NotificationsHub; you should be getting the currently active hub instead.
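For the first point, a rough sketch of that refactor (the method name is an assumption; the body is lifted from your code above):
// Sketch: dependency registration pulled out on its own, so SendNotifications
// only reads data and pushes it to clients.
private void RegisterDependency(SqlCommand command)
{
    command.Notification = null;
    SqlDependency dependency = new SqlDependency(command, "Service=Serv_Name;", 0);
    dependency.OnChange += new OnChangeEventHandler(dependency_OnChange);
}
SendNotifications would then call RegisterDependency(command) just before executing the reader. For the second point, the active hub can be resolved like this: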
// The hub is registered under the name from its [HubName] attribute ("DisplayMessage").
DefaultHubManager hubManager = new DefaultHubManager(GlobalHost.DependencyResolver);
NotificationsHub hub = (NotificationsHub)hubManager.ResolveHub("DisplayMessage");
hub.NotifyAllClients(message);
There is also an older way to get the hub but I am not sure it will still work
GlobalHost.ConnectionManager.GetHubContext<NotificationsHub>()
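If that still works in your SignalR version, pushing the message through it would look like this (a sketch; displayNotification is the client-side function your hub already calls):
// Sketch: broadcast via the hub context rather than a hub instance.
IHubContext context = GlobalHost.ConnectionManager.GetHubContext<NotificationsHub>();
context.Clients.All.displayNotification(message);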
I also have an example of a simpler version in this answer: Polling for database changes: SqlDependency, SignalR is Good.
Let me know if you have any questions.
I have a class that gets tables from SQL Server. The class is static, but the variables are not. I want to know if this is OK in ASP.NET, because I have read that you should not use static classes for database access in ASP.NET.
My class (there are more functions in the class; I'm showing one as an example):
public static class DataBase
{
public static bool TableChange(string sqlCreate)
{
using (SqlConnection connection = new SqlConnection(Global.ConnectionString))
{
using (var cmd = new SqlCommand(sqlCreate, connection))
{
try
{
connection.Open();
cmd.ExecuteNonQuery();
}
catch (Exception ex)
{
Log.WriteLog(ex.Message + "\n" + sqlCreate, ex, HttpContext.Current.Request);
return false;
}
}
}
return true;
}
}
Thanks in advance
What you have read is most probably something to do with this approach:
public static EntityContext Database = new EntityContext();
// or
public static SqlConnection Database = new SqlConnection("...");
Here you store the database connection in a static variable, so all parallel requests would try to use the same connection, which is a very bad approach if it works at all (it will probably seem to work fine until the page comes under load).
You do not have this problem, because in your case only the methods are static, not the variables. Your code follows the recommended path - open connection (retrieve it from the pool), execute query, close the connection (return it to the pool).
Note: I DON'T want to write a custom membership provider.
I want to write my own provider class so I can define it in web.config and access it like the Membership class.
Here is a sample of my class (it has many other static methods):
public static class MySqlHelper
{
private static string connString = ConfigurationManager.ConnectionStrings["MyConnString"].ConnectionString;
public static int ExecuteNonQuery(string mysqlquery)
{
SqlConnection conn = new SqlConnection(connString);
SqlCommand cmd = new SqlCommand(mysqlquery, conn);
int result;
try
{
conn.Open();
result= cmd.ExecuteNonQuery();
}
finally
{
conn.Close();
}
return result;
}
}
Usage: MySqlHelper.ExecuteNonQuery("select * from customers");
Now, as you see, I have hard-coded the name of the connection string, i.e. "MyConnString". I am planning to make it dynamic.
So I was wondering if I can make it work like the built-in static Membership class, where the connection string name is defined in web.config. That way the class can be reused without always having to name my connection string in web.config "MyConnString".
1: I DON'T want to pass the connection string to every static method as a parameter.
2: I must be able to access the methods similar to Membership.CreateUser i.e. static.
I am looking over the web in parallel but any inputs/guidance will help.
Edited: I have updated my code sample to clear up some confusion about the issues with using a static class. Here is a new question I posted to clarify that. Sorry about the confusion.
The only thing I can think of that meets the qualifications you laid out is to use dependency injection, a static constructor, and inject in something like an IConnectionStringProvider. This seems like about the most convoluted thing I can think of, so you might like it. :)
Edit
After reading your comment, it seems like you just want to be able to reference any connection string, but only one connection string per application. I'd say just add an element to appSettings named MySqlProviderConnection whose value is the name of the connection string you want to use.
Then, in your helper, check for the existence of the app setting, get its value, and pass it in to your ConfigurationManager.ConnectionStrings call. That way your provider can use any connection you want without changing any code.
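A sketch of that idea (the key name MySqlProviderConnection is just the suggestion above, not an existing convention):
// Sketch: web.config carries an appSettings key that names the connection string:
//   <appSettings>
//     <add key="MySqlProviderConnection" value="MyConnString" />
//   </appSettings>
private static string GetConnectionString()
{
    string name = ConfigurationManager.AppSettings["MySqlProviderConnection"];
    if (string.IsNullOrEmpty(name))
        throw new ConfigurationErrorsException(
            "appSettings key 'MySqlProviderConnection' is missing.");
    return ConfigurationManager.ConnectionStrings[name].ConnectionString;
}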
I typically discourage sharing one SqlConnection instance across several requests. Even if you enable MARS, you can run into performance issues. I think when your connection receives a non-read command, the connection buffer will pause all current reads until the write finishes. The only thing you're really saving is the time it takes to establish a connection.
SqlConnections are pooled so you can configure the provider to have a min / max number of instances available to soliciting clients. Keep in mind this is also controlled by whatever database you're connecting to; assuming you're connecting to a SQL Server instance, SQL Server has its own maximum connections allowed setting.
Instead of allowing clients to determine when to open/close a shared SqlConnection instance, I suggest having your public members take in either a command string or command parameters. Then, similar to what your sample has suggested, open a connection from the pool and execute the command.
public IEnumerable<SqlResults> ExecuteStoredProcedure(string procedure, params SqlParameter[] parameters) {
using(SqlConnection connection = new SqlConnection(MyConnectionStringProperty)) {
try {
connection.Open();
using(SqlCommand command = new SqlCommand(procedure, connection)) {
command.CommandType = CommandType.StoredProcedure;
if(parameters != null) {
command.Parameters.AddRange(parameters);
}
// yield return to handle whatever results from proc execution
// can also consider expanding to support reader.NextResult()
using(SqlDataReader reader = command.ExecuteReader()) {
yield return new SqlResults {
    Reader = reader
};
}
}
}
finally {
if(connection.State != ConnectionState.Closed) {
connection.Close();
}
}
}
}
The sample code above is just that - a sample of a concept I use at work. The sample does not have extensive error handling, but it is very flexible in how results are returned and handled. The SqlResults class simply contains a SqlDataReader property and can be expanded to include errors.
As far as making any of this static, it should be fine as long as you enable a way to make a singleton instance of the provider class and continue to not have any mutable properties be shared (potentially across various requests/threads). You may want to consider some sort of IoC or Dependency Injection approach for providing the connection string given your request.
EDIT
Yield allows the caller to use the returned object before the execution context returns to the method yielding the return for continued execution. So in the sample above, a caller can do something like this:
// Since it's an IEnumerable we can handle multiple result sets
foreach(SqlResults results in MySqlHelper.ExecuteStoredProcedure(myProcedureName, new SqlParameter("myParamName", myParamValue))) {
// handle results
}
without the connection closing while we handle the results. If you notice in the sample, we have using statements for our SqlClient objects. This approach allows result set handling to be decoupled from MySqlHelper as the provider class will take care of the would-be-duplicate SQL provision code, delegate result handling to the caller, then continue with what it has to do (i.e. close the connection).
As for IoC/DI, I personally use Castle Windsor. You can inject dependency objects as properties or construction parameters. Registering an Inversion of Control container as your dependency resource manager will allow you to (among other things) return the same object when a type of resource is requested. Basically, for every caller class that needs to use MySqlHelper, you can inject the same instance when the caller class is instantiated or when the caller class references its public MySqlHelper property. I, personally, prefer constructor injection whenever possible. Also, when I say inject, I mean you don't have to worry about setting the property value as your IoC/DI does it for you (if configured properly). See here for a more in depth explanation.
As another note, the IoC/DI approach would really only come into play if your class is non-static such that each application can have its own singleton instance. If MySqlHelper is static, then you could only support one connection string unless you pass it in, which in your original question, you'd prefer not to do so. IoC/DI will allow you to use your MySqlHelper property member as if it were static though since the registered container would ensure that the property has a proper instance.
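As a rough illustration of the Windsor part (a sketch only; it assumes MySqlHelper has been made a non-static class as discussed above):
// Sketch: register one shared helper instance; Windsor then supplies the same
// instance to every class that takes MySqlHelper as a constructor parameter.
// Requires the Castle.Windsor and Castle.MicroKernel.Registration namespaces.
var container = new WindsorContainer();
container.Register(
    Component.For<MySqlHelper>()
             .LifestyleSingleton());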
Here is the complete code of a SqlHelper that I've used on some small projects.
But be careful with static for this kind of class. If you use it in a web project, remember that the same connection instance will be shared by all users, which can cause serious problems...
using System;
using System.Data;
using System.Data.SqlClient;
using System.Web.Configuration;
public class SqlHelper
{
private SqlConnection connection;
public SqlHelper()
{
connection = new SqlConnection();
}
public void OpenConnection()
{
// Updated code getting the ConnectionString without hard naming it.
// Yes, if you have more than 1 you'll have problems... But, how many times it happens?
if (WebConfigurationManager.ConnectionStrings.Count == 0)
    throw new InvalidOperationException("You need to configure the ConnectionString on your Web.config.");
else
{
connection.ConnectionString = WebConfigurationManager.ConnectionStrings[0].ConnectionString;
connection.Open();
}
}
public void CloseConnection()
{
if (connection != null && connection.State != ConnectionState.Closed)
connection.Close();
}
public DataTable ExecuteToDataTable(string sql)
{
DataTable data;
SqlCommand command = null;
SqlDataAdapter adapter = null;
try
{
if (connection.State != ConnectionState.Open)
OpenConnection();
command = new SqlCommand(sql, connection);
adapter = new SqlDataAdapter(command);
data = new DataTable();
adapter.Fill(data);
}
finally
{
if (command != null)
command.Dispose();
if (adapter != null)
adapter.Dispose();
CloseConnection();
}
return data;
}
public int ExecuteNonQuery(string sql)
{
SqlCommand command = null;
try
{
if (connection.State != ConnectionState.Open)
OpenConnection();
command = new SqlCommand(sql, connection);
return command.ExecuteNonQuery();
}
finally
{
if (command != null)
command.Dispose();
CloseConnection();
}
}
public object ExecuteScalar(string sql)
{
SqlCommand command = null;
try
{
if (connection.State != ConnectionState.Open)
OpenConnection();
command = new SqlCommand(sql, connection);
return command.ExecuteScalar();
}
finally
{
if (command != null)
command.Dispose();
CloseConnection();
}
}
}
Sample usage:
SqlHelper sql = new SqlHelper();
DataTable data = sql.ExecuteToDataTable("SELECT * FROM Customers");
int affected = sql.ExecuteNonQuery("INSERT Customers VALUES ('Test')");
But if you really want static (if you are in a single-user environment), just put static on all the methods.
I have an ASP.NET website where the data is brought in from ISeries.
The data connection to ISeries is quite slow, and speed is quite important for this website. Because of the slow data retrieval from ISeries, I want to make as few database connections as possible.
So I was thinking about storing tables from the database that rarely change as static properties in my website. Whenever a user logs in, I start a thread that refreshes the data in the static property.
Is this approach correct? If not, what are the problems with this approach and what are the possible alternatives?
Example:
For the list of ports, I start the thread below when the user logs on:
// Get Ports list
Thread threadPorts = new Thread(delegate()
{
Ports.getPortList();
});
threadPorts.Start();
Session["threadPorts"] = threadPorts;
In the Ports class, there are 2 methods -
one for populating the static property PortList,
and another that checks whether the thread is alive, waits for it to complete, and then retrieves the list of ports. The second method is the one I use in my application whenever I need the list of ports (populating a dropdown, etc.).
public static void getPortList()
{
DataTable dt = new DataTable();
DB2Connection conn = new DB2Connection(ConfigurationManager.ConnectionStrings["db2IBM"].ConnectionString);
conn.Open();
string query = "..."; // query to get ports from ISeries (omitted here)
DB2Command cmd = new DB2Command(query, conn);
cmd.CommandType = CommandType.Text;
DB2DataAdapter adp = new DB2DataAdapter(cmd);
adp.Fill(dt);
cmd.Dispose();
conn.Close();
List<Port> list = new List<Port>();
foreach (DataRow row in dt.Rows)
{
list.Add(new Port(row[0].ToString(), row[1].ToString(), row[2].ToString(), row[3].ToString()));
}
StaticProp.PortList = list;
}
public static List<Port> getPortListfromSession()
{
List<Port> portList = new List<Port>();
if (System.Web.HttpContext.Current.Session["threadPorts"] != null)
{
Thread t = (Thread)System.Web.HttpContext.Current.Session["threadPorts"];
if (t != null)
{
if (t.IsAlive)
{
t.Join();
}
}
}
if (System.Web.HttpContext.Current.Session["threadPorts"] != null)
System.Web.HttpContext.Current.Session.Remove("threadPorts");
portList = StaticProp.PortList;
return portList;
}
I take it that ISeries is an external database!
Why not take the data from that database, stick it in your own, and update it separately?
You can then query your own database quickly and update it as often as you see fit. Alternatively, you can use a file (my preferred file data format is Json over XML), but a database is much better.
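If the file route is taken, a minimal sketch with Json.NET (the path is a placeholder, and Port is the type from the question) would be:
// Sketch: persist the port list to a local JSON file and reload it on demand,
// instead of keeping it in a static property.
// Requires System.IO and Newtonsoft.Json; the file path is a placeholder.
public static void SavePorts(List<Port> ports)
{
    File.WriteAllText(@"C:\cache\ports.json", JsonConvert.SerializeObject(ports));
}

public static List<Port> LoadPorts()
{
    string json = File.ReadAllText(@"C:\cache\ports.json");
    return JsonConvert.DeserializeObject<List<Port>>(json);
}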