I have an application that will reside within a business-to-business (B2B) network and communicate with our AS400 in our internal network environment. The firewall has been configured to allow the data requests through to the AS400, but we are seeing a huge lag in connection speed and response time. For example, what takes less than half a second in our local development environments is taking upwards of 120 seconds in our B2B environment.
This is the function we are using to get our data. We are using the Enterprise Library application blocks, so the ASI object is the Database...
/// <summary>
/// Generic function to retrieve a data table from the AS400.
/// </summary>
/// <param name="sql">SQL string</param>
/// <returns>The first table of the result set, or null if no table was returned.</returns>
private DataTable GetASIDataTable(string sql)
{
    DataTable tbl = null;
    HttpContext.Current.Trace.Warn("GetASIDataTable(" + sql + ") BEGIN");
    using (var cmd = ASI.GetSqlStringCommand(sql))
    {
        using (var ds = ASI.ExecuteDataSet(cmd))
        {
            if (ds.Tables.Count > 0) tbl = ds.Tables[0];
        }
    }
    HttpContext.Current.Trace.Warn("GetASIDataTable() END");
    return tbl;
}
I am trying to brainstorm some ideas as to why this is occurring.
Have never used ASP.NET or AS400 in anger, but I have seen this kind of behaviour before and it usually indicated some kind of network problem, typically a reverse DNS lookup that is timing out.
Assuming you have ping enabled through your firewall, check that you can ping in both directions.
Also run traceroute from each machine to try and diagnose where a delay might be.
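If you want to check the reverse-DNS theory from code as well as from the command line, a rough sketch like the one below can show whether name resolution itself is eating the time. The host name and IP address are placeholders for your AS400.

using System;
using System.Diagnostics;
using System.Net;

class DnsTimingCheck
{
    static void Main()
    {
        // Placeholders: substitute your AS400's host name and IP address.
        Time("Forward lookup", () => Dns.GetHostEntry("as400.example.com"));
        Time("Reverse lookup", () => Dns.GetHostEntry(IPAddress.Parse("10.0.0.10")));
    }

    static void Time(string label, Action lookup)
    {
        var sw = Stopwatch.StartNew();
        try { lookup(); }
        catch (Exception ex) { Console.WriteLine(label + " failed: " + ex.Message); }
        Console.WriteLine(label + " took " + sw.ElapsedMilliseconds + " ms");
    }
}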
Hope that helps.
Sorry, I can't tell you exactly what is going on, but I have a couple of comments...
First, I would output the SQL and see if it has a lot of joins and/or is hitting a table (file) with a large number of records. If you really want to dig in, fire up your profiler of choice (I use ANTS Profiler) and try to find a profiler for the 400 as well - see what the server resources look like, as well as the actual query after it goes through the ODBC driver.
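One quick way to narrow it down is to time the connection open separately from the query itself, so you can see whether the 120 seconds is lost connecting (network/DNS) or executing the SQL. A rough sketch; the ODBC connection string is a placeholder and the trivial DB2-for-i query should be swapped for the real one:

using System;
using System.Data.Odbc;
using System.Diagnostics;

class As400Timing
{
    static void Main()
    {
        // Placeholders: point these at your AS400 DSN and the query you are measuring.
        const string connString = "DSN=AS400_PLACEHOLDER;UID=user;PWD=pass;";
        const string sql = "SELECT 1 FROM SYSIBM.SYSDUMMY1";

        var sw = Stopwatch.StartNew();
        using (var con = new OdbcConnection(connString))
        {
            con.Open();
            Console.WriteLine("Open():  " + sw.ElapsedMilliseconds + " ms");

            sw.Restart();
            using (var cmd = new OdbcCommand(sql, con))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* drain the results */ }
            }
            Console.WriteLine("Query(): " + sw.ElapsedMilliseconds + " ms");
        }
    }
}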
I have worked with ASP.NET and the AS400 a few times, and the way I have been most successful is actually using SQL Server with a linked server to the AS400. I created a view to make it simpler to work with - hiding the oddities of AS400 naming. It worked well in my scenario because the application needed to pull information from SQL Server anyway.
I thought I would mention it in case it helps... best of luck
Check the size of your iSeries system as well. If the query is large and the system is undersized for the applications running on it, that can take time. It shouldn't be ruled out as a possibility; I have seen similar behavior in the past. But of course a network issue is more likely.
The other idea, if you can't solve the speed issue or it turns out to be a sizing problem, is to stage the data in MS SQL Server and then write the records from SQL Server to the iSeries from there.
I am experiencing database connection errors with an ASP.NET application written in VB, running on three IIS servers. The underlying database is MS Access, located on a shared network device. The application uses Entity Framework (code first) with the JetEntityFrameworkProvider.
The application runs stably, but approximately 1 out of 1,000 attempts to open the database connection fails with one of the following two errors:
06:33:50 DbContext "Failed to open connection at 2/12/2020 6:33:50 AM +00:00 with error:
Cannot open database ''. It may not be a database that your application recognizes, or the file may be corrupt.
Or
14:04:39 DbContext "Failed to open connection at 2/13/2020 2:04:39 PM +00:00 with error:
Could not use ''; file already in use.
One second later, with refreshing (F5), the error is gone and it works again.
Details about the environment and the code used:
Connection String
<add name="DbContext" connectionString="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=x:\thedatabase.mdb;Jet OLEDB:Database Password=xx;OLE DB Services=-4;" providerName="JetEntityFrameworkProvider" />
DbContext management
The application uses a public property to access the DbContext. The DbContext is kept in the HttpContext.Current.Items collection for the lifetime of the request and is disposed at its end.
Public Shared ReadOnly Property Instance() As DbContext
    Get
        SyncLock obj
            If Not HttpContext.Current.Items.Contains("DbContext") Then
                HttpContext.Current.Items.Item("DbContext") = New DbContext()
            End If
            Return HttpContext.Current.Items.Item("DbContext")
        End SyncLock
    End Get
End Property
The BasePage initializes and disposes the DbContext.
Protected Overrides Sub OnInit(e As EventArgs)
    MyBase.OnInit(e)
    DbContext = Data.DbContext.Instance
    ...
End Sub

Protected Overrides Sub OnUnload(e As EventArgs)
    MyBase.OnUnload(e)
    If DbContext IsNot Nothing Then DbContext.Dispose()
End Sub
What I have tried
Many of the questions on SO that address the above error messages deal with not being able to establish a connection to the database at all. That's different from this case: the connection works 99.99% of the time.
Besides that, I have checked:
Permissions: full access is granted on the share where the .mdb (database) and .ldb (lock file) reside.
Network connection: there are no connection issues to the shared device; it’s a Gigabit LAN connection
Maximum number of 255 concurrent connections is not reached
Maximum size of database not exceeded (db has only 5 MB)
Changed the compile option from “Any CPU” to “x86” as suggested in this MS Dev-Net post
Quote: I was getting the same "Cannot open database ''" error, but completely randomly (it seemed). The MDB file was less than 1Mb, so no issue with a 2Gb limit as mentioned a lot with this error.
It worked 100% on 32 bit versions of windows, but I discovered that the issues were on 64 bit installations.
The app was being compiled as "Any CPU".
I changed the compile option from "Any CPU" to "x86" and the problem has disappeared.
Nothing helped so far.
To gather more information, I attached an NLog logger to the DbContext which writes all database actions and queries to a log file.
Shared Log As Logger = LogManager.GetLogger("DbContext")
Me.Database.Log = Sub(s) Log.Debug(s)
Investigating the logs, I figured out that when one of the above errors occurred on one server, another of the servers (3 in total) had closed its db connection at exactly the same time.
Here two examples which correspond to the above errors:
06:33:50 DbContext "Closed connection at 2/12/2020 6:33:50 AM +00:00
14:04:39 DbContext "Closed connection at 2/13/2020 2:04:39 PM +00:00
Assumption
When all connections of a DbContext have been closed, the corresponding record is removed from the .ldb lock file. When a connection to the db is opened, a record is added to the lock file. When these two events occur at exactly the same time, from two different servers, there is a write conflict on the .ldb lock file, which results in one of the errors above.
Question
Can anyone confirm or prove this wrong? Has anyone experienced this behaviour? Maybe I am missing something else. I’d appreciate your input and experience on this.
If my assumption is true, a solution could be to use a helper class for accessing the db, which catches and handles this error, waits a minimal time period, and tries again.
But this feels kind of wrong. So I am also open to suggestions for a “proper” solution.
EDIT: The "proper" solution would be using a DBMS Server (as stated in the comments below). I'm aware of this. For now, I have to deal with this design mistake without being responsible for it. Also, I can't change it in the short run.
I write this as an answer because of space constraints, but it's not really an answer.
It's for sure an OLE DB provider issue.
I think it is a sharing issue.
You could try a few things:
Use a newer OLE DB provider instead of Microsoft.Jet.OLEDB.4.0 (if you have tried 64-bit you may already have tried another provider, because Jet.OLEDB.4.0 is 32-bit only).
Implement a retry mechanism on the new DbContext()
Reading your tests, this is probably not your case. I THINK that Dispose does not always work properly on Jet.OLEDB.4.0 connections. I noticed it in tests and I solved it by using a different testing engine. Before giving up I used this piece of code:
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, true);
GC.WaitForPendingFinalizers();
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced, true);
As you can understand from this code, these were attempts, and the final solution was changing the testing engine.
If your app is not too busy you could try to lock the db using a different mechanism (for example a lock file). This is not really different from retrying new DbContext().
In the late '90s I remember I had an issue related to a disk-sharing OS (I was using Novell NetWare). Actually I have no experience with using .mdb files on a network share. You could try to move the .mdb to a folder shared with Windows.
Actually I use Access databases only for tests. If you really need to use a single-file database you could try other solutions: SQLite (you need a library, also written by me, to apply code first: https://www.nuget.org/packages/System.Data.SQLite.EF6.Migrations/) or SQL Server CE.
Use a DBMS server. This is for sure the best solution. As the writer of JetEntityFrameworkProvider, I think that single-file databases are great for single-user apps (for those I suggest SQLite), for tests (for tests I think JetEntityFrameworkProvider is great), for transferring data or for read-only applications. In other cases use a DBMS server. As you know, with EF you can change from JetEntityFrameworkProvider to SQL Server or to MySQL without effort.
You went wrong at the design stage: the MS Access database engine is unfit for ASP.NET sites, and this is explicitly stated in multiple places, e.g. the official download page under Details.
The Access Database Engine 2016 Redistributable is not intended .... To be used by ... a program called from server-side web application such as ASP.NET
If you really have to work with an Access database, you can run a helper class that retries in case of common errors. But I don't recommend it.
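If you do have to stay on Access for now, a minimal retry wrapper might look like the sketch below (shown in C# for brevity even though the application is VB; the retry count, delay and error matching are assumptions, not a recommendation):

using System;
using System.Threading;

public static class DbRetry
{
    public static T Execute<T>(Func<T> action, int maxAttempts = 3, int delayMs = 250)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return action();
            }
            catch (Exception ex) when (attempt < maxAttempts && IsTransientJetError(ex))
            {
                // Lock-file write collisions clear up almost immediately, so a short wait is enough.
                Thread.Sleep(delayMs);
            }
        }
    }

    private static bool IsTransientJetError(Exception ex)
    {
        // Match the two messages from the question; adjust to your provider's exception types.
        var message = ex.Message ?? string.Empty;
        return message.Contains("file already in use") || message.Contains("Cannot open database");
    }
}

Every data access call would then have to go through something like DbRetry.Execute(() => ...), which is exactly the kind of noise that makes the server-based option below the better long-term answer.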
The proper solution here is using a different RDBMS which exhibits stateless behavior. I recommend SQL Server Express; it has limitations, but if you exceed those you will be far beyond what Access supports, and it won't cause errors like this.
I have a mobile service sync table that is giving me absolutely HORRENDOUS performance.
The table is declared as:
IMobileServiceSyncTable<Myclass> myclassTable;
this.client = new MobileServiceClient("my url here");
var store = new MobileServiceSQLiteStore("localdb.db");
store.DefineTable<Myclass>();
this.client.SyncContext.InitializeAsync(store);
this.myclassTable = client.GetSyncTable<Myclass>();
Then later, in a button handler, I'm calling into:
this.myclassTable.ToCollectionAsync();
The problem is, the performance is horrific. It takes at best minutes and most times just sits there indefinitely.
Is there anything in the above that I’ve done that would explain why performance is so absolutely terrible?
this.myclassTable.ToCollectionAsync();
For an IMobileServiceSyncTable table, the above method executes the SELECT * FROM [Myclass] SQL statement against your local SQLite db.
The problem is, the performance is horrific. It takes at best minutes and most times just sits there indefinitely.
AFAIK, when working with offline sync, we invoke the pull operation to retrieve a subset of the server data, then insert the retrieved data into the local store table. For await this.myclassTable.PullAsync(), the client sends a request and retrieves the server data with a MaxPageSize of 50, and the client SDK sends further requests to check whether there is more data and pulls it automatically.
In summary, I would recommend checking your code to locate the specific call that causes this poor performance. You could also add diagnostic logging and capture network traces via Fiddler to troubleshoot the issue.
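As an illustration only (not taken from the question's project, and assuming the Azure Mobile Apps offline-sync client SDK that the snippet appears to use), the sketch below shows the usual shape: pull a filtered subset with a query ID so later pulls are incremental, then read the now-local table. The Myclass properties and the query ID are hypothetical.

using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.Sync;

public class Myclass
{
    public string Id { get; set; }
    public bool IsActive { get; set; }   // hypothetical property, used only for the filter below
}

public class SyncExample
{
    private readonly IMobileServiceSyncTable<Myclass> myclassTable;

    public SyncExample(MobileServiceClient client)
    {
        myclassTable = client.GetSyncTable<Myclass>();
    }

    public async Task RefreshAsync()
    {
        // Pull only a filtered subset of the server data; the filter is illustrative.
        var query = myclassTable.CreateQuery().Where(m => m.IsActive);

        // A non-null query ID enables incremental sync, so later pulls fetch only changed rows.
        await myclassTable.PullAsync("activeMyclassItems", query);

        // This now reads only the local SQLite table.
        var items = await myclassTable.ToCollectionAsync();
    }
}

Note also that SyncContext.InitializeAsync(store) in the question's snippet is not awaited; awaiting it before the first pull or read avoids racing an uninitialized local store.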
I am working on a migration project where we are migrating an application from WebLogic to WebSphere 8.5 server.
In WebLogic server we can specify a default schema while creating a datasource, but I don't see the same option in WebSphere 8.5 server.
Is there any custom property through which we can set it? I tried currentSchema=MySchema but it did not work.
This answer requires significantly more work, but I'm including it because it's the designed solution to customize pretty much anything about a connection, including the schema. WebSphere Application Server allows you to provide/extend a DataStoreHelper.
Knowledge Center document on providing a custom DataStoreHelper
In this case, you can extend com.ibm.websphere.rsadapter.Oracle11gDataStoreHelper.
JavaDoc for Oracle11gDataStoreHelper
The following methods will be of interest:
doConnectionSetup, which performs one-time initialization on a connection when it is first created
doConnectionCleanup, which resets connection state before returning it to the connection pool.
When you override doConnectionSetup, you are supplied with the newly created connection, upon which you can do,
super.doConnectionSetup(connection);
Statement stmt = connection.createStatement();
try {
    // e.g. "ALTER SESSION SET CURRENT_SCHEMA=MySchema"
    stmt.execute(sqlToUpdateSchema);
} finally {
    stmt.close();
}
doConnectionCleanup lets you account for the possibility that application code that is using the connection might switch the schema to something else. doConnectionCleanup gives you the opportunity to reset it. Again, you are supplied with a connection, upon which you can do,
super.doConnectionCleanup(connection);
Statement stmt = connection.createStatement();
try {
    // reset back to the default schema, e.g. "ALTER SESSION SET CURRENT_SCHEMA=MySchema"
    stmt.execute(sqlToUpdateSchema);
} finally {
    stmt.close();
}
Note that in both cases, invoking the corresponding super class method is important to ensure you don't wipe out the database-specific initialization/cleanup code that WebSphere Application Server has built in based on the database.
As far as I know, WebLogic only allows setting a default schema by setting Init SQL to a SQL string which sets the current schema in the database, such as ALTER SESSION SET CURRENT_SCHEMA=MySchema. So, this answer is assuming the only way to set the current schema of a data source is via SQL.
In WebSphere, the closest thing to WebLogic's Init SQL is the preTestSQLString property on WebSphere.
The idea of the preTestSQLString property is that WebSphere will execute a very simple SQL statement to verify that you can connect to your database properly when the server is starting. Typical values for this property are really basic things like select 1 from dual, but since you can put in whatever SQL you want, you could set preTestSQLString to ALTER SESSION SET CURRENT_SCHEMA=MySchema.
Steps from the WebSphere documentation (link):
In the administrative console, click Resources > JDBC providers.
Select a provider and click Data Sources under Additional properties.
Select a data source and click WebSphere Application Server data source properties under Additional properties.
Select the PreTest Connections check box.
Type a value for the PreTest Connection Retry Interval, which is measured in seconds. This property determines the frequency with which a new connection request is made after a pretest operation fails.
Type a valid SQL statement for the PreTest SQL String. Use a reliable SQL command, with minimal performance impact; this statement is processed each time a connection is obtained from the free pool.
For example, "select 1 from dual" in oracle or "SQL select 1" in SQL Server.
Universal Connection Pool (UCP) is a Java connection pool, and the whitepaper "UCP with WebSphere" shows how to set up UCP as a datasource.
For a JDBC datasource the steps are similar, but you can choose the default JDBC driver option.
Check out the paper for reference.
I'm using SQL Server and ASP.NET. I have the following function:
Using js = daoFactory.CreateJoinScope()
    Using tran = New Transactions.TransactionScope()
        '...
        tran.Complete()
    End Using
End Using
However, the following exception is thrown:
The transaction manager has disabled its support for remote/network transactions.
Description of JoinScope:
Public Class JoinScope
    Implements IJoinScope
    Implements IDisposable
    '...
End Class
I have worked this way in another application with the same environment without a problem, but here I get this exception. What could I do to fix the issue?
Make sure that the "Distributed Transaction Coordinator" Service is
running on both database and client.
Also make sure you check "Network DTC Access", "Allow Remote Client",
"Allow Inbound/Outbound" and "Enable TIP".
To enable Network DTC Access for MS DTC transactions
Open the Component Services snap-in.
To open Component Services, click Start. In the search box, type dcomcnfg, and then press ENTER.
Expand the console tree to locate the DTC (for example, Local DTC) for which you want to enable Network MS DTC Access.
On the Action menu, click Properties.
Click the Security tab and make the following changes:
In Security Settings, select the Network DTC Access check box.
In Transaction Manager Communication, select the Allow Inbound and Allow Outbound check boxes.
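To verify the first point above (that the DTC service is actually running on both machines) you can also check it programmatically. A small sketch using System.ServiceProcess; "DbServerName" is a placeholder:

using System;
using System.ServiceProcess;

class DtcServiceCheck
{
    static void Main()
    {
        // "." is the local machine; replace "DbServerName" with the database server's name.
        foreach (var machine in new[] { ".", "DbServerName" })
        {
            using (var sc = new ServiceController("MSDTC", machine))
            {
                Console.WriteLine("MSDTC on " + machine + ": " + sc.Status);
            }
        }
    }
}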
I had a stored procedure that calls another stored procedure on a linked server. When I executed it in SSMS it was OK, but when I called it from the application (via Entity Framework), I got this error.
This article helped me and I used this script:
EXEC sp_serveroption @server = 'LinkedServer IP or Name', @optname = 'remote proc transaction promotion', @optvalue = 'false';
for more detail look at this:
Linked server : The partner transaction manager has disabled its support for remote/network transactions
In my scenario, the exception was being thrown because I was trying to create a new connection instance within a TransactionScope on an already existing connection:
Example:
void someFunction()
{
    using (var db = new DBContext(GetConnectionString()))
    {
        using (var transaction = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted }))
        {
            someOtherFunction(); // This function opens a new connection within this transaction, causing the exception.
        }
    }
}

void someOtherFunction()
{
    using (var db = new DBContext(GetConnectionString()))
    {
        db.Whatever // <- Exception.
    }
}
I was getting this issue intermittently. I had followed the instructions here and very similar ones elsewhere, and everything was configured correctly.
This page: http://sysadminwebsite.wordpress.com/2012/05/29/9/ helped me find the problem.
Basically, I had duplicate CIDs for the MSDTC across both servers (HKEY_CLASSES_ROOT\CID).
See http://msdn.microsoft.com/en-us/library/aa561924.aspx, section "Ensure that MSDTC is assigned a unique CID value".
I am working with virtual servers, and our server team likes to use the same image for every server. It's a simple fix and we didn't need a restart, but the DTC service did need to be set to Automatic startup and started after the re-install.
Comment from answer: "make sure you use the same open connection for all the database calls inside the transaction. – Magnus"
Our users are stored in a separate db from the data I was working with in the transactions. Opening the db connection to get the user was causing this error for me. Moving the other db connection and user lookup outside of the transaction scope fixed the error.
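To make that concrete, here is a rough sketch of the shape of the fix, using plain SqlConnection rather than the original code and with hypothetical connection strings, tables and SQL. The lookup on the second database runs before the scope, so only one connection ever enlists and the transaction is never promoted to MSDTC.

using System.Data.SqlClient;
using System.Transactions;

static class TransactionScopeFix
{
    // Hypothetical connection strings for the two databases involved.
    const string UsersCs = "Server=.;Database=Users;Integrated Security=true";
    const string OrdersCs = "Server=.;Database=Orders;Integrated Security=true";

    public static void SaveOrder(int userId)
    {
        // 1) Query the users database BEFORE the transaction starts, so it never enlists.
        string userName;
        using (var users = new SqlConnection(UsersCs))
        using (var cmd = new SqlCommand("SELECT Name FROM Users WHERE Id = @id", users))
        {
            cmd.Parameters.AddWithValue("@id", userId);
            users.Open();
            userName = (string)cmd.ExecuteScalar();
        }

        // 2) Only one connection is opened inside the scope, so it stays a local transaction.
        using (var scope = new TransactionScope())
        using (var orders = new SqlConnection(OrdersCs))
        using (var insert = new SqlCommand("INSERT INTO Orders (CreatedBy) VALUES (@by)", orders))
        {
            insert.Parameters.AddWithValue("@by", userName);
            orders.Open();
            insert.ExecuteNonQuery();
            scope.Complete();
        }
    }
}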
I post the solution below because after some searching this is where I landed, so others may too. I was trying to use EF 6 to call a stored procedure, but got a similar error because the stored procedure used a linked server:
The operation could not be performed because OLE DB provider _ for linked server _ was unable to begin a distributed transaction
The partner transaction manager has disabled its support for remote/network transactions
Jumping over to SQL Client did fix my issue, which also confirmed for me that it was an EF thing.
EF model generated method based attempt:
db.SomeStoredProcedure();
ExecuteSqlCommand based attempt:
db.Database.ExecuteSqlCommand("exec [SomeDB].[dbo].[SomeStoredProcedure]");
With:
var connectionString = db.Database.Connection.ConnectionString;
var connection = new System.Data.SqlClient.SqlConnection(connectionString);
var cmd = connection.CreateCommand();
cmd.CommandText = "exec [SomeDB].[dbo].[SomeStoredProcedure]";
connection.Open();
var result = cmd.ExecuteNonQuery();
That code can be shortened, but I think that version is slightly more convenient for debugging and stepping through.
I don't believe that SqlClient is necessarily the preferred choice, but I felt this was at least worth sharing in case anyone else having similar problems lands here via Google.
The above code is C#, but the concept of switching over to SqlClient still applies. At the very least it will be diagnostic to attempt it.
I was having this issue with a linked server in SSMS while trying to create a stored procedure.
On the linked server, I changed the server option "Enable Promotion on Distributed Transaction" to False.
Screenshot of Server Options
If you cannot find the Local DTC in Component Services, try running this PowerShell script first:
$DTCSettings = @(
"NetworkDtcAccess", # Network DTC Access
"NetworkDtcAccessClients", # Allow Remote Clients ( Client and Administration)
"NetworkDtcAccessAdmin", # Allow Remote Administration ( Client and Administration)
"NetworkDtcAccessTransactions", # (Transaction Manager Communication )
"NetworkDtcAccessInbound", # Allow Inbound (Transaction Manager Communication )
"NetworkDtcAccessOutbound" , # Allow Outbound (Transaction Manager Communication )
"XaTransactions", # Enable XA Transactions
"LuTransactions" # Enable SNA LU 6.2 Transactions
)
foreach ($setting in $DTCSettings)
{
    Set-ItemProperty -Path HKLM:\Software\Microsoft\MSDTC\Security -Name $setting -Value 1
}
Restart-Service msdtc
And it appears!
Source: The partner transaction manager has disabled its support for remote/network transactions
In case others have the same issue:
I had a similar error. It turned out I was wrapping several SQL statements in a transaction, one of which executed on a linked server (a MERGE statement inside an EXEC(...) AT Server statement). I resolved the issue by opening a separate connection to the linked server, encapsulating that statement in a try...catch, and aborting the transaction on the original connection if the catch is tripped.
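A rough sketch of that workaround, with placeholder connection strings and SQL; the point is only that the statement destined for the linked server runs on its own connection, outside the local transaction, and the local transaction is rolled back if it fails:

using System.Data.SqlClient;

static class LinkedServerWorkaround
{
    public static void Run(string localCs, string remoteCs)
    {
        using (var local = new SqlConnection(localCs))
        {
            local.Open();
            using (var tran = local.BeginTransaction())
            {
                try
                {
                    using (var localCmd = new SqlCommand("UPDATE dbo.LocalTable SET Flag = 1", local, tran))
                    {
                        localCmd.ExecuteNonQuery();
                    }

                    // Separate connection: this statement never enlists in the local transaction.
                    using (var remote = new SqlConnection(remoteCs))
                    using (var remoteCmd = new SqlCommand("EXEC dbo.MergeSomething", remote))
                    {
                        remote.Open();
                        remoteCmd.ExecuteNonQuery();
                    }

                    tran.Commit();
                }
                catch
                {
                    tran.Rollback();   // abort the local work if the remote statement failed
                    throw;
                }
            }
        }
    }
}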
I had the same error message. For me, changing pooling=False to pooling=true;Max Pool Size=200 in the connection string fixed the problem.
I am new to unit testing for web applications.
I have a function which creates a connection to a remote MySQL database and performs some operations on it.
I want to have a test case which tests whether the connection is closed after the operations on the database.
For example:
void fun1()
{
    OdbcConnection con = new OdbcConnection(connString);
    con.Open();
    // ... operations on the database; the connection is never closed
}
In the above function, the connection is not closed.
How do I check this? Can anyone help?
In .NET, it's generally best to open your connections immediately before you use them. So rather than building (and testing) a function that connects to the database, you build and test a function that returns the correct connection string. You also have a reference database for your testing environment, so you build your data access methods to create their own connections, and test them against the reference database to confirm that the right results come back.
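As a sketch of that idea (MSTest and all names here are assumptions, not the asker's code), the unit under test is the piece that builds the connection string, which is cheap and deterministic:

using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class ConnectionStrings
{
    // Hypothetical helper; use whatever ODBC driver name your environment actually has.
    public static string ForMySql(string server, string database)
    {
        return "Driver={MySQL ODBC 8.0 Unicode Driver};Server=" + server + ";Database=" + database + ";";
    }
}

[TestClass]
public class ConnectionStringTests
{
    [TestMethod]
    public void ForMySql_BuildsExpectedString()
    {
        var cs = ConnectionStrings.ForMySql("db.example.com", "sales");
        Assert.AreEqual("Driver={MySQL ODBC 8.0 Unicode Driver};Server=db.example.com;Database=sales;", cs);
    }
}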
Okay, based on your comment I can help you. Since you will be opening and closing the connection in the same function (as you should), you can do this:
public void fun1()
{
    using (OdbcConnection con = new OdbcConnection(connString))
    {
        con.Open();
        // use the connection here
    }
    // connection is closed here because of the using block, even if an exception is thrown
}
There is no need to check if the connection closes in the code above. It will be closed in a timely manner by the using block, and that's guaranteed as much as anything can be in software. Just make sure you use that pattern everywhere you use connections.
In unit testing, the "units" to be tested are methods/functions. You test that the function performs as you expect it to, and nothing more. If you want to test specifically if a connection is closed, than the way to do it is to write a function to close the connection, and test that.