I have three Ajax calls running the code below at the same time, but one of them throws an exception whose message says "The underlying provider failed on Open."
try
{
    orderRepository orderRepo = new orderRepository(); // get context (MySQL)
    var result = (from x in orderRepo.orders
                  where x.orderid == orderno
                  select new { x.tracking, x.status, x.charged }).SingleOrDefault();
    charged = result.charged;
}
catch (Exception e)
{
    log.Error(e.Message); // The underlying provider failed on Open.
}
If I then re-run the one Ajax call that failed, it goes through. This happens to one of three concurrent calls, sometimes two of five. I guess it is because all the calls try to use the database at the same time, but I couldn't find a solution.
This is my connection string:
<add name="EFMysqlContext" connectionString="server=10.0.0.10;User Id=root;pwd=xxxx;Persist Security Info=True;database=shop_db" providerName="Mysql.Data.MySqlClient" />
Does anybody know a solution, or something I can try? Please advise me. Thanks!
It sounds like a problem caused by concurrent connections to the database using the same username. Have you tried destroying/disposing the repository (or connection) object after using it?
Give it a try:
try
{
    using (orderRepository orderRepo = new orderRepository()) // get context (MySQL)
    {
        var result = (from x in orderRepo.orders
                      where x.orderid == orderno
                      select new { x.tracking, x.status, x.charged }).SingleOrDefault();
        charged = result.charged;
    } // orderRepo object automatically gets disposed here
}
catch (Exception e)
{
    log.Error(e.Message); // The underlying provider failed on Open.
}
Not sure if it matters, but your provider name is Mysql.Data.MySqlClient and not MySql.Data.MySqlClient; if the provider lookup is case-sensitive, this could be the cause.
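If that turns out to be the issue, the connection string from your question would become the following (only the providerName changes, everything else is yours as-is):
<add name="EFMysqlContext" connectionString="server=10.0.0.10;User Id=root;pwd=xxxx;Persist Security Info=True;database=shop_db" providerName="MySql.Data.MySqlClient" />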
I'm using Lucene in an ASP.NET Core project, but every 2-4 days my index gets broken. So I logged the exceptions and got the following stack trace:
2017-09-06 10:13:50.8338|An unhandled exception has occurred: Lock
obtain timed out:
SimpleFSLock#e:\inetpub\Static_Data\GKHUB\lucene_index\write.lock:
System.IO.IOException: lockFile
'e:\inetpub\Static_Data\GKHUB\lucene_index\write.lock' alredy
exists.EXCEPTION OCCURRED:Lucene.Net.Store.LockObtainFailedException
After some research I found out that:
This exception is thrown when the write.lock could not be acquired. This happens when a writer tries to open an index that another writer already has open.[ src ]
So the second update request couldn't update the index. But why is the whole index broken afterwards?
My code doesn't do anything special:
public void UpdateIndex(Document doc, int idToUpdate)
{
    var indexwriterConfig = new IndexWriterConfig(LuceneVersion.LUCENE_48, Analyzer);
    indexwriterConfig.WriteLockTimeout = 5000; // doesn't fix the problem
    using (var writer = new IndexWriter(GetLuceneDirectory, indexwriterConfig)) {
        try {
            writer.UpdateDocument(new Term("Id", idToUpdate.ToString()), doc);
writer.Commit();
}
catch (Exception e) {
Debug.WriteLine(e);
writer.Rollback();
}
}
}
As you already mention in the question:
LockObtainFailedException is thrown when the write.lock could not be
acquired. This happens when a writer tries to open an index that
another writer already has open.
That's what's happening here: with multiple updates, your code creates multiple instances of IndexWriter, each trying to obtain the lock on the index. You should reuse a single writer instead of closing and opening/creating a new one for every update; this should solve your problem (see the sketch below).
Also, do not forget that:
IndexWriter instances are completely thread safe, meaning multiple
threads can call any of its methods, concurrently
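A minimal sketch of one way to keep a single shared writer (assumptions: a process-wide static holder fits your hosting model, the index path is taken from your log output, and the class name is illustrative):

using System;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Lucene.Net.Util;

public static class IndexWriterHolder
{
    // One writer per index for the whole process; IndexWriter is thread safe.
    private static readonly Lazy<IndexWriter> _writer = new Lazy<IndexWriter>(() =>
    {
        // Assumption: this is the index directory from your log output.
        var dir = FSDirectory.Open(@"e:\inetpub\Static_Data\GKHUB\lucene_index");
        var config = new IndexWriterConfig(LuceneVersion.LUCENE_48,
            new StandardAnalyzer(LuceneVersion.LUCENE_48));
        return new IndexWriter(dir, config);
    });

    public static IndexWriter Writer => _writer.Value;
}

UpdateIndex would then call IndexWriterHolder.Writer.UpdateDocument(...) followed by Commit(), without disposing the writer, so no second writer ever competes for write.lock.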
Is it possible to execute a query asynchronously in Hive server?
For example, how can I (or is it possible to) do something like this from the client:
QueryHandle handle = executeAsyncQuery(hiveQuery);
Status status = handle.checkStatus();
if(status.isCompleted()) {
QueryResult result = handle.fetchResult();
}
I also had a look at "How do I make an async call to Hive in Java?", but it did not help. The answers were mostly about the Thrift clients taking a callback argument.
Any help would be appreciated. Thanks!
[EDIT 1]
I went through HiveConnection.java in hive-jdbc. hive-jdbc uses the async Thrift APIs by default: it submits a query and then polls for result sets (see HiveStatement.java). With that I was able to write a piece of code that is purely non-blocking. But the problem is that as soon as the client disconnects, the footprint of the query is lost.
Client 1
final TCLIService.Client client = new TCLIService.Client(createBinaryTransport(host, port, loginTimeout, sessConf, false)); // from HiveConnection.java
TSessionHandle sessionHandle = openSession(client); // from HiveConnection.java
TExecuteStatementReq execReq = new TExecuteStatementReq(sessionHandle, sql);
execReq.setRunAsync(true);
execReq.setConfOverlay(sessConf);
// ExecuteStatement returns a TExecuteStatementResp; its operation handle identifies the query
final TOperationHandle opHandle = client.ExecuteStatement(execReq).getOperationHandle();
writeHandleToFile("~/handle", opHandle);
Client 2
final TOperationHandle opHandle = readHandleFromFile("~/handle");
final TCLIService.Client client = new TCLIService.Client(createBinaryTransport(host, port, loginTimeout, sessConf, false));
final TGetOperationStatusReq statusReq = new TGetOperationStatusReq(opHandle);
while (true) {
    System.out.println(client.GetOperationStatus(statusReq).getOperationState());
    Thread.sleep(1000);
}
Client 2 keeps printing FINISHED_STATE as long as Client 1 is alive. But as soon as the Client 1 process completes or gets killed, Client 2 starts printing null, which means HiveServer2 cleans up a query's resources as soon as its client disconnects.
Is it possible to configure HiveServer2 so that this clean-up is based on a timeout or something similar?
Thanks!
Did some research and figured out that this happens only with the binary transport (TCP):
@Override
public void deleteContext(ServerContext serverContext,
TProtocol input, TProtocol output) {
Metrics metrics = MetricsFactory.getInstance();
if (metrics != null) {
try {
metrics.decrementCounter(MetricsConstant.OPEN_CONNECTIONS);
} catch (Exception e) {
LOG.warn("Error Reporting JDO operation to Metrics system", e);
}
}
ThriftCLIServerContext context = (ThriftCLIServerContext) serverContext;
SessionHandle sessionHandle = context.getSessionHandle();
if (sessionHandle != null) {
LOG.info("Session disconnected without closing properly, close it now");
try {
cliService.closeSession(sessionHandle);
} catch (HiveSQLException e) {
LOG.warn("Failed to close session: " + e, e);
}
}
}
The above stub (from ThriftBinaryCLIService) is executed through this piece of code from TThreadPoolServer, which ThriftBinaryCLIService uses:
eventHandler.deleteContext(connectionContext, inputProtocol,
outputProtocol);
Apparently the HTTP transport (ThriftHttpCLIService) has a different strategy for cleaning up operation handles (not as greedy as TCP).
I will check with the Hive community to understand this a bit more and see whether an issue addressing it already exists.
This code runs in an ASP.NET app on Linux and calls one of my services. (WCF doesn't currently work on Mono, which is why I'm using ASMX.) The code works as intended when running from Windows (while debugging), but as soon as I deploy to Linux it stops working. I'm definitely baffled. I've tested the service thoroughly and the service is fine.
Here is the code producing the error (NewVisitor is a void method that takes three strings):
//This does not work.
try
{
var client = new Service1SoapClient();
var results = client.NewVisitor(Request.UserHostAddress, Request.UrlReferrer == null ? String.Empty : Request.UrlReferrer.ToString(), Request.UserAgent);
Logger.Debug("Result of client: " + results);
}
Here is the error generated: Object reference not set to an instance of an object
Here is the code that works perfectly:
//This works (from the service)
[WebMethod(CacheDuration = _cacheTime, Description = "Returns a List of Dates", MessageName = "GetDates")]
public List<MySqlDateTime> GetDates()
{
return db.GetDates();
}
//Here is the client code that calls the method above
var client = new Service1Soap12Client();
var dbDates = client.GetDates();
I'd love to figure out why it is saying that the object is not set.
Methods tried:
- a new SOAP client;
- a new SOAP client with binding and endpoint address specified;
- a channel factory to create and open the channel.
If more info is needed I can give more. I'm out of ideas.
It looks like a bug in Mono. You should file a bug report with a reproducible test case so it can be fixed (and possibly get a workaround you can use in the meantime).
Unfortunately, I don't have Linux to test on, but I'd suggest you put the client variable in a using() statement:
using(var client = new Service1SoapClient())
{
var results = client.NewVisitor(Request.UserHostAddress, Request.UrlReferrer == null ?
String.Empty : Request.UrlReferrer.ToString(), Request.UserAgent);
Logger.Debug("Result of client: " + results);
}
I hope it helps.
RC.
I am doing a project in Wicket and came across the following message. How do I solve this problem?
WicketMessage: Can't instantiate page using constructor public itucs.blg361.g03.HomePage()
Root cause:
java.lang.UnsupportedOperationException: [SQLITE_BUSY] The database file is locked (database is locked)
at itucs.blg361.g03.CategoryEvents.CategoryEventCollection.getCategoryEvents(CategoryEventCollection.java:41)
public List<CategoryEvent> getCategoryEvents() {
List<CategoryEvent> categoryEvents = new LinkedList<CategoryEvent>();
try {
String query = "SELECT id, name, group_id"
+ " FROM event_category";
Statement statement = this.db.createStatement();
ResultSet result = statement.executeQuery(query);
while (result.next()) {
int id = result.getInt("id");
String name = result.getString("name");
int group_id = result.getInt("group_id");
categoryEvents.add(new CategoryEvent(id, name, group_id));
}
} catch (SQLException ex) {
throw new UnsupportedOperationException(ex.getMessage());
}
return categoryEvents;
}
at itucs.blg361.g03.HomePage.(HomePage.java:71)
categories = categoryCollection.getCategoryEvents();
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
SQLite allows only one writer to the whole database at a time and, unless you selected the "WAL" journal mode, no readers while a write is in progress. Moreover, unless you explicitly ask it to wait, it simply returns the SQLITE_BUSY status for any attempt to access the database while a conflicting operation is running.
You can tell SQLite to wait for the database to become available for a specified amount of time. The C-level API is sqlite3_busy_timeout; I have never used SQLite from Java, though, so I don't know where to find it there.
(...) tell sqlite to wait for the database to become available for specified amount of time.
In order to do it from Java, run the following statement just like a simple SQL statement:
pragma busy_timeout=30000; -- Busy timeout set to 30000 milliseconds
I'm using the BookSleeve library in a C#/ASP.NET 4 application. Currently the RedisConnection object is a static object in my MonoLink class. Should I keep this connection open, or should I open and close it after each query/transaction (as I'm doing now)? Just slightly confused. Here's how I'm using it as of now:
public static MonoLink CreateMonolink(string URL)
{
redis.Open();
var transaction = redis.CreateTransaction();
string Key = null;
try
{
var IncrementTask = transaction.Strings.Increment(0, "nextmonolink");
if (!IncrementTask.Wait(5000))
{
transaction.Discard();
throw new System.TimeoutException("Monolink index increment timed out.");
}
// Increment complete
Key = string.Format("monolink:{0}", IncrementTask.Result);
var AddLinkTask = transaction.Strings.Set(0, Key, URL);
if (!AddLinkTask.Wait(5000))
{
transaction.Discard();
throw new System.TimeoutException("Add monolink creation timed out.");
}
// Run the transaction
var ExecTransaction = transaction.Execute();
if (!ExecTransaction.Wait(5000))
{
throw new System.TimeoutException("Add monolink transaction timed out.");
}
}
catch (Exception)
{
transaction.Discard();
throw; // rethrow, preserving the original stack trace
}
finally
{
redis.Close(false);
}
// Link has been added to redis
MonoLink ml = new MonoLink();
ml.Key = Key;
ml.URL = URL;
return ml;
}
Thanks in advance for any responses/insight. Also, is there any sort of official documentation for this library? Thank you, S.O. ^_^
According to the author of Booksleeve,
The connection is thread safe and intended to be massively shared;
don't do a connection per operation.
Should I be keeping this connection open, or should I be open/closing
it after each query/transaction (as I'm doing now)?
There is probably a little overhead if you open a new connection each time you want to make a query/transaction, and although Redis is designed for high numbers of concurrently connected clients, there might be performance problems when that number reaches the tens of thousands. As far as I know, connection pooling has to be done by the client library (Redis itself doesn't have this functionality), so you should check whether BookSleeve supports it. Otherwise, you should open the connection when your application starts and keep it open for the application's lifetime (unless you need parallel clients connected to Redis for some reason).
Also, is there any sort of official documentation for this library?
The only documentation I was able to find on how to use it was the tests folder in its source code.
For reference (continuing @bzlm's answer), I created a Singleton that always provides the same Redis connection using BookSleeve: if the connection is closed, a new one is created; otherwise the existing connection is served.
Look at this: https://stackoverflow.com/a/8777999/290343
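A minimal sketch of the idea (assumptions: BookSleeve's RedisConnection/RedisConnectionBase types are used as in the linked answer; the host and class names are illustrative, not from the question):

using BookSleeve;

public static class RedisConnectionGateway
{
    private static RedisConnection _connection;
    private static readonly object _syncLock = new object();

    public static RedisConnection GetConnection()
    {
        lock (_syncLock)
        {
            // Create a connection only if there isn't a usable one yet.
            if (_connection == null ||
                _connection.State == RedisConnectionBase.ConnectionState.Closing ||
                _connection.State == RedisConnectionBase.ConnectionState.Closed)
            {
                _connection = new RedisConnection("10.0.0.10"); // your redis host
                _connection.Wait(_connection.Open()); // block until the handshake completes
            }
            return _connection;
        }
    }
}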
You consume it like this:
RedisConnection connection = Redis.RedisConnectionGateway.Current.GetConnection();