Getting App_Start Code First Migrations to work with MiniProfiler - ef-code-first

I am running Code First Migrations (EF 4.3.1).
I am also running MiniProfiler.
I run my Code First Migrations from code in App_Start.
My code looks like this:
public static int IsMigrating = 0;

private static void UpdateDatabase()
{
    try
    {
        // Only let one thread kick off the migration.
        if (0 == System.Threading.Interlocked.Exchange(ref IsMigrating, 1))
        {
            try
            {
                // Automatically migrate database to catch up.
                Elmah.ErrorLog.GetDefault(null).Log(new Elmah.Error(new Exception("Checking db for pending migrations.")));
                var dbMigrator = new DbMigrator(new Ninja.Data.Migrations.Configuration());
                var pendingMigrations = string.Join(", ", dbMigrator.GetPendingMigrations().ToArray());
                Elmah.ErrorLog.GetDefault(null).Log(new Elmah.Error(new Exception("The database needs these code updates: " + pendingMigrations)));
                dbMigrator.Update();
                Elmah.ErrorLog.GetDefault(null).Log(new Elmah.Error(new Exception("Done upgrading database.")));
            }
            finally
            {
                System.Threading.Interlocked.Exchange(ref IsMigrating, 0);
            }
        }
    }
    catch (System.Data.Entity.Migrations.Infrastructure.AutomaticDataLossException ex)
    {
        Elmah.ErrorLog.GetDefault(null).Log(new Elmah.Error(ex));
    }
    catch (Exception ex)
    {
        Elmah.ErrorLog.GetDefault(null).Log(new Elmah.Error(ex));
    }
}
}
The problem is that just as dbMigrator.Update() is about to run, my app throws an exception, which I think comes from the first web page request, saying:
Unable to update database to match the current model because there are pending changes and automatic migration is disabled. Either write the pending model changes to a code-based migration or enable automatic migration. Set DbMigrationsConfiguration.AutomaticMigrationsEnabled to true to enable automatic migration.
In other words, I think my home page is spinning up the DbContext, and hitting this error, before UpdateDatabase() has finished.
How would you go about solving this?
Should I make the context wait using locks, etc., or is there an easier way?
More interestingly, if I start and stop the app a few times, the db changes get pushed and the error goes away...
So I need to find a way to have the first request to the database on App_Start wait for the migrations to happen.
Thoughts?
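One idea I'm toying with: drop the explicit App_Start call and let EF's MigrateDatabaseToLatestVersion initializer run the migrations on first context use instead, since (as far as I can tell) the initializer only runs once and requests that touch the context while it is initializing wait for it to finish. A rough sketch (NinjaContext is an assumed name for my code-first context):

using System.Data.Entity;

public static class DatabaseConfig
{
    public static void Configure()
    {
        // Runs the pending code-based migrations the first time the context is used,
        // before any query is served. NinjaContext is an assumed context type name.
        Database.SetInitializer(
            new MigrateDatabaseToLatestVersion<NinjaContext, Ninja.Data.Migrations.Configuration>());
    }
}

Application_Start would then just call DatabaseConfig.Configure() and the manual DbMigrator block would go away.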

Related

How to deal with RedisMessageListenerContainer death

I've encountered a case where the Redis pub/sub RedisMessageListenerContainer in my Spring Boot application died with:
ERROR .RedisMessageListenerContainer: SubscriptionTask aborted with exception:
org.springframework.dao.QueryTimeoutException: Redis command timed out; nested exception is com.lambdaworks.redis.RedisCommandTimeoutException: Command timed out
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:66)
at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:41)
at org.springframework.data.redis.PassThroughExceptionTranslationStrategy.translate(PassThroughExceptionTranslationStrategy.java:37)
at org.springframework.data.redis.FallbackExceptionTranslationStrategy.translate(FallbackExceptionTranslationStrategy.java:37)
at org.springframework.data.redis.connection.lettuce.LettuceConnection.convertLettuceAccessException(LettuceConnection.java:330)
at org.springframework.data.redis.connection.lettuce.LettuceConnection.subscribe(LettuceConnection.java:3179)
at org.springframework.data.redis.listener.RedisMessageListenerContainer$SubscriptionTask.eventuallyPerformSubscription(RedisMessageListenerContainer.java:790)
at org.springframework.data.redis.listener.RedisMessageListenerContainer$SubscriptionTask.run(RedisMessageListenerContainer.java:746)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.lambdaworks.redis.RedisCommandTimeoutException: Command timed out
at com.lambdaworks.redis.LettuceFutures.await(LettuceFutures.java:113)
at com.lambdaworks.redis.LettuceFutures.awaitOrCancel(LettuceFutures.java:92)
at com.lambdaworks.redis.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:63)
at com.lambdaworks.redis.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
at com.sun.proxy.$Proxy156.subscribe(Unknown Source)
at org.springframework.data.redis.connection.lettuce.LettuceSubscription.doSubscribe(LettuceSubscription.java:63)
at org.springframework.data.redis.connection.util.AbstractSubscription.subscribe(AbstractSubscription.java:142)
at org.springframework.data.redis.connection.lettuce.LettuceConnection.subscribe(LettuceConnection.java:3176)
... 3 common frames omitted
..
I don't think that should be an unrecoverable error in the first place, since it's a temporary connection issue (and a TransientDataAccessException), but apparently the application needs to deal with exceptions in those cases itself.
Currently this leaves the application in an unacceptable state: it merely logs the error. I would either need to kill the application so it gets replaced, or better, try to restart that container, and ideally report via /health that the application is impacted as long as it hasn't fully recovered.
Is there anything I'm overlooking that is less awkward than either calling start() on the container every x seconds, or subclassing it and overriding handleSubscriptionException() to act from there? The latter needs much deeper integration with the container's internals than I'd like to have in my code, but it's what I've gone with so far:
RedisMessageListenerContainer container = new RedisMessageListenerContainer() {
    @Override
    protected void handleSubscriptionException(Throwable ex) {
        super.handleSubscriptionException(ex); // don't know what actually happened in here and no way to find out :/
        if (ex instanceof RedisConnectionFailureException) {
            // handled by super hopefully, don't care
        } else if (ex instanceof InterruptedException) {
            // can ignore those I guess
        } else if (ex instanceof TransientDataAccessException || ex instanceof RecoverableDataAccessException) {
            // try to restart in those cases?
            if (isRunning()) {
                logger.error("Connection failure occurred. Restarting subscription task manually due to " + ex, ex);
                sleepBeforeRecoveryAttempt();
                start(); // best we can do
            }
        } else {
            // otherwise shut down and hope for the best next time
            if (isRunning()) {
                logger.warn("Shutting down application due to unknown exception " + ex, ex);
                context.close(); // context: the enclosing ApplicationContext
            }
        }
    }
};

Will a scheduled Job/Trigger in RAMJobStore be lost if we set App_offline.htm

Currently, I plan to run a schedule that sends an email every week.
What I hope is that the trigger will stop when the app is offline, and then be rescheduled again on App_Start.
After reading the documentation, I still can't figure it out.
I tried it on my local machine, and it seems like the RAMJobStore keeps running even when the app is offline. How can I stop it when bringing the app offline?
Please share some ideas or information. Thanks.
After several tries, I found out that the trigger stops when App_offline.htm is in the app folder.
I set up the code below, then clicked the Start button to kick off the schedule.
I should receive a total of 5 emails, one every 30 seconds. If the testing email fails to send, I should receive an error email instead.
After receiving the 2nd email, I copied App_offline.htm into the folder. From then on, I never received anything else. Of course, nothing happened after I removed App_offline.htm either. Thanks!
P.S.: the code is just as shown; no extra Web.config settings have been applied so far.
public ActionResult Start()
{
    IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
    scheduler.Start();

    IJobDetail job = JobBuilder.Create<Jobclass>().Build();
    ITrigger trigger = TriggerBuilder.Create()
        .WithIdentity("trigger1", "group1")
        .StartNow()
        .WithSimpleSchedule(x => x
            .WithIntervalInSeconds(30)
            .WithRepeatCount(4))
        .Build();

    scheduler.ScheduleJob(job, trigger);
    return RedirectToAction("Index");
}

public class Jobclass : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        string sMailStatus = MyClass.mailTemplate("myEmail@gmail.com", "myEmail@gmail.com",
            "Testing Email - " + DateTime.Now.ToShortDateString(), "Testing");
        if (sMailStatus != "sent")
        {
            MyClass.mailTemplate("myEmail@gmail.com", "myEmail@gmail.com",
                "Error : Testing Email", sMailStatus);
        }
    }
}
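For what it's worth, this behaviour follows from RAMJobStore keeping all scheduler state in memory: dropping App_offline.htm makes IIS unload the AppDomain, so the scheduler and its triggers simply disappear until something schedules them again on the next start. If the schedule ever needed to survive a restart instead, Quartz.NET could be pointed at a persistent ADO.NET job store; a minimal, hedged sketch (the connection string is a placeholder and the exact property values depend on the Quartz.NET version in use):

using System.Collections.Specialized;
using Quartz;
using Quartz.Impl;

public static class PersistentSchedulerFactory
{
    public static IScheduler Create()
    {
        // Keep jobs/triggers in SQL Server instead of RAM so they survive AppDomain recycles.
        var props = new NameValueCollection
        {
            { "quartz.jobStore.type", "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" },
            { "quartz.jobStore.driverDelegateType", "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz" },
            { "quartz.jobStore.dataSource", "default" },
            { "quartz.jobStore.useProperties", "true" },
            { "quartz.dataSource.default.connectionString", "Server=.;Database=Quartz;Trusted_Connection=True;" }, // placeholder
            { "quartz.dataSource.default.provider", "SqlServer-20" } // provider name varies by Quartz.NET version
        };

        return new StdSchedulerFactory(props).GetScheduler();
    }
}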

Can't get user information after login successfully in WSO2 Identity server

After logging in successfully to WSO2 IS with the service URL (https://localhost:9443/services/),
I tried to get the user information as below:
try {
    UserRealm realm = WSRealmBuilder.createWSRealm(serviceURL, authCookie, configCtx);
    UserStoreManager storeManager = realm.getUserStoreManager();
} catch (Exception e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
But I got an exception relating to this, shown in the image below, and I can't get any info.
I experimented and found that the main problem is that I can't create the ConfigurationContext with the following code:
configCtx = ConfigurationContextFactory.createConfigurationContextFromFileSystem(null, null);
I also read about ConfigurationContext at the link below and tried the other methods described there, but I still can't create the ConfigurationContext.
http://axis.apache.org/axis2/java/core/apidocs/org/apache/axis2/context/ConfigurationContextFactory.html
I appreciate your help in this case.
Thanks
The problem is that your runtime doesn't have the org.wso2.carbon.user.api.UserStoreException class, so you can't identify the real exception.
For now, just catch Exception e instead, and see if you can log the real exception.

Hangfire and ASP.NET MVC

I'm currently using Hangfire in an ASP.NET MVC 5 project, which uses Ninject to share the same EF context in request scope.
In Hangfire dashboard, I get random errors like:
System.Data.Entity.Core.EntityException: An error occurred while starting a transaction on the provider connection. See the inner exception for details. ---> System.Data.SqlClient.SqlException: New transaction is not allowed because there are other threads running in the session.
How can I make Entity, ASP.NET and Hangfire work without getting all those transaction errors?
I bet those errors can also happen on the other side (in the web app).
We also encountered issues like this with Hangfire alongside Ninject, so we actually created a separate kernel for Hangfire, where everything is bound in thread scope. Something like this:
public class NinjectHangfire
{
    public static IKernel CreateKernelForHangfire()
    {
        var kernel = new StandardKernel(/*modules*/);
        try
        {
            kernel.Bind<Func<IKernel>>().ToMethod(ctx => () => new Bootstrapper().Kernel).InThreadScope();
            kernel.Bind<IHttpModule>().To<HttpApplicationInitializationHttpModule>().InThreadScope();
            //other bindings
            return kernel;
        }
        catch
        {
            kernel.Dispose();
            throw;
        }
    }
}
And then in Startup:
GlobalConfiguration.Configuration.UseNinjectActivator(NinjectHangfire.CreateKernelForHangfire());
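In case it helps, this is roughly what one of the "other bindings" could look like for the EF context itself, so each Hangfire worker thread resolves its own instance rather than the request-scoped one (AppDbContext is a hypothetical stand-in for the real context type):

using Ninject;
using System.Data.Entity;

// Hypothetical stand-in for the real EF context type.
public class AppDbContext : DbContext { }

public static class HangfireBindings
{
    public static void Register(IKernel kernel)
    {
        // Thread scope: each Hangfire worker thread gets its own context
        // instead of sharing the request-scoped one from the web kernel.
        kernel.Bind<AppDbContext>().ToSelf().InThreadScope();
    }
}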

SDL Tridion 2009: Creating components through TOM API (via Interop) fails

I am facing a problem while creating components through the TOM API using .NET/COM Interop.
Actual issue:
I have 550 components to be created through a custom page. I am able to create between 400 and 470 components, but after that it fails and throws an error message saying:
Error: Thread was being aborted.
Any idea or suggestion why it is failing?
OR
Is there any restriction on Tridion 2009?
UPDATE 1:
As per @user978511's request, below is the error from the Application event log:
Event code: 3001
Event message: The request has been aborted.
...
...
Process information:
Process ID: 1016
Process name: w3wp.exe
Account name: NT AUTHORITY\NETWORK SERVICE
Exception information:
Exception type: HttpException
Exception message: Request timed out.
...
...
...
UPDATE 2:
@Chris: This is my common function, which is called in a loop, passing a list of params. Here I am using the Interop DLLs.
public static bool CreateFareComponent(.... list of params ...)
{
    TDSE mTDSE = null;
    Folder mFolder = null;
    Component mComponent = null;
    bool flag = false;
    try
    {
        mTDSE = TDSEInitialize();
        mComponent = (Component)mTDSE.GetNewObject(ItemType.ItemTypeComponent, folderID, null);
        mComponent.Schema = (Schema)mTDSE.GetObject(constants.SCHEMA_ID, EnumOpenMode.OpenModeView, null, XMLReadFilter.XMLReadAll);
        mComponent.Title = compTitle;
        ...
        ...
        ...
        ...
        mComponent.Save(true);
        flag = true;
    }
    catch (Exception ex)
    {
        CustomLogger.Error(String.Format("Logged User: {0} \r\n Error: {1}", GetRemoteUser(), ex.Message));
    }
    return flag;
}
Thanks in advance.
Sounds like a timeout, most likely in IIS which is hosting your custom page.
Are you creating them all in one synchronous request? Because that is indeed likely to time out.
You could instead create them in batches - or make sure your operations are done asynchronously and then poll the status regularly.
The easiest would just be to only create say 10 Components in one request, wait for it to finish, and then create another 10 (perhaps with a nice progress bar? :))
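A rough sketch of what that batched approach could look like on the server side, so the page can fire several small requests (with a bit of JavaScript polling) instead of one huge one. GetPendingFareItems() and the single-item CreateFareComponent call are assumptions standing in for the question's own data source and helper:

using System.Linq;
using System.Web.Mvc;

public class FareBatchController : Controller
{
    // Hedged sketch: create a small slice per request so no single request
    // runs long enough to hit the IIS execution timeout. The page calls this
    // repeatedly until "done" comes back true.
    [HttpPost]
    public ActionResult CreateBatch(int skip, int take = 10)
    {
        // GetPendingFareItems() is hypothetical; it stands in for however the 550 items are loaded.
        var batch = GetPendingFareItems().Skip(skip).Take(take).ToList();

        foreach (var item in batch)
        {
            // CreateFareComponent is the question's helper, assumed here to accept a single item.
            CreateFareComponent(item);
        }

        return Json(new { created = batch.Count, done = batch.Count < take });
    }
}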
How are you calling the TDSE object? I would like to mention the "Marshal.ReleaseComObject" procedure here: not releasing COM objects can lead to enormous memory leaks.
Here is code for creating a component:
private Component NewComponent(string componentName, string publicationID, string parentID, string schemaID)
{
    Publication publication = (Publication)mTdse.GetObject(publicationID, EnumOpenMode.OpenModeView, null, XMLReadFilter.XMLReadContext);
    Folder folder = (Folder)mTdse.GetObject(parentID, EnumOpenMode.OpenModeView, null, XMLReadFilter.XMLReadContext);
    Schema schema = (Schema)mTdse.GetObject(schemaID, EnumOpenMode.OpenModeView, publicationID, XMLReadFilter.XMLReadContext);
    Component component = (Component)mTdse.GetNewObject(ItemType.ItemTypeComponent, folder, publication);
    component.Title = componentName;
    component.Schema = schema;
    return component;
}
After that, please don't forget to release mTdse (in my case it is the previously created TDSE object). Releasing the Component objects after you finish working with them can also be useful.
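To make the release explicit, something along these lines can be used once the objects are no longer needed (a small helper sketch; the object names follow the method above):

using System.Runtime.InteropServices;

public static class ComHelper
{
    // Release COM-backed TOM objects so their runtime callable wrappers
    // don't accumulate in the w3wp process.
    public static void ReleaseAll(params object[] comObjects)
    {
        foreach (var comObject in comObjects)
        {
            if (comObject != null && Marshal.IsComObject(comObject))
            {
                Marshal.ReleaseComObject(comObject);
            }
        }
    }
}

// e.g. after saving: ComHelper.ReleaseAll(component, schema, folder, publication, mTdse);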
For large Tridion batch operations I always use a Console Application and run it directly on the server.
Use Console.WriteLine to write to the output window and Console.ReadLine as the last line of code in the app (so the window stays open). I also use Log4Net as the logger.
This is by far the best approach if you have access to a remote session on the server - or can ask an admin to run it for you and give you access to the log folder via a network share.
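A bare-bones skeleton of that console approach (Log4Net wiring omitted):

using System;

class BatchCreator
{
    static void Main()
    {
        Console.WriteLine("Starting component creation...");

        // ... loop over the items and create the components here, writing progress as you go ...

        Console.WriteLine("Done.");
        Console.ReadLine(); // last line of code, so the window stays open
    }
}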
As per @Chris's suggestions, and as part of an immediate fix, I have changed my web.config execution timeout to 8000 seconds.
<httpRuntime executionTimeout="8000"/>
With this change, the custom page is able to cope for now.
If anyone has a better suggestion, please post it.
