I am using Entity Framework Code First, and I also have data-seeding code.
When I run my application, my database gets generated but is not seeded with my dummy data.
I have to run the application once more before all the data is populated.
Any idea why, and how to fix it so I don't have to run my app twice to get both the database and the data?
Thanks.
My context definition file is:
public class Context : DbContext
{
public DbSet<Task> Tasks { get; set; }
public DbSet<Agency> Agency { get; set; }
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
base.OnModelCreating(modelBuilder);
}
}
And here is my seeding file
public class Configuration : DbMigrationsConfiguration<Context>
{
public Configuration()
{
AutomaticMigrationsEnabled = true;
AutomaticMigrationDataLossAllowed = true;
}
protected override void Seed(Context context)
{
GenerateTasks(context);
context.SaveChanges();
}
private static void GenerateTasks(Context context)
{
if (context.Tasks.Any()) return;
context.Tasks.Add(new Task() { Name = "Received" });
}
}
And the hook that creates the database is:
Database.SetInitializer(new MigrateDatabaseToLatestVersion<Context, Configuration>());
var context = new Context();
context.Database.Initialize(true);
If your app is an ASP.NET application and the data layer lives in a separate assembly, then instead of configuring the initializer in code as you did, you can configure it directly in web.config. Maybe that is the reason for your problem.
So if it is an ASP.NET app, you can try the following:
(1)
Comment this out:
Database.SetInitializer(new MigrateDatabaseToLatestVersion<Context, Configuration>());
var context = new Context();
context.Database.Initialize(true);
(2)
In web.config, insert this right at the end, just before the closing </configuration> tag:
<entityFramework>
  <contexts>
    <context type="**fully-qualified name of your context class**, **assembly name of your context**">
      <databaseInitializer
        type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[**fully-qualified name of your context class**, **assembly name of your context**], [**fully-qualified configuration type**, **assembly name of your context**, Version=1.0.0.0, Culture=neutral]], EntityFramework" />
    </context>
  </contexts>
</entityFramework>
where **fully-qualified configuration type** is the class that contains your migrations configuration (something like [...]Context.Migrations.Configuration).
I use this configuration approach in my own projects and it works well!
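For illustration only, here is roughly what the filled-in element could look like, using made-up names (a context class MyApp.Data.Context and a configuration class MyApp.Data.Migrations.Configuration, both in an assembly called MyApp.Data):
<entityFramework>
  <contexts>
    <context type="MyApp.Data.Context, MyApp.Data">
      <databaseInitializer
        type="System.Data.Entity.MigrateDatabaseToLatestVersion`2[[MyApp.Data.Context, MyApp.Data], [MyApp.Data.Migrations.Configuration, MyApp.Data]], EntityFramework" />
    </context>
  </contexts>
</entityFramework>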
That's true. Call context.Tasks.Find(1) and then it will hit the database.
EF Code First uses this trick to postpone everything; this way the application's startup time seems much faster (but actually it isn't!).
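If you would rather not issue a throw-away query just to trigger the seed, initialization can also be forced explicitly at startup. This is essentially what the hook in the question already does; shown here only as a minimal sketch of the point above:
using (var context = new Context())
{
    // Any database access would do, but Initialize(force: true) runs the
    // MigrateDatabaseToLatestVersion initializer (and its Seed method) immediately
    // instead of waiting for the first query.
    context.Database.Initialize(force: true);
}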
We are trying to move to using an in-memory SQLite instance for our unit test automation, instead of SQL Server or SQL Express. We use Entity Framework Core.
I think I have everything configured correctly, but it's still failing, so I must be missing a step, but I'm not sure what it is.
In our test project's app.config, I've specified:
<connectionStrings>
<add name="BusinessDb" providerName="System.Data.SQLite.EF6" connectionString="data source=:memory:"/>
</connectionStrings>
Our production concrete class is a bit more complex (it has many more modelBuilder calls in the OnModelCreating() method and many more DbSet properties), but it is basically like this:
namespace Business.Base.Concrete
{
public class SqlBusinessDb
: DbContext
, IBusinessDb
{
public string ConnectionString { get; set; }
public SqlBusinessDb(string connectionString)
{
ConnectionString = connectionString;
}
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
base.OnConfiguring(optionsBuilder);
if (ConnectionString.Contains("memory"))
{
optionsBuilder
.UseLazyLoadingProxies()
.UseSqlite(ConnectionString,
options =>
options.CommandTimeout(SqlSettings.s_CommandTimeoutInSec.CurrentValue)
.MigrationsHistoryTable("_BusinessDB_Migrations"))
.AddInterceptors(new Deals.Base.SQL.SqlPerfCounterInterceptor());
}
else
{
optionsBuilder
.UseLazyLoadingProxies()
.UseSqlServer(ConnectionString,
options =>
options.CommandTimeout(SqlSettings.s_CommandTimeoutInSec.CurrentValue)
.MigrationsHistoryTable("_BusinessDB_Migrations")
.EnableRetryOnFailure())
.AddInterceptors(new Deals.Base.SQL.SqlPerfCounterInterceptor());
}
}
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
base.OnModelCreating(modelBuilder);
modelBuilder.Entity<BillingPlan>()
.HasMany(p => p.Companies)
.WithOne(a => a.BillingPlan)
.HasForeignKey(a => a.BillingPlan_Id);
}
public int ExecuteStoreCommand(string commandText, params object[] parameters)
{
return Database.ExecuteSqlRaw(commandText, parameters);
}
public DbSet<Features.FeatureOverride_Plan> FeaturesPlan { get; set; }
public DbSet<Business> Businesses { get; set; }
}
}
In our test project we call it like so:
public static TestBusinessDb GetInstance()
{
SqlBusinessDb realRepository = new SqlBusinessDb();
if (!_hasBeenMigrated)
{
_hasBeenMigrated = true;
DatabaseFacade dbf = realRepository.Database;
var issqlite = dbf.IsSqlite();
var tables = dbf.ExecuteSqlRaw("SELECT * FROM information_schema.tables;");
// for the Test Repository, we migrate once when we first try and connect.
realRepository.Database.Migrate();
}
}
This code fails on the "dbf.ExecuteSqlRaw()" line with:
Microsoft.Data.Sqlite.SqliteException : SQLite Error 1: 'no such table: information_schema.tables'.
If I remove that line, it fails on: realRepository.Database.Migrate(); with
Microsoft.Data.Sqlite.SqliteException : SQLite Error 1: 'no such table: _BusinessDB_Migrations'.
When debugging, it successfully ran the OnConfiguring and OnModelCreating methods, and I watched it execute a SQL command that created that table. dbf.ProviderName returns "Microsoft.EntityFrameworkCore.Sqlite". So why aren't the tables being found? Is there something else that needs to be in place that I'm missing?
It turned out that SQLite cannot handle our migrations anyway (its support for schema-changing migration operations is very limited), so it is not a viable option for us.
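As an aside on the "no such table" errors themselves: a SQLite :memory: database only exists for as long as a single open connection, so every new connection (and, by default, every new context) sees a fresh, empty database. A minimal sketch of the usual workaround, keeping one connection open and handing it to UseSqlite (the options-based constructor here is an assumption of mine; the original class takes a connection string):
using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;

// Keep this connection open for the lifetime of the test run; the in-memory
// database lives and dies with it.
var connection = new SqliteConnection("Data Source=:memory:");
connection.Open();

var options = new DbContextOptionsBuilder<SqlBusinessDb>()
    .UseSqlite(connection)   // pass the open connection rather than a connection string
    .Options;

// Hypothetical constructor accepting DbContextOptions, not shown in the original code.
using (var context = new SqlBusinessDb(options))
{
    context.Database.EnsureCreated();   // or Database.Migrate(), within SQLite's limits
}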
I have my connection string to SQL stored in the Web project in appsettings.json
"ConnectionStrings": {
"MyDbConnectionString": "***"
},
Then I added a DB context using Scaffold-DbContext:
Scaffold-DbContext -Connection "name=MyDbConnectionString" -Provider "Microsoft.EntityFrameworkCore.SqlServer" ... -Force
I can use the context in a controller and have no issues reading or writing data. However, I would like all my business logic to live in a separate class library. So here is my repository from my library:
public class MyRepository
{
private static MyContext CurrentContext
{
get { return new MyContext(); }
}
public static async void AddEventLog(EventLog eventLog)
{
using (var context = CurrentContext)
{
context.EventLog.Add(eventLog);
await context.SaveChangesAsync();
}
}
}
But it fails when it tries to write to the DB.
System.InvalidOperationException: 'A named connection string was used, but the name 'MyDbConnectionString' was not found in the application's configuration.
Should I be adding appsettings.json to the library project (this seems redundant and incorrect)? What am I missing? How do I reference back to the web project's appsettings.json file?
Any help would be greatly appreciated.
Here is my startup:
services.AddDbContext<MyContext>(options => options.UseSqlServer(Configuration.GetConnectionString("MyDbConnectionString")));
***** HERE ARE THE CHANGES I MADE TO GET THIS TO WORK *****
I believe I have found the issue, so here we go.
Remove the following from MySsdCaseContext:
public MySsdCaseContext()
{
}
and keep this one:
public MySsdCaseContext(DbContextOptions<MySsdCaseContext> options) : base(options)
{
}
For the purposes of fixing this, comment out the following in OnConfiguring:
if (!optionsBuilder.IsConfigured)
{
optionsBuilder.UseSqlServer("name=MySsdCaseDb");
}
In Startup.cs, add the following inside the ConfigureServices method:
services.AddDbContext<MySsdCaseContext>(options
=>options.UseSqlServer(Configuration.GetConnectionString("MySsdCaseDb")));
This should prompt you to add a reference to the MySsdCase.Core.Data class library, which you don't currently have. Basically, put the following at the top of Startup.cs:
using MySsdCase.Core.Data;
Ensure the following is inside MySsdCase.Web.csproj:
<ItemGroup>
<ProjectReference Include="..\MySsdCase.Core\MySsdCase.Core.csproj" />
</ItemGroup>
Do it like this...
public class EventLogRepository
{
private readonly MySsdCaseContext _context;
// The context is supplied via constructor injection by the DI container.
public EventLogRepository(MySsdCaseContext context)
{
_context = context;
}
public async Task AddEventLogAsync(EventLog eventLog)
{
var myVar = await _context.Set<ClientDetails>()
.AsNoTracking()
.Select(p => p)
.Take(2)
.ToListAsync();
}
}
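For completeness, a minimal sketch of the container wiring that makes the constructor injection above work (the AddScoped registration for the repository is my addition; only the AddDbContext call appears in the original post):
// In Startup.ConfigureServices:
services.AddDbContext<MySsdCaseContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("MySsdCaseDb")));

// Hypothetical registration so that EventLogRepository receives MySsdCaseContext
// through its constructor wherever the repository is itself injected.
services.AddScoped<EventLogRepository>();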
I think the overall problem was that there was no reference to the DAL from the BL in Startup.cs.
After watching the "Enhancements to Code First Migrations: Using HasDefaultSchema and ContextKey for Multiple Model Support" section of Julie Lerman's Pluralsight course, "Entity Framework 6: Ninja Edition - What's New in EF 6" (https://app.pluralsight.com/library/courses/entity-framework-6-ninja-edition-whats-new/table-of-contents), it seems there is a way to run multiple schemas under a single database in Entity Framework 6 using Code First Migrations...
However, based on the video, you still need to run these Package Manager Console commands for each project that houses a separate context (a sketch of the multi-context versions follows the list):
1. enable-migrations
2. add-migration [MIGRATION NAME]
3. update-database
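For example, with made-up type names for the Shipping context shown further down, the per-context versions of those commands look roughly like this (the -ContextTypeName, -MigrationsDirectory and -ConfigurationTypeName switches let each context keep its own migrations):
Enable-Migrations -ContextTypeName ShippingContext -MigrationsDirectory Migrations\Shipping
Add-Migration InitialShipping -ConfigurationTypeName Shipping.Migrations.Configuration
Update-Database -ConfigurationTypeName Shipping.Migrations.Configuration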
This is fine and good if you actually care about maintaining migrations going forward, which is not a concern of mine.
What I'd like to do is have each of my contexts' initializers set to DropCreateDatabaseAlways, so that when I start up my client app (in this case, an MVC site), Code First will create the database for the first context used, create that context's tables with the correct schema, and then create the tables for the rest of the contexts with their correct schemas.
I don't mind if the whole database is dropped and recreated every time I hit F5.
What is happening now is that only the tables for the last context accessed in the client app end up in the database... the tables of any contexts accessed before the last one get blown away.
I am currently using two contexts, a Billing context and a Shipping context.
Here is my code:
My client app is an MVC website, and its HomeController's Index method looks like this:
public ActionResult Index()
{
List<Shipping.Customer> shippingCustomers;
List<Billing.Customer> billingCustomers;
using (var shippingContext = new Shipping.ShippingContext())
{
shippingCustomers = shippingContext.Customers.ToList();
}
using (var billingContext = new Billing.BillingContext())
{
billingCustomers = billingContext.Customers.ToList();
}
}
Here is my DbMigrationsConfigurationClass and ShippingContext class for the Shipping Context:
internal sealed class Configuration : DbMigrationsConfiguration<ShippingContext>
{
public Configuration()
{
AutomaticMigrationsEnabled = false;
}
protected override void Seed(ShippingContext context)
{
}
}
public class ShippingContext : DbContext
{
public ShippingContext() : base("MultipleModelDb")
{
}
static ShippingContext()
{
Database.SetInitializer(new ShippingContextInitializer());
}
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.HasDefaultSchema("Shipping");
base.OnModelCreating(modelBuilder);
}
public DbSet<Customer> Customers { get; set; }
class ShippingContextInitializer : DropCreateDatabaseAlways<ShippingContext>
{
}
}
Likewise, here is the DbMigrationConfiguration class for the Billing Context and the BillingContext class:
internal sealed class Configuration : DbMigrationsConfiguration<BillingContext>
{
public Configuration()
{
AutomaticMigrationsEnabled = false;
}
protected override void Seed(BillingContext context)
{
}
}
public class BillingContext : DbContext
{
public BillingContext() : base("MultipleModelDb")
{
}
static BillingContext()
{
Database.SetInitializer(new BillingContextInitializer());
}
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.HasDefaultSchema("Billing");
base.OnModelCreating(modelBuilder);
}
public DbSet<Customer> Customers { get; set; }
class BillingContextInitializer : DropCreateDatabaseAlways<BillingContext>
{
}
}
Based on the order in which the contexts are called in the controller's action method, whichever context is accessed last is the only one whose tables are created... the other context's tables are wiped out.
I feel like what I'm trying to do is very simple, yet Code First Migrations, and trying to "shoehorn" Entity Framework into representing multiple contexts as separate schemas in the same physical database, seem a bit "hacky"...
I'm not that well versed in migrations to begin with, so what I'm trying to do might not make any sense at all.
Any feedback would be helpful.
Thanks,
Mike
I am currently working on an ASP.NET MVC 4 web application where we use NInject for dependency injection. We also use a UnitOfWork and repositories based on Entity Framework.
We would like to use Quartz.NET in our application to start some custom jobs periodically. I would like NInject to automatically bind the services that we need in our jobs.
It could be something like this:
public class DispatchingJob : IJob
{
private readonly IDispatchingManagementService _dispatchingManagementService;
public DispatchingJob(IDispatchingManagementService dispatchingManagementService )
{
_dispatchingManagementService = dispatchingManagementService ;
}
public void Execute(IJobExecutionContext context)
{
LogManager.Instance.Info(string.Format("Dispatching job started at: {0}", DateTime.Now));
_dispatchingManagementService.DispatchAtomicChecks();
LogManager.Instance.Info(string.Format("Dispatching job ended at: {0}", DateTime.Now));
}
}
So far, in our NInjectWebCommon, the binding is configured like this (using request scope):
kernel.Bind<IDispatchingManagementService>().To<DispatchingManagementService>();
Is it possible to inject the correct implementation into our custom job using NInject, and how? I have already read a few posts on Stack Overflow, but I could use some advice and an example using NInject.
Use a JobFactory in your Quartz scheduler, and resolve your job instance there.
So, in your NInject config, set up the job (I'm guessing at the correct NInject syntax here):
// Assuming you only have one IJob
kernel.Bind<IJob>().To<DispatchingJob>();
Then, create a JobFactory: [edit: this is a modified version of @BatteryBackupUnit's answer here]
public class NInjectJobFactory : IJobFactory
{
private readonly IResolutionRoot resolutionRoot;
public NInjectJobFactory(IResolutionRoot resolutionRoot)
{
this.resolutionRoot = resolutionRoot;
}
public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
{
// If you have multiple jobs, specify the name as
// bundle.JobDetail.JobType.Name, or pass the type, whatever
// NInject wants..
return this.resolutionRoot.Get<IJob>();
}
public void ReturnJob(IJob job)
{
this.resolutionRoot.Release(job);
}
}
Then, when you create the scheduler, assign the JobFactory to it:
private IScheduler GetSchedule(IResolutionRoot root)
{
var schedule = new StdSchedulerFactory().GetScheduler();
schedule.JobFactory = new NInjectJobFactory(root);
return schedule;
}
Quartz will then use the JobFactory to create the job, and NInject will resolve the dependencies for you.
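For context, here is a rough sketch of how that factory plugs into a normal Quartz.NET setup; the job identity and the five-minute interval below are placeholders of mine, not values from the question:
// Build the scheduler, attach the NInject-backed factory, and schedule the job.
var scheduler = new StdSchedulerFactory().GetScheduler();
scheduler.JobFactory = new NInjectJobFactory(kernel);   // IKernel implements IResolutionRoot

var job = JobBuilder.Create<DispatchingJob>()
    .WithIdentity("dispatchingJob")
    .Build();

var trigger = TriggerBuilder.Create()
    .StartNow()
    .WithSimpleSchedule(s => s.WithIntervalInMinutes(5).RepeatForever())
    .Build();

scheduler.ScheduleJob(job, trigger);
scheduler.Start();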
Regarding the scoping of the IUnitOfWork, as per a comment on the answer I linked, you can do the following:
// default for web requests
Bind<IUnitOfWork>().To<UnitOfWork>()
.InRequestScope();
// fall back to `InCallScope()` when there's no web request.
Bind<IUnitOfWork>().To<UnitOfWork>()
.When(x => HttpContext.Current == null)
.InCallScope();
There's only one caveat that you should be aware of:
With incorrect usage of async in a web request, you may mistakenly resolve an IUnitOfWork on a worker thread where HttpContext.Current is null. Without the fallback binding this would fail with an exception, which would show you that you've done something wrong. With the fallback binding, however, the issue may present itself in an obscured way: it may work sometimes and sometimes not, because there will be two (or even more) IUnitOfWork instances for the same request.
To remedy this, we can make the binding more specific. For that, we need a parameter that tells us to use something other than InRequestScope(). Have a look at:
public class NonRequestScopedParameter : Ninject.Parameters.IParameter
{
public bool Equals(IParameter other)
{
if (other == null)
{
return false;
}
return other is NonRequestScopedParameter;
}
public object GetValue(IContext context, ITarget target)
{
throw new NotSupportedException("this parameter does not provide a value");
}
public string Name
{
get { return typeof(NonRequestScopedParameter).Name; }
}
// this is very important
public bool ShouldInherit
{
get { return true; }
}
}
Now adapt the job factory as follows:
public class NInjectJobFactory : IJobFactory
{
private readonly IResolutionRoot resolutionRoot;
public NInjectJobFactory(IResolutionRoot resolutionRoot)
{
this.resolutionRoot = resolutionRoot;
}
public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
{
return (IJob) this.resolutionRoot.Get(
bundle.JobDetail.JobType,
new NonRequestScopedParameter()); // parameter goes here
}
public void ReturnJob(IJob job)
{
this.resolutionRoot.Release(job);
}
}
And adapt the IUnitOfWork bindings:
Bind<IUnitOfWork>().To<UnitOfWork>()
.InRequestScope();
Bind<IUnitOfWork>().To<UnitOfWork>()
.When(x => x.Parameters.OfType<NonRequestScopedParameter>().Any())
.InCallScope();
This way, if async is used incorrectly there will still be an exception, but the IUnitOfWork scoping will still work for the Quartz jobs.
For any users who might be interested, here is the solution that finally worked for me.
I got it working after making some adjustments to match my project. Please note that in the NewJob method I have replaced the call to Kernel.Get with _resolutionRoot.Get.
As you can see here:
public class JobFactory : IJobFactory
{
private readonly IResolutionRoot _resolutionRoot;
public JobFactory(IResolutionRoot resolutionRoot)
{
this._resolutionRoot = resolutionRoot;
}
public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
{
try
{
return (IJob)_resolutionRoot.Get(
bundle.JobDetail.JobType, new NonRequestScopedParameter()); // parameter goes here
}
catch (Exception ex)
{
LogManager.Instance.Info(string.Format("Exception raised in JobFactory: {0}", ex.Message));
throw; // rethrow so that NewJob either returns a job or fails visibly
}
}
public void ReturnJob(IJob job)
{
}
}
And here is the call to schedule my job:
public static void RegisterScheduler(IKernel kernel)
{
try
{
var scheduler = new StdSchedulerFactory().GetScheduler();
scheduler.JobFactory = new JobFactory(kernel);
....
}
}
Thank you very much for your help
Thanks so much for your response. I have implemented something like that and the binding is working :)
public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
{
var resolver = DependencyResolver.Current;
var myJob = (IJob)resolver.GetService(typeof(IJob));
return myJob;
}
As I mentioned before, I am using in my project a service and a unit of work (based on EF) that are both injected with NInject.
public class DispatchingManagementService : IDispatchingManagementService
{
private readonly IUnitOfWork _unitOfWork;
public DispatchingManagementService(IUnitOfWork unitOfWork)
{
_unitOfWork = unitOfWork;
}
}
Here is how I am binding the implementations:
kernel.Bind<IUnitOfWork>().To<EfUnitOfWork>();
kernel.Bind<IDispatchingManagementService>().To<DispatchingManagementService>();
kernel.Bind<IJob>().To<DispatchingJob>();
To summarize, the binding of IUnitOfWork is done:
- in request scope, every time a new request comes into my ASP.NET MVC application;
- in call scope (InCallScope), every time my job runs.
What are the best practices given how EF behaves? I have found information suggesting InCallScope. Is it possible to tell NInject to use request scope every time a new request comes into the application, and InCallScope every time my job runs? How do I do that?
Thank you very much for your help
I'm starting a web application that contains the following projects:
Booking.Web
Booking.Services
Booking.DataObjects
Booking.Data
I'm using the repository pattern in my data project only. All services will be the same, no matter what happens. However, if a customer wants to use Access, a different data repository will be used than if the customer wants to use SQL Server.
I have StructureMap, and want to be able to do the following:
The web project is unaffected. It's a Web Forms application that will only know about the services project and the DataObjects project.
When a service is called, it will use StructureMap (by looking at the bootstrapper.cs file) to see which data repository to use.
An example of a services class is the error logging class:
public class ErrorLog : IErrorLog
{
ILogging logger;
public ErrorLog()
{
}
public ErrorLog(ILogging logger)
{
this.logger = logger;
}
public void AddToLog(string errorMessage)
{
try
{
AddToDatabaseLog(errorMessage);
}
catch (Exception ex)
{
AddToFileLog(ex.Message);
}
finally
{
AddToFileLog(errorMessage);
}
}
private void AddToDatabaseLog(string errorMessage)
{
ErrorObject error =
new ErrorObject
{
ErrorDateTime = DateTime.Now,
ErrorMessage = errorMessage
};
logger.Insert(error);
}
private void AddToFileLog(string errorMessage)
{
// TODO: Take this value from the web.config instead of hard coding it
TextWriter writer = new StreamWriter(@"E:\Work\Booking\Booking\Booking.Web\Logs\ErrorLog.txt", true);
writer.WriteLine(DateTime.Now.ToString() + " ---------- " + errorMessage);
writer.Close();
}
}
I want to be able to call this service from my web project without specifying which repository to use for data access. My bootstrapper.cs file in the services project is defined as:
public class Bootstrapper
{
public static void ConfigureStructureMap()
{
ObjectFactory.Initialize(x =>
{
x.AddRegistry(new ServiceRegistry());
}
);
}
public class ServiceRegistry : Registry
{
protected override void configure()
{
ForRequestedType<IErrorLog>().TheDefaultIsConcreteType<Booking.Services.Logging.ErrorLog>();
ForRequestedType<ILogging>().TheDefaultIsConcreteType<SqlServerLoggingProvider>();
}
}
}
What else do I need to get this to work? When I ran a test, the ILogging object was null.
Perhaps some details on how you are calling this code from a test would be useful.
My understanding is that you need to ensure the ConfigureStructureMap call has been made early in the application's life (e.g. in Global.asax in a web project).
After that you would be calling for instances of IErrorLog using something like:
IErrorLog log = StructureMap.ObjectFactory.GetInstance<IErrorLog>();
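As a rough sketch of the first point, assuming a standard Web Forms Global.asax (the namespace of the Bootstrapper class is my guess, since it isn't shown in the question):
using System;

// Global.asax.cs - run the StructureMap configuration once, early in the application's life,
// so the container is populated before anything asks for IErrorLog / ILogging.
public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        Booking.Services.Bootstrapper.ConfigureStructureMap();
    }
}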