EF Code First - automatically generated Id

================ Problem ================
First of all, I'm using Entity Framework (Code First). I have the following class:
public class RandomObject
{
    [Key, DatabaseGenerated(DatabaseGeneratedOption.Identity), Column(Order = 1)]
    public int Id { get; protected set; }

    [Key, DatabaseGenerated(DatabaseGeneratedOption.Identity), Column(Order = 2)]
    public Guid UniqueId { get; protected set; }

    public string Other { get; set; }
}
I also have a migrations folder (package) in which I seed the database with some objects.
private void AddDefaultObjects(Context context)
{
    var object1 = new RandomObject { Other = "qsdf1" };
    var object2 = new RandomObject { Other = "qsdf2" };
    var object3 = new RandomObject { Other = "qsdf3" };
    var objects = new RandomObject[3] { object1, object2, object3 };

    // "object" is a reserved keyword in C#, so the lambda parameter is named "o"
    context.RandomObjects.AddOrUpdate(o => o.Id, objects);
    context.SaveChanges();
}
But when I add the objects, they all get the same UniqueId => NOK (though each gets a different Id => OK).
================ Attempts ================
I tried adding the attribute
[Index(IsUnique = true)]
or setting (in my migration-configuration file)
column.DefaultValueSql = "newid()";
================ Result ================
Id | UniqueId
--------------------------------------------
19 | db1e353e-7dd7-e311-8250-8c89a5c20da1
20 | dc1e353e-7dd7-e311-8250-8c89a5c20da1
21 | dd1e353e-7dd7-e311-8250-8c89a5c20da1

Try this... I think your AddOrUpdate needs both parts of your composite primary key in its first argument:
context.RandomObjects.AddOrUpdate(
    o => new { o.Id, o.UniqueId }, // "object" is a reserved keyword, so the parameter is renamed "o"
    objects);
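Separately, if the goal is for SQL Server itself to generate the Guid, the DefaultValueSql attempt above can also be expressed as an explicit migration. A minimal sketch (assuming EF6 migrations on SQL Server; the table name dbo.RandomObjects is an assumption):

public partial class AddUniqueIdDefault : DbMigration
{
    public override void Up()
    {
        // NEWSEQUENTIALID() produces database-generated, index-friendly Guids;
        // NEWID() also works but its random values fragment clustered indexes.
        AlterColumn("dbo.RandomObjects", "UniqueId",
            c => c.Guid(nullable: false, defaultValueSql: "NEWSEQUENTIALID()"));
    }

    public override void Down()
    {
        AlterColumn("dbo.RandomObjects", "UniqueId", c => c.Guid(nullable: false));
    }
}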

Related

Dynamically paging EF Core results

I have a UI grid which permits sorting by column:

Id | Organisation Name | Organisation type | Departments
--------------------------------------------
1  | first             | some type         | def
2  | second            | another type      | abc, def
2  | third             | some type         | xyz
See the entities below:
public class Organisation
{
    public int Code { get; set; }
    public string Type { get; set; }
    public string Name { get; set; }
    public List<Department> Departments { get; set; }
}

public class Department
{
    public int Code { get; set; }
    public string Name { get; set; }
}
I want to be able to sort the table by the Departments column, which is a comma-separated value built from Organisation.Departments.Select(p => p.Name).
I would like the sorting to stay an IQueryable and avoid bringing all the data into memory, because after sorting I will apply pagination and I don't want to pull all the DB records into memory.
I'm using the following extension method for sorting, but it is not working for nested collections:
public static IQueryable<T> OrderBy<T>(this IQueryable<T> source, string sortProperty, ListSortDirection sortOrder)
{
    var type = typeof(T);
    var property = type.GetProperty(sortProperty, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);
    if (property == null)
        throw new OperationFailedException($"Sorting by {sortProperty}");

    var parameter = Expression.Parameter(type, "p");
    var propertyAccess = Expression.MakeMemberAccess(parameter, property);
    var orderByExp = Expression.Lambda(propertyAccess, parameter);
    var typeArguments = new Type[] { type, property.PropertyType };
    var methodName = sortOrder == ListSortDirection.Ascending ? "OrderBy" : "OrderByDescending";
    var resultExp = Expression.Call(typeof(Queryable), methodName, typeArguments, source.Expression, Expression.Quote(orderByExp));

    return source.Provider.CreateQuery<T>(resultExp);
}
This method works fine for properties at the top level of the object.
The IQueryable I later use for sorting looks something like this:
var iQueryableToBeSorted = _dbContext.Organization.Include(p=>p.Departments).AsQueryable();
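No answer is recorded here, but one common workaround (a sketch under assumptions, not a verified solution) is to special-case the nested-collection column: EF Core can translate ordering by a scalar subquery over the collection, so sorting by the first department name approximates sorting by the comma-separated string while keeping everything server-side.

// Hypothetical special case for the "Departments" column; sortOrder, pageIndex
// and pageSize are assumed to come from the grid. Ordering by a scalar subquery
// translates to SQL, whereas string.Join over a navigation collection generally
// does not.
IQueryable<Organisation> sorted =
    sortProperty == "Departments"
        ? iQueryableToBeSorted.OrderBy(o => o.Departments
              .OrderBy(d => d.Name)
              .Select(d => d.Name)
              .FirstOrDefault())
        : iQueryableToBeSorted.OrderBy(sortProperty, sortOrder); // other columns use the reflection-based extension above

var page = sorted.Skip(pageIndex * pageSize).Take(pageSize); // pagination stays in the database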

A circular reference was detected while serializing entities with one to many relationship

How do I solve this one-to-many relational issue in ASP.NET?
I have a Topic which contains many playlists.
My code:
public class Topic
{
    public int Id { get; set; }
    public String Name { get; set; }
    public String Image { get; set; }
    ---> public virtual List<Playlist> Playlist { get; set; }
}
and
public class Playlist
{
    public int Id { get; set; }
    public String Title { get; set; }
    public int TopicId { get; set; }
    ---> public virtual Topic Topic { get; set; }
}
My controller action:
[Route("data/binding/search")]
public JsonResult Search()
{
var search = Request["term"];
var result= from m in _context.Topics where m.Name.Contains(search) select m;
return Json(result, JsonRequestBehavior.AllowGet);
}
When I debug my code I see infinite data, because a Topic loads its Playlist, each Playlist loads its Topic again, and so on.
In general, when I just use this relation to print my data in a view, I get no error; ASP.NET MVC 5 handles the problem.
The problem happens when I try to return the data as JSON: I get the circular-reference error from the title.
Is there any way to prevent an infinite data loop in the JSON? I only need the first level of the data, without following the references again and again.
You are getting the error because your entity classes have circular property references.
To resolve the issue, do a projection in your LINQ query to select only the data you need (the Topic entity data).
Here is how you project to an anonymous object with Id, Name and Image properties:
public JsonResult Search(string term)
{
    var result = _context.Topics
        .Where(a => a.Name.Contains(term))
        .Select(x => new
        {
            Id = x.Id,
            Name = x.Name,
            Image = x.Image
        });
    return Json(result, JsonRequestBehavior.AllowGet);
}
If you have a view model to represent the Topic entity data, you can use that in the projection instead of the anonymous object:
public class TopicVm
{
    public int Id { set; get; }
    public string Name { set; get; }
    public string Image { set; get; }
}

public JsonResult Search(string term)
{
    var result = _context.Topics
        .Where(a => a.Name.Contains(term))
        .Select(x => new TopicVm
        {
            Id = x.Id,
            Name = x.Name,
            Image = x.Image
        });
    return Json(result, JsonRequestBehavior.AllowGet);
}
If you want to include the Playlist property data as well, you can do that in your projection part.
public JsonResult Search(string term)
{
    var result = _context.Topics
        .Where(a => a.Name.Contains(term))
        .Select(x => new
        {
            Id = x.Id,
            Name = x.Name,
            Image = x.Image,
            Playlist = x.Playlist
                .Select(p => new
                {
                    Id = p.Id,
                    Title = p.Title
                })
        });
    return Json(result, JsonRequestBehavior.AllowGet);
}
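If you really want to serialize the entities themselves, another option (a sketch, assuming the default JavaScriptSerializer that MVC 5's JsonResult uses) is to mark the back-reference so the serializer skips it, which breaks the cycle:

using System.Web.Script.Serialization;

public class Playlist
{
    public int Id { get; set; }
    public String Title { get; set; }
    public int TopicId { get; set; }

    // Skipped during serialization, so Topic -> Playlist -> Topic no longer recurses
    [ScriptIgnore]
    public virtual Topic Topic { get; set; }
}

Projection remains the safer choice, though, since it also avoids serializing EF proxy types and sends the client only the fields it needs.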

DocumentDb CreateDocumentQuery<T> returns items not of type T

I'm trying to get a list of documents of a specific object type from DocumentDB:
_client.CreateDocumentQuery<RuleSetGroup>(_collectionLink)
    .Where(f => f.SourceSystemId == sourceSystemId)
    .AsEnumerable()
    .ToList();
This returns objects of types other than RuleSetGroup, as long as they have a SourceSystemId property matching what I pass in. I understand this is how DocumentDB works; is there a way to enforce the type T so that only objects of that type are returned?
I am using Auto Type Handling:
JsonConvert.DefaultSettings = () => new JsonSerializerSettings()
{
    TypeNameHandling = TypeNameHandling.Auto
};
You will get different document types unless you implement a Type Pattern (adding a Type attribute to each class) and use it as an extra filter.
The reason is that you are storing NoSQL documents, which can obviously have different schemas. DocumentDB treats them all equally; they are all documents, and when you query, it is your responsibility (because only you know the difference) to separate the different document types.
If your document types all have a "Client" attribute (for example, Orders and Invoices) and you create a query on that attribute but mapped to one type (Orders), you will get both the Orders and the Invoices that match the filter, because both are documents that match the query. The deserialization logic is on your end, not within DocumentDB.
Here is an article regarding that Type Pattern when storing different document types in DocumentDB (check the Base Type Pattern section).
Something like this might solve it:
public abstract class Entity
{
    public Entity(string type)
    {
        this.Type = type;
    }

    /// <summary>
    /// Object unique identifier
    /// </summary>
    [Key]
    [JsonProperty("id")]
    public string Id { get; set; }

    /// <summary>
    /// Object type
    /// </summary>
    public string Type { get; private set; }
}

public class RuleSetGroup : Entity
{
    public RuleSetGroup() : base("rulesetgroup") { }

    public int SourceSystemId { get; set; } // type assumed; plus the document's other properties
}

public class OtherType : Entity
{
    public OtherType() : base("othertype") { }
}

_client.CreateDocumentQuery<RuleSetGroup>(_collectionLink)
    .Where(f => f.Type == "rulesetgroup" && f.SourceSystemId == sourceSystemId)
    .AsEnumerable()
    .ToList();
You can wrap queries in helpers that apply the type filter as a Where clause before your other filters (in LINQ you can chain Wheres without a problem), as sketched below.
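A minimal sketch of such a helper (the names _client and _collectionLink follow the question; the helper itself is hypothetical):

// Prepends the Type filter; the caller's own Where calls are ANDed onto it.
private IQueryable<T> QueryByType<T>(string type) where T : Entity
{
    return _client.CreateDocumentQuery<T>(_collectionLink)
        .Where(e => e.Type == type);
}

// Usage:
var groups = QueryByType<RuleSetGroup>("rulesetgroup")
    .Where(f => f.SourceSystemId == sourceSystemId)
    .AsEnumerable()
    .ToList();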
My repository might be a little too much for you; the short answer is that you can use .AsDocumentQuery() instead of .ToList():
public async Task<IEnumerable<T>> GetDocumentsAsync<T>(Expression<Func<T, bool>> predicate, int maxReturnedDocuments = -1,
    bool enableCrossPartitionQuery = true, int maxDegreeOfParallellism = -1, int maxBufferedItemCount = -1)
{
    // MaxDegreeOfParallelism defaults to 0; -1 lets the SDK decide instead of a fixed 1 network connection
    var feedOptions = new FeedOptions
    {
        MaxItemCount = maxReturnedDocuments,
        EnableCrossPartitionQuery = enableCrossPartitionQuery,
        MaxDegreeOfParallelism = maxDegreeOfParallellism,
        MaxBufferedItemCount = maxBufferedItemCount
    };

    IDocumentQuery<T> query = client.CreateDocumentQuery<T>(
            UriFactory.CreateDocumentCollectionUri(_databaseName, _collectionName), feedOptions)
        .Where(predicate)
        .AsDocumentQuery();

    List<T> results = new List<T>();
    while (query.HasMoreResults)
    {
        var res = await query.ExecuteNextAsync<T>();
        results.AddRange(res);
    }

    return results;
}
You can call the above method like this:
var ecsterConfigs = await repoBO.GetDocumentsAsync<EcsterPaymentConfig>(c => c.ValidTo == null && c.Type == type);
I also sometimes put a wrapper around it when I might update the document later, to keep track of the ETag, which will change if there is another update to the document before I write it back.
public class DocumentWrapper<DocumentType>
{
    public DocumentWrapper(Document document)
    {
        Value = (DocumentType)(dynamic)document;
        ETag = document.ETag;
        TimeStamp = document.Timestamp;
    }

    public DocumentType Value { get; set; }
    public string ETag { get; set; }
    public DateTime TimeStamp { get; set; }
}
@Granlund, how do you make GetDocumentsAsync return DocumentWrapper instances while still allowing the predicate to query the properties of the Value?
Here is what I came up with, but maybe you have a better way:
[TestMethod]
[TestCategory("CosmosDB.IntegrationTest")]
public async Task AddAndReadDocumentWrapperViaQueryAsync()
{
    var document = new Foo { Count = 1, Name = "David" };
    var response = await client.CreateDocumentAsync(documentCollectionUri, document);
    var id = response.Resource.Id;

    var queryResult = await GetWrappedDocumentsAsync<Foo>(f => f.Where(a => a.Name == "David"));
    foreach (var doc in queryResult)
    {
        Assert.AreEqual("David", doc.Value.Name);
    }
}

public class Foo
{
    public int Count { get; set; }
    public string Name { get; set; }
}

public class DocumentWrapper<DocumentType>
{
    public DocumentWrapper(Document document)
    {
        Value = (DocumentType)(dynamic)document;
        ETag = document.ETag;
        TimeStamp = document.Timestamp;
    }

    public DocumentType Value { get; set; }
    public string ETag { get; set; }
    public DateTime TimeStamp { get; set; }
}

public async Task<IEnumerable<DocumentWrapper<T>>> GetWrappedDocumentsAsync<T>(
    Func<IQueryable<T>, IQueryable<T>> query,
    int maxReturnedDocuments = -1,
    bool enableCrossPartitionQuery = true,
    int maxDegreeOfParallellism = -1,
    int maxBufferedItemCount = -1)
{
    // MaxDegreeOfParallelism defaults to 0; -1 lets the SDK decide instead of a fixed 1 network connection
    var feedOptions = new FeedOptions
    {
        MaxItemCount = maxReturnedDocuments,
        EnableCrossPartitionQuery = enableCrossPartitionQuery,
        MaxDegreeOfParallelism = maxDegreeOfParallellism,
        MaxBufferedItemCount = maxBufferedItemCount
    };

    IDocumentQuery<T> documentQuery =
        query(client.CreateDocumentQuery<T>(documentCollectionUri, feedOptions)).AsDocumentQuery();

    var results = new List<DocumentWrapper<T>>();
    while (documentQuery.HasMoreResults)
    {
        var res = await documentQuery.ExecuteNextAsync<Document>();
        results.AddRange(res.Select(d => new DocumentWrapper<T>(d)));
    }

    return results;
}

EF Code First + remove orphans which are marked as Modified (IsDeleted = 1)

I have the following problem. My context and model:
public class MediaPlanContext : DbContext
{
    public MediaPlanContext()
    {
        Configuration.LazyLoadingEnabled = false; // lazy loading off
    }

    public DbSet<MediaPlan> MediaPlan { get; set; }
    public DbSet<MovieType> MovieType { get; set; }
    public DbSet<MediaPlanItem> MediaPlanItems { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        modelBuilder
            .Entity<MediaPlanItem>()
            .HasKey(mpi => new { mpi.Id, mpi.MediaPlanId });

        modelBuilder
            .Entity<MediaPlanItem>()
            .Property(mpi => mpi.Id)
            .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);

        modelBuilder
            .Entity<MediaPlan>()
            .HasMany(mp => mp.MediaPlanItems)
            .WithRequired()
            .HasForeignKey(mpi => mpi.MediaPlanId)
            .WillCascadeOnDelete();
    }
}
public class MediaPlan : IBaseObject
{
    public virtual ICollection<MediaPlanItem> MediaPlanItems { get; set; }
}

public class MediaPlanItem : IBaseObject
{
    public int MediaPlanId { get; set; }
    public MediaPlan MediaPlan { get; set; }
}

public interface IBaseObject
{
    int Id { get; }
    DateTime DateCreated { get; }
    DateTime DateModified { get; set; }
    bool IsDeleted { get; set; } // used by the soft-delete logic described below
}
I also use a repository (with MediaPlan as the root object) to work with my IBaseObject entities.
When an object is to be deleted from my DB, I mark the record with IsDeleted = 1: my repository class handles a regular delete as an update, changing the EntityState to Modified instead of Deleted, as sketched below.
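For illustration, a minimal sketch of that delete-as-update conversion (an assumption about the repository internals, which the question does not show):

// Hypothetical helper inside the context/repository: before saving, flip every
// Deleted entry into a soft delete.
private void ConvertDeletesToSoftDeletes()
{
    foreach (var entry in ChangeTracker.Entries<IBaseObject>()
                                       .Where(e => e.State == EntityState.Deleted)
                                       .ToList())
    {
        entry.Entity.IsDeleted = true;   // mark the record instead of removing it
        entry.State = EntityState.Modified;
    }
}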
The problem is with the following code:
var rep = new MediaPlanRepository(new MediaPlanContext());
var withItems = rep.GetWithMediaPlanItems();
var m1 = withItems.First();
var mpi1 = m1.MediaPlanItems.First();

m1.MediaPlanItems.Remove(mpi1); // 6 items before the remove, 5 items after it
rep.SaveChanges();              // 6 items again after SaveChanges :(
Question: can I hook in after SaveChanges occurs and detach my IsDeleted = 1 entities? Would that resolve my problem?
Remark: related entities are loaded into the root object as a projection and, as Julie says in the section 'Scenario When This May Not Work As Expected', that can cause problems with entities that are already tracked by the context.
Code:
public override int SaveChanges()
{
    var result = base.SaveChanges();

    // After-save code: detach every soft-deleted entity so the context stops tracking it
    var isDeletedEntities = EfContext.ChangeTracker
        .Entries()
        .Select(dbE => new
        {
            DBEntity = dbE,
            BaseObject = dbE.Entity as IBaseObject
        })
        .Where(dbe => dbe.BaseObject != null && dbe.BaseObject.IsDeleted) // skip entities that are not IBaseObject
        .ToList(); // materialize first, since detaching mutates the tracker

    foreach (var isDeletedEntity in isDeletedEntities)
    {
        isDeletedEntity.DBEntity.State = EntityState.Detached;
    }

    return result;
}

Proper way to handle nullable fields in RavenDB Map/Reduces?

How should I map an object with a nullable field? I guess I must turn the nullable field into a non-nullable version, and it's that step I'm stumbling on.
What is the proper way to map nullable properties?
public class Visit
{
    public string Id { get; set; }
    public int? MediaSourceId { get; set; }
}

public class MapReduceResult
{
    public string VisitId { get; set; }
    public int MediaSourceId { get; set; }
    public string Version { get; set; }
    public int Count { get; set; }
}
AddMap<Visit>(
    visits =>
    from visit in visits
    select new
    {
        VisitId = visit.Id,
        MediaSourceId = visit.MediaSourceId.HasValue
            ? visit.MediaSourceId
            : UNUSED_MEDIASOURCE_ID,
        Version = (string) null,
        Count = 1
    });
This doesn't work! In fact, this Map is completely ignored, while the other Maps work fine and are, in the end, Reduced as expected.
Thanks for helping me!
Below is a newly added test case that fails with "Cannot assign <null> to anonymous type property". How am I supposed to get this flying with the least amount of pain?
[TestFixture]
public class MyIndexTest
{
    private IDocumentStore _documentStore;

    [SetUp]
    public void SetUp()
    {
        _documentStore = new EmbeddableDocumentStore { RunInMemory = true }.Initialize();
        _documentStore.DatabaseCommands.DisableAllCaching();
        IndexCreation.CreateIndexes(typeof(MyIndex).Assembly, _documentStore);
    }

    [TearDown]
    public void TearDown()
    {
        _documentStore.Dispose();
    }

    [Test]
    public void ShouldWork()
    {
        InitData();
        IList<MyIndex.MapReduceResult> mapReduceResults = null;
        using (var session = _documentStore.OpenSession())
        {
            mapReduceResults =
                session.Query<MyIndex.MapReduceResult>(MyIndex.INDEX_NAME)
                    .Customize(x => x.WaitForNonStaleResults())
                    .ToArray();
        }
        Assert.That(mapReduceResults.Count, Is.EqualTo(1));
    }

    private void InitData()
    {
        var visitOne = new Visit
        {
            Id = "visits/64",
            MetaData = new MetaData { CreatedDate = new DateTime(1975, 8, 6, 0, 14, 0) },
            MediaSourceId = 1,
        };
        var visitPageVersionOne = new VisitPageVersion
        {
            Id = "VisitPageVersions/123",
            MetaData = new MetaData { CreatedDate = new DateTime(1975, 8, 6, 0, 14, 0) },
            VisitId = "visits/64",
            Version = "1"
        };
        using (var session = _documentStore.OpenSession())
        {
            session.Store(visitOne);
            session.Store(visitPageVersionOne);
            session.SaveChanges();
        }
    }

    public class MyIndex : AbstractMultiMapIndexCreationTask<MyIndex.MapReduceResult>
    {
        public const string INDEX_NAME = "MyIndex";

        public override string IndexName
        {
            get { return INDEX_NAME; }
        }

        public class MapReduceResult
        {
            public string VisitId { get; set; }
            public int? MediaSourceId { get; set; }
            public string Version { get; set; }
            public int Count { get; set; }
        }

        public MyIndex()
        {
            AddMap<Visit>(
                visits =>
                from visit in visits
                select new
                {
                    VisitId = visit.Id,
                    MediaSourceId = (int?) visit.MediaSourceId,
                    Version = (string) null,
                    Count = 1
                });

            AddMap<VisitPageVersion>(
                visitPageVersions =>
                from visitPageVersion in visitPageVersions
                select new
                {
                    VisitId = visitPageVersion.VisitId,
                    MediaSourceId = (int?) null,
                    Version = visitPageVersion.Version,
                    Count = 0
                });

            Reduce =
                results =>
                from result in results
                group result by result.VisitId into g
                select new
                {
                    VisitId = g.Key,
                    MediaSourceId = (int?) g.Select(x => x.MediaSourceId).FirstOrDefault(),
                    Version = g.Select(x => x.Version).FirstOrDefault(),
                    Count = g.Sum(x => x.Count)
                };
        }
    }
}
You don't need to do anything to give nullables special treatment.
RavenDB will already take care of that.
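For what it's worth (an inference from the compiler error, not part of the answer above): "Cannot assign <null> to anonymous type property" is a C# restriction, not a RavenDB one. In a multi-map index every map must emit the same anonymous-type shape, so each null or nullable member needs an explicit cast, which the index in the test already does:

AddMap<Visit>(
    visits =>
    from visit in visits
    select new
    {
        VisitId = visit.Id,
        MediaSourceId = (int?) visit.MediaSourceId, // cast keeps the shape identical across maps
        Version = (string) null,                    // a bare null would not compile
        Count = 1
    });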
