I am using a SQLite database in my app with RijndaelManaged encryption. Encryption works fine. The problem is how do I add encryption/decryption to my object class. Below is the class:
public class MTBL_USER
{
[PrimaryKey, AutoIncrement]
public int UserID { get; set; }
public string LoginID { get; set; }
public string Password { get; set; }
}
I would like to add encryption/decryption logic for LoginID and Password in the getter and setter, something like:
public string LoginID
{
get {
EncryptDecryptController decrypt = new EncryptDecryptController ();
return decrypt.Decrypt(LoginID);
}
set {
EncryptDecryptController encrypt = new EncryptDecryptController ();
LoginID = encrypt.Encrypt (value);
}
}
This won't work. What is the best way to achieve this?
Your code has a StackOverflowException in it: it happens because you set LoginID from inside LoginID's own set method, which calls the setter again recursively.
Try adding a private field that stores the data.
private string _encryptedLoginID = null;

public string LoginID
{
    get { return new EncryptDecryptController().Decrypt(_encryptedLoginID); }
    set { _encryptedLoginID = new EncryptDecryptController().Encrypt(value); }
}
Notice that a database/JSON/XML serializer will still see the decrypted value when it asks for it, so this alone is probably not going to do the trick for you.
Continue with this code and mark the LoginID property as ignored for your method of serialization; mark it internal if you can. For example:
[DataMemberIgnore]
[JsonIgnore]
[XmlIgnore]
public string LoginID{...}
Then add another property that exposes _encryptedLoginID directly:
public string EncryptedLoginID
{
    get { return _encryptedLoginID; }
    set { _encryptedLoginID = value; }
}
You can also rename LoginID to CleartextLoginID and EncryptedLoginID to LoginID if you want to keep things with fewer attributes.
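For example, with the rename applied the class could look roughly like this. The sketch assumes sqlite-net (which the [PrimaryKey, AutoIncrement] attributes suggest), whose [Ignore] attribute keeps a property out of the table mapping, and it reuses the EncryptDecryptController from the question:

public class MTBL_USER
{
    [PrimaryKey, AutoIncrement]
    public int UserID { get; set; }

    // Persisted column: holds the ciphertext that sqlite-net stores.
    public string LoginID { get; set; }

    // Convenience view over the ciphertext; [Ignore] excludes it from the table.
    [Ignore]
    public string CleartextLoginID
    {
        get { return new EncryptDecryptController().Decrypt(LoginID); }
        set { LoginID = new EncryptDecryptController().Encrypt(value); }
    }
}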
Keep in mind that "encryption" is a term tossed around lightly without ever mentioning a very important part of crypto-security: key management. If your key is easy to recover from your code or config files, this entire exercise is pointless. You'd be surprised how easy it sometimes is to get through such defenses. If you're only slowing your attacker down by a few hours, you might as well just Base64-encode :). Make sure that's not the case and that your key is properly protected by whatever the OS has to offer - don't store it in config files or code.
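As one illustration only: if this is a Xamarin app, for example, the key could live in the platform's secure storage rather than in code or config. A minimal sketch assuming Xamarin.Essentials is available; the "db_key" name and the key format are made up:

using System;
using System.Security.Cryptography;
using System.Threading.Tasks;
using Xamarin.Essentials;

public static class KeyProvider
{
    // Returns the encryption key from platform secure storage,
    // generating and persisting a random one on first use.
    public static async Task<string> GetKeyAsync()
    {
        var key = await SecureStorage.GetAsync("db_key");
        if (key == null)
        {
            var bytes = new byte[32];
            using (var rng = RandomNumberGenerator.Create())
            {
                rng.GetBytes(bytes);
            }
            key = Convert.ToBase64String(bytes);
            await SecureStorage.SetAsync("db_key", key);
        }
        return key;
    }
}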
I know that keeping large collections in aggregates impacts performance.
In my use case I have a STORE which can have multiple PRODUCTs, and each product can have CUSTOMIZATIONs (not more than 10-20 customizations).
I thought of creating only one Store aggregate and updating products and customizations through it, but since the product collection can be large, that would impact performance. So I have two aggregates, STORE (to create a store) and PRODUCT (with storeId and all product operations), but with this approach I am not able to check whether a product already exists.
What I am doing now is getting all products by StoreId in my handler and checking for duplicates there, which is not the right way, as this check should belong to my domain model.
Does anyone have a better idea to solve this?
Below are my domain models.
public class Store : Entity<Guid>, IAggregateRoot
{
private Store()
{
this.Products = new List<Product>();
}
private Store(string name, Address address) : base(System.Guid.NewGuid())
{
this.Name = name;
this.Address = address;
}
private Store(string name, Address address, ContactInfo contact) : this(name, address)
{
this.Contact = contact;
}
public string Name { get; private set; }
public Address Address { get; private set; }
public ContactInfo Contact { get; private set; }
}
public class Product : Entity<Guid>, IAggregateRoot
{
private Product()
{
}
private Product(Guid storeId, ProductInfo productInfo) : base(Guid.NewGuid())
{
this.ProductInfo = productInfo;
this.StoreId = storeId;
this.Customizations = new List<Customization>();
}
private Product(Guid storeId, ProductInfo productInfo, IEnumerable<Customization> customizations) : this(storeId, productInfo)
{
this.Customizations = customizations;
}
public ProductInfo ProductInfo { get; private set; }
private List<Customization> _customizations;
public IEnumerable<Customization> Customizations
{
get
{
return _customizations.AsReadOnly();
}
private set
{
_customizations = (List<Customization>)value;
}
}
public Guid StoreId { get; private set; }
public static Product Create(Guid storeId, ProductInfo productInfo)
{
return new Product(storeId, productInfo);
}
public void UpdateInfo(ProductInfo productInfo)
{
this.ProductInfo = productInfo;
}
public void AddCustomization(Customization customization)
{
this._customizations.Add(customization);
}
public void RemoveCustomization(Customization customization)
{
this._customizations.Remove(customization);
}
}
As Jonatan Dragon correctly mentioned, and as you found in an article, you can of course use domain services. But reaching for that pattern to solve this kind of problem risks falling into the anemic domain model pitfall in future development; it is one of the most common ways a domain layer loses its technical quality. In general, the temptation appears whenever a problem must be solved through collaboration between objects. So wherever possible, it is better to find an answer that does not rely on domain services. In your case the problem can be solved without a domain service by accepting some trade-offs around non-functional concerns (like performance) while keeping the models rich and clean.
Let's consider two assumptions commonly used when designing aggregates, to identify where we are willing to accept trade-offs in solving this problem:
1- Only one aggregate's state should change during one transactional use case. (Greg Young)
2- The only things that may be shared among aggregates are their IDs. (Eric Evans)
These two assumptions tend to box us into solving this kind of problem only with domain services, so let's look at them more closely.
For number 1, many DDD practitioners and mentors, like Nick Tune, take the default transaction scope to be the entire bounded context of a use case rather than a single aggregate. This is where we have some degrees of freedom to make a trade-off.
For number 2, the philosophy behind the assumption is to share only the part of an aggregate that is invariant and never modified during the aggregate's lifespan, so that not too many aggregates get locked during a transaction in one use case. If a shared piece of state does change within one transaction scope and there is no way for it to be modified separately, then technically there is no problem in sharing it.
Combining these two observations leads to the following answer to the problem:
You can let the Store aggregate decide about creating a Product aggregate. In OOP terms, you can make the Store aggregate the factory for the Product aggregate.
public class Store : Entity<Guid>, IAggregateRoot
{
    // Parameterless ctor for the ORM.
    private Store()
    {
    }

    private Store(string name, Address address) : base(System.Guid.NewGuid())
    {
        this.Name = name;
        this.Address = address;
    }

    private Store(string name, Address address, ContactInfo contact) : this(name, address)
    {
        this.Contact = contact;
    }

    public Product CreateProduct(Guid storeId, ProductInfo productInfo)
    {
        if (ProductInfos.Contains(productInfo))
        {
            throw new ProductExistsException(productInfo);
        }

        this.ProductInfos.Add(productInfo);
        return new Product(storeId, productInfo);
    }

    public string Name { get; private set; }
    public Address Address { get; private set; }
    public ContactInfo Contact { get; private set; }
    public List<ProductInfo> ProductInfos { get; private set; } = new();
}
In this solution I treated ProductInfo as a value object, hence checking for duplicates can easily be done by checking equality. To ensure the Product aggregate cannot be constructed independently, you can make its constructor's access modifier internal. Aggregate models are usually placed in one assembly, and ORMs can use non-public constructors too, so this creates no problem.
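For reference, here is one possible shape for ProductInfo as a value object. The members (Name, Price) are assumptions; a C# record or a shared ValueObject base class would do the same job:

using System;

public sealed class ProductInfo : IEquatable<ProductInfo>
{
    public string Name { get; }
    public decimal Price { get; }

    public ProductInfo(string name, decimal price)
    {
        Name = name;
        Price = price;
    }

    // Value equality: two ProductInfos are the same if all members match,
    // which is what ProductInfos.Contains(...) relies on.
    public bool Equals(ProductInfo other) =>
        other != null && Name == other.Name && Price == other.Price;

    public override bool Equals(object obj) => Equals(obj as ProductInfo);

    public override int GetHashCode() => HashCode.Combine(Name, Price);
}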
There are some points to notice in this answer:
1- The Store aggregate must not use the internal parts of ProductInfo. With this approach ProductInfo can change freely, as its ownership belongs to the Product aggregate.
2- As ProductInfo is a value object, storing and loading the Store aggregate is not a heavy operation, and with ORM conversion techniques it can be reduced to storing and loading a single field for the ProductInfos collection (see the sketch after this list).
3- The Store and Product aggregates are coupled only for the product-creation use case. They can operate freely and separately in other use cases.
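As an illustration of point 2, assuming EF Core as the ORM (the configuration class and the use of System.Text.Json are my own choices here, and a value comparer for change tracking is omitted for brevity):

using System.Collections.Generic;
using System.Text.Json;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public class StoreConfiguration : IEntityTypeConfiguration<Store>
{
    public void Configure(EntityTypeBuilder<Store> builder)
    {
        // Persist the whole ProductInfos collection as one JSON column so
        // loading and saving the Store aggregate stays a single-field operation.
        builder.Property(s => s.ProductInfos)
            .HasConversion(
                infos => JsonSerializer.Serialize(infos, (JsonSerializerOptions)null),
                json => JsonSerializer.Deserialize<List<ProductInfo>>(json, (JsonSerializerOptions)null));
    }
}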
So with this approach you keep small, separate aggregates in 99% of the use cases and still enforce the duplicate check as a domain model invariant.
PS: This is the core idea of how to solve the problem. You can combine it with other patterns and techniques, like the Explicit State pattern and row versions, if required.
First, make sure it really impacts performance. If you really need two aggregates, you can use a Domain Service to solve your problem. Check this article by Kamil Grzybek, section "BC scope validation implementation".
public interface IProductUniquenessChecker
{
bool IsUnique(Product product);
}
// Product constructor
public Product(Guid storeId, ProductInfo productInfo, IProductUniquenessChecker productUniquenessChecker)
{
if (!productUniquenessChecker.IsUnique(this))
{
throw new BusinessRuleValidationException("Product already exists.");
}
...
}
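A possible implementation of the checker; this assumes an EF Core DbContext with a Products DbSet and a queryable ProductInfo.Name, all of which are illustrative names rather than anything from your code:

using System.Linq;

public class ProductUniquenessChecker : IProductUniquenessChecker
{
    private readonly AppDbContext _dbContext; // hypothetical DbContext

    public ProductUniquenessChecker(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public bool IsUnique(Product product)
    {
        // A product is unique if no other product in the same store
        // already carries the same ProductInfo.
        return !_dbContext.Products.Any(p =>
            p.StoreId == product.StoreId &&
            p.ProductInfo.Name == product.ProductInfo.Name);
    }
}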
My current employer is developing a mobile app using Xamarin.Forms and ASP.NET MVC on the backend. I suggested using Realm in the mobile app. My manager wants to see a POC (proof of concept) app using Realm with the backlink feature before allowing it to be used in the app. I am working on the POC on GitHub. The documentation is very limited and the realm-dotnet GitHub repo doesn't have a good sample.
I completed the project but was unable to implement the backlink. The sample app I have developed allows the user to create assignees (employees) on the first page. The user can delete or edit the employees using the context menu. When the user clicks on an employee name, the app navigates to the ToDoListPage for that particular employee, where the user can create ToDoItems. On this ToDoList page I want to show only the ToDoItems that were assigned to that employee.
The models were as follows:
public class Assignee : RealmObject
{
public Assignee()
{
ToDoItems = Enumerable.Empty<ToDoItem>().AsQueryable();
}
[PrimaryKey]
public string Id { get; set; } = Guid.NewGuid().ToString();
public string Name { get; set; }
public string Role { get; set; }
[Backlink(nameof(ToDoItem.Employee))]
public IQueryable<ToDoItem> ToDoItems { get; }
}
public class ToDoItem : RealmObject
{
[PrimaryKey]
public string Id { get; set; } = Guid.NewGuid().ToString();
public string Name { get; set; }
public string Description { get; set; }
public bool Done { get; set; }
public Assignee Employee { get; set; }
}
I am adding the employee to each ToDoItem:
Item.Employee = Employee;
_realm.Add(Item);
Now I want to access the ToDoItems for the Employee:
Items = _realm.All<Assignee>().Where(x => x.Id == EmployeeId).FirstOrDefault().ToDoItems;
But this does not work. I would be grateful if someone could help me out, preferably by writing the code in my sample app or posting the correct code in a reply.
Thank you
Firstly, Realm .NET doesn't currently support traversing properties (x.Employee.Id). Due to this, when I start the app and try to go to the ToDoListPage, the app crashes with the exception:
The left-hand side of the Equal operator must be a direct access to a persisted property in Realm
Realm supports object comparison, so we can fix this like so:
var employee = _realm.Find<Assignee>(EmployeeId);
Items = _realm.All<ToDoItem>().Where(x => x.Employee == employee);
Secondly, everything seemed fine in your code, so I dug a bit deeper and found why it isn't working. The issue is that when we try to get all items with the code above, the EmployeeId parameter is null. Since EmployeeId is populated after the load logic has been triggered, we don't need to load the data in the ctor, so you can remove that code.
Finally, since you won't be loading the data in the ctor but in the SetValues method instead, the UI needs to know when the data has been updated and what exactly to redraw. Thus, you need to mark the collection as Reactive too:
[Reactive]
public IEnumerable<ToDoItem> Items { get; set; }
Then, you need to change the SetValues method to use object comparison, instead of traversing:
async Task SetValues()
{
Employee = _realm.Find<Assignee>(EmployeeId);
Title = Employee.Name;
Items = _realm.All<ToDoItem>().Where(x => x.Employee == Employee);
}
To sum up - you don't need to try to load the data in the ctor, since you don't know when EmployeeId will be set. You are already tracking when the property changes, and inside the SetValues command you simply need to change the expression predicate.
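If the view model is built on ReactiveUI (which the [Reactive] attribute suggests), one way to express that tracking is to observe EmployeeId and call SetValues once it has a value; this is a sketch, not code from your sample app:

// In the view model's constructor; requires System.Reactive.Linq and ReactiveUI.
this.WhenAnyValue(vm => vm.EmployeeId)
    .Where(id => !string.IsNullOrEmpty(id))
    .Subscribe(async _ => await SetValues());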
I have a Shop model which contains several fields, one of which is a virtual User property. Whenever I try to edit an entry I get an error saying that the User field is required.
public class Shop
{
//..
public virtual ApplicationUser User { get; set; }
//..
}
My workaround is this:
shop.User = shop.User; //re-set the value
shop.Active = true;
db.Entry(shop).State = EntityState.Modified;
db.SaveChanges();
And I have to do this for all the fields. Is this the standard approach, or is there a better way?
Change your model to this:
public class Shop
{
//..
public int UserId {get; set; }
public virtual ApplicationUser User { get; set; }
//..
}
Entity Framework will automatically detect that UserId is the foreign key for the User navigation property. You had this problem because User is virtual (lazy loaded): when you change the model without accessing or setting that property, EF thinks it's empty (I assume). The foreign key UserId is not virtual and will be fetched together with the other properties of the Shop model, so you don't have to re-set the value when saving the model.
To set a new user, you now have to do for example:
myShop.UserId = 1; // instead of setting myShop.User
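And the edit action no longer needs the re-set workaround; a rough sketch (the action shape and view name are assumptions, not your actual controller):

[HttpPost]
public ActionResult Edit(Shop shop)
{
    shop.Active = true;
    // With the UserId FK bound on the model, User doesn't need to be re-set.
    db.Entry(shop).State = EntityState.Modified;
    db.SaveChanges();
    return RedirectToAction("Index");
}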
For more information, see this article: http://msdn.microsoft.com/en-us/data/jj713564.aspx
I have a simple class in ASP.NET MVC that looks like this:
public class JsonResponseItem
{
public string Key { get; set; }
public string Value { get; set; }
public JsonResponseItem(string key, string value)
{
Key = key;
Value = value;
}
}
In my controllers I create a list of that type
List<JsonResponseItem> response = new List<JsonResponseItem>();
so I can easily manage and add to the Json response. A dictionary object is kind of hard to do that with.
When I return the json object
return Json(response);
It serializes it as an array, so I have to reference everything by index first because of the list. So if I had a property called "IsValid" I would have to reference it like this: "IsValid[0]". I have way too much JavaScript code to make these changes.
How can I serialize the JsonResponseItem list so I don't need the index reference in there?
Thanks!
A Dictionary<string, string> would serialize into exactly the JSON you're asking for. If you don't want to expose a dictionary directly, wrap it in another class or use Json(response.ToDictionary(item => item.Key, item => item.Value)).
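For example (the keys and values here are made up), the controller side could look like this:

var response = new List<JsonResponseItem>
{
    new JsonResponseItem("IsValid", "true"),
    new JsonResponseItem("Message", "Saved")
};

// Serializes to {"IsValid":"true","Message":"Saved"} instead of an array.
return Json(response.ToDictionary(item => item.Key, item => item.Value));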
We're trying to get a conditional attribute to work. Case in point: there's a boolean (checkbox) that, if checked, makes its related text required. So, ideally we'd have something like...
public bool Provision { get; set; }
[ConditionalRequirement(IsNeededWhenTrue = Provision)]
public string ProvisionText { get; set; }
Is this even possible?
Alternate idea (not as elegant?)
public bool Provision2 { get; set; }
[PropertyRequired(RequiredBooleanPropertyName = "Provision2")]
public string Provision2Text { get; set; }
I'd hate to use the magic string method ... but any other ideas?
Ended up rolling my own. Basically you create a validation method that does your normal checks of yes, no, whatever, and collects the failures in some kind of error collection. The rub with this is sending it BACK to the model itself. So I got lazy and strongly typed it as such ...
public static void AddError<T>(this ErrorCollection errorCollection, Expression<Func<T, object>> expression, string friendlyUiName)
{
var propertyName = GetPropertyName(expression.ToString(), expression.Parameters[0].Name);
var propertyInfo = typeof (T).GetProperty(propertyName);
var resultError = DetermineOutput(friendlyUiName, propertyInfo.PropertyType);
errorCollection.Errors.Add(new ValidationError(propertyName, resultError));
}
so then your validation statements have something like this in them ...
if (FirstName.IsEmpty())
EntityErrorCollection.AddError<SomeClass>(x => x.FirstName, "First Name");
Then within the controller, a simple check ports it BACK to the ModelState if it isn't valid, of course ...
foreach (var error in someObject.EntityErrorCollection.Errors)
ModelState.AddModelError(error.Property, error.Message);
There's probably a cleaner way of doing this, but so far this has been working just fine.
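For comparison, the alternate idea from the question can also be implemented with a custom ValidationAttribute from System.ComponentModel.DataAnnotations, which plugs into ModelState automatically. This is a sketch; the attribute name and the nameof usage are illustrative:

using System.ComponentModel.DataAnnotations;

public class RequiredIfTrueAttribute : ValidationAttribute
{
    private readonly string _booleanPropertyName;

    public RequiredIfTrueAttribute(string booleanPropertyName)
    {
        _booleanPropertyName = booleanPropertyName;
    }

    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        // Look up the flag property by name on the model instance being validated.
        var property = validationContext.ObjectType.GetProperty(_booleanPropertyName);
        var isRequired = property != null
            && (bool)property.GetValue(validationContext.ObjectInstance, null);

        if (isRequired && string.IsNullOrWhiteSpace(value as string))
        {
            return new ValidationResult(ErrorMessage ?? validationContext.DisplayName + " is required.");
        }

        return ValidationResult.Success;
    }
}

// Usage:
// public bool Provision { get; set; }
// [RequiredIfTrue(nameof(Provision))]
// public string ProvisionText { get; set; }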