How to change the secretKey in Dexie-Encrypted?

I've been looking for an example of how to change the secretKey of an IndexedDB database that is protected by Dexie-Encrypted.
So far I've read that if I want to change the key, the version of the DB should be increased, but I can't really work out how to do it.
I'm new to Dexie, so if anyone can help me, I would appreciate it.

Related

How to get Axon event-identifier from the event-store

Just a short question here...
by using Axon, we know that AggregateLifecycle#apply(Object) does the event sourcing for us, which under the hood persists our event into our event store.
With regard to that, how do we get the event identifier (not the aggregate identifier) once we call that particular apply method?
Thanks
Based on your other answer, let me suggest a way to follow.
The MessageIdentifier as used by AxonFramework (AF) is nothing more than a UUID generated for each Message you create.
Since you only need to reuse that info, you can pretty much get it from the Message while handling it. To make things easier for you, Axon provides a MessageIdentifierParameterResolver, meaning you can simply use it in any @MessageHandler of yours (of course, I am assuming you are using Spring as well).
Example:
@EventHandler
public void handle(Event eventToBeForwarded, @MessageIdentifier String messageIdentifier) {
    // forward the event to another broker using the given `messageIdentifier`
}
Hope that helps you and makes things clear!

Flutter Firestore save field as a String?

I am facing a problem with Flutter and Firebase Firestore. I want to take one of the current user's labels and save it as a string. I have already figured out how to find the currentUserID, so that isn't the problem. I just can't figure out how to target the "preference" label in my database. Here is my code for trying to declare it as a string:
String _checkPreference(DocumentSnapshot document) {
    Firestore.instance.collection(currentUserId).document('preference').toString();
    return _checkPreference(document);
}
The goal is then later to be able to do something like this:
return Text(_checkPreference(document));
But I am not really sure what to pass in, or if my method to get it is even correct. Thanks for reading, any help is appreciated!

Where should I put a logic for querying extra data in CQRS command flow

I'm trying to implement simple DDD/CQRS architecture without event-sourcing for now.
Currently I need to write some code for adding a notification to a document entity (document can have multiple notifications).
I've already created a command NotificationAddCommand, ICommandService and IRepository.
Before inserting a new notification through IRepository, I have to query the current user_id from the DB using the NotificationAddCommand.User_name property.
I'm not sure how to do it right, because I could either:
1) Use IQuery from the read flow.
2) Pass user_name to the domain entity and resolve user_id in the repository.
Code:
public class DocumentsCommandService : ICommandService<NotificationAddCommand>
{
    private readonly IRepository<Notification, long> _notificationsRepository;

    public DocumentsCommandService(
        IRepository<Notification, long> notifsRepo)
    {
        _notificationsRepository = notifsRepo;
    }

    public void Handle(NotificationAddCommand command)
    {
        // command.user_id = Resolve(command.user_name) ??
        // command.source_secret_id = Resolve(command.source_id, command.source_type) ??
        foreach (var receiverId in command.Receivers)
        {
            var notificationEntity = _notificationsRepository.Get(0);
            notificationEntity.TargetId = receiverId;
            notificationEntity.Body = command.Text;
            _notificationsRepository.Add(notificationEntity);
        }
    }
}
What if I need more complex logic before inserting? Is it ok to use IQuery or should I create additional services?
The idea of reusing your IQuery somewhat defeats the purpose of CQRS in the sense that your read-side is supposed to be optimized for pulling data for display/query purposes - meaning that it can be denormalized, distributed etc. in any way you deem necessary without being restricted by - or having implications for - the command side (a key example being that it might not be immediately consistent, while your command side obviously needs to be for integrity/validity purposes).
With that in mind, you should look to implement a contract for your write side that will resolve the necessary information for you. Driving from the consumer, that might look like this:
public DocumentsCommandService(IRepository<Notification, long> notifsRepo,
    IUserIdResolver userIdResolver)

public interface IUserIdResolver
{
    string ByName(string username);
}
With IUserIdResolver implemented as appropriate.
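For illustration, a minimal sketch of what such an implementation could look like, assuming a plain ADO.NET lookup (the class name and the dbo.Users / UserId / UserName table and column names are hypothetical, not from the original question):

using System.Data.SqlClient;

public class SqlUserIdResolver : IUserIdResolver
{
    private readonly string _connectionString;

    public SqlUserIdResolver(string connectionString)
    {
        _connectionString = connectionString;
    }

    public string ByName(string username)
    {
        // Single-value lookup kept deliberately simple; table/column names are assumptions.
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT UserId FROM dbo.Users WHERE UserName = @name", connection))
        {
            command.Parameters.AddWithValue("@name", username);
            connection.Open();
            object result = command.ExecuteScalar();
            return result == null ? null : result.ToString();
        }
    }
}

The command handler then depends only on the IUserIdResolver contract, so where the lookup reads from can later be changed without touching the write side.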
Of course, if both this and the query-side use the same low-level data access implementation (e.g. an immediately-consistent repository) that's fine - what's important is that your architecture is such that if you need to swap out where your read side gets its data for the purposes of, e.g. facilitating a slow offline process, your read and write sides are sufficiently separated that you can swap out where you're reading from without having to untangle reads from the writes.
Ultimately the most important thing is to know why you are making the architectural decisions you're making in your scenario - then you will find it much easier to make these sorts of decisions one way or another.
In a project I'm working on I have similar issues. I see three options to solve this problem:
1) What I did was make a UserCommandRepository that has a query option, then inject that repository into your service (see the sketch after this list).
Since the few queries I did need were so simplistic (just returning single values), it seemed like a fine tradeoff in my case.
2) Another way of handling it is to force the caller to raise the command with the user_id already filled in, and let them do the querying.
3) A third option is to ask yourself why you need a user_id. If it's only to make some relations when querying the data, you could also handle this when querying the data (or when propagating your write DB to your read DB).
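To make option 1 concrete, a rough sketch of what such a repository contract might look like (the interface and member names here are hypothetical, not from the original project):

public interface IUserCommandRepository
{
    // ...the normal write-side operations for the user aggregate go here...

    // The one simple, single-value lookup the command handler needs.
    long GetUserIdByName(string userName);
}

The point is that the query lives behind a write-side abstraction, so the command handler never reaches into the read model.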

OptimisticConcurrencyException with System.Web.Providers

I'm using the new Universal Providers from Microsoft for session in SQL Server. The old implementation of session on SQL Server required a job (running every minute) to clear expired sessions. The new one does this check and clear on every request. Since I'm actually running in SQL Azure, I don't have SQL Agent to schedule jobs, so this sounds like a reasonable way to go about it (no, I don't want to pay for Azure Cache for session).
The problem is when multiple users access the site at the same time, they're both trying to clear the same expired sessions at the same time and the second gets an optimistic concurrency exception.
System.Data.OptimisticConcurrencyException: Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Refresh ObjectStateManager entries.
at System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter)
at System.Data.Objects.ObjectContext.SaveChanges(SaveOptions options)
at System.Web.Providers.DefaultSessionStateProvider.PurgeExpiredSessions()
at System.Web.Providers.DefaultSessionStateProvider.PurgeIfNeeded()
at System.Web.SessionState.SessionStateModule.BeginAcquireState(Object source, EventArgs e, AsyncCallback cb, Object extraData)
at System.Web.HttpApplication.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
I'm using ELMAH for error logging, and this is showing up with some frequency. Before I try to configure ELMAH to ignore the errors, does anyone know of a way to stop the errors from happening in the first place? Is there some configuration with the new providers that I'm missing?
Please download the newer version of the providers which can be found here: http://www.nuget.org/packages/System.Web.Providers. We have updated the way that we clean up expired sessions to run asynchronously and to handle error conditions. Please let us know if it does not correct your issues.
This package has what I would describe as a bug. The premise of out-of-process session state is that multiple instances might be managing session state cleanup. In this case all instances are running this method...
private void PurgeExpiredSessions()
{
    using (SessionEntities entities = ModelHelper.CreateSessionEntities(this.ConnectionString))
    {
        foreach (SessionEntity entity in QueryHelper.GetExpiredSessions(entities))
        {
            entities.DeleteObject(entity);
        }
        entities.SaveChanges();
        this.LastSessionPurgeTicks = DateTime.UtcNow.Ticks;
    }
}
The problem is that one instance deletes the entities before the other(s), and the entities context throws the error described in the post. I asked the package authors to release the source code or fix this. My untested, text-editor-only fix would be to make the method public virtual, so one could override it, or just change it to:
private void PurgeExpiredSessions()
{
    using (SessionEntities entities = ModelHelper.CreateSessionEntities(this.ConnectionString))
    {
        var sqlCommand = @"DELETE dbo.Sessions WHERE Expires < GETUTCDATE()";
        entities.ExecuteStoreCommand(sqlCommand);
        this.LastSessionPurgeTicks = DateTime.UtcNow.Ticks;
    }
}
The package authors are really fast with a response (answered while posting this!) and today they stated that they are working on releasing the code but that they might be able to just fix this. I requested an ETA and will try to follow up here if I get one.
Great package, it just needs a little maintenance.
Proper Answer: Wait for the source code release or a fix update. Or decompile and fix it yourself (if that is consistent with the license!)
*Update: the package owners are considering fixing it this week. Yeah!
**Update: SOLVED!!! They evidently fixed it a while ago and I was installing the wrong package.
I was using http://nuget.org/packages/System.Web.Providers when I should have been using http://nuget.org/packages/Microsoft.AspNet.Providers/. It was not obvious to me which one was legacy and included in another package. They wrapped it in an empty catch:
private void PurgeExpiredSessions()
{
    try
    {
        using (SessionEntities entities = ModelHelper.CreateSessionEntities(this.ConnectionString))
        {
            foreach (SessionEntity entity in QueryHelper.GetExpiredSessions(entities))
            {
                entities.DeleteObject(entity);
            }
            entities.SaveChanges();
            this.LastSessionPurgeTicks = DateTime.UtcNow.Ticks;
        }
    }
    catch
    {
    }
}
Thank you to the package team for such quick responses and great support!!!
I posted a question on this at NuGet (http://www.nuget.org/packages/System.Web.Providers) and got a very quick response from the owners. After a bit of back and forth, it turns out they do have a fix for this, but it is going out in the next update.
There was a suggestion here that Microsoft isn't too keen on supporting this, but my experience has been otherwise, and I have always received good support.

Moving ViewState out of the page?

We are trying to lighten our page load as much as possible. Since ViewState can sometimes swell up to 100k of the page, I'd love to completely eliminate it.
I'd love to hear some techniques other people have used to move ViewState to a custom provider.
That said, a few caveats:
We serve on average 2 Million unique visitors per hour.
Because of this, database reads have been a serious performance issue, so I don't want to store ViewState in the database.
We also are behind a load balancer, so any solution has to work with the user bouncing from machine to machine per postback.
Ideas?
How do you handle Session State? There is a built-in "store the viewstate in the session state" provider. If you are storing the session state in some fast, out of proc system, that might be the best option for the viewstate.
Edit: to do this, add the following code to your Page classes / global page base class:
protected override PageStatePersister PageStatePersister {
    get { return new SessionPageStatePersister(this); }
}
Also... this is by no means a perfect (or even good) solution to a large viewstate. As always, minimize the size of the viewstate as much as possible. However, the SessionPageStatePersister is relatively intelligent: it avoids storing an unbounded number of viewstates per session, while also not being limited to a single viewstate per session.
I have tested many ways to remove the load of view state from the page, and among all the hacks and software out there the only thing that is truly scalable is the StrangeLoops As10000 appliance. Transparent, with no need to change the underlying application.
As previously stated, I have used the database to store the ViewState in the past. Although this works for us, we don't come close to 2 million unique visitors per hour.
I think a hardware solution is definitely the way to go, whether using the StrangeLoop products or another product.
The following works quite well for me:
string vsid;

protected override object LoadPageStateFromPersistenceMedium()
{
    Pair vs = base.LoadPageStateFromPersistenceMedium() as Pair;
    vsid = vs.First as string;
    object result = Session[vsid];
    Session.Remove(vsid);
    return result;
}

protected override void SavePageStateToPersistenceMedium(object state)
{
    if (vsid == null)
    {
        vsid = Guid.NewGuid().ToString();
    }
    Session[vsid] = state;
    base.SavePageStateToPersistenceMedium(new Pair(vsid, null));
}
You can always compress ViewState so you get the benefits of ViewState without so much bloat:
public partial class _Default : System.Web.UI.Page {

    protected override object LoadPageStateFromPersistenceMedium() {
        string viewState = Request.Form["__VSTATE"];
        byte[] bytes = Convert.FromBase64String(viewState);
        bytes = Compressor.Decompress(bytes);
        LosFormatter formatter = new LosFormatter();
        return formatter.Deserialize(Convert.ToBase64String(bytes));
    }

    protected override void SavePageStateToPersistenceMedium(object viewState) {
        LosFormatter formatter = new LosFormatter();
        StringWriter writer = new StringWriter();
        formatter.Serialize(writer, viewState);
        string viewStateString = writer.ToString();
        byte[] bytes = Convert.FromBase64String(viewStateString);
        bytes = Compressor.Compress(bytes);
        ClientScript.RegisterHiddenField("__VSTATE", Convert.ToBase64String(bytes));
    }

    // ...
}
using System.IO;
using System.IO.Compression;

public static class Compressor {

    public static byte[] Compress(byte[] data) {
        MemoryStream output = new MemoryStream();
        GZipStream gzip = new GZipStream(output,
            CompressionMode.Compress, true);
        gzip.Write(data, 0, data.Length);
        gzip.Close();
        return output.ToArray();
    }

    public static byte[] Decompress(byte[] data) {
        MemoryStream input = new MemoryStream();
        input.Write(data, 0, data.Length);
        input.Position = 0;
        GZipStream gzip = new GZipStream(input,
            CompressionMode.Decompress, true);
        MemoryStream output = new MemoryStream();
        byte[] buff = new byte[64];
        int read = -1;
        read = gzip.Read(buff, 0, buff.Length);
        while (read > 0) {
            output.Write(buff, 0, read);
            read = gzip.Read(buff, 0, buff.Length);
        }
        gzip.Close();
        return output.ToArray();
    }
}
Due to the typical organizational bloat, requesting new hardware takes eons, and requesting hardware that would involve a complete rewire of our current setup would probably get some severe resistance from the engineering department.
I really need to come up with a software solution, because that's the only world I have some control over.
Yay for Enterprise :(
I've tried to find some of the products I had researched in the past that work just like StrangeLoops (but software-based). It looks like they have all gone out of business; the only one from my list that is still up there is ScaleOut, but they specialize in session state caching.
I understand how hard it is to sell hardware solutions to senior management, but it is always a good idea to at least get management to agree to listen to the hardware vendor's sales rep. I would much rather put in some hardware that presents me with an immediate solution, because it allows me (or buys me some time) to get some other real work done.
I understand, it really sucks, but the alternative is to change your code for optimization, and that would maybe cost a lot more than getting an appliance.
Let me know if you find another software-based solution.
I'm going to see if I can come up with a way to leverage our current State server to hold the viewstate in memory; I should be able to use the user's session ID to keep things synced up between machines.
If I come up with a good solution, I'll remove any IP protected code and put it out for public use.
Oh no, red tape. Well, this is going to be a tall order to fill. You mentioned here that you use a state server to serve your session state. How do you have this set up? Maybe you can do something similar here also?
Edit
Awh @Jonathan, you posted while I was typing this answer up. I think going that route could be promising. One thing is that it will definitely be memory intensive.
@Mike I don't think storing it in the session information will be a good idea, due to the memory intensiveness of viewstate and also how many times you will need to access the viewstate. SessionState is accessed a lot less often than the viewstate. I would keep the two separate.
I think the ultimate solution would be storing the ViewState on the client somehow, and it may be worth looking at. With Google Gears, this could be possible now.
Have you considered whether you really need all that viewstate? For example, if you populate a datagrid from a database, all the data will be saved in viewstate by default. However, if the grid is just for presenting data, you don't need a form at all, and hence no viewstate.
You only need viewstate when there is some interaction with the user through postbacks, and even then the actual form data may be sufficient to recreate the view. You can selectively disable viewstate for controls on the page, as shown below.
You have a very special UI if you actually need 100K of viewstate. If you reduce the viewstate to what is absolutely necessary, it might turn out to be the easiest and most scalable approach to keep the viewstate in the page.
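As a minimal illustration of switching viewstate off per control (the ordersGrid control ID here is hypothetical), it can be done in code-behind:

protected void Page_Load(object sender, EventArgs e)
{
    // The grid is presentation-only and is re-bound on every request,
    // so it does not need to round-trip its data in viewstate.
    ordersGrid.EnableViewState = false;
}

The same effect can be had declaratively with the control's EnableViewState attribute in the markup.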
I might have a simple solution for you in another post. It's a simple class to include in your app and a few lines of code in the ASP.NET page itself. If you combine it with a distributed caching system you could save a lot of dough, as viewstate is large and costly. Microsoft's Velocity might be a good product to attach this method to. If you do use it and save a ton of money, though, I'd love a little mention for that. Also, if you are unsure of anything, let me know and I can talk with you in person.
Here is the link to my code. link text
If you are concerned with scaling, then using the session token as a unique identifier or storing the state in session is more or less guaranteed to work in a web farm scenario.
Store the viewstate in a session object and use a distributed cache or state service to store the session separately from the web servers, such as Microsoft's Velocity.
I know this is a little stale, but I've been working for a couple of days on an open-source "virtual appliance" using Squid and eCAP to:
1.) gzip
2.) handle ssl
3.) replace viewstate with a token on request / response
4.) memcache for object caching
Anyway, it looks pretty promising. Basically it would sit in front of the load balancers and should really help client performance. It doesn't seem to be very hard to set up either.
I blogged on this a while ago - the solution is at http://www.adverseconditionals.com/2008/06/storing-viewstate-in-memcached-ultimate.html
This lets you change the ViewState provider to one of your choice without having to change each of your Page classes, by using a custom PageAdapter. I stored the ViewState in memcached. In retrospect I think storing it in a database or on disk is better; we filled memcached up very quickly. It's a very low-friction solution.
No need to buy or sell anything to eliminate viewstate bloat. You just need to extend HiddenFieldPageStatePersister. The 100-200 KB of ViewState will stay on the server, and only a 62-byte token will be sent in the page instead.
Here is a detailed article on how this can be done:
http://ashishnangla.com/2011/07/21/reducing-size-of-viewstate-in-asp-net-webforms-by-writing-a-custom-viewstate-provider-pagestatepersister-part-12/
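To give a flavour of the approach, here is a rough sketch only, not the article's actual code: the hidden-field name, the use of Session as the backing store and the class name are all assumptions here.

using System;
using System.Web.UI;

public class ServerSidePageStatePersister : PageStatePersister
{
    // Hypothetical hidden-field name for the token.
    private const string TokenField = "__VIEWSTATE_TOKEN";

    public ServerSidePageStatePersister(Page page) : base(page) { }

    public override void Save()
    {
        // Keep the full view/control state on the server, keyed by a short token.
        string token = Guid.NewGuid().ToString("N");
        Page.Session[token] = new Pair(ViewState, ControlState);

        // Only the token travels to the browser.
        Page.ClientScript.RegisterHiddenField(TokenField, token);
    }

    public override void Load()
    {
        string token = Page.Request.Form[TokenField];
        if (token == null)
        {
            return;
        }

        Pair state = Page.Session[token] as Pair;
        if (state != null)
        {
            ViewState = state.First;
            ControlState = state.Second;
        }
    }
}

It would be wired up the same way as the SessionPageStatePersister example earlier in this thread, by overriding the page's PageStatePersister property to return this persister.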
