Q1 - I’m not sure I understand why we should prefer PrincipalPermission.Union() (or PrincipalPermission.Intersect()) over IsInRole(). If anything, calling IsInRole() several times requires less code than creating multiple PrincipalPermission objects and merging them into one via Union() (or Intersect())?
Q2 - One constructor overload of the PrincipalPermission object also takes an IsAuthenticated flag that tells Demand() to verify that the user is authenticated. Wouldn’t that flag only be useful in situations where the first two parameters (name and role) are both null?
thanx
Q1. - RE: PrincipalPermission methods vs. IPrincipal.IsInRole(..)
The two methods return a new PrincipalPermission representing the union or intersection of the permissions you give them. You thus end up with a permission that encodes a very specific set of demands, which you can then enforce with a single Demand() call. Note that doing this will hit your role provider, which may be a SQL Server instance or Active Directory and thus have latency involved, so you don't want to do it all the time.
Q2. - RE: PrincipalPermission authentication
Authenticated indicates that the user is logged in against your provider. You may want this flag if, for example, you only need auditing in your application: confirming the user is logged in to your role provider means you can log who they are, etc.
You are correct in saying it's only useful where you don't care about who the user is, only that they are logged in.
Related
The website is hosted on a company server, and I use the session to store user information such as the user account and name. I write the session on the login page and never rewrite it, but client PCs have an account-switching problem: user A, in the middle of operating his account, may switch to user B, perhaps when user B logs in. User A and user B are on the same intranet and use different PCs.
Is the session causing this problem? How can I solve it while modifying as little code as possible?
Well, you have two rather separate issues.
First, the user can log on with different logons. That should in fact give them each separate sessions. Perhaps you are not logging out the user correctly when they log off (or maybe you rolled your own logon system - a bad idea! - since now you can and will have session problems).
Next up:
You have to adopt a design in which session() allows the ONE user to work, and work if they right click - new tab, or even launch another copy of the web browser. So, you have to be REALLY careful here. Say a user clicks on a gridview - typical select some project, product or whatever. So you shove the PK id of that selected row into session, and then of course jump to a new page.
But if they have two tabs open, or two browsers open on that grid? Well, now they can click on one copy, select a row, jump to the new page. Then the user does the same on the other grid.
Now, your PK session ID is DIFFERENT for the first page. If that code continues to use session() for that information, you are in BIG trouble!!! - the PK id in session on the first page is now different, and not the PK the user selected.
So, what to do in the above? Well, there is a hodgepodge of workarounds - some add a custom number to some session value, some add a number to the URL - all quite messy.
The simple solution? Adopt a coding practice in which you ONLY use session() to pass values, and on first page load you transfer them to ViewState. ViewState is per page (or per tab, or per browser copy). Session() of course is global to the user.
So, if they pick a house to buy from that grid, jump to a new page. Now they do the same on the 2nd browser copy - they are sitting on two different houses, but your session PK value is different!!! If they click buy or any other kind of operations on the first page? That code can NOT use session() anymore, can it?
So, even when I have say 4-5 values I must pass to the next page via session? On page load (If Not IsPostBack), I transfer those values to ViewState. You then write ALL CODE on that page to ALWAYS use ViewState, and thus session() values never trip over each other. Two web pages on the same grid, user clicks - pass via session, transfer to ViewState. Now you don't care.
The above approach will solve your problem, at least for passing values. However, I see VERY little reason to store user information in session(). That is what the membership, roles, and logon system does for you; if they log on with a different ID, then Membership GetUser() etc. should and will return the correct user. So while the above design pattern will solve the tripping over PKs and passed values, it will not solve the session information that you keep for the user. But as I stated, you REALLY do not, and should not, need to store user logon information in session anyway.
However, if you do legitimately need to let a user change/flip/jump to a different logon? If you first log out the user, then the session can and should start over - it's not clear how you log out the user, but I would look at that code.
I use this to logout a user:
Session.Abandon()
FormsAuthentication.SignOut()
So the above should start a new session - even if you allow user to flip, or change or logon to a different user.
However, the overall issue with session? Your problem is not only user session information, but using session() values in general code - be careful, and always ask whether the values in session() are still OK to use if the user has, say, 2 or 5 copies of the browser open.
I am curious to understand what the best practice approach is when using the Axon Framework to validate that an email field is unique to a Set of emails for a Contact Aggregate.
Example setup
ContactCreateCommand {
    identifier = '123'
    name = 'ABC'
    email = 'info@abc.com'
}
ContactAggregate {
    ContactAggregate(ContactCreateCommand cmd) {
        // 1. cannot validate email uniqueness here
        AggregateLifecycle.apply(
            new ContactCreatedEvent(/* fields ... */)
        );
    }
}
From my understanding of how this might be implemented, I have identified a number of possible ways to handle this, but perhaps there are more.
1. Do nothing in the Aggregate
This approach requires that the invoker (of the command) query for Contacts by email prior to sending the command, leaving a window of some milliseconds in which eventual consistency allows duplication.
Drawbacks:
Any "invoker" of the command is then required to perform this validation check, as it's not possible to do the check inside the Aggregate using an Axon Query Handler.
Duplication can occur, so all projections built from these events need to handle that duplication somehow.
2. Validate in a separate persistence layer
This approach introduces a new persistence layer used to validate uniqueness from inside the aggregate.
Inside the ContactAggregate command handler for ContactCreateCommand we can then issue a query against this persistence layer (e.g. a table in Postgres with a unique index on it) and validate the email against this database, which contains the full set of emails.
Drawbacks:
Introduces an external persistence layer (external to the microservice) to guarantee uniqueness across Contacts
Scaling should be considered in the persistence layer; hitting it from a highly scaled aggregate could prove a bottleneck.
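To make option 2 concrete, here is a minimal Java sketch of the idea. An in-memory set stands in for the external store; in a real deployment, claim() would be an INSERT into a table with a unique index on the email column, with a duplicate-key violation signalling a taken address. All names here are illustrative, not part of any library.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for the external persistence layer from option 2. In a real
// system "claimed" would be a table with a unique index on the email
// column, and claim() would be an INSERT that fails on a duplicate key.
class EmailRegistry {
    private final Set<String> claimed = ConcurrentHashMap.newKeySet();

    // Atomically claims an email; returns false if it is already taken.
    // Emails are normalized to lower case before the check.
    boolean claim(String email) {
        return claimed.add(email.toLowerCase());
    }
}

public class EmailRegistryDemo {
    public static void main(String[] args) {
        EmailRegistry registry = new EmailRegistry();
        System.out.println(registry.claim("info@abc.com"));  // true: first claim
        System.out.println(registry.claim("INFO@abc.com"));  // false: already taken
    }
}
```

The unique index (not the application-level check) is what ultimately guarantees uniqueness under concurrency, which is why this option holds up even with multiple service instances.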
3. Use a Saga and Singleton Aggregate
This approach enhances the previous setup by introducing an Aggregate that can have at most one instance (i.e. the target identifier is always the same). This way we create a 'Singleton Aggregate' that is responsible solely for encapsulating the Set of all Contact email addresses.
ContactEmailValidateCommand {
    identifier = 'SINGLETON_ID_1'
    email = 'info@abc.com'
    customerIdentifier = '123'
}
UniqueContactEmailAggregate {
    @AggregateIdentifier
    private String identifier;

    Set<String> emails = new HashSet<>();

    on(ContactEmailValidateCommand cmd) {
        if (emails.contains(cmd.email)) {
            // email already taken -> reject
            AggregateLifecycle.apply(
                new ContactEmailInvalidatedEvent(/* fields ... */)
            );
        } else {
            AggregateLifecycle.apply(
                new ContactEmailValidatedEvent(/* fields ... */)
            );
        }
    }

    on(ContactEmailValidatedEvent evt) {
        // event-sourcing handler: record the now-claimed email
        emails.add(evt.email);
    }
}
After this check, we can then react appropriately to the ContactEmailInvalidatedEvent or ContactEmailValidatedEvent, which might invalidate the contact afterwards.
The benefit of this approach is that it keeps the persistence local to the Aggregate, which could give better scaling (as more nodes are added, more aggregates with locally managed Sets exist).
Drawbacks
Quite a lot of boilerplate to replace a simple "create unique index"
This approach allows an 'invalid' Contact to pollute the Event Store forever
The 'Singleton Aggregate' is complex to ensure it is a true singleton (perhaps there is a simpler or better way)
The 'invoker' of the ContactCreateCommand must check the outcome of the Saga
What do others do to solve this? I feel option 2 is perhaps the simplest approach, but are there other options?
What you are essentially looking for is Set-Based Validation (I think this blog does a nice job explaining the concept and how to deal with it in Axon). In short: validating that some field is (or is not) contained in a set of data. When doing CQRS, this becomes a somewhat interesting concept to reason about, with several solutions out there (as you've already portrayed).
I think the best solution is summarized under your second option: use a dedicated persistence layer for the email addresses. You'd simply create a very concise model containing just the email addresses, which you would validate prior to issuing the ContactCreateCommand. Note that this persistence layer belongs to the Command Model, as it is used to perform business validation. You'd thus have an example where your Command Model contains not only Aggregates but also Views. And as you've rightfully noted, this View needs to be optimized for its use case, of course. Maybe introducing a cache that is populated on application start-up wouldn't be too bad.
To ensure this email-addresses view is as up to date as possible, it's smartest to ensure it is updated in the same transaction in which the ContactCreatedEvent (which contains a new email address, I assume) is published. You can do this by having a dedicated Event Handling Component for your "Email Addresses View" which is updated through a SubscribingEventProcessor (SEP). This works because the SEP is invoked by the same thread that publishes the event (your aggregate).
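The property doing the work here is that a subscribing processor runs its handlers on the publishing thread. Below is an Axon-free sketch of just that property, with a hand-rolled synchronous bus; none of these names are Axon API, they only illustrate why the view is consistent the moment publish() returns.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;

// A new-contact event carrying the email address (illustrative).
class ContactCreatedEvent {
    final String email;
    ContactCreatedEvent(String email) { this.email = email; }
}

// Minimal synchronous event bus standing in for a SubscribingEventProcessor.
class SimpleEventBus {
    private final List<Consumer<ContactCreatedEvent>> handlers = new ArrayList<>();

    void subscribe(Consumer<ContactCreatedEvent> handler) { handlers.add(handler); }

    void publish(ContactCreatedEvent event) {
        // Handlers run synchronously on the caller's thread - the property
        // that lets a SEP update the view inside the publishing transaction.
        for (Consumer<ContactCreatedEvent> h : handlers) {
            h.accept(event);
        }
    }
}

public class SubscribingDemo {
    public static void main(String[] args) {
        Set<String> emailView = new HashSet<>();
        SimpleEventBus bus = new SimpleEventBus();
        bus.subscribe(event -> emailView.add(event.email));

        bus.publish(new ContactCreatedEvent("info@abc.com"));
        // The view is already up to date when publish() returns.
        System.out.println(emailView.contains("info@abc.com"));  // true
    }
}
```

With an asynchronous (tracking) processor, by contrast, the view would lag behind the event store, reopening the eventual-consistency window the question is trying to close.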
You have a couple of options when it comes to validating against this model prior to sending the command. You could use a MessageDispatchInterceptor which only reacts to the ContactCreateCommand, for example. Or you could introduce a Handler Enhancer dedicated to reacting to the ContactCreateCommand to perform this validation. Or you could introduce another command, like RequestContactCreationCommand, targeted at a regular component; this component would handle the command, validate against the model, and, if approved, dispatch a ContactCreateCommand.
That's my two cents on the situation, hope this helps @vcetinick!
I am trying to manage access rights for users to edit or view different articles.
Articles can be created dynamically and the rights should be editable for every article.
In my case, I have a User object and multiple other objects (Article, and more...).
I need to check if a User can read or write any kind of object.
I see that there are Voters, but can they only manage user groups?
Can somebody help me?
A Voter can decide almost anything. Usually it's based on a user's permission, but it doesn't have to be - as an example, I've used one as a 'feature flag' check, with a value fetched from configuration or a database entry deciding whether to show something or not.
The page on voters has an example of viewing or editing a database record (a Post entity) via $this->denyAccessUnlessGranted('edit', $post);.
In your case, the voter would be passed the 'attribute' and the object (Article, etc.) you want to check, and it gets the current user from a service. If that user has the appropriate permission to read/edit/delete the Article or other object, it returns true.
I will try to be as concise as possible. I'm using Flex/Hibernate technologies for my app, and the Cairngorm micro-architecture for Flex. Because I'm a beginner, I have probably misunderstood something about the purpose of Cairngorm's ModelLocator. I have the following problem...
Suppose that we have next data model:
USER 1 ----------------> M TOPIC 1 -------------> M COMMENT
A user can start many topics, topics can have many comments, etc. It is a pretty simple model, just for example. In Hibernate, I use the EAGER fetching strategy for the unidirectional USER->TOPIC and TOPIC->COMMENT relations (this is no question about best practices etc.; it is just an example of the problem).
My ModelLocator looks like this:
...
public class ModelLocator ....
{
//private instance, private constructor, getInstance() etc...
...
//app state
public var users:ArrayCollection;
public var selectedUser:UserVO;
public var selectedTopic:TopicVO;
}
Because I use eager fetching, I can 'walk' through the whole object graph on my Flex client without hitting the database. This is OK as long as I don't need to insert, update, or delete some of the domain instances. But when that comes, problems with synchronization arise.
For example, if I want to show details about some user from some UserListView: when the user (actor) selects that user in the list, I take the selected index in the UserList, get the element from the users ArrayCollection in the ModelLocator at that index, and show details about the selected user.
When I want to insert a new user, OK: I save that user in the database, and in the IResponder result method I add that user to the ModelLocator.users ArrayCollection.
But when I want to add a new topic for some user, if I still want the convenience of EAGER fetching, I need to reload the user list again... and add the topic to the selected user... and if the user appears in some other location (indirectly), I need to insert the topic there also.
Update is even worse. In that case I need to write some extra logic...
My question: is this a good way of using the ModelLocator in Cairngorm? It seems to me that, because of the above, EAGER fetching is somehow pointless: with EAGER fetching, synchronization on the Flex client can become a big problem. Should I always hit the database in order to manipulate my domain model?
EDIT:
It seems that I didn't make myself clear enough. Excuse me for that.
OK, I also use Spring in the technology stack, and the DTO (DVO) pattern with a Flex/Spring (de)serializer, but I wanted to stay out of that because I'm trying to point out how you stay synchronized with database state in your Flex app. I'm not even mentioning the multi-user scenario and the polling/pushing topic, which is maybe my solution, because I use the standard request-response mechanism. I didn't provide concrete code because this seems like a conceptual problem to me, and I use standard Cairngorm terms to explain the pseudo-names I use for class names, variable names, etc.
I'll try to 'simplify' again: you have a Flex client for administration of the above-mentioned domain (CRUD for each of the domain classes); you have ListOfUsersView (shows a list of users with basic info about them), UserDetailsView (shows user details and a list of the user's topics, with a delete option for each topic), InsertNewUserTopicView (a form to insert a new topic), etc.
Each view which displays some info is synchronized with ModelLocator state variables, for example:
ListOfUsersView ------bound to------> users:ArrayCollection in ModelLocator
UserDetailsView ------bound to------> selectedUser:UserVO in ModelLocator
etc.
The view state transitions look like this:
ListOfUsersView----detailsClick---->UserDetailsView---insertTopic--->InsertTopicView
So when I click the "Details" button in ListOfUsersView, in my logic I get the index of the selected row in ListOfUsers; then I take the UserVO object from users:ArrayCollection in the ModelLocator at that index; then I set that UserVO object as selectedUser:UserVO in the ModelLocator; and then I change the view state to UserDetailsView (which shows the user details and selectedUser.topics and is synchronized with selectedUser:UserVO in the ModelLocator).
Now, I click the "Insert new topic" button on UserDetailsView, which brings up the InsertTopicView form. I enter some data and click "Save topic" (after a successful save, UserDetailsView is shown again), and the problem arises.
Because of my EAGER-ly fetched objects, I didn't hit the database in the mentioned transitions, and because of that there are two places I need to be concerned about when inserting a new topic for the selected user: one is the instance of the selectedUser object in the users:ArrayCollection (because my logic selects users from that collection and shows them in UserDetailsView), and the second is selectedUser:UserVO (in order to sync the UserDetailsView shown after the successful save operation).
So, again my question arises... Should I hit the database in every transition? Should I reload users:ArrayCollection and selectedUser:UserVO after the save in order to synchronize database state with the Flex client? Should I take the saved topic and, on the client side, without hitting the database, programmatically update every place that needs it, or...?
It seems to me that EAGER-ly fetched objects with their associations are not a good idea. Am I wrong?
Or, to 'simplify' :) again: what would you do in the mentioned scenario? You need to handle the click on the "Save topic" button, and now what...?
Again, I am really trying to explain this as concretely as possible because I'm confused by it. So, please forgive me for my long post.
From my point of view the point isn't in the fetching mode itself, but in the client/server interaction. From my previous experience, I've found some disadvantages of using pure domain objects (especially with eager fetching) for client/server interaction:
You have to pass all the child collections, perhaps without any need for them on the client side. In your case it is very likely you'll display topics and comments not for all the users you get from the server. The most likely situation is that you need to display the user list, then the topics for one selected user, and then the comments for one selected topic. But in the current implementation you receive all the topics and comments even if they are not needed for display. It is quite possible you'll receive your entire DB in a single query.
Another problem is that it can be very insecure to send all the user data (or other data) with all fields (emails, addresses, passwords, credit card numbers, etc.).
I think there can be other reasons not to use pure domain objects especially with eager fetching.
I suggest you introduce a Mapper (or Assembler) layer to convert your domain objects to Data Transfer Objects, aka DTOs. Every query to your service layer will then receive data from your DAO or Active Record and convert it to the corresponding DTO using the corresponding Mapper. That way you can get a user list without private data and query additional user details with a separate query.
On the client side you can use these DTOs directly or convert them into client domain objects. You can do this in your Cairngorm responders.
This way you can avoid a lot of your client side problems which you described.
For a Mapper layer you can use Dozer library or create your own lightweight mappers.
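A hand-written mapper on the Java/Spring server side can be very small. The sketch below reuses the SimpleUserRepresentationDTO name from this answer; the User fields are illustrative, and the point is simply that sensitive fields never reach the DTO.

```java
// Illustrative domain entity; only some fields are safe to ship to Flex.
class User {
    String firstName;
    String lastName;
    String email;         // private data: deliberately not exposed in the DTO
    String passwordHash;  // never leaves the server

    User(String firstName, String lastName, String email, String passwordHash) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.email = email;
        this.passwordHash = passwordHash;
    }
}

// DTO with just the displayable fields for the user list.
class SimpleUserRepresentationDTO {
    final String firstName;
    final String lastName;

    SimpleUserRepresentationDTO(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

class UserMapper {
    // Maps only the displayable fields; sensitive fields stay behind.
    static SimpleUserRepresentationDTO toListDto(User user) {
        return new SimpleUserRepresentationDTO(user.firstName, user.lastName);
    }
}

public class MapperDemo {
    public static void main(String[] args) {
        User u = new User("Ada", "Lovelace", "ada@example.com", "...hash...");
        SimpleUserRepresentationDTO dto = UserMapper.toListDto(u);
        System.out.println(dto.firstName + " " + dto.lastName);  // Ada Lovelace
    }
}
```

A UserDetailsDTO for the edit view would get its own mapper method, keeping each query's payload as small as its screen needs.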
Hope this helps!
EDIT
As for your details question: I'd prefer to get a user list with just the necessary displayable fields, like first name and last name (to display in the list) - say, a list of SimpleUserRepresentationDTO.
Then, if the user requests user details for editing, you request a UserDetailsDTO for that user and fill your selectedUser fields in the model with it. The same goes for topics.
The only problem is displaying list of users after user details editing. You can:
Request the whole list again. The advantage is that you can display changes performed by other users. But if the list is too long, it can be very inefficient to query all the users each time, even if they are SimpleUserRepresentationDTOs with minimal data.
When you get success from the server on saving the user details, you can find the corresponding user in the model's user list and replace the changed details there.
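The second option can be sketched in a few lines (plain Java here, with illustrative names; in the actual app this logic would live in an ActionScript Cairngorm responder):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative value object; in the Flex app this would be the UserVO class.
class UserVO {
    final int id;
    String name;

    UserVO(int id, String name) { this.id = id; this.name = name; }
}

// Stand-in for the model's users list (users:ArrayCollection in ModelLocator).
class UserListModel {
    final List<UserVO> users = new ArrayList<>();

    // Replaces the entry with the same id; returns true if one was found.
    boolean replace(UserVO updated) {
        for (int i = 0; i < users.size(); i++) {
            if (users.get(i).id == updated.id) {
                users.set(i, updated);
                return true;
            }
        }
        return false;
    }
}

public class ReplaceDemo {
    public static void main(String[] args) {
        UserListModel model = new UserListModel();
        model.users.add(new UserVO(1, "Alice"));
        model.users.add(new UserVO(2, "Bob"));
        // Server confirmed the save of user 2: swap it in place, no re-query.
        boolean replaced = model.replace(new UserVO(2, "Bobby"));
        System.out.println(replaced + " " + model.users.get(1).name);  // true Bobby
    }
}
```

Keying the replacement on a stable id (rather than a list index) is what keeps this safe when the list order changes or when several views share the same collection.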
To tell you the truth, there's no good way of using Cairngorm. It's a crap framework.
I'm not too sure exactly what you mean by eager fetching (or what exactly your problem is), but whatever it is, it's still a request/response kind of deal, and this shouldn't be a problem per se unless you're doing something wrong - in which case, I can't see your code.
As for frameworks, I recommend you look at RobotLegs or Parsley.
Look at the "dpHibernate" project. It implements "lazy loading" on the Flex client.
The documentation reads:
Helper function for authentication
modules. Either login in or registers
the current user, based on username.
Either way, the global $user object is
populated based on $name.
It seems to me that this function does not actually perform a login: it does not trigger hook_user with op = 'login', and it does not call user_external_login or even user_authenticate_finalize.
Am I interpreting it wrong?
I looked through the code, and it doesn't invoke hook_user() with op = 'login'. You can do that in your own module, though.
Look at user_module_invoke() to do this.
It does log the user in. The last lines of the code,
// Log user in.
$form_state['uid'] = $account->uid;
user_login_submit(array(), $form_state);
seem to say so, even if a wrong password was submitted.
The system seems to create a user (named after the name provided in the login form) and locally save whatever wrong password was provided (which will then become the "right" password).
If you do not take further action, it will not even consult the external authentication source, and the real owner of that name will not be able to log in later...
Scary, huh?