Flex-Cairngorm/Hibernate - Is EAGER fetching strategy pointless?

I will try to be as concise as possible. I'm using Flex/Hibernate technologies for my app, together with the Cairngorm micro-architecture on the Flex side. Because I'm a beginner, I have probably misunderstood something about Cairngorm's ModelLocator purpose. I have the following problem...
Suppose we have the following data model:
USER (1) ----------------> (M) TOPIC (1) -------------> (M) COMMENT
A user can start many topics, and a topic can have many comments. It is a pretty simple model, just for the example. In Hibernate, I use the EAGER fetching strategy for the unidirectional USER->TOPIC and TOPIC->COMMENT relations (this is not a question about best practices; it is just an example of the problem).
My ModelLocator looks like this:
...
public class ModelLocator ...
{
    // private instance, private constructor, getInstance() etc...

    // app state
    public var users:ArrayCollection;
    public var selectedUser:UserVO;
    public var selectedTopic:TopicVO;
}
Because I use eager fetching, I can 'walk' the whole object graph on my Flex client without hitting the database. This is fine as long as I don't need to insert, update, or delete any of the domain instances. But when that comes up, synchronization problems arise.
For example, if I want to show details about some user from a UserListView, then when the user (actor) selects that user in the list, I take the selected index in the UserList, get the element at that index from the users ArrayCollection in the ModelLocator, and show the details of the selected user.
When I want to insert a new User, fine: I save that user in the database and, in the IResponder result method, I add that user to the ModelLocator.users ArrayCollection.
But when I want to add a new topic for some user, if I still want the convenience of EAGER fetching, I need to reload the user list again... and add the topic to the selected user... and if the user is referenced in some other place (indirectly), I need to insert the topic there as well.
Update is even worse; in that case I even need to write some extra logic...
My question: is this a good way of using the ModelLocator in Cairngorm? It seems to me that, because of the above, EAGER fetching is somewhat pointless. With EAGER fetching, synchronization on the Flex client can become a big problem. Should I always hit the database in order to manipulate my domain model?
EDIT:
It seems that I didn't make myself clear enough. Excuse me for that.
OK, I also use Spring in the technology stack, and the DTO (DVO) pattern with a Flex/Spring (de)serializer, but I wanted to stay out of that because I'm trying to point out how you stay synchronized with the database state in your Flex app. I haven't even mentioned the multi-user scenario and the polling/pushing topic, which may be my solution, because I use the standard request-response mechanism. I didn't provide concrete code, because this seems like a conceptual problem to me, and I use standard Cairngorm terms to explain the pseudo-names I use for class names, variable names, etc.
I'll try to 'simplify' again: you have a Flex client for administration of the above-mentioned domain (CRUD for each of the domain classes). You have a ListOfUsersView (shows a list of users with basic info about them), a UserDetailsView (shows user details and a list of the user's topics, with a delete option for each topic), an InsertNewUserTopicView (a form to insert a new topic), etc.
Each view that displays some info is synchronized with ModelLocator state variables, for example:
ListOfUsersView ------bound to------> users:ArrayCollection in ModelLocator
UserDetailsView ------bound to------> selectedUser:UserVO in ModelLocator
etc.
The view state transitions look like this:
ListOfUsersView----detailsClick---->UserDetailsView---insertTopic--->InsertTopicView
So when I click the "Details" button in the ListOfUsersView, my logic gets the index of the selected row in the ListOfUsers, takes the UserVO object at that index from users:ArrayCollection in the ModelLocator, sets that UserVO object as selectedUser:UserVO in the ModelLocator, and then changes the view state to UserDetailsView (which shows the user details and selectedUser.topics) and which is synchronized with selectedUser:UserVO in the ModelLocator.
Now I click the "Insert new topic" button on the UserDetailsView, which brings up the InsertTopicView form. I enter some data, click "Save topic" (after a successful save, the UserDetailsView is shown again), and the problem arises.
Because of my EAGER-ly fetched objects, I didn't hit the database in the mentioned transitions, and because of that there are two places I have to worry about when inserting a new topic for the selected user: one is the instance of the selected user inside users:ArrayCollection (because my logic selects users from that collection and shows them in the UserDetailsView), and the second is selectedUser:UserVO (in order to sync the UserDetailsView that is shown again after the successful save operation).
So, again my question arises: should I hit the database on every transition? Should I reload users:ArrayCollection and selectedUser:UserVO after the save in order to synchronize the database state with the Flex client? Should I take the saved topic and, on the client side, without hitting the database, programmatically visit all the places that need to be updated, or...?
It seems to me that EAGER-ly fetched objects with their associations are not a good idea. Am I wrong?
Or, to 'simplify' :) again: what would you do in the mentioned scenario? You need to handle the click on the "Save topic" button, and now what...?
Again, I have really tried to explain this as concretely as possible because I'm confused by it. So, please forgive the long post.

From my point of view, the issue isn't the fetching mode itself but the client/server interaction. From my previous experience, I've found several disadvantages of using pure domain objects (especially with eager fetching) for client/server interaction:
You have to pass all the child collections, possibly without any need for them on the client side. In your case it is very likely you won't display topics and comments for every user you get from the server. The most likely situation is that you need to display the user list, then the topics for one selected user, and then the comments for one selected topic. But in the current implementation you receive all the topics and all the comments even when they are not needed for display. It is quite possible you'll receive your whole database in a single query.
Another problem is that it can be very insecure to send all the user data (or other data) with all fields (emails, addresses, passwords, credit card numbers, etc.).
I think there can be other reasons not to use pure domain objects, especially with eager fetching.
I suggest you introduce a Mapper (or Assembler) layer that converts your domain objects to Data Transfer Objects, aka DTOs. Every call to your service layer then receives data from your DAO or Active Record and converts it to the corresponding DTO using the corresponding Mapper. That way you can return the user list without private data and fetch additional user details with a separate query.
On the client side you can use these DTOs directly or convert them into client-side domain objects. You can do that in your Cairngorm responders.
This way you can avoid a lot of the client-side problems you described.
For the Mapper layer you can use the Dozer library or create your own lightweight mappers.
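For illustration, a hand-rolled lightweight mapper might look something like this. This is only a sketch: the DTO shapes, the entity stubs, and their getters are my assumptions (the DTO names happen to match the ones used in the edit below), not code from the actual application.

import java.util.ArrayList;
import java.util.List;

// Stand-ins for the Hibernate entities on the server (getters only, for the sketch).
class Topic {
    private String title;
    public String getTitle() { return title; }
}

class User {
    private Long id;
    private String firstName;
    private String lastName;
    private List<Topic> topics = new ArrayList<Topic>();
    public Long getId() { return id; }
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public List<Topic> getTopics() { return topics; }
}

// Hypothetical flat DTOs; field names are illustrative.
class SimpleUserRepresentationDTO {
    public Long id;
    public String firstName;
    public String lastName;
}

class UserDetailsDTO {
    public Long id;
    public String firstName;
    public String lastName;
    public List<String> topicTitles;
}

// Hand-written mapper: the service layer calls this instead of serializing entities directly.
public class UserMapper {

    /** List view: only the fields needed to render a row, no child collections. */
    public static SimpleUserRepresentationDTO toSimpleRepresentation(User user) {
        SimpleUserRepresentationDTO dto = new SimpleUserRepresentationDTO();
        dto.id = user.getId();
        dto.firstName = user.getFirstName();
        dto.lastName = user.getLastName();
        // No email, password or topics: the list view does not need them.
        return dto;
    }

    /** Details view: still a deliberate subset, not the whole object graph. */
    public static UserDetailsDTO toDetails(User user) {
        UserDetailsDTO dto = new UserDetailsDTO();
        dto.id = user.getId();
        dto.firstName = user.getFirstName();
        dto.lastName = user.getLastName();
        dto.topicTitles = new ArrayList<String>();
        for (Topic topic : user.getTopics()) {   // topics only, not their comments
            dto.topicTitles.add(topic.getTitle());
        }
        return dto;
    }
}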
Hope this helps!
EDIT
As for your details: I'd prefer to get the user list with just the fields needed for display, like first name and last name (to show in the list). Say, a list of SimpleUserRepresentationDTO.
Then, if the user requests user details for editing, you request a UserDetailsDTO for that user and fill your selectedUser fields in the model with it. The same goes for topics.
The only problem is displaying the list of users after the user details have been edited. You can:
Request the whole list again. The advantage is that you can display changes performed by other users. But if the list is long, it can be very inefficient to query all the users each time, even if they are SimpleUserRepresentationDTOs with minimal data.
When you get a success response from the server after saving the user details, find the corresponding user in the model's user list and replace the changed details there.

To tell you the truth, there's no good way of using Cairngorm. It's a crap framework.
I'm not too sure exactly what you mean by eager fetching (or what exactly your problem is), but whatever it is, it's still a request/response kind of deal, and this shouldn't be a problem per se unless you're doing something wrong; in which case, I can't see your code.
As for frameworks, I recommend you look at RobotLegs or Parsley.

Look at the "dpHibernate" project. It implements "lazy loading" on the Flex client.

Related

Storing Temp. Data in ASP.NET Core (NopCommerce)

Dear StackOverflow community,
For a project of mine I need to store (temporary) data somewhere other than a database. Actually, it's a little more complicated. I have a checkout page in NopCommerce where users can select their delivery moment, or even a location, for example pickup. This data has to be stored temporarily until the user has made the payment. Only then will I request the data and store it in the DB, so that later I can retrieve it and display it in my dashboard, and I know when the package is scheduled to be shipped.
Requirements:
Duration: 12-24 hours.
Stored as user-specific data.
The data has to survive quite a few sessions, depending on the user. For example, if the user chooses a delivery moment but decides to look somewhere else before paying, this data has to be kept across all those sessions.
If possible, server-side.
I have quite a few options following Microsoft:
Session and state management in ASP.NET Core
I have tried storing the data in memory (caching data) using 'MemoryCacheEntryOptions'. The problem is that it's application-wide, and it's hard to maintain with hundreds of users.
Another option is 'Session state'.
The problem is that this data only survives a single session, and I need to hold the data for at least 12 to 24 hours.
Then we have 'TempData'. This seems like a promising option: it's kept until it has been used/read, and you even have 'Peek' and 'Keep' methods to keep the data while peeking. Problem: it requires a controller, and the data is requested in a callback method that doesn't have a controller.
'Query strings': well, it's not much, is it? This may not be user-critical data, but a query string doesn't seem like what I'm looking for.
'Hidden fields': not really suitable either, since this isn't form data.
'HttpContext.Items': definitely not suitable; the data is only stored for a single request.
'Cache': way too hard to maintain user-specific data.
So my question is: how do I save all this data temporarily, if possible server-side, for at least a day, with hundreds of users at the same time, and then request the data later in a callback so I can store it in the DB until the order is shipped?
NopCommerce exposes a class called GenericAttributeService in Nop.Services.Common. This allows you to store your custom attribute data, specific to each customer.
To use it, first inject the GenericAttributeService into your class (controller, service class, etc).
private readonly IGenericAttributeService _genericAttributeService;
private readonly IWorkContext _workContext; // exposes the current customer (used below)

public MyFancyController(IGenericAttributeService genericAttributeService, IWorkContext workContext)
{
    _genericAttributeService = genericAttributeService;
    _workContext = workContext;
}
To save data for the current customer, use SaveAttribute with the key-value pair you need:
_genericAttributeService.SaveAttribute<string>(_workContext.CurrentCustomer, "MyKeyName", "Value to save for this customer");
_genericAttributeService.SaveAttribute<int>(_workContext.CurrentCustomer, "My2ndKeyName", model.id);
To get the data back, use GetAttributesForEntity with the customer id and the key group ("Customer"), then filter by your key:
var attribute = _genericAttributeService.GetAttributesForEntity(_workContext.CurrentCustomer.Id, "Customer")
    .Where(x => x.Key == "MyKeyName")
    .FirstOrDefault();
To delete the attribute use DeleteAttribute:
_genericAttributeService.DeleteAttribute(attribute);
Notice that we used _workContext which helps expose the current customer.
This mechanism already backs other temporary functionality in the application, such as managing shopping carts, wish lists, etc., so browsing the source code for other examples can also be helpful.

Validating a Contact has a Unique Email in Axon

I am curious to understand the best-practice approach when using the Axon Framework to validate that an email field is unique across the set of emails held for the Contact aggregate.
Example setup
ContactCreateCommand {
    identifier = '123'
    name = 'ABC'
    email = 'info@abc.com'
}

ContactAggregate {
    ContactAggregate(ContactCreateCommand cmd) {
        // 1. cannot validate email uniqueness here
        AggregateLifecycle.apply(
            new ContactCreatedEvent(/* fields ... */)
        );
    }
}
From my understanding of how this might be implemented, I have identified a number of possible ways to handle this, but perhaps there are more.
1. Do nothing in the Aggregate
This approach requires the invoker of the command to query for Contacts by email before sending the command, leaving a window of some milliseconds where eventual consistency allows duplication.
Drawbacks:
Every "invoker" of the command is then required to perform this validation check, since it's not possible to do the check inside the aggregate using an Axon query handler.
Duplication can occur, so all projections built from these events need to handle that duplication somehow.
2. Validate in a separate persistence layer
This approach introduces a new persistence layer that would validate uniqueness inside the aggregate.
Inside the ContactAggregate command handler for ContactCreateCommand we can then issue a query against this persistence layer (e.g. a table in Postgres with a unique index on it) and validate the email against that database, which contains the full set of emails.
Drawbacks:
Introduces an external persistence layer (external to the microservice) to guarantee uniqueness across Contacts
Scaling has to be considered in the persistence layer; hitting it from a highly scaled aggregate could prove a bottleneck.
3. Use a Saga and Singleton Aggregate
This approach enhances the previous setup by introducing an aggregate that can have at most one instance (i.e. the target identifier is always the same). This way we create a 'Singleton Aggregate' whose only responsibility is to encapsulate the set of all contact email addresses.
ContactEmailValidateCommand {
    identifier = 'SINGLETON_ID_1'
    email = 'info@abc.com'
    customerIdentifier = '123'
}

UniqueContactEmailAggregate {
    @AggregateIdentifier
    private String identifier;

    Set<String> emails = new HashSet<>();

    on(ContactEmailValidateCommand cmd) {
        if (emails.contains(cmd.email)) {
            AggregateLifecycle.apply(
                new ContactEmailInvalidatedEvent(/* fields ... */)
            );
        } else {
            AggregateLifecycle.apply(
                new ContactEmailValidatedEvent(/* fields ... */)
            );
        }
    }
}
After this check, we could then react appropriately to the ContactEmailValidatedEvent or ContactEmailInvalidatedEvent, the latter of which might invalidate the contact afterwards.
The benefit of this approach is that it keeps the persistence local to the Aggregate, which could give better scaling (as more nodes are added, more aggregates with locally managed Sets exist).
Drawbacks
Quite a lot of boilerplate to replace a simple "create unique index"
This approach lets an 'invalid' Contact pollute the event store forever
It is complex to ensure the 'Singleton Aggregate' really is a singleton (perhaps there is a simpler or better way)
The 'invoker' of the CreateContactCommand must check to see the outcome of the Saga
What do others do to solve this? I feel option 2 is perhaps the simplest approach, but are there other options?
What you are essentially looking for is Set Based Validation (I think this blog does a nice job explaining the concept and how to deal with it in Axon). In short: validating that some field is (or is not) contained in a set of data. When doing CQRS, this becomes an interesting concept to reason about, with several solutions out there (as you've already described).
I think the best solution is summarized under your second option: use a dedicated persistence layer for the email addresses. You'd simply create a very concise model containing just the email addresses, which you would validate against prior to issuing the ContactCreateCommand. Note that this persistence layer belongs to the command model, as it is used to perform business validation. You'd thus introduce a situation where you not only have aggregates in your command model, but also views. And as you've rightfully noted, this view needs to be optimized for its use case, of course. Maybe introducing a cache which is created on application start-up wouldn't be too bad.
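As a rough illustration of what that concise model might look like (a sketch only; the entity, repository, and field names here are my own, not from the question):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical command-side view of known email addresses.
// The unique constraint makes the database the final arbiter of uniqueness,
// even if two commands race past the query-based check.
@Entity
public class EmailAddressEntry {

    @Id
    private String contactId;

    @Column(unique = true)
    private String email;

    protected EmailAddressEntry() {
        // required by JPA
    }

    public EmailAddressEntry(String email, String contactId) {
        this.email = email;
        this.contactId = contactId;
    }
}

// Hypothetical Spring Data repository used by the projection and the validation below.
interface EmailAddressRepository extends JpaRepository<EmailAddressEntry, String> {
    boolean existsByEmail(String email);
}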
To ensure this email-addresses view is as up to date as possible, it's smartest to make sure it is updated in the same transaction in which the ContactCreatedEvent (which contains the new email address, I assume) is published. You can do this by having a dedicated event handling component for your "email addresses view" which is updated through a SubscribingEventProcessor (a SEP). This works because the SEP is invoked by the same thread that publishes the event (your aggregate).
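A minimal sketch of such a component, assuming the hypothetical EmailAddressEntry/EmailAddressRepository above and assuming ContactCreatedEvent exposes the identifier and email (Axon 4 annotations; the processor name and the Spring Boot property are examples, not prescriptions):

import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

// Registering the "emailAddresses" processing group as a subscribing processor
// (e.g. axon.eventhandling.processors.emailAddresses.mode=subscribing with Spring Boot)
// keeps this update in the same thread/transaction that publishes the event.
@Component
@ProcessingGroup("emailAddresses")
public class EmailAddressProjection {

    private final EmailAddressRepository repository;

    public EmailAddressProjection(EmailAddressRepository repository) {
        this.repository = repository;
    }

    @EventHandler
    public void on(ContactCreatedEvent event) {
        // Record the email so later ContactCreateCommands can be validated against it.
        repository.save(new EmailAddressEntry(event.getEmail(), event.getIdentifier()));
    }
}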
You have a couple of options when it comes to querying this model prior to sending the command. You could use a MessageDispatchInterceptor which only reacts to the ContactCreateCommand, for example. Or you introduce a Handler Enhancer dedicated to reacting to the ContactCreateCommand to perform this validation. Or you introduce another command, like RequestContactCreationCommand, which is targeted at a regular component. This component would handle the command, validate against the model and, if approved, dispatch a ContactCreateCommand.
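For example, the dispatch interceptor variant could look roughly like this (again just a sketch, reusing the hypothetical repository from above and assuming a getEmail() accessor on the command; it would be registered on the CommandBus via registerDispatchInterceptor):

import java.util.List;
import java.util.function.BiFunction;
import org.axonframework.commandhandling.CommandMessage;
import org.axonframework.messaging.MessageDispatchInterceptor;

// Rejects a ContactCreateCommand before it ever reaches the aggregate
// if the email is already present in the command-side email view.
public class ContactEmailUniquenessInterceptor
        implements MessageDispatchInterceptor<CommandMessage<?>> {

    private final EmailAddressRepository repository;

    public ContactEmailUniquenessInterceptor(EmailAddressRepository repository) {
        this.repository = repository;
    }

    @Override
    public BiFunction<Integer, CommandMessage<?>, CommandMessage<?>> handle(
            List<? extends CommandMessage<?>> messages) {
        return (index, command) -> {
            if (ContactCreateCommand.class.equals(command.getPayloadType())) {
                ContactCreateCommand payload = (ContactCreateCommand) command.getPayload();
                if (repository.existsByEmail(payload.getEmail())) {
                    throw new IllegalStateException(
                            "A contact with email " + payload.getEmail() + " already exists");
                }
            }
            return command;
        };
    }
}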
That's my two cents on the situation. Hope this helps, @vcetinick!

Security Issue with ASP.NET and SQL Server

A problem appears when two users are logged on to our service system at the same time and looking at the service list gridview. If user1 does a search to filter the gridview and user2 happens to click to another page, user2 sees the results of the search performed by user1. That means one company can see another company's data.
It's an ASP.NET application that was developed in house with C#/ASP.NET 3.5. The data is stored in a SQL Server 2000 database, and the application relies very heavily on stored procedures to update, select, and delete data. There are multiple user types that are restricted in what data they can see. For example, we have a company user that can only see data relevant to that company.
From what I've seen, the security is handled through if statements in the front end. For example, if userlevel = 1 then do this; if userlevel = 2 do this. These statements are used to show or hide columns in a grid, run queries to return data, and apply any other restrictions needed. For a company user, the code-behind gets the companyid assigned to the user and uses that in a query to return all the data associated with that companyid (services, ships, etc.).
Any recommendations for fixing this will be highly appreciated.
It's hard to say without seeing any implementation details, but on the surface it appears that there may be some company-level caching. Check for OutputCache settings, DataSource caching, explicit caching with Page.Cache, etc.
This article is a little dated, but at a glance it looks like most information is still relevant in ASP.NET 4.0.
ASP.NET Caching: Techniques and Best Practices
In addition to jrummerll's answer, check the data access layer of your app and make sure that you don't have any static variables defined. Having a static variable defined could cause this sort of issue too, since two competing requests may overwrite the value of the CompanyID, for example.
Your basic model should work. What you've told us is not enough to diagnose the problem, but I've got a few guesses. Most likely your code is confusing UserID or CompanyID values.
Are you mistakenly storing the CompanyID in the Cache, rather than the session?
Is the CompanyID stored in a static variable? A common (and disastrous!) pitfall in web applications is that a value stored in a static variable will remain the same for all users! In general, don't use static variables in asp.net apps.
Maybe your db caching or output caching doesn't vary properly by session or other variables. So, a 2nd user will see what was created for the previous user. Stop any caching that's happening and see if that fixes it, but debug from there.
Other variations on the above themes: maybe the query is stored in a static variable. Maybe these user-related values are stored in the cache or db, but the key for that record (UserID?) is stored in a static variable?
You can put those if statements in a thread. Threading gives you the option of letting only one user access the application, or the gridview in your case.
See this link: http://msdn.microsoft.com/en-us/library/ms173179.aspx
Here is some sample code, used throughout the entire application, for filtering results. What is the best way to fix this so that when one user logs on, another user doesn't see their results?
protected void PopulategvServiceRequestListing(string _whereclause)
{
    _dsGlobalDatasource = new TelemarServiceRequestListing().GetServiceRequestListingDatasource(_whereclause);

    if (_dsGlobalDatasource.Tables[0].Rows.Count != 0)
    {
        gv_ServiceRequest.DataSource = _dsGlobalDatasource;
        gv_ServiceRequest.DataBind();
    }
    else
    {
        gv_ServiceRequest.DataSource = new TelemarServiceRequestListing().DummyDataset();
        gv_ServiceRequest.DataBind();
        gv_ServiceRequest.Rows[0].Visible = false;
        gv_ServiceRequest.HeaderStyle.Font.Bold = true;
    }
}

Best Practices for controlling access to form fields

I have a classic 3-tier ASP.Net 3.5 web application with forms that display business objects and allow them to be edited. Controls on the form correspond to a property of the underlying business object. The user will have read/write, readonly, or no access to the various controls depending on his/her role. Very conventional stuff.
My question is: what is the object-oriented best practice for coding this? Is there anything more elegant than wrapping each control in a test for the user's role and setting its Visible and Enabled properties?
Thanks
You'll want to drive this off of data, trust me. You'll need a lot of tables to do it right, but it is so worth it in the end. Having to crack open code and edit a bunch of if-statements every time the business wants to change permissions is a killer.
You'll want a table for your main high-level types, the things you probably already have business object classes for. Then a table for their statuses. Then a table for the fields of these classes. Then a table for user roles (admin, guest, etc.). Finally, a table for the permissions themselves. This table will have columns for business class, status, field, and user role, and then the permission they grant. For permissions I would go with one field and use an enum: Hidden, ReadOnly, Editable, and Required. Required implies Editable. Anything but Hidden implies Visible. Finally, put a Priority column on this table to control which permission is used when more than one might apply.
You fill out this table with various combinations of class, status, field, role, and permission. If a value is null then it applies to all possible values, so you don't need a trillion rows to cover all your bases. For example, 99% of the time, Guest users are read-only users. So you can put a single entry in the table with only the Guest role specified, everything else null, set its Priority nice and high, and set the permission to Read Only. Now for all classes, all statuses, and all fields, if the user is a Guest, they will have Read Only permission.
I added status to your list of concerns because in my experience the business constantly wants to constrain things by an object's status. Maybe users can edit an item's name while it is in Draft status, for example, but once it is in Posted status the name is no longer editable. That is really common in my experience.
You'd want to bring this table into memory and store it in the app's cache, because it's not going to change very often, if ever, unless you do a whole new version.
Now the above is going to handle 90% of your needs, I suspect.
One area that will have to be handled in code, unless you want to get really fancy, is the case where a user's permission is determined in part by the values of fields in the object itself. Say you have a Project class with a Project Manager. The Percent Complete field of the class is basically read-only for everybody except the Project Manager. How are you going to handle that? You'll need to provide a way to incorporate specific instances of a class into the decision-making process. I do this in code.
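To make the table-driven lookup concrete, here is a rough sketch (written in Java for brevity even though the question is ASP.NET; all class and field names are illustrative) of how the cached rules could be matched, with null meaning "applies to everything" and the highest priority winning:

import java.util.List;

enum Permission { HIDDEN, READ_ONLY, EDITABLE, REQUIRED }

// One row of the cached permission table described above.
class PermissionRule {
    String businessClass;   // null = any class
    String status;          // null = any status
    String field;           // null = any field
    String role;            // null = any role
    int priority;
    Permission permission;

    boolean matches(String cls, String st, String fld, String r) {
        return (businessClass == null || businessClass.equals(cls))
            && (status == null || status.equals(st))
            && (field == null || field.equals(fld))
            && (role == null || role.equals(r));
    }
}

class PermissionResolver {
    private final List<PermissionRule> rules; // loaded once, kept in the app cache

    PermissionResolver(List<PermissionRule> rules) {
        this.rules = rules;
    }

    // Highest-priority matching rule wins; default to HIDDEN when nothing matches.
    Permission resolve(String cls, String status, String field, String role) {
        PermissionRule best = null;
        for (PermissionRule rule : rules) {
            if (rule.matches(cls, status, field, role)
                    && (best == null || rule.priority > best.priority)) {
                best = rule;
            }
        }
        return best != null ? best.permission : Permission.HIDDEN;
    }
}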
To work properly, I have found that access levels should be in this increasing order:
NONE, VIEW, REQUIRED, EDIT.
Note that REQUIRED is NOT the top level as you may think it would be since EDIT (both populate & de-populate permission) is a greater privilege than REQUIRED (populate-only permission).
The enum would look like this:
/** NO permissions.
 *  Presentation: "hidden"
 *  Database: "no access"
 */
NONE(0),

/** VIEW permissions.
 *  Presentation: "read-only"
 *  Database: "read access"
 */
VIEW(1),

/** VIEW and POPULATE permissions.
 *  Presentation: "required/highlighted"
 *  Database: "non-null"
 */
REQUIRED(2),

/** VIEW, POPULATE, and DEPOPULATE permissions.
 *  Presentation: "editable"
 *  Database: "nullable"
 */
EDIT(3);
From the bottom layer (database constraints), create a map of fields-to-access. This map then gets updated (further restrained) at the next layer up (business rules + user permissions). Finally, the top layer (presentation rules) can then further restrain the map again if desired.
Important: The map must be wrapped so that it only allows access to be decreased with any subsequent update. Updates which attempt to increase access should just be ignored without triggering any error. This is because it should act like a voting system on what the access should look like. In essence, the subsequent layering of access levels as mentioned above can happen in any order since it will result in an access-level low-water-mark for each field once all layers have voted.
Ramifications:
1) The presentation layer CAN hide a field (set access to NONE) for a database-specified read-only (VIEW) field.
2) The presentation layer CANNOT display a field when the business rules say that the user does not have at least VIEW access.
3) The presentation layer CANNOT move a field's access up to "editable" (nullable) if the database says it's only "required" (non-nullable).
Note: The presentation layer should be built (using custom display tags) to render the fields by reading the access map, without the need for any "if" statements.
The same access map that is used for setting up the display can also be used during the submit validations. A generic validator can be written that reads any form and its access map and ensures that all the rules have been followed.
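A minimal sketch of the decrease-only "voting" map described above (names are illustrative; the enum mirrors the increasing order NONE < VIEW < REQUIRED < EDIT defined earlier):

import java.util.HashMap;
import java.util.Map;

enum AccessLevel { NONE, VIEW, REQUIRED, EDIT }  // same increasing order as above

// Wrapper around the field -> access-level map: votes can only lower access.
class AccessMap {
    private final Map<String, AccessLevel> levels = new HashMap<>();

    /** Each layer (database, business rules, presentation) votes, in any order. */
    void vote(String field, AccessLevel proposed) {
        AccessLevel current = levels.get(field);
        if (current == null || proposed.ordinal() < current.ordinal()) {
            levels.put(field, proposed);   // lower vote wins (low-water-mark)
        }
        // attempts to raise access are silently ignored
    }

    AccessLevel levelOf(String field) {
        return levels.getOrDefault(field, AccessLevel.NONE);
    }
}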
I have often found that this is really the only easy and understandable way to do it, as your interface needs to change based on the information and the level of editing the user can perform.
Typically, though, I find that, depending on the needs, you can inject the "cannot edit" information by passing role information to the business level if you plan to move to different presentation layers. But this adds complexity, and if you are only building for one interface it would most likely be overkill.
For the website menus, we can show different menus based on the user's role by using sitemaps. For controls like buttons, we have to hide them using their Visible property. I think a good idea would be to create a server control (Button) that exposes a Role property; it would hide the button if the user is not in the correct role.
My first instinct for doing this in a more OO way would be to handle your roles and their implementations for this purpose (control permissions: read/write/etc.) using the abstract factory pattern. I will be happy to explain the ins and outs of what I am talking about if you'd like, but there are probably 900 examples on the web. Here is one link (disclaimer: it's my blog, but it does happen to talk about using an abstract factory for roles specifically).
Using something like this you could then use a number of methods to display the correct controls for each of your business object properties with the correct attributes (read/write/hidden/displayed/etc).

Business/Domain Object in ASP.NET

Just trying to gather thoughts on what works/doesn't work for manipulating business/domain objects through an ASP.NET (2.0+) UI/presentation layer, specifically in classic ASP.NET LOB application situations where the ASP.NET code talks directly to the business layer. I come across this type of design quite often and wonder what the ideal solution is (i.e. implementing a specific pattern), and what the best pragmatic solution is that won't require a complete rewrite where no "pattern" is implemented.
Here is a sample scenario.
A single ASP.NET page that is the "Edit/New" page for a particular business/domain object; let's use "Person" as an example. We want to edit Name and Address information from within this page. As the user is making edits or entering data, there are some situations where the form should post back to refresh itself. For example, when editing their Address, they select a "Country", after which a State/Region dropdown becomes enabled and is refreshed with relevant entries for the selected country. This is essentially business logic (restricting available selections based on some dependent field), and this logic is handled by the business layer (remember, this is just one example; there are lots of business situations where the logic during the postback is more complex - for example in the insurance industry, where selecting certain things dictates what other data is needed/required).
Ideally this logic lives only in the business/domain object (i.e. the logic is not duplicated in the ASP.NET code). To accomplish this, I believe the business/domain object would need to be reinitialized and have its state set from the current UI values on each postback.
For example:
private Person person = null;

protected void Page_Load()
{
    person = PersonRepository.Load(Request.QueryString["id"]);

    if (Page.IsPostBack)
        SetPersonStateFromUI(person);
    else
        SetUIStateFromPerson(person);
}

protected void CountryDropDownList_OnChange()
{
    this.StateRegionDropDownList.Enabled = true;
    this.StateRegionDropDownList.Items.Clear();
    this.StateRegionDropDownList.DataSource = person.AvailableStateRegions;
    this.StateRegionDropDownList.DataBind();
}
Other options I have seen are storing the Business object in SessionState rather than loading it from the repository (aka database) each time the page loads back up.
Thoughts?
I'd put your example in my 'UI enhancement' bucket rather than BL; verifying that the entries are correct is BL, but easing data entry is UI, in my opinion.
For very simple things I wouldn't bother with a regular postback but would use an Ajax approach. For example, if I need to get a list of cities, I might have a page method (or web service) that, given a state, gives me a list of cities.
If your options depend on a wide variety of parameters, what you're doing would work well. As for storing things in Session, there are benefits. Are your entities visible to multiple users at the same time? If so, what happens when User A and User B both edit the same one? Also, if you're loading each time, are you saving to the database each time? What happens if I am editing my name and then select a country, but now my browser crashes - did you update the name in the DB?
This is the line I disagree with slightly:
this.StateRegionDropDownList.DataSource = person.AvailableStateRegions;
Person is a business/domain object, but it's not the object that should be handling state/region mapping (for example), even if that's where the information to make the decision lives.
In more complicated examples where multiple variables are needed to make a decision, what you want to do in general is start from the domain object you're trying to end up with, and call a function on that object that can be given all the required information to make a business decision.
So maybe (using a static function on the State class):
this.StateRegionDropDownList.DataSource = State.GetAvailableStateRegions(person, ipAddress);
As a consequence of separating out UI helper concerns from the Person domain object, this style of programming tends to be much "more testable".
