I've been doing some reading recently and have encountered the Law of Demeter. Now some of what I've read makes perfect sense, e.g. the paperboy should never be able to rifle through a customer's pocket, grab the wallet and take the money out. The wallet is something the customer should have control of, not the paperboy.
What throws me about the law, maybe I'm just misunderstanding the whole thing, is that stringing properties together with a hierarchy of functionality/information can be so useful, e.g. .NET's HttpContext class.
Wouldn't code such as:
If DataTable.Columns.Count > 0 Then
    DataTable.Columns(0).Caption = "Something"
End If
Or
Dim strUserPlatform As String = HttpContext.Current.Request.Browser.Platform.ToString()
Or
If NewTerm.StartDate >= NewTerm.AcademicYear.StartDate And
   NewTerm.EndDate <= NewTerm.AcademicYear.EndDate Then
    ' Valid, subject to further tests.
Else
    ' Not valid.
End If
be breaking this law? I thought (perhaps mistakenly) the point of OOP was in part to provide access to related classes in a nice hierarchical structure.
I like, for example, the idea of referencing a utility toolkit that can be used by page classes to avoid repetitive tasks, such as sending emails and encapsulating useful string methods:
Dim strUserInput As String = "London, Paris, New York"

For Each strSearchTerm In Tools.StringManipulation.GetListOfString(strUserInput, ",")
    Dim ThisItem As New SearchTerm
    ThisItem.Text = strSearchTerm
Next
Any clarity would be great. At the moment I can't reconcile how the law seems to banish stringing properties and methods together; it seems strange to me that so much power should be disregarded. I'm pretty new to OOP, as some of you might have guessed, so please go easy :)
What the Law of Demeter (also the "Law of Demeter for Functions/Methods") wants to reduce by saying "only use one dot" is the amount of context a method has to assume about its arguments. Assuming a lot of context increases the dependencies of the class and makes it harder to test.
It doesn't mean that you can't use any of the above examples, but it suggests that instead of giving your method the customer, which then accesses the wallet and retrieves the money from it:
function getPayment(Customer customer)
{
    Money payment = customer.leftpocket.getWallet().getPayment(100);
    ...
    // do stuff with the payment
}
that you instead only pass what the paperboy needs to the method and as such reduce the dependency for the method if it's possible:
function getPayment(Money money)
{
    // do stuff with the payment
}
The benefit is that you don't depend on the customer having the wallet in the left pocket; you just process the money the customer gives you. It's a decision you have to base on your individual case, though. Fewer dependencies make testing easier.
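For illustration, here is a minimal C# sketch of that idea (every type here is made up for the example, none of it comes from a real API): the customer exposes a single payment method, so the paperboy never navigates through pockets and wallets.

public class Money
{
    public decimal Amount { get; set; }
}

public class Wallet
{
    public Money Withdraw(decimal amount) => new Money { Amount = amount };
}

public class Customer
{
    private readonly Wallet _wallet = new Wallet();

    // The customer decides where the money comes from;
    // callers never see the wallet or the pocket.
    public Money GetPayment(decimal amount) => _wallet.Withdraw(amount);
}

public class Paperboy
{
    public void CollectPayment(Customer customer)
    {
        Money payment = customer.GetPayment(2.00m); // a single "dot"
        // do stuff with the payment
    }
}

The paperboy still talks to the customer, but only through one level of the customer's interface.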
I think applying the Law of Demeter to individual classes is taking it a bit too far. I think a better application is to apply it to layers in your code. For example, your business logic layer shouldn't need to access anything about the HTTP context, and your data access layer shouldn't need to access anything in the presentation layer.
Yes, it's usually good practice to design the interface of your object so that you don't have to do tons of property chaining, but imagine the hideously complex interface you'd have if you tried to do that to the DataTable and HttpContext classes you gave as examples.
The law doesn't say that you shouldn't have access to any information at all in a class, but that you should only have access to information in a way that you can't easily misuse.
For example, you can't add columns to a DataTable by assigning a value to the Count property:
DataTable.Columns.Count = 42;
Instead you use the Add method of the Columns collection, which lets you add a column in a way that supplies all the needed information about the column, and also sets the data table up to hold data for that column.
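For example (a small sketch against the standard System.Data API; the table and column names are just placeholders):

using System.Data;

var table = new DataTable("Products");

// Columns are added through the collection's Add method, which also
// lets the table set itself up to hold data for the new column.
DataColumn price = table.Columns.Add("Price", typeof(decimal));
price.Caption = "Unit price";

// table.Columns.Count is now 1; the table maintained it for you.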
I have a question about the Service Layer. Currently my controllers interact with a Service Layer where I work with the EF context, but sometimes the business logic inside service methods can be really huge, approximately 1000 lines.
For example:
class OrderService
{
    public async Task UpdateOrder(OrderDto dto)
    {
        if (dto.Products.Count > 3)
        {
            // change order status: this can take ~300 lines
        }
    }
}
Where can I keep the logic that I use inside the if statement?
Usually I just create a private method inside that service, but I'm not sure that's a good approach. Maybe create a specific class for it, or something like that?
Thank you.
This comes down largely to the developer's personal point of view. Here are some techniques I use to organize my code. Just see if they fit your scenario and adapt them to your needs.
Move some logic to your entities. Example: an entity should be able to know whether it is in a good state.
Entity.Type = Square
Entity.Edges = 3
Entity.IsValid() => False // squares have 4 edges
You can create multiple small classes that help you add logic to your code. For a feature/entity you can create multiple validation classes and then do something like entity.AddValidation(EdgeValidator).
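A minimal C# sketch of that idea (IValidator, EdgeCountValidator, Shape and AddValidation are all hypothetical names invented for the example, not part of any framework):

using System.Collections.Generic;
using System.Linq;

public interface IValidator<T>
{
    bool IsValid(T entity);
}

public enum ShapeType { Square, Triangle }

// One small class per validation rule.
public class EdgeCountValidator : IValidator<Shape>
{
    public bool IsValid(Shape shape) =>
        shape.Type != ShapeType.Square || shape.Edges == 4;
}

public class Shape
{
    private readonly List<IValidator<Shape>> _validators = new List<IValidator<Shape>>();

    public ShapeType Type { get; set; }
    public int Edges { get; set; }

    public void AddValidation(IValidator<Shape> validator) => _validators.Add(validator);

    // The entity knows whether it is in a good state.
    public bool IsValid() => _validators.All(v => v.IsValid(this));
}

Usage would then look like shape.AddValidation(new EdgeCountValidator()) followed by shape.IsValid().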
If you have different algorithms based on conditions, you can use the Strategy pattern, which is a behavioral pattern.
If House.FamilySize == 1 then ProcessAlgorithm = SinglePersonAlgorithm
Else ProcessAlgorithm = MultiplePersonAlgorithm
ProcessAlgorithm(House)
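As a C# sketch (House, SinglePersonAlgorithm and MultiplePersonAlgorithm are hypothetical names taken from the pseudocode above):

public class House
{
    public int FamilySize { get; set; }
}

public interface IProcessAlgorithm
{
    void Process(House house);
}

public class SinglePersonAlgorithm : IProcessAlgorithm
{
    public void Process(House house) { /* single-occupant rules */ }
}

public class MultiplePersonAlgorithm : IProcessAlgorithm
{
    public void Process(House house) { /* multi-occupant rules */ }
}

public static class HouseProcessor
{
    public static void Run(House house)
    {
        // The condition is evaluated in exactly one place;
        // everything else works against the interface.
        IProcessAlgorithm algorithm = house.FamilySize == 1
            ? new SinglePersonAlgorithm()
            : new MultiplePersonAlgorithm();

        algorithm.Process(house);
    }
}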
The old classic: split a function into multiple smaller ones.
Encapsulating different branches of if statements in separate classes may require some rework and refactoring, but it can lead to great results and allows you to use patterns such as:
Chain of responsibility
Pipeline processing
and, potentially, other behavioral patterns
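For instance, here is a rough sketch of a handler pipeline over the order update from the question (OrderDto is assumed to look roughly like the question's DTO; the handler types are invented for the example):

using System.Collections.Generic;
using System.Threading.Tasks;

public class OrderDto
{
    public List<string> Products { get; set; } = new List<string>();
}

public interface IOrderHandler
{
    Task HandleAsync(OrderDto order);
}

// One handler per branch of the original if/else cascade.
public class LargeOrderHandler : IOrderHandler
{
    public Task HandleAsync(OrderDto order)
    {
        if (order.Products.Count > 3)
        {
            // the ~300 lines of status-change logic live here, in one class
        }
        return Task.CompletedTask;
    }
}

public class OrderService
{
    private readonly IReadOnlyList<IOrderHandler> _handlers;

    public OrderService(IReadOnlyList<IOrderHandler> handlers) => _handlers = handlers;

    public async Task UpdateOrder(OrderDto dto)
    {
        // Each handler decides for itself whether the order concerns it.
        foreach (var handler in _handlers)
            await handler.HandleAsync(dto);
    }
}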
I am working on a SCORM 2004 4th Edition based LMS, and I am at the initial stage.
Hence, I am reading the SCORM documents.
In the SCORM 2004 4th Edition CAM document, I got stuck at page CAM-3-37, where the element adlcp:data is defined as the container used to define sets of shared data associated with an activity,
and the child element of adlcp:data, i.e. map, is defined as the container used to describe how an activity will utilize a specific set of shared data.
I thought I might come to understand it as I moved forward through the book.
But I have completed the CAM book and I am still unable to work out how those two tags work.
Also, let's take the following example into consideration:
<adlcp:data>
  <adlcp:map targetID="com.scorm.golfsamples.sequencing.forcedsequential.notesStorage" readSharedData="true" writeSharedData="true"/>
</adlcp:data>
where the readSharedData attribute indicates that
currently available shared data will be utilized by the activity while it is active,
and the writeSharedData attribute indicates whether
shared data should be persisted (true or false) upon termination (Terminate("")) of the attempt on the activity.
In this case:
I don't understand what targetID="com.scorm.golfsamples.sequencing.forcedsequential.notesStorage" indicates.
I don't understand what this shared data is. Where is it located? What is it, actually?
Can anyone help me understand the elements described above?
adlcp:data is a way to define space on the LMS to store information that doesn't fit in the CMI data model, or that you want to make accessible across SCOs.
There are 3 pieces to defining this space.
1. The adlcp:sharedDataGlobalToSystem attribute, which says whether shared data is available for one attempt or for every attempt (i.e. whether it is wiped out each time the learner takes the course). See CAM-3-27.
2. The adlcp:data and adlcp:map elements, which list the space(s) you want made available for that SCO. You define an ID for each storage space, and then add access controls stating whether or not the SCO can read or write to the storage space. (See CAM-3-37)
Those two set up the LMS storage and behaviors for each SCO in the content package.
The final piece is described in section 4.3 of the RTE book. To access the data storage spaces, you use the SCORM API GetValue and SetValue requests and the data model element adl.data.n.store.
One additional note: since the ID order is not necessarily maintained, you will need to loop through the adl.data stores in the SCO and determine which index goes to which ID.
Tom Creighton's answer is a very good explanation of the shared buckets implementation.
I am just adding a few pointers we found in our implementation.
The data saved is for the "learner" and can be accessed and set across different SCOs or courses assigned to the learner. Beware, though: if you are using SCORM Cloud, the Clear Globals button will clear the data for all courses assigned to the user.
While Tom mentions that adlcp:sharedDataGlobalToSystem is attempt-specific, SCORM Cloud support says that it is restricted to the course/SCO. I have yet to get clarity on that.
There might be a limit on the number of buckets being saved. I have yet to confirm this and will update this reply shortly.
For those looking for more info on implementation:
Add this to your item (organisation > item) in the manifest:
<adlcp:data>
  <adlcp:map targetID="mybucketname" readSharedData="true" writeSharedData="true"/>
</adlcp:data>
JS part (Use your API calls in place of LMSGetValue and LMSSetValue)
// How many shared-data buckets does the LMS expose to this SCO?
var dataBucketsCount = parseInt(LMSGetValue("adl.data._count"), 10);

// The index order is not guaranteed, so find the bucket by its id.
for (var i = 0; i < dataBucketsCount; i++) {
    if (LMSGetValue("adl.data." + i + ".id") == "mybucketname") {
        // do your processing with the data, e.g. LMSGetValue("adl.data." + i + ".store")
    }
}
I had to search a lot for this, and try and fail multiple times, until we got this right. I have added it here so that in future it might help someone.
I am reading the Head First Design Patterns book. In Chapter 3 ("Decorating Objects: The Decorator Pattern"), I do not understand this part:
"Wouldn’t it be easy for some
client of a beverage to end up with
a decorator that isn’t the outermost
decorator? Like if I had a DarkRoast with
Mocha, Soy, and Whip, it would be easy
to write code that somehow ended up
with a reference to Soy instead of Whip,
which means it would not include Whip in
the order."
Could someone please help me understand the main point of this section, and what the main issue was that the authors were addressing?
I think what they wanted to point out is that you can get your references mixed up if you are not careful about where and how you create your decorated objects. Consider the example on page 98 (first edition from 2004).
Beverage beverage3 = new HouseBlend();
beverage3 = new Soy(beverage3);
beverage3 = new Mocha(beverage3);
beverage3 = new Whip(beverage3);
If you did things in between those steps of creation, you might end up holding a reference to the Mocha instead of the Whip, and so order a coffee without Whip.
And like they wrote in the answer section:
However, decorators are typically created by using other patterns like Factory and Builder.
If you automate your object creation, it might prevent you from making reference errors.
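For example, a small C#-style sketch in the spirit of the book's example (CoffeeFactory is made up; the beverage classes are the ones from the book):

public static class CoffeeFactory
{
    // The fully wrapped object is built in one place, so client code
    // only ever receives the outermost decorator (Whip).
    public static Beverage CreateHouseBlendWithSoyMochaWhip()
    {
        Beverage beverage = new HouseBlend();
        beverage = new Soy(beverage);
        beverage = new Mocha(beverage);
        return new Whip(beverage);
    }
}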
I created the following sample method in my business logic layer. My database doesn't allow nulls for the name and parent columns:
public void Insert(string catName, long catParent)
{
    EntityContext con = new EntityContext();
    Category cat = new Category();
    cat.Name = catName;
    cat.Parent = catParent;
    con.Category.AddObject(cat);
    con.SaveChanges();
}
So I unit test this, and the tests for an empty name and an invalid parent will fail. To get around that issue I have to refactor the Insert method as follows:
public void Insert(string catName, long catParent)
{
    // added to pass the tests
    if (string.IsNullOrEmpty(catName))
        throw new InvalidOperationException("wrong action. name is empty.");
    if (catParent <= 0)
        throw new InvalidOperationException("wrong action. parent is not valid.");

    // real business logic
    EntityContext con = new EntityContext();
    Category cat = new Category();
    cat.Name = catName;
    cat.Parent = catParent;
    con.Category.AddObject(cat);
    con.SaveChanges();
}
My entire business layer is just simple calls to the database. So now I'm validating the data again! I had already planned to do my validation in the UI and to test that kind of thing in UI unit tests. What should I test in my business logic method other than validation-related tasks? And if there is nothing to unit test, why does everybody say "unit test all the layers" and similar things that I've found all over the web?
The technique involved in testing is to break your program down into smaller parts (smaller components or even classes) and test those small parts. As you assemble those parts together, you write less comprehensive tests -- the smaller parts are already proven to work -- until you have a functional, tested program, which you then give to users for "user tests".
It's preferable to test smaller parts because:
It's simpler to write the tests. You'll need less data, you only set up one object, and you have to inject fewer dependencies.
It's easier to figure out what to test. You know the failing conditions from a simple reading of the code (or, better yet, from the technical specification).
Now, how can you guarantee that your business layer, simple as it is, is correctly implemented? Even a simple database insert can fail if badly written. Besides, how can you protect yourself from changes? Right now the code works, but what will happen in the future if the database is changed or someone updates the business logic?
However, and this is important, you actually don't need to test everything. Use your intuition (which is also called experience) to decide what needs testing and what doesn't. If your method is simple enough, just make sure the client code is correctly tested.
Finally, you've said that all your validation will occur in the UI. The business layer should be able to validate the data itself in order to increase reuse in your application. Fail to do that, and in the future you (or whoever changes the code) might create a new UI and forget to add the required validations.
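For example, a minimal xUnit-style sketch (CategoryService is an assumed name for the class that holds your Insert method): because the business layer enforces the rule itself, the test needs neither a UI nor a database.

using System;
using Xunit;

public class CategoryServiceTests
{
    [Fact]
    public void Insert_WithEmptyName_Throws()
    {
        var service = new CategoryService(); // assumed class holding Insert

        // The validation throws before any database work is attempted.
        Assert.Throws<InvalidOperationException>(() => service.Insert("", 1));
    }
}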
I am trying to get my Windows State Machine workflow to communicate with end users. The general pattern I am trying to implement within a StateActivity is:
StateInitializationActivity: Send a message to user requesting an answer to a question (e.g. "Do you approve this document?"), together with the context for...
...EventDrivenActivity: Deal with answer sent by user
StateFinalizationActivity: Cancel message (e.g. document is withdrawn and no longer needs approval)
This all works fine if the StateActivity is a "Leaf State" (i.e. has no child states). However, it does not work if I want to use recursive composition of states. For non-leaf states, StateInitialization and StateFinalization do not run (I confirmed this behaviour by using Reflector to inspect the StateActivity source code). The EventDrivenActivity is still listening, but the end user doesn't know what's going on.
For StateInitialization, I thought that one way to work around this would be to replace it with an EventDrivenActivity and a zero-delay timer. I'm stuck with what to do about StateFinalization.
So - does anyone have any ideas about how to get a State Finalization Activity to always run, even for non-leaf states?
It's unfortunate that the structure of "nested states" appears to be one of a "parent" containing "children"; the designer UI reinforces this concept. Hence it's quite natural and intuitive to think the way you are thinking. It's unfortunate because it's wrong.
The true relationship is one of "general" -> "specific". It's in effect a hierarchical class structure. Consider a much more familiar such relationship:
public class MySuperClass
{
public MySuperClass(object parameter) { }
protected void DoSomething() { }
}
public class MySubClass : MySuperClass
{
protected void DoSomethingElse() { }
}
Here MySubClass inherits DoSomething from MySuperClass. The above, though, is broken because MySuperClass doesn't have a default constructor. Also, the parameterised constructor of MySuperClass is not inherited by MySubClass. In fact, logically, a sub-class never inherits the constructors (or destructors) of the super-class. (Yes, there is some magic wiring up of default constructors, but that's more sugar than substance.)
Similarly, the relationship between StateActivities contained within another StateActivity is actually that the contained activity is a specialisation of the container. Each contained activity inherits the set of event-driven activities of the container. However, each contained StateActivity is a first-class, discrete state in the workflow, the same as any other state.
The containing activity in effect becomes abstract: it cannot be transitioned to, and importantly there is no real concept of transitioning to a state "inside" another state. By extension, there is no concept of leaving such an outer state either. As a result there is no initialization or finalization of the containing StateActivity.
A quirk of the designer allows you to add a StateInitialization and a StateFinalization and then add StateActivities to a state. If you try it the other way round, the designer won't let you, because it knows the initialization and finalization will never be run.
I realise this doesn't actually answer your question, and I'm loath to say in this case "it can't be done", but if it can, it will be a little hacky.
OK, so here's what I decided to do in the end. I created a custom tracking service which looks for activity events corresponding to entering or leaving the states involved in communication with end users. This service enters decisions for the user into a database when the state is entered and removes them when the state is left. The user can query the database to see what decisions the workflow is waiting on. The workflow listens for user responses using a ReceiveActivity in an EventDrivenActivity. This also works for decisions in parent 'superstates'. This might not be exactly what a "Tracking Service" is meant for, but it seems to work.
I've thought of another way of solving the problem. Originally, I had in mind that for communications I would use the WCF-integrated SendActivity and ReceiveActivity provided in WF 3.5.
However, in the end I came to the conclusion that it's easier to ignore these activities and implement your own IEventActivity with a local service. IEventActivity.Subscribe can be used to indicate to users that there is a question for them to answer, and IEventActivity.Unsubscribe can be used to cancel the question. This means that separate activities in the state's initialization and finalization blocks are not required. The message routing is done manually using workflow queues, and the user's response is added to the queue with the appropriate name. I used Guids for the queue names, and these are passed to the user during the IEventActivity.Subscribe call.
I used the 'File System Watcher' example in MSDN to work out how to do this.
I also found this article very instructive: http://www.infoq.com/articles/lublinksy-workqueue-mgr