I'm looking to create a custom Carbon component, and there are a few choices for persistence. The main options appear to be JPA or WSO2 data services.
What are the strengths and weaknesses of each option?
Are there other recommended approaches?
The preferred way to access data in an SOA is through services. Basically, if you use JPA, your data access logic is confined to that specific component. If you use a solution like data services, those services are accessible globally, which promotes reusability and a more consistent architecture.
The problem with exposing data services outside individual service boundaries is that you lose isolation and expose a service's internal implementation to the outside world.
Each service should use its own database (and you can use data services if that data comes from multiple sources).
The place to have a single database with cross-service data is the reporting database, which should be separate from the transactional one anyway (a pattern I call aggregated reporting).
How can I add a custom Azure Policy that requires Azure Data Factory linked services to fetch data store credentials from Azure Key Vault during linked service creation, instead of having credentials entered directly in the ADF linked service? Please suggest ARM or PowerShell methods for implementing the policy.
As of yesterday, the Data Factory Azure Policy integration is available, which means you can now find built-in policies that can be assigned to ADF.
One of those does exactly what you're asking for. You can find more information here.
Edit: based on your comment, I'm updating this answer with the info you want. When it comes to custom policies, it's pretty much up to you to come up with them and create what fits your needs. In your particular case, I've created a policy that does what you want; please see here.
This policy audits your data factory linked services and checks whether they're using a self-hosted integration runtime. Currently, that check is only done for a few linked service types (if you look at the policy, you can see five of them), so if you want to check more types of linked services, you'll need to add them to the list of allowed values and select them when assigning the policy definition.
Bear in mind that for some linked service types, such as Key Vault, the check won't make sense, since that service can't use a self-hosted IR.
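Since the question also asks about ARM, a custom definition along these lines is just a policy-rule JSON document. The sketch below audits linked services of selected types; the Microsoft.DataFactory field alias used here is an assumption for illustration, so check the actual aliases available (e.g. via Get-AzPolicyAlias) before relying on it:

    {
      "mode": "All",
      "policyRule": {
        "if": {
          "allOf": [
            {
              "field": "type",
              "equals": "Microsoft.DataFactory/factories/linkedservices"
            },
            {
              "field": "Microsoft.DataFactory/factories/linkedservices/type",
              "in": "[parameters('linkedServiceTypes')]"
            }
          ]
        },
        "then": { "effect": "audit" }
      },
      "parameters": {
        "linkedServiceTypes": {
          "type": "Array",
          "metadata": { "displayName": "Linked service types to audit" }
        }
      }
    }

You could then register a definition like this with New-AzPolicyDefinition and assign it at the data factory's resource group scope.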
I'd like to hear from people who have used, or are using, RESTier in production.
I see some issues: security is disabled by default, so all data can be read even by a user who isn't authorized on the site. And even if we plan to restrict some data, you can't hide a single column of a table; all columns are visible to the client.
And finally, all business logic moves into browser JavaScript, which is not good. If we need to perform a complex operation that must run in a single transaction, it isn't possible.
My opinion: RESTier is designed for very simple RESTful projects, such as an address book, a todo list, etc. If you're developing a big commercial application that works with a complex data schema and handles money transactions, you should avoid using RESTier.
Any thoughts appreciated.
REST is an architectural style for web services.
OData is a standard that describes a good, technology-independent implementation of REST.
RESTier is a library that implements OData V4.
The complexity of your domain must be in your Domain and Application Layer.
You can use RESTier to expose your domain functionality as a web service the way you like. You could expose your entities for read operations only, and expose your use cases (Application Layer) as OData actions and functions, which can then be consumed by any kind of client (iOS, Android, a web client such as ASP.NET MVC, WPF, any JavaScript front end, etc.).
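Since RESTier builds on ASP.NET Web API OData, exposing a use case as an OData action can look roughly like the sketch below. The Order entity, the Approve action, and the _orderService dependency are all illustrative, not part of RESTier itself:

    using System.Web.Http;
    using System.Web.OData;
    using System.Web.OData.Builder;

    // Model configuration: declare a bound OData action on the Order entity.
    var builder = new ODataConventionModelBuilder();
    builder.EntitySet<Order>("Orders");
    builder.EntityType<Order>().Action("Approve").Returns<bool>();

    // Controller: the action body runs server-side, so it can open a
    // transaction and apply the whole use case atomically.
    public class OrdersController : ODataController
    {
        [HttpPost]
        public IHttpActionResult Approve([FromODataUri] int key)
        {
            bool approved = _orderService.Approve(key); // Application Layer call
            return Ok(approved);
        }
    }

Because the operation executes on the server, your "complex operation in a single transaction" concern is addressed there, not in browser JavaScript.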
If you have a complex domain, I would suggest investigating Domain-Driven Design.
Now to your questions...
Regarding security, you can implement all the goodness of ASP.NET in RESTier.
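For example, nothing stops you from putting standard ASP.NET authorization on the controllers that serve your OData routes (a minimal sketch, assuming role-based auth is already configured in your pipeline):

    using System.Web.Http;
    using System.Web.OData;

    // Standard ASP.NET authorization applies to OData controllers as well:
    // unauthenticated or unauthorized callers are rejected before any data is touched.
    [Authorize(Roles = "Sales")]
    public class CustomersController : ODataController
    {
        // ...
    }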
Regarding data shaping: you never expose your domain entities directly through the web service. I would suggest implementing factories that convert back and forth between, for example, Customer (a domain entity that carries the business logic) and CustomerDto (a simple data transfer object). With this you can shape the data you expose exactly the way you require.
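A minimal sketch of such a factory, with hypothetical Customer/CustomerDto types; note how a sensitive column simply never appears on the DTO, which answers the "can't hide one column" concern:

    public class Customer                 // domain entity
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal CreditLimit { get; set; }  // internal, never exposed
    }

    public class CustomerDto              // what the service actually returns
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class CustomerFactory
    {
        public static CustomerDto ToDto(Customer entity) =>
            new CustomerDto { Id = entity.Id, Name = entity.Name };

        public static void ApplyChanges(Customer entity, CustomerDto dto)
        {
            entity.Name = dto.Name;  // only writable fields are copied back
        }
    }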
Having the business logic in the front end (UI layer), as you mentioned, is considered an anti-pattern (the Smart UI anti-pattern) when you have significant domain complexity (for simple CRUD apps it's OK). RESTier does not push you in this direction; it's a matter of how you architect your solution.
Hope this helps you.
My company has a 3rd party web service we are designing a front end for. The "objects" used by this web service are very large (and variable depending on the number of sub-entities created). The web service does not expose methods to commit/load sub-entities, only the full object hierarchy.
The UI itself is split into many sub-screens and master/detail views so that the large amount of data can be edited efficiently and easily.
The issue is where to store all the data you aren't currently looking at.
Doing the web service commit takes up to 30 seconds for large records, so it is not feasible to use the web service for intermittent data storage.
You can consider using .NET's SessionState out of the box with the SQL persistence mode to cache the web service data, although you do need a strategy for clearing expired data out of the database. All objects stored in SessionState will need to be Serializable.
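As a rough sketch (the OrderDraft and OrderLine types here are made up for illustration), caching the fetched graph in SQL-backed session state is just ordinary Session access, provided everything in the graph is marked [Serializable]:

    using System;
    using System.Collections.Generic;

    [Serializable]  // required: SQL session state serializes the object graph
    public class OrderDraft
    {
        public int OrderId { get; set; }
        public List<OrderLine> Lines { get; set; } = new List<OrderLine>();
    }

    [Serializable]
    public class OrderLine
    {
        public string ProductCode { get; set; }
        public int Quantity { get; set; }
    }

    // After the initial (slow) web service fetch, in a page or controller:
    Session["OrderDraft"] = draft;   // serialized into the session state database

    // On each sub-screen, work against the cached copy instead of the service:
    var current = (OrderDraft)Session["OrderDraft"];

The switch to SQL persistence itself is a web.config sessionState setting (mode="SQLServer") rather than a code change.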
Also, instead of using the external web service's entity structure (e.g. the serializable proxy entities generated by a .NET Service Reference), you should consider building your own class hierarchy customized for your screens (i.e. custom view models). Then build bridging code that maps/projects the web service graph onto your view models after the initial fetch, and back onto the web service entities once the user has finished updating. LINQ is great for this purpose, or possibly AutoMapper, if you haven't deviated from the web service's class and property naming.
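For instance, a projection from the service proxy graph into a screen-specific view model might look like this sketch (all type and property names here are invented):

    using System.Linq;

    // Map only the fields this screen actually needs from the proxy graph.
    var viewModel = new OrderEditViewModel
    {
        OrderId  = proxy.Id,
        Customer = proxy.Header.CustomerName,
        Lines    = proxy.Details
                        .Select(d => new OrderLineViewModel
                        {
                            Code     = d.ProductCode,
                            Quantity = d.Quantity
                        })
                        .ToList()
    };

The reverse mapping, applied just before the single expensive commit, copies the edited view model values back onto the proxy entities.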
I am trying to decide which way to go. I have a solution that needs a web service and a client side, which is a Windows Phone 7 project. The WP7 project needs to communicate with the database through the WCF service.
I am a little confused as to which way I should go, and what the differences and advantages/disadvantages are of a regular WCF service versus a WCF Data Service.
Which will be easier to work with, considering my WP7 app just needs to run queries on some tables in the database, nothing too fancy?
Any explanation is welcome.
Thanks
WCF Data Services are great if you need CRUD and flexible query capabilities: they let you expose underlying data (e.g. via Entity Framework) as a RESTful API and control security with a minimum of development effort, especially for AJAX and SPA-type client front ends. (Also, note that Web API now offers similar capabilities.)
WCF services are more for formal "service" and "operation" style integration, where there is a lot more business focus, e.g. rules, processing, workflow, etc.
For example, WCF would be useful for submitting a claim for processing (a custom, rich graph of input and output data), triggering a nightly batch job (void response), etc.
Also, you can combine both technologies, e.g. in a CQRS-style architecture, by using Data Services for the query side and WCF for the command side.
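To make the contrast concrete, here is a rough sketch of both sides (the entity model and contract names are illustrative): a WCF Data Service surfaces an EF model for querying with almost no code, while a classic WCF contract models an explicit business operation.

    using System.Data.Services;
    using System.ServiceModel;

    // Query side: expose an Entity Framework model as a RESTful OData feed.
    public class NorthwindDataService : DataService<NorthwindEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Read-only access to the Products entity set.
            config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
        }
    }

    // Command side: a formal operation with a rich request/response contract.
    [ServiceContract]
    public interface IClaimsService
    {
        [OperationContract]
        ClaimResult SubmitClaim(ClaimRequest claim);
    }

For your scenario (simple queries against a few tables from WP7), the Data Service side alone would likely cover it.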
We have a few projects where we put all the data access in a separate web service project, and the parent project calls the web service for everything data-related. The web service only accepts connections from the web project's server. My assumption is that the web service is less susceptible to intrusion this way, but I'm not really sure that's correct.
Is this more secure than just putting the data access in a class or dll within the parent project?
NOTE
Developers above me made this decision.
I don't see that as an effective way of securing your database. Of all the ways that exist to protect your data layer, moving calls from a class library into a web service is not one I'd count on.
A better approach is to make sure you use parameterized queries or stored procedures to prevent SQL injection, and to limit the privileges of your database logins to only the operations they need to perform.
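For example, a parameterized query keeps user input as data rather than executable SQL (a standard ADO.NET sketch; connectionString and userSuppliedEmail are assumed to be in scope):

    using System.Data.SqlClient;

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT Id, Name FROM Customers WHERE Email = @email", conn))
    {
        // The input is bound as a parameter, never concatenated into the SQL,
        // so a value like "x'; DROP TABLE Customers;--" stays harmless data.
        cmd.Parameters.AddWithValue("@email", userSuppliedEmail);
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read()) { /* ... */ }
        }
    }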
However, there are other arguments for having data access in a separate web service, such as reusability or a service-oriented architecture. If the same data access layer is needed by a variety of projects on multiple servers, having the web service means you don't need the same class library duplicated all over the place, and you no longer have to worry about which project has which version of your data access layer.
So, more secure? I don't think so... Other benefits? Probably...
Short answer: Yes
Longer answer: my assumption is that the web server exposing the services sits behind its own firewall. Doing it this way insulates the database from intrusion by forcing attackers through another layer even if they manage to compromise your application servers. Since the database connection strings don't exist on the app server, and a firewall prevents direct connections from that server to the database, attackers would need to somehow punch through that firewall and gain access to the server hosting your data services.
Now, I also assume that the web services are not simply exposing methods like
execute(string sqlCommand)
If that's the case, then this solution might actually be less secure than simply using the database without the web services. For this solution to truly be more secure, you would want to create operation-specific methods on the web service server.
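In other words, the contract should name business operations rather than accept raw SQL. A sketch of the difference (the specific method names are illustrative):

    using System.Data;
    using System.ServiceModel;

    // Risky: a generic pass-through lets any caller run arbitrary SQL.
    [OperationContract]
    DataSet Execute(string sqlCommand);

    // Safer: narrow, operation-specific methods with typed parameters.
    [OperationContract]
    Customer GetCustomerById(int customerId);

    [OperationContract]
    void DeactivateCustomer(int customerId);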
A DLL can't be accessed and executed from the web, as far as I know, but a web service can. If that's true, then a class library referenced by a web project (or even by a web service) is more secure than a web service exposing that logic directly.
Further, there's the whole notion of Separation of Concerns. In my mind, data access logic belongs on a separate tier, completely separate from business logic. In a well designed architecture, Web services expose discrete methods that represent business transactions--not necessarily data transactions. Business transactions encapsulate one or more data transactions, which are represented by separate classes that encapsulate the data access logic and provide the security to ensure that SQL injection never occurs.
Others, naturally, may disagree. We're developers. It's our nature to disagree. :)