In Corda, assets that can be split and merged should be represented using the FungibleAsset interface.
The finance package defines a further OnLedgerAsset class that seems to provide similar functionality for issuing, splitting, merging and exiting fungible assets.
If I'm defining my own fungible asset, should I subclass the OnLedgerAsset class?
FungibleAsset is defined in Corda Core, and is used by the node's vault to split and merge fungible assets. All fungible assets should implement it.
OnLedgerAsset is defined in the finance package. It is used to ensure that all the fungible assets defined in the finance package have additional common methods for issuance, splitting, merging and exiting. The finance package remains unstable (see https://docs.corda.net/corda-api.html#corda-incubating-modules), and its API is likely to change extensively in the future to meet the requirements of real businesses.
I'd therefore recommend that you do not implement OnLedgerAsset for now. Much of the functionality provided by OnLedgerAsset will likely be moved into the node's vault in the future.
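For reference, here is a minimal sketch of a state implementing FungibleAsset directly (signatures as of Corda 3; check your version's API docs, as this interface has shifted between releases). The Move command is a hypothetical placeholder for whichever command your contract actually uses:

    import net.corda.core.contracts.*
    import net.corda.core.identity.AbstractParty
    import java.security.PublicKey
    import java.util.Currency

    // Hypothetical move command; in practice this lives in your contract.
    class Move : CommandData

    data class MyAssetState(
            override val amount: Amount<Issued<Currency>>,
            override val owner: AbstractParty
    ) : FungibleAsset<Currency> {
        override val participants = listOf(owner)
        // Keys whose signatures are required to exit (destroy) this asset.
        override val exitKeys: Collection<PublicKey> =
                listOf(owner.owningKey, amount.token.issuer.party.owningKey)

        override fun withNewOwnerAndAmount(newAmount: Amount<Issued<Currency>>, newOwner: AbstractParty) =
                copy(amount = newAmount, owner = newOwner)

        override fun withNewOwner(newOwner: AbstractParty) =
                CommandAndState(Move(), copy(owner = newOwner))
    }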
I would like to add validation annotations to my states to avoid boilerplate when verifying Corda transactions. For example, I might want to annotate my state with an annotation that prevents states from being created with negative amounts:
    class MyState(@Min(0) val amount: Int) : ContractState {
        override val participants = listOf<AbstractParty>()
    }
I would then like to check these annotations during contract verification, and throw an exception if any of the annotations are violated.
Does Corda support the use of existing validation annotation libraries within contract validation? Can I provide my own custom validation annotations?
An annotations approach would make the code a lot clearer, especially in cases where the data model is very complex.
Right now you have two options for doing this (a sketch of the contract-side code follows below):

1. Embed the validator engine within your CorDapp as a normal dependency, in which case you are providing an implementation to your members, who must trust you.

2. Individual members can attach their chosen validator engine to a transaction as a normal attachment, which will make the validator classes available on the classpath during contract verification. In this scenario, each counterparty to a transaction is responsible for checking that the attachment hash is listed in a whitelist of validators they have previously audited.
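Whichever option you choose, the contract code that consumes the annotations is fairly simple. Here is a minimal sketch, assuming a JSR 303 implementation is available on the contract classpath via one of the two options above, and using the MyState class from the question:

    import javax.validation.Validation
    import net.corda.core.contracts.Contract
    import net.corda.core.contracts.requireThat
    import net.corda.core.transactions.LedgerTransaction

    class MyContract : Contract {
        override fun verify(tx: LedgerTransaction) {
            // Build a validator from whatever JSR 303 provider is on the classpath.
            val validator = Validation.buildDefaultValidatorFactory().validator
            for (state in tx.outputsOfType<MyState>()) {
                val violations = validator.validate(state)
                requireThat {
                    "output states satisfy their validation annotations" using violations.isEmpty()
                }
            }
            // Transaction-level checks (signers vs participants etc.) still go here.
        }
    }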
However, we would like to warn you of some associated risks, which are listed below.
Determinism. In the future, Corda will run contracts inside a deterministic JVM (DJVM), where any non-deterministic code will fail to execute. It is possible that some of the available JSR 303 validator implementations rely on non-deterministic code. It is important to emphasise that contracts which work now might stop working once the DJVM is fully implemented. R3 intends to provide a Gradle plugin that verifies code for determinism at build time, which would help developers eliminate non-deterministic libraries from their contracts.
Some JSR 303 implementations, such as the one from Hibernate, are very heavy (about 120k lines of code). In the future, contract classes will be loaded by a transaction-scoped classloader, i.e. classes will be reloaded from scratch for the verification of each transaction. Given that the Hibernate validator takes about 20-30 seconds to initialise itself, it would become a performance bottleneck. There might be a need to write a custom implementation of the JSR that reuses Hibernate's validator logic but strips out the more advanced features that are irrelevant in a contract context.
As a general recommendation, we encourage you to consider moving some of the heavy lifting to flows, as they don't have any DJVM-related restrictions.
If annotations are used, Corda will still require some forms of validation that aren't provided by any JSR 303 annotations, e.g. transaction-level validations such as matching signers against participants. Hence, some contract code will still have to be written.
You will have to provide a mechanism for your members to audit and validate the chosen validation implementation, as it will now form part of the contract to which they are a signing party. It is also worth discussing what should happen if a validator is found to be faulty in the future.
We really like the idea of using JSR 303 annotations for data model validation and we will help you through the journey of implementing it, so if you encounter any issues, let us know.
I'm trying to refactor an existing project into PureMVC. This is an Adobe AIR desktop app taking advantage of the SQLite library included with AIR and building upon it with a few other libraries:
Paul Robertson's excellent async SQLRunner
promise-as3 implementation of asynchronous promises
websql-js documentation for good measure
I made my current implementation of the database layer similar to websql-js's promise-based SQL access layer, and it works pretty well; however, I am struggling to see how it can work in PureMVC.
Currently, I have my VOs that will be paired with DAOs (data access objects) for database access. Where I'm stuck is how to track the dbFile and sqlRunner instances across the entire program. The DAOs will need to know about the sqlRunner, or at the very least, the dbFile. Should the sqlRunner be treated as singleton-esque? Or created for every database query?
Finally, how do I expose the dbFile or sqlRunner to the DAOs? In my head right now I see keeping these in a DatabaseProxy that would be exposed to other proxies and would instantiate DAOs when needed. What about a DAO factory pattern?
I'm very new to PureMVC but I really like the structure and separation of roles. Please don't hesitate to tell me if this implementation simply will not work.
Typically in PureMVC you would use a Proxy to fetch remote data and populate the VOs used by your View, so in that respect your proposed architecture sounds fine.
DAOs are not a pattern I've ever seen used in conjunction with PureMVC (which is not to say that nobody does or should). However, if I was setting out to write a CRUD application in PureMVC, I would probably think in terms of a Proxy (or proxies) to read information from the database, and Commands to write it back.
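To make the shape of this concrete, here is a rough sketch in Kotlin (PureMVC has ports for many languages, including AS3; the SQLRunner stub, the VO and the DAO here are simplified stand-ins for your actual classes). The key point is that a single Proxy owns the one dbFile/sqlRunner pair and hands it to the DAOs it creates, so nothing else in the app needs to track them:

    // Simplified stand-in for Paul Robertson's async SQLRunner.
    class SQLRunner(val dbFile: String) {
        fun execute(sql: String, params: List<Any>, onResult: (List<Map<String, Any>>) -> Unit) {
            // ... run the statement asynchronously and call back with rows ...
        }
    }

    data class MemberVO(val id: Long, val name: String)

    // DAOs receive the shared runner instead of locating it themselves.
    class MemberDAO(private val sqlRunner: SQLRunner) {
        fun findById(id: Long, onResult: (MemberVO?) -> Unit) {
            sqlRunner.execute("SELECT id, name FROM members WHERE id = ?", listOf(id)) { rows ->
                onResult(rows.firstOrNull()?.let { MemberVO(it["id"] as Long, it["name"] as String) })
            }
        }
    }

    // Registered once with the PureMVC facade at startup; effectively the
    // single owner of the database connection for the whole application.
    class DatabaseProxy(dbFile: String) {
        private val sqlRunner = SQLRunner(dbFile)  // singleton-esque: created once, here
        fun memberDao() = MemberDAO(sqlRunner)     // DAO factory in miniature
    }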
The question (as stated in the title) came to me recently as I was looking at Spring MVC 3.1 with annotation support, and also considering DDD for an upcoming project. In the new Spring, any POJO with its business methods can be annotated to act as a controller; all the concerns that I would have addressed within a Controller class can be expressed exclusively through the annotations.
So, technically I can take any class and wire it to act as a controller. The Java code is free from any controller-specific code, though it may still deal with things like security checks, starting transactions, etc. So will such a class belong to the presentation layer or the application layer?
Taking that argument even further, we can pull out things like security and transaction management and express them through annotations too, so that the Java code is now purely that of the domain object. Would that mean we have fused the two layers together? Please clarify.
You can't take just any POJO and make it a controller. The controller's job is to get inputs from the browser, call services, prepare the model for the view, and return the view to dispatch to. It's still a controller. Instead of configuring it through XML and method overrides, you configure it through annotations, that's all.
The code is very far from being free from any controller specific code. It still uses ModelAndView, BindingResult, etc.
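For illustration, here is a sketch of a typical annotated controller (PersonService, the URL and the view name are hypothetical). Even with annotations, web-tier types such as Model and the returned view name keep it firmly in the presentation layer:

    import org.springframework.stereotype.Controller
    import org.springframework.ui.Model
    import org.springframework.web.bind.annotation.PathVariable
    import org.springframework.web.bind.annotation.RequestMapping

    // Hypothetical service interface called by the controller.
    interface PersonService {
        fun findById(id: Long): Any
    }

    @Controller
    class PersonController(private val personService: PersonService) {

        @RequestMapping("/people/{id}")
        fun show(@PathVariable id: Long, model: Model): String {
            model.addAttribute("person", personService.findById(id))  // prepare the model
            return "people/show"  // view name to dispatch to: pure web plumbing
        }
    }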
I'll approach the question's title, regarding AOP:
AOP does not violate "layered architecture", specifically because by definition it is adding application-wide functionality regardless of the layer the functionality is being used in. The canonical AOP example is logging: not a layer, but a functionality; all layers do logging.
To sort-of tie in AOP to your question, consider transaction management, which may be handled via Spring's AOP mechanism. "Transactions" themselves are not specific to any layer, although any given app may only require transactions in only a single layer. In that case, AOP doesn't violate layered architecture because it's only being applied to a single layer.
In an application where transactions may cross layers, IMO it still doesn't violate any layering principles, because where the transactions live isn't really relevant: all that matters is that "this chunk of functionality must be transactional", even if that transaction spans several app boundaries.
In fact, I'd say that using AOP in such a case specifically preserves layers, because the TX code isn't mechanically reproduced across all those layers, and no single layer needs to wonder (a) if it's being called in a transactional context, or (b) which transactional context it's in.
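As a concrete illustration, with Spring's declarative transaction management a service like the sketch below is transactional without a single line of TX plumbing in its body (the repository and its methods are hypothetical):

    import java.math.BigDecimal
    import org.springframework.stereotype.Service
    import org.springframework.transaction.annotation.Transactional

    // Hypothetical repository used by the sketch.
    interface AccountRepository {
        fun withdraw(accountId: Long, amount: BigDecimal)
        fun deposit(accountId: Long, amount: BigDecimal)
    }

    @Service
    class TransferService(private val accounts: AccountRepository) {

        // Spring's AOP proxy begins and commits (or rolls back) the transaction;
        // the business code never mentions it, whatever layer it sits in.
        @Transactional
        fun transfer(from: Long, to: Long, amount: BigDecimal) {
            accounts.withdraw(from, amount)
            accounts.deposit(to, amount)
        }
    }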
I've read through this article trying to understand why you would want a session bean between the client and the entity bean. Is it because by letting the client access the entity bean directly, you would let the client know everything about the database?
So by having a middleman (the session bean), you would only let the client know the part of the database that the business logic chooses to expose. Only the part of the database which is relevant to the client is visible, which possibly also increases security.
Is the above statement true?
Avoiding tight coupling between the client & the business objects, increasing manageability.
Reducing fine-grained method invocations, which minimizes calls over the network by providing coarse-grained access to clients.
Can have centralized security & transaction constraints.
Greater flexibility & ability to cope with changes.
Exposing only what is required & providing a simpler interface to the clients, hiding the underlying complexity, inner details and interdependencies between business components.
The article you cite is COMPLETELY out of date. Check the date, it's from 2002.
There is no longer any such thing as an entity bean in EJB (they are currently retained for backwards compatibility, but are on the verge of being purged completely). Entity beans were awkward things: a model object (e.g. Person) that lived completely in the container, where access to every property (e.g. getName, getAge) required a remote container call.
In this day and age, we have JPA entities that are POJOs and contain only data. Don't confuse a JPA entity with this ancient EJB entity bean. They sound similar but are completely different things. JPA entities can be safely sent to a (remote) client. If you are really concerned that the names used in your entity reveal your DB structure, you could use XML mapping files instead of annotations and use completely different names.
That said, session beans can still perfectly be used to implement the Facade pattern if that's needed. This pattern is indeed used to give clients a simplified and often restricted view of your system. It's just that the idea of using session beans as a Facade for entity beans is completely outdated.
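If you do want a session bean facade in a modern (JPA) setting, a minimal sketch might look like the following (the entity, DTO and query are hypothetical):

    import java.math.BigDecimal
    import javax.ejb.Stateless
    import javax.persistence.*

    // Hypothetical JPA entity: a plain object with data, not an old-style entity bean.
    @Entity
    @Table(name = "orders")
    class CustomerOrder {
        @Id var id: Long = 0
        var customerId: Long = 0
        var total: BigDecimal = BigDecimal.ZERO
    }

    // Hypothetical DTO: the facade exposes only what the client needs.
    data class OrderSummary(val id: Long, val total: BigDecimal)

    @Stateless
    class OrderFacade {
        @PersistenceContext
        private lateinit var em: EntityManager

        // One coarse-grained call instead of many fine-grained remote accesses.
        fun summariesForCustomer(customerId: Long): List<OrderSummary> =
            em.createQuery(
                "select o from CustomerOrder o where o.customerId = :cid",
                CustomerOrder::class.java
            ).setParameter("cid", customerId)
             .resultList
             .map { OrderSummary(it.id, it.total) }
    }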
It is to simplify the work of the client. The Facade presents a simple interface and hides the complexity of the model from the client. It also makes it possible for the model to change without affecting the client, as long as the facade does not change its interface.
It decouples the application logic from the business logic.
So the actual data structures and implementation can change without breaking existing code utilizing the APIs.
Of course it hides the data structure from "unknown" applications if you expose your beans to external networks.
I'm wondering what strategies exist to handle object integrity in a stateful client like a Flex or Silverlight app.
What I mean is the following: consider an application where you have a Group and a Member entity. Groups contain multiple members, and members can belong to multiple groups. A view lists the different groups, which are lazy loaded (no members initially). When requesting the details of a group, all its members are loaded and cached, so the next time we don't need to invoke a service to fetch the details and members of the group.
Now, when we request the details of another group that has the same member of a group that was already loaded, would we care about the fact that the member is already in memory?
If we don't, I can see a potential data conflict when we edit the member (referenced in the first group) and changes are not applied to the other member instance. So to solve this, we could check the result of the service call (that gets the group details) for members that are already loaded and then replace the loaded ones with the cached ones.
Any tips, ideas or experiences to share?
What you are describing is something that is usually solved by a "first-level cache" (in Hibernate, the "Session"; in JPA, the "EntityManager") which ensures that only one instance of a particular entity exists in a particular context. As you suggest, this could be applied to objects as they are fetched from the server to ensure that all references to a particular entity are in fact references to the same object instance. You would also need a mechanism to ensure that entities created inside the AVM exist in that same context so they have similar logic applied to them.
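A minimal identity-map sketch of that idea (all names and the copy mechanism are illustrative only; Tide, linked below, is a production-grade take on the same problem):

    // One canonical instance per (type, id) within a client-side context.
    class EntityContext {
        private val cache = mutableMapOf<Pair<Class<*>, Any>, Any>()

        // Called on every object deserialized from the server: returns the
        // canonical instance, refreshed with the newly fetched state.
        @Suppress("UNCHECKED_CAST")
        fun <T : Any> attach(id: Any, fetched: T, copyState: (from: T, into: T) -> Unit): T {
            val key = fetched.javaClass to id
            val existing = cache[key] as? T
            return if (existing == null) {
                cache[key] = fetched
                fetched
            } else {
                copyState(fetched, existing)  // update the shared instance in place
                existing
            }
        }
    }

Every service result would be passed through attach() before reaching the view, so both groups end up holding a reference to the same Member instance.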
The Granite Data Services project has a project called "Tide" which aims to solve this problem:
http://www.graniteds.org/confluence/display/DOC/6.+Tide+Data+Framework
As far as DDD goes, it's important not to design the backend as a simple data access API, such as simply exposing a set of DAOs or Repositories. The client application cannot be trusted and in fact is very easy to manipulate with a debugging proxy such as Charles. I always design a services API that is tailored to the UI (so that data for a screen can be fetched in a single call) and has necessary security or validation logic enforced, often using annotations and Spring AOP.
What I would do is create a client-side application service which does the caching and the servicing of requests for data. This would handle checking whether an object already exists in the cache. If you are using DDD then you'll need to decide what is going to be your aggregate root entity: Group or Member. You can't have both control each other. There needs to be one point for managing loading etc. Check out this video on DDD at the Canadian ALT.NET OpenSpaces. http://altnetpedia.com/Calgary200808.ashx