Atomic swap in a cross-compatibility-zone setup - Corda

I am pretty new to Corda and I am curious whether it is possible to do a cross-compatibility-zone DvP. According to https://www.corda.net/2017/08/compatibility-and-upgrades/, it is possible to have different Corda networks in a global network.
My question addresses following use case:
Let's say I have two Corda networks (compatibility zones). Each network has its own notary, nodes, customers and KYC process, and supports a certain asset.
The first network provides, for example, a payment infrastructure, and the second is a securities network.
Is it possible to do that using R3 Corda, and if so, is there any example/tutorial?
Thanks in advance for any support!

The answer is yes, but I think we're talking at cross-purposes :) Networks operated and governed by different entities are intended to form and operate WITHIN a compatibility zone.
The way I find it most helpful to think about Compatibility Zones is to imagine the concept just doesn't exist... imagine there was just ONE Corda network (i.e. one CZ) that everybody used (transparently and openly governed, so no single firm or group of firms controlled it)... and then all the different apps and business networks existed within it, able to interoperate and transact with each other because their nodes were compatible: they would understand and accept each other's transactions, and so on.
Think about it from the perspective of a firm installing a blockchain node: getting onto any blockchain network (a Corda CZ or whatever the equivalent concept is for other platforms)... getting an identity, punching the right holes in the firewall, setting up the node infrastructure... it's analogous to the work needed to get a firm "on the internet" - setting up routers, getting IP addresses, etc, etc.
It's the kind of thing you want to do once and then reuse ruthlessly. The idea that you would have to connect to an entirely new communications network for each app your firm used would be ludicrous. And yet that's how some people seem to think blockchain deployments should be: i.e. for each app, you set up a separate blockchain network with its own nodes and settings and identity layer and consensus providers. But that's surely just nonsense, right?
You want to connect to a global network once and then reuse that infrastructure.
So the idea is that we try to have as few CZs as possible and encourage as many business networks as possible to form within that small number of CZs.
I know this can mess with your mind when you first hear about it because all the other enterprise blockchain platforms are going in totally the wrong direction (in my opinion..!) They seem to be encouraging the formation of a separate private network for each application. But that just seems crazy to me.
So maybe try this: even if you think I'm mad, play along with the idea for a day or so and see if it begins to grow on you :) If not, let's debate it again, but I really do think this idea of multiple apps on the same overall shared network (i.e. multiple business networks in a single compatibility zone) is just so amazingly powerful as a concept.
So, to your question: can you do cross-app/cross-business-network DvP within a CZ? Yes! That is one of the key use cases we invented Corda to solve... it's almost perfect for those sorts of scenarios.
Could you do it if the two apps were on different CZs? Well, yes... but it would be like asking if you could do DvP between assets managed in different databases or hosted on different blockchains... it's just messier, needing locking and 2PC and all the stuff that we can simply eliminate if we hold ourselves accountable for not creating needless balkanisation and siloed deployments of standalone networks unless they're really, really needed.
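To make the "locking and 2PC" point concrete, here is a minimal, purely illustrative sketch of the kind of two-phase-commit coordination a cross-CZ DvP would drag in. The Ledger class and its methods are hypothetical stand-ins for whatever API each network exposes; this is not Corda code.

    # Hypothetical sketch of cross-network DvP via two-phase commit.
    # The Ledger class is a stand-in for "some API over one of the two networks";
    # it is NOT a Corda API. Within a single CZ the notary makes all of this unnecessary.

    class Ledger:
        def __init__(self, name):
            self.name = name
            self.locked = {}

        def prepare(self, asset, new_owner):
            """Phase 1: lock the asset so no conflicting transfer can slip in."""
            if asset in self.locked:
                return False
            self.locked[asset] = new_owner
            return True

        def commit(self, asset):
            """Phase 2: finalise the transfer that was prepared earlier."""
            new_owner = self.locked.pop(asset)
            print(f"{self.name}: {asset} transferred to {new_owner}")

        def abort(self, asset):
            """Phase 2 (failure path): release the lock; nothing moves."""
            self.locked.pop(asset, None)
            print(f"{self.name}: transfer of {asset} aborted")

    def cross_network_dvp(cash_ledger, securities_ledger, buyer, seller):
        # Phase 1: both legs must be locked before either is committed.
        cash_ok = cash_ledger.prepare("100 EUR", seller)
        sec_ok = securities_ledger.prepare("10 BOND", buyer)
        # Phase 2: commit both or abort both - and hope nothing crashes in between.
        if cash_ok and sec_ok:
            cash_ledger.commit("100 EUR")
            securities_ledger.commit("10 BOND")
        else:
            cash_ledger.abort("100 EUR")
            securities_ledger.abort("10 BOND")

    cross_network_dvp(Ledger("payments CZ"), Ledger("securities CZ"), "Alice", "Bob")

The fragile window between "both legs locked" and "both legs committed" is exactly what a single notarised transaction inside one CZ removes.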

Related

Network Simulation using NS-3

Can I simulate susceptible-infected-susceptible (SIS) model using NS-3?
I'm aiming to model malware flow using SIS and am trying to simulate it using NS-3.
I'm a newbie to networks and have been searching for this for hours, going through tens of research papers, but I can't find anything similar.
SIS implies some sort of discovery mechanism to spread the infection across a network. These discovery mechanisms typically use exploits in network-capable daemons. So, to simulate SIS, you want a simulator that works at the application level of the network stack.
ns-3 has some application-level capabilities, but it's mostly intended to be used to simulate the network stack below the application level. The application-level capabilities that ns-3 does have are limited to traffic generation. Discovering the presence of a service on a Node, let alone a compromised version of a daemon, is not supported.
So, it seems like you'll need to find another simulator. I'm not sure what your options are, but depending on how complex a simulation you want, you could just roll your own by representing the network as a graph and infecting each susceptible node with probability p.
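For instance, a roll-your-own SIS simulation along those lines fits in a few dozen lines of Python. The toy ring graph and the infection/recovery probabilities below are made-up values you would replace with your own malware-propagation parameters.

    import random

    # Minimal SIS (susceptible-infected-susceptible) sketch on a plain adjacency-list graph.
    # p = per-contact infection probability, r = per-step recovery probability (both arbitrary).

    def sis_step(graph, infected, p=0.2, r=0.1):
        """Advance the epidemic by one time step and return the new set of infected nodes."""
        new_infected = set(infected)
        for node in infected:
            # Each infected node tries to infect each of its neighbours with probability p.
            for neighbour in graph[node]:
                if neighbour not in infected and random.random() < p:
                    new_infected.add(neighbour)
            # Each infected node recovers (back to susceptible) with probability r.
            if random.random() < r:
                new_infected.discard(node)
        return new_infected

    # Toy 5-node ring network; node 0 starts out infected.
    graph = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    infected = {0}
    for t in range(20):
        infected = sis_step(graph, infected)
        print(f"t={t}: {len(infected)} infected")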

What is ZeroMQ's underlying design architecture?

I am comparatively new to ZeroMQ and would like some suggestions regarding its internal architecture.
I am planning to use ZeroMQ as the messaging framework for my work. The basic idea of what I want to achieve is to be able to dynamically scale the infrastructure based on the load and the computational capacity required to meet particular workflow deadlines.
So, if there is a need to add more nodes, the application spawns new nodes and the messaging framework should be able to incorporate the changes as well. I should also be able to point to where the additional computations should occur, or see how the framework dynamically adds the new nodes (if any). An event on a particular node decides the subsequent actions to be performed on other nodes. Here is the scenario, or the stack, that I am thinking of, and I wanted to know if it makes sense:
User applications
ZeroMQ messaging
Squid-Content based routing
Overlay
Physical Substrate
I am a bit skeptical about the above stack, as I believe ZeroMQ provides most of this functionality itself, thereby making the stack simpler.
A few points about my stack:
The physical substrate is the total set of nodes that are available for computation or as data sources.
The overlay is a logical network built dynamically on top of the physical network, based on the closest nodes available for a particular workflow; i.e. if two nodes exchange data frequently, those two nodes are placed logically close to one another. Is a separate overlay like Chord etc. required when we use ZeroMQ?
Squid is basically used for content-based routing. Is Squid required when we use ZeroMQ?
ZeroMQ messaging is for the communication between the different nodes of an application.
Basically, what I want to know is whether the above stack can be made simpler, given that ZeroMQ has rich functionality of its own. If so, can someone point me in the right direction or share their thoughts? I am going through the ZeroMQ documentation, but I am finding it a bit difficult to understand the intrinsic design of ZeroMQ. Please help.
Thanks
There's so much here that is specific to your use case that it's almost impossible to give any definite answers. ZeroMQ is not a direct replacement for the concepts you've built into your architecture; however, it may meet the goals you're trying to achieve, depending on how you use it.
My suggestion would be to put your current architecture aside and start trying to build up a new one with ZMQ as its core, and see where you run into limitations that are solved by the other parts of your stack.
As for the "intrinsic design" of ZMQ, here's the basics that you need to understand as a starting point:
A ZMQ socket handles connection details for you, including managing network hiccups - but this has limits that you'll need to know
There are different kinds of ZMQ sockets, and they have opinions about how you use them. Some of them communicate asynchronously, some of them are strictly synchronous, some are one way, some are bi-directional.
If a connection between two sockets is severed (e.g. one node goes down, there is a network failure - something more than a momentary hiccup), it's your job to recognize that and re-establish that connection
There is no built in brokering or topology, you have to design and build that all yourself.
... ultimately, ZMQ provides a toolset for you to build a messaging framework, it does not provide a fully realized messaging framework out of the box. So, yes, it has the power to replace some of the other tools you're currently using, but you'll have to build it.
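As a concrete starting point, here is a minimal pyzmq REQ/REP pair illustrating the first two points: the socket manages the connection details, and the socket type dictates a strict send/receive pattern. It assumes the pyzmq package is installed; the port is an arbitrary choice for the example.

    import zmq

    # Minimal REQ/REP sketch with pyzmq, in a single process for brevity.
    # REQ/REP is strictly synchronous: REQ must alternate send()/recv(), REP recv()/send().

    ctx = zmq.Context()

    rep = ctx.socket(zmq.REP)            # "server" side: binds and replies
    rep.bind("tcp://127.0.0.1:5555")

    req = ctx.socket(zmq.REQ)            # "client" side: connects and requests
    req.connect("tcp://127.0.0.1:5555")  # succeeds even if the peer is not up yet

    req.send_string("compute this")      # REQ: must send first
    print(rep.recv_string())             # REP: receives the request
    rep.send_string("result")            # REP: must reply before its next recv
    print(req.recv_string())             # REQ: receives the reply

    req.close(); rep.close(); ctx.term()

For the dynamic-scaling goal in the question, PUB/SUB and ROUTER/DEALER sockets plus the broker patterns in the ZeroMQ guide are where people usually end up, but, as above, those topologies are something you design and assemble yourself.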

openHAB for two or more homes

I started exploring openHAB for my home automation. It looks to be a great application for home automation. I want to automate two homes and want to run openHAB on one centrally placed server. Is it possible to segregate the data for my two homes and provide use-based access for the two homes?
Or will I have to have two instances running on my server?
Please advise if anyone has done this before.
You can (I believe) provide different sitemaps, but the most important question is how a central openHAB instance will communicate with the "other" home.
Especially if you're going to use bindings which require a piece of hardware, like Z-Wave etc.
You can potentially play with MQTT and have a small Raspberry Pi running in the "other" home feeding the MQTT broker.
Assuming that there is not a hardware or range-based issue with using openHAB for two homes (e.g. a Z-Wave USB dongle where the second home is out of range) and there is network connectivity between the two houses, there are a number of ways you can accomplish this. Here is one.
The easiest would probably be to just use a naming convention for your items and groups so you can easily tell which house an item comes from. You would probably want to set up a separate sitemap for each house as well. If I understand your question, this should segregate the data for you based on name and provide use-based access for each home.
If you want to segregate the data even more thoroughly, you can configure your persistence to save all the items from one house to one DB and all the others to a different one, though you will need two different persistence bindings set up (e.g. one uses rrd4j and the other uses db4o). I'm not sure this provides any advantage.
The final step is getting the data from the remote house into openHAB. How this is accomplished will depend on the nature of the sensors and triggers in the other house. You can use the HTTP binding, the TCP/IP binding, or an MQTT broker. I've personally exposed a couple of my Raspberry Pi based sensors to openHAB using a Python script and the paho library that publishes the sensor data read from the GPIO pins to an MQTT broker, and it works great.
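That script is roughly as small as the sketch below. The broker address and topic are placeholders, and the GPIO read is stubbed out so the example stays self-contained; on a real Pi you would replace read_sensor() with an actual RPi.GPIO or sensor-library call.

    import time
    import random
    import paho.mqtt.client as mqtt

    # Sketch of a Raspberry Pi sensor publisher feeding openHAB via an MQTT broker.
    # Broker address and topic are placeholders; paho-mqtt 1.x style constructor
    # (2.x additionally takes a callback API version argument).

    BROKER = "192.168.1.10"                  # hypothetical MQTT broker, e.g. Mosquitto
    TOPIC = "house2/livingroom/temperature"  # hypothetical topic for the second house

    def read_sensor():
        # Stand-in for an actual GPIO/sensor read.
        return round(20 + random.random() * 5, 1)

    client = mqtt.Client()
    client.connect(BROKER, 1883)
    client.loop_start()                      # keeps the connection alive in the background

    while True:
        client.publish(TOPIC, str(read_sensor()), retain=True)
        time.sleep(60)

On the openHAB side, the MQTT binding then maps that topic onto an item, which your sitemaps and rules can use like any other item.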
Centralization vs. segregation - you have to decide which one has more advantages and less risk.
The two houses will store data on the same server (openHAB 2, MQTT, DB/rrd4j) and each one will have access to it - that must be clarified.
Network connectivity is an obvious requirement; it should be stable between the two sites. Security is another issue - not only digital security, but safety of life (what happens to HVAC control or safety appliances during a network outage?).
Configuration is well supported either way: separate config files (items, rules, persistence, etc.), and connectivity in a hierarchy has endless approaches and capabilities.
In the newest version of the Android app you can add multiple openHAB servers. Why not just use two instances of openHAB?

Autodiscovery in P2P Applications

I want to create a P2P application on the internet. What is the best way (or, if none exists, a good enough way) to do auto-discovery of other nodes in a decentralized network?
Grothoff and GauthierDickey from the GNUnet project (an anonymous, censorship-resistant file-sharing network) researched the question of bootstrapping a P2P network without any central host list.
They found that for the Gnutella (LimeWire) network a random IP search needed, on average, 2500 connection attempts to find a peer.
In the paper they proposed a method which reduced the required connection attempts to 817 for Gnutella and 51 for the eD2k network.
This was achieved by creating a statistical profile of P2P users for every DNS organization; this small (around 100 KB) discovery database has to be created in advance and shipped with the P2P client.
This is the holy grail of P2P. There isn't a magic solution really - there's no way a node can discover other nodes without a good known point to act as a reference (well, you can do so on a LAN by using broadcasting, but not on the internet). P2P filesharing tends to work by having known websites distributing 'start points' for discovery, and then further discovery (I would expect) can come from asking nodes what other nodes they know about.
A good place to start on research would be Distributed Hash Tables.
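To give a flavour of what the DHT pointer means in practice: Kademlia-style DHTs place node IDs and content keys in the same ID space and route lookups to the peers whose IDs are XOR-closest to the key. The sketch below shows only that routing metric, with made-up 8-bit IDs rather than real 160-bit hashes, not a full DHT.

    # Flavour of Kademlia-style DHT routing: a lookup repeatedly asks the peers
    # whose IDs are XOR-closest to the key. 8-bit IDs are purely illustrative.

    def xor_distance(a, b):
        return a ^ b

    def closest_peers(known_peers, key, k=3):
        """Return the k known peers whose IDs are XOR-closest to the key."""
        return sorted(known_peers, key=lambda peer: xor_distance(peer, key))[:k]

    known_peers = [0b00010110, 0b10110001, 0b01101100, 0b11100010]
    key = 0b01100001
    print([bin(p) for p in closest_peers(known_peers, key)])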
As for security, that topic will be in the literature somewhere, I should think - again, I would recommend Wikipedia. Non-existent nodes are trivially dealt with: if you can't contact an IP/port, don't keep it on your list, and if a node regularly provides pointers to non-existent peers, consider de-prioritising it or removing it from your list entirely.
For evil nodes, it depends on your use case, but let's say you are doing file sharing. If you request a section of a file, check with several nodes what the file section's hash should be, and then request by hash. If the evil node gives you a chunk that has a different hash, then you can again de-prioritise or forget that node.
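A rough sketch of that check: ask several peers what the chunk's hash should be, take the majority answer, and penalise any peer whose data does not match. The peer class and scoring scheme here are invented for illustration; real clients usually get chunk hashes from a signed manifest or torrent file instead.

    import hashlib
    from collections import Counter

    def agreed_hash(peers, chunk_id):
        """Ask several peers for the chunk's expected hash and take the majority vote."""
        votes = Counter(p.report_chunk_hash(chunk_id) for p in peers)
        return votes.most_common(1)[0][0]

    def fetch_chunk(peers, chunk_id, scores):
        expected = agreed_hash(peers, chunk_id)
        for peer in sorted(peers, key=lambda p: -scores[p.name]):
            data = peer.get_chunk(chunk_id)
            if hashlib.sha256(data).hexdigest() == expected:
                scores[peer.name] += 1    # reward peers that serve good data
                return data
            scores[peer.name] -= 5        # de-prioritise peers that serve bad data
        raise RuntimeError("no peer returned a chunk matching the agreed hash")

    # Toy in-memory peers standing in for real network peers.
    class FakePeer:
        def __init__(self, name, data):
            self.name, self._data = name, data
        def report_chunk_hash(self, chunk_id):
            return hashlib.sha256(self._data).hexdigest()
        def get_chunk(self, chunk_id):
            return self._data

    good, evil = FakePeer("good", b"real chunk"), FakePeer("evil", b"garbage")
    scores = {"good": 0, "evil": 0}
    print(fetch_chunk([good, good, evil], "chunk-0", scores), scores)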
Distributed processing systems work a little differently: they tend to ask several unrelated nodes to perform the same work, and then they use a voting system (probably using hashing again) to determine whether evilness is at hand. If a node provides consistently bad results, the administrator is contacted or the IP is removed from the known nodes list.
OK, for two peers to find each other they both have to know a common, let's say, mediator to exchange IPs once. You can use anything for this first handshake, as long as both peers are able to WRITE to and READ from that "channel", i.e.: DNS (your well-known domains), e-mail, IRC, Twitter, Facebook, Dropbox, etc.
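As a toy sketch of that "anything you can write to and read from" idea, the example below uses a local file as the shared channel; in real life that channel would be a DNS TXT record, an IRC topic, a pastebin, a web page you control, and so on. The peer IDs and addresses are invented.

    import json
    from pathlib import Path

    # Toy rendezvous: any channel both peers can WRITE to and READ from will do.
    # A local file stands in for that channel here.

    CHANNEL = Path("rendezvous.json")

    def announce(peer_id, ip, port):
        """A peer writes its contact details to the shared channel."""
        peers = json.loads(CHANNEL.read_text()) if CHANNEL.exists() else {}
        peers[peer_id] = {"ip": ip, "port": port}
        CHANNEL.write_text(json.dumps(peers))

    def discover(own_id):
        """A peer reads the channel and learns about everybody else."""
        peers = json.loads(CHANNEL.read_text()) if CHANNEL.exists() else {}
        return {pid: addr for pid, addr in peers.items() if pid != own_id}

    announce("alice", "203.0.113.5", 40001)
    announce("bob", "198.51.100.7", 40002)
    print(discover("bob"))   # {'alice': {'ip': '203.0.113.5', 'port': 40001}}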

Distributed C++ game server which uses a database

My C++ turn-based game server (which uses a database) cannot stand up to the current average number of clients (players), so I want to expand it to multiple (more than one) computers and databases, where all clients still remain within a single game world (the servers must communicate with each other and use multiple databases).
Are there tutorials/books/common standards which explain how to do this in the best way?
The way you put the database into the picture might be misleading: clustering solutions exist for all of the most widely used RDBMSs, so if you need to support your DB activity with more than one DB node you will just have to check the documentation from your DB vendor.
The more complex scenarios arise when it comes to synchronizing the non-DB application state that needs to be shared among several servers. There are already a number of questions on this site that tackle the same problem.
You might also be interested in a messaging system; I have heard good things about ZeroMQ.
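If you do go the ZeroMQ route, a minimal pyzmq PUB/SUB sketch of one game server broadcasting world events to the others might look like the following. The port and the topic prefix are placeholders, and in a real deployment each server would both publish its own events and subscribe to everyone else's.

    import time
    import zmq

    # Minimal PUB/SUB sketch: one game server publishes world events, others subscribe.
    # Shown in a single process for brevity; port and topic prefix are arbitrary.

    ctx = zmq.Context()

    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://127.0.0.1:6000")

    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:6000")
    sub.setsockopt_string(zmq.SUBSCRIBE, "zone.42")   # only events for world zone 42

    time.sleep(0.2)   # give the subscription time to propagate (the "slow joiner" caveat)

    pub.send_string("zone.42 player 17 moved to (10, 4)")
    print(sub.recv_string())

    pub.close(); sub.close(); ctx.term()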
Hope this helps.
