Publish to non-existing topic a valid pattern? - amazon-sns

For my use case I'm going to have many topics. Because I won't be able to create all topics up front, I would like to create them dynamically. I thought a receiver of an event could create a topic first, before subscribing. The sender would simply try to push all the data (the push would fail if nobody had created and subscribed to the topic). Is this a valid pattern?
I know that I have to somehow solve the deletion of topics (the receiver can't do much as others might have subscribed in the meantime as well).

Yes, it's a valid pattern. It is actually receiving and processing messages that costs money. If you use SQS subscribers, though, you will need to beware of queues whose receivers are no longer used; the costs will add up very quickly.
Depending on your use case, Amazon MQ (managed ActiveMQ) may work better. All kinds of topic policies can be implemented without extra cost for the listeners or individual topics.
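A rough sketch of the pattern with the AWS SDK for Java v2 (topic name, queue ARN and message are placeholders): the receiver creates the topic if needed and subscribes, while the sender publishes to the topic ARN it expects (SNS ARNs are predictable from region, account and topic name) and treats NotFoundException as "nobody has created this topic yet".

    import software.amazon.awssdk.services.sns.SnsClient;
    import software.amazon.awssdk.services.sns.model.CreateTopicRequest;
    import software.amazon.awssdk.services.sns.model.NotFoundException;
    import software.amazon.awssdk.services.sns.model.PublishRequest;
    import software.amazon.awssdk.services.sns.model.SubscribeRequest;

    public class DynamicTopics {
        // Receiver side: CreateTopic is idempotent, so this either creates the
        // topic or returns the ARN of the existing one, then subscribes a queue.
        static String ensureTopicAndSubscribe(SnsClient sns, String topicName, String queueArn) {
            String topicArn = sns.createTopic(
                    CreateTopicRequest.builder().name(topicName).build()).topicArn();
            sns.subscribe(SubscribeRequest.builder()
                    .topicArn(topicArn)
                    .protocol("sqs")
                    .endpoint(queueArn)
                    .build());
            return topicArn;
        }

        // Sender side: publish to the expected topic ARN and treat "topic does
        // not exist" as "no receiver has registered interest yet".
        static boolean tryPublish(SnsClient sns, String topicArn, String message) {
            try {
                sns.publish(PublishRequest.builder().topicArn(topicArn).message(message).build());
                return true;
            } catch (NotFoundException e) {
                return false;
            }
        }
    }

Keep in mind that publishing to an existing topic with no subscribers succeeds silently, so "the push fails" only covers the case where the topic itself is missing.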

Related

Request StateStatus from a Notary

Is there any way a CorDapp can ask a Notary if a state has been consumed prior to using it in a Transaction?
Background:
I am testing FungibleTokens that point to EvolvableTokenTypes. Eventually the EvolvableTokenType changes, and token holders that are not Participants of the EvolvableTokenType end up with states in their vault that have unknowingly been consumed. When they try to execute a transaction involving these states, the Notary will refuse to sign because it knows the states have been consumed.
I have written flows that will contact a Participant and request the missing state(s). However, it would be more efficient if I could first ask the Notary whether I need to do that (i.e. if the state hasn't been consumed, I don't need to ask a participant for an update).
You could do this in a couple of ways.
For example, in your vault query you could simply make sure to filter on StateStatus = UNCONSUMED. That's one way to ensure this works the way you expect and that you never get a state that doesn't fit your criteria.
Check out this section of the docs on vaultQuery: https://docs.corda.net/docs/corda-os/4.7/api-vault-query.html#querycriteria-interface
The other approach that might also work is to include this check in your contract verification for the transaction, but doing it at the flow level catches the problem sooner rather than later.
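A minimal Java sketch of the first option, run inside a flow (FungibleToken comes from the Tokens SDK; the query criteria are core Corda API):

    import com.r3.corda.lib.tokens.contracts.states.FungibleToken;
    import net.corda.core.node.services.Vault;
    import net.corda.core.node.services.vault.QueryCriteria;

    // Inside FlowLogic.call(), where getServiceHub() is available: only
    // UNCONSUMED states are returned, so already-spent tokens never make it
    // into the transaction you are building.
    QueryCriteria criteria = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.UNCONSUMED);
    Vault.Page<FungibleToken> results =
            getServiceHub().getVaultService().queryBy(FungibleToken.class, criteria);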

REST API streaming versus repeated GET requests

I'm making a turn-based game, kind of like multiplayer checkers, that works with the Firebase Realtime Database, so each client needs to know when moves are made.
I'm limited by a third-party framework that only allows REST API requests but doesn't allow REST API streaming, because there is no way to "Set the client's Accept header to text/event-stream" or "Respect HTTP Redirects, in particular HTTP status code 307".
So I'm thinking of reading the database with GET requests every second to see if there is new data, but I'm worried that this could be inefficient in terms of data and cause a large bill. How much worse is this solution than REST API streaming, and is it practical?
Since response time is very critical in multiplayer games, I think you should also consider how this may be inefficient in terms of user experience. But of course that will depend on how the game works.
If you think it is OK for users to have a 1000 ms delay, then the question is how many players will be playing the game daily and how long each game takes to finish (turn-wise).
((avg. turns per game) * (avg. # of players in a single game)) * (games played per day) will be the minimum number of reads for the gameplay part alone. You must also consider whether you will have to constantly check multiple documents, and there will probably be many writes and reads in the other parts of the game as well.
So overall, I think this is a very inefficient way to solve the problem.
What platform are you using? Maybe someone could find a way around it.
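To put a rough number on it: polling once per second is on the order of 86,400 requests per client per day, each returning the full node being watched. A minimal Java sketch of that polling approach against the Realtime Database REST endpoint (the database URL and path are placeholders):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class MovePoller {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Placeholder database URL and path; ".json" is the REST suffix for RTDB paths.
            URI uri = URI.create("https://your-db.firebaseio.com/games/game123/lastMove.json");
            String lastSeen = null;
            while (true) {
                HttpResponse<String> response = client.send(
                        HttpRequest.newBuilder(uri).GET().build(),
                        HttpResponse.BodyHandlers.ofString());
                String body = response.body();
                if (!body.equals(lastSeen)) {   // a new move has been written
                    lastSeen = body;
                    System.out.println("New move: " + body);
                }
                Thread.sleep(1000);             // poll once per second
            }
        }
    }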
Firebase provides callback listeners for requests. You can attach a ChildEventListener to your database reference to track real-time changes in your database. As long as it is connected, it will be considered a single request.
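A minimal sketch of that, assuming the Firebase Android/Java SDK is available (the database path is a placeholder):

    import com.google.firebase.database.*;

    // e.g. wherever you set up your listeners:
    DatabaseReference movesRef = FirebaseDatabase.getInstance()
            .getReference("games/game123/moves");   // placeholder path

    movesRef.addChildEventListener(new ChildEventListener() {
        @Override public void onChildAdded(DataSnapshot snapshot, String previousChildName) {
            // fires once for each existing move and again whenever a new move is written
        }
        @Override public void onChildChanged(DataSnapshot snapshot, String previousChildName) { }
        @Override public void onChildRemoved(DataSnapshot snapshot) { }
        @Override public void onChildMoved(DataSnapshot snapshot, String previousChildName) { }
        @Override public void onCancelled(DatabaseError error) { }
    });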

How to subscribe to Events without coupling the Event Type from another project using Rebus?

I have several microservices and many of them have integration events. Imagine a microservice M1 that needs to subscribe to an event that lives in microservice M2. How can I subscribe to the event without coupling M1 and M2? Is there a way to subscribe using, for example, the event name instead of the event type?
While Rebus encourages the use of type-based topics, the underlying mechanism is based on simple string-based topic names.
The "raw topics" mechanism is available via bus.Advanced.Topics, so you can
await bus.Advanced.Topics.Subscribe("whatever");
to subscribe to the whatever topic, and then you can
await bus.Advanced.Topics.Publish("whatever", yourEvent);
to publish events to that topic.
The RabbitMQ Topics sample demonstrates how it can be done, and it even shows how RabbitMQ's underlying topic wildcards can be used this way.
There are two ways (depending on your needs) which can be used to address this scenario:
the first is to implement IHandleMessages and handle the message as a dynamic object;
the second is to provide your own implementation of ISerializer.
We used the second option.
Further reading:
https://github.com/rebus-org/Rebus/issues/24

how to trigger event when data changed in R3 Corda Node

On our project, I store some data on the blockchain. My question is: how can I get to know when this data has changed (so that we can send emails, SMS and web notifications to end users)?
My first thoughts are listed below, but it seems none of them is the best choice:
query the database every few seconds. A very crude way, but it seems it could work.
using an RPC interceptor, and checking everything sent to the node.
using flows and subflows
using the schedule service
Do you have any good ideas about this? Please kindly reply, thanks a lot.
OK, so a few points: there is no "real" blockchain in Corda; there are consumed/unconsumed states. I think you should reformulate your question a bit: do you want to be notified when a state gets consumed? When a new state is created? Try to explain it in "Corda" terms :D
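For the notification part itself, one readily available building block (not spelled out above) is Corda's vault observable over RPC: vaultTrack returns a feed that fires whenever states are produced or consumed, which is where you would hook in the email/SMS/web notification. A minimal Java sketch, with host, port and RPC credentials as placeholders:

    import net.corda.client.rpc.CordaRPCClient;
    import net.corda.core.contracts.ContractState;
    import net.corda.core.messaging.CordaRPCOps;
    import net.corda.core.messaging.DataFeed;
    import net.corda.core.node.services.Vault;
    import net.corda.core.utilities.NetworkHostAndPort;

    public class VaultWatcher {
        public static void main(String[] args) throws InterruptedException {
            // Placeholder host, port and RPC credentials.
            CordaRPCOps proxy = new CordaRPCClient(NetworkHostAndPort.parse("localhost:10006"))
                    .start("rpcUser", "rpcPassword").getProxy();

            DataFeed<Vault.Page<ContractState>, Vault.Update<ContractState>> feed =
                    proxy.vaultTrack(ContractState.class);

            // The observable fires on every vault change; send notifications here.
            feed.getUpdates().subscribe(update -> {
                update.getProduced().forEach(s -> System.out.println("Produced: " + s.getRef()));
                update.getConsumed().forEach(s -> System.out.println("Consumed: " + s.getRef()));
            });

            Thread.currentThread().join();   // keep the process alive to keep receiving updates
        }
    }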

'Assigning a player in multiplayer game' firebase example is not very scalable or is it?

In the firebase example (https://gist.github.com/anantn/4323981), to add a user to the game, we attach the transaction method to playerListRef. Now, every time Firebase attempts to update the data, it will call the callback passed to the transaction method with the list of user IDs of all players. If my game supports thousands of users joining at a time, then every time this method executes, the entire user list will be downloaded and passed, which will be bad.
If this is true, what is the recommended way to assign users?
This is specifically what Firebase was designed to handle. If your application needs to actually assign player numbers, this example is the way to go. Otherwise, if the players just need to be in the same "game" or "room" without any notion of ordering, you could remove the transaction code to speed things up a bit. The snippet as well as the backend can handle the number of concurrent connections you've mentioned. If you're seeing any specific problems with your code, or behavior from Firebase that appears to be a bug, please contact us at support@firebase.com and we can dig into it.
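For illustration, a rough Java equivalent of the gist's transaction step, assuming the Firebase Android/Java SDK (playersRef, MAX_PLAYERS and userId are placeholders); if two clients race for the same slot, the losing client's doTransaction is simply re-run with the updated data:

    import com.google.firebase.database.DataSnapshot;
    import com.google.firebase.database.DatabaseError;
    import com.google.firebase.database.DatabaseReference;
    import com.google.firebase.database.MutableData;
    import com.google.firebase.database.Transaction;

    public class JoinGame {
        static final long MAX_PLAYERS = 4;   // placeholder capacity

        static void join(DatabaseReference playersRef, String userId) {
            playersRef.runTransaction(new Transaction.Handler() {
                @Override
                public Transaction.Result doTransaction(MutableData currentData) {
                    long nextIndex = currentData.getChildrenCount();   // players already assigned
                    if (nextIndex >= MAX_PLAYERS) {
                        return Transaction.abort();                    // game is full
                    }
                    currentData.child(String.valueOf(nextIndex)).setValue(userId);
                    return Transaction.success(currentData);
                }

                @Override
                public void onComplete(DatabaseError error, boolean committed, DataSnapshot snapshot) {
                    // committed == true means this client successfully claimed a slot
                }
            });
        }
    }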
