Negative numbers are not allowed in the Amount class according to the documentation: https://docs.corda.net/api/kotlin/corda/net.corda.core.contracts/-amount/
When a ContractState class has an Amount field that can go negative (for example, a balance that can be overpaid), what is the best way to represent negative numbers?
You can't have a negative amount in Corda, because you can't pay a negative amount or hold a negative balance in an account.
However, you can issue an obligation (an IOU) instead. Take a look at the R3 Corda sample here: https://github.com/roger3cev/obligation-cordapp
Amount is designed to not allow negative amounts. It is prevented from doing so by the following init block:
init {
    // Amount represents a static balance of physical assets as managed by the distributed ledger and is not allowed
    // to become negative, a rule further maintained by the Contract verify method.
    // N.B. If concepts such as an account overdraft are required this should be modelled separately via Obligations,
    // or similar second order smart contract concepts.
    require(quantity >= 0) { "Negative amounts are not allowed: $quantity" }
}
AmountTransfer is available for modelling negative transfers. Alternatively, you can simply make a copy of the Amount class that excludes this init block.
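If you go the copy route, here is a minimal Java sketch of what such a class might look like (the name SignedAmount and its shape are illustrative; it is not part of the Corda API):

// Illustrative sketch only: a signed counterpart of Corda's Amount with the
// non-negativity check removed. SignedAmount is NOT part of the Corda API.
public final class SignedAmount<T> {
    private final long quantity; // may be negative, unlike Amount.quantity
    private final T token;

    public SignedAmount(long quantity, T token) {
        // Deliberately no require(quantity >= 0) here: overdrafts are representable.
        this.quantity = quantity;
        this.token = token;
    }

    public long getQuantity() { return quantity; }

    public T getToken() { return token; }

    public SignedAmount<T> plus(SignedAmount<T> other) {
        if (!token.equals(other.token)) {
            throw new IllegalArgumentException("Token mismatch");
        }
        return new SignedAmount<>(Math.addExact(quantity, other.quantity), token);
    }
}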
I need to find the peak read capacity units consumed in the last 20 seconds on one of my DynamoDB tables. I need to find this programmatically in Java and set an auto-scaling action based on the usage.
Please can you share a sample Java program to find the peak read capacity units consumed in the last 20 seconds for a particular DynamoDB table?
Note: there are unusual spikes in the DynamoDB requests on the database, hence the need for dynamic auto-scaling.
I've tried this:
DescribeTableResult result = DYNAMODB_CLIENT.describeTable(recomtableName);
Long readCapacityUnits = result.getTable()
        .getProvisionedThroughput().getReadCapacityUnits();
but this gives the provisioned capacity, whereas I need the consumed capacity over the last 20 seconds.
You could use the CloudWatch API getMetricStatistics method to get a reading for the capacity metric you require. A hint for the kinds of parameters you need to set can be found here.
For that you have to use CloudWatch:
// Java (AWS SDK v1): ask CloudWatch for the consumed capacity of the table.
// The question is about reads, so ConsumedReadCapacityUnits is used here;
// the write-side equivalent is ConsumedWriteCapacityUnits.
GetMetricStatisticsRequest metricStatisticsRequest = new GetMetricStatisticsRequest()
        .withStartTime(startDate)
        .withEndTime(endDate)
        .withNamespace("AWS/DynamoDB")
        .withMetricName("ConsumedReadCapacityUnits")
        .withPeriod(60) // DynamoDB metrics are aggregated at one-minute granularity
        .withStatistics("SampleCount", "Average", "Sum", "Minimum", "Maximum")
        .withDimensions(new Dimension()
                .withName("TableName")
                .withValue(dynamoTableHelperService.campaignPkToTableName(campaignPk)));

GetMetricStatisticsResult result = client.getMetricStatistics(metricStatisticsRequest);
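To pull the peak out of the response, you could then take the largest "Maximum" datapoint, along these lines (a minimal sketch; result is the GetMetricStatisticsResult from above, Datapoint comes from com.amazonaws.services.cloudwatch.model):

// Scan the returned datapoints and keep the highest observed maximum.
double peakConsumedRcu = 0.0;
for (Datapoint dp : result.getDatapoints()) {
    if (dp.getMaximum() != null) {
        peakConsumedRcu = Math.max(peakConsumedRcu, dp.getMaximum());
    }
}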
But I bet the results you get would be older than 5 minutes.
Actually, the current off-the-shelf auto-scaling uses CloudWatch. This has a drawback that is unacceptable for some applications.
When spike load hits your table, it does not have enough capacity to respond with. Reserving some headroom is not enough, and the table starts throttling. If records are kept in memory while waiting for the table to respond, this can simply blow the memory up. CloudWatch, on the other hand, reacts after some time, often when the spike is already gone. Based on our tests it was at least 5 minutes, and it raised capacity gradually when what was needed was to go straight up to the max.
Long story short: we created a custom solution with its own speedometers. It counts whatever it has to count and changes the table's capacity accordingly. There is still a delay, because:
The app itself takes a bit of time to understand what to do.
The DynamoDB table takes ~30 seconds to get updated with the new capacity details.
On top of that we also have a throttling detector: if a write/read request gets throttled, we immediately raise capacity accordingly. Sometimes the capacity level looks all right, but requests throttle anyway because of a hot-key issue.
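For illustration, a rough Java sketch of the "raise capacity immediately on throttling" reaction using the AWS SDK v1 UpdateTable call (the class name and the doubling policy are assumptions, not our production code):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.TableDescription;
import com.amazonaws.services.dynamodbv2.model.UpdateTableRequest;

public class ThrottleReactor {
    private final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

    // Called whenever a read/write request comes back throttled.
    public void onThrottled(String tableName) {
        TableDescription table = dynamo.describeTable(tableName).getTable();
        long reads = table.getProvisionedThroughput().getReadCapacityUnits();
        long writes = table.getProvisionedThroughput().getWriteCapacityUnits();
        // Doubling is an arbitrary policy; DynamoDB takes ~30s to apply the change.
        dynamo.updateTable(new UpdateTableRequest()
                .withTableName(tableName)
                .withProvisionedThroughput(new ProvisionedThroughput(reads * 2, writes)));
    }
}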
I'm new to BLE and hope you will be able to point me towards the right implementation approach.
I'm working on an application in which the peripheral (battery-operated) device continuously aggregates sensor readings.
On the mobile application side there will be a "sync" button; upon button press, I would like to transfer all the sensor readings that have accumulated on the peripheral to the mobile application.
The maximal duration between syncs can be several days, so the accumulated data can reach a size of 20 KB.
Now, I'm wondering what will be the best approach to perform the data transfer from the peripheral to the central application.
I thought about creating an array of characteristics, where each characteristic contains a fixed number of samples (e.g. representing 1 hour of readings).
Then, upon sync, I will:
Read the characteristics count (how many 1-hour cells).
Then read the characteristics (1-hour cells) one by one.
However, I have no idea if this is a valid approach. I'm not sure it is the most "power-efficient" way I could use, and I'm not sure whether a characteristic READ is the way to go, or whether I should use indications instead.
Any help here will be highly appreciated :)
Thanks in advance, Moti.
I would simply use notifications.
Use one characteristic which you write something to in order to trigger the transfer start.
Then have another characteristic over which you simply stream the data, sending 20 bytes at a time. Most SDKs for BLE systems-on-a-chip have some way to control the flow of data so you don't send too fast, normally a callback triggered when the stack is ready to take the next notification.
In order to know the size of the data being sent, you can for example let the first notification contain the size, and rest of them the data.
This is the most time- and power-efficient way, since many notifications can be sent per connection interval, compared to doing a lot of reads, which normally require two round trips each. Don't use indications, since they also require basically two round trips per indication; they're also quite useless anyway.
You could also increase the speed by some percentage by exchanging a larger MTU (which lowers the L2CAP/ATT header overhead).
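For illustration, a minimal central-side reassembly sketch in plain Java, assuming the scheme above: the first notification carries a 2-byte big-endian total length, and the following ones carry up to 20 bytes of payload each (the framing is an assumption, not part of any BLE SDK):

import java.io.ByteArrayOutputStream;

public class TransferReassembler {
    private ByteArrayOutputStream buffer;
    private int expectedLength = -1;

    // Feed each incoming notification's value here; returns the full payload
    // once the transfer is complete, otherwise null.
    public byte[] onNotification(byte[] value) {
        if (expectedLength < 0) {
            // First notification: total size as a 2-byte big-endian integer.
            expectedLength = ((value[0] & 0xFF) << 8) | (value[1] & 0xFF);
            buffer = new ByteArrayOutputStream(expectedLength);
            return null;
        }
        buffer.write(value, 0, value.length);
        if (buffer.size() >= expectedLength) {
            byte[] result = buffer.toByteArray();
            expectedLength = -1; // ready for the next transfer
            return result;
        }
        return null;
    }
}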
While searching for a template to test a paper trading strategy, I stumbled on IBPy. I have gone through the initial set-up and can connect and receive updates from the server. What I would like to do is:
a) Gather ticks from 1..n symbols when new prices (bid/asks) are published
b) Store these temporarily in a vector (I guess with vector.append((bid, ask)))
c) Once the vector reaches its computational max (I need 30 seconds or a certain number of ticks), compute some values on vector[] and decide whether an entry is appropriate
d) If not, pop(0) and keep collecting
e) Exit on a stop-loss or trailing profit
My questions are:
i) I have read that updates arrive every 250 ms; that is fine for my analytics, but can the program/system keep up? Different symbols update at different times, so even though symbolA updates every 250 ms, with 10 symbols the updates may be very frequent.
ii) When I stop to make a calculation, haven't I lost updates?
If there is skeleton code for this, it would be great to mess around with it
Thanks for listening!
If you need to handle hundreds of stock symbols, you should have multiple (at least 2) threads. One thread pulls the incoming data from the socket, sorts the messages by message type, and pushes the data onto queues. Other threads wait on their respective queues and process the incoming data.
The idea is that the dispatcher thread ensures that all incoming data gets pulled from the socket as fast as possible.
Generally your PC will be able to handle anything IB is willing to send you. If your processing does not take too much time (no locks, calls to sleep(), or file operations), you can do everything in a single thread.
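A minimal sketch of that dispatcher/worker split (shown in Java for illustration; the tick format and the socket-reading loop are placeholders, not the IB API):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TickDispatcher {
    // One queue per message type; each worker blocks on its own queue.
    private static final BlockingQueue<String> TICK_QUEUE = new LinkedBlockingQueue<>();

    public static void main(String[] args) {
        // Worker: waits for ticks and runs the analytics, independently of the socket.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String tick = TICK_QUEUE.take(); // blocks until data arrives
                    // ... append to the rolling window and compute on it here ...
                    System.out.println("processing " + tick);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        // Dispatcher (here: the main thread) pulls from the socket as fast as
        // possible and never blocks on processing. The reader loop is a placeholder:
        // while (connected) TICK_QUEUE.offer(readNextMessageFromSocket());
        TICK_QUEUE.offer("AAPL 101.2/101.3"); // illustrative tick
    }
}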
I'm considering writing a computer adaptation of a semi-popular card game. I'd like to make it function without a central server, and I'm trying to come up with a scheme that will make cheating impossible without having to trust the client.
The basic problem as I see it is that each player has several piles of cards (draw deck, current hand and discard deck). It must be impossible for either player to alter the composition of these piles except when allowed by the game rules (i.e. drawing or discarding cards), nor should players be able to know what is in their own or their opponent's piles.
I feel like there should be some way to use something like public-key cryptography to accomplish this, but I keep finding holes in my schemes. Can anyone suggest a protocol or point me to some resources on this topic?
[Edit]
Ok, so I've been thinking about this a bit more, and here's an idea I've come up with. If you can poke any holes in it please let me know.
At shuffle time, a player has a stack of cards whose values are known to them. They take these values, concatenate a random salt onto each, then hash them. They record the salts and pass the hashes to their opponent.
The opponent concatenates a salt of their own, hashes again, then shuffles the hashes and passes the deck back to the original player.
I believe that at this point the deck has been randomized and neither player can have any knowledge of the values. However, when a card is drawn, the opponent can reveal their salt, allowing the first player to determine the original value, and when the card is played, the player reveals their own salt, allowing the opponent to verify the card value.
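For illustration, here is roughly what the commit/reveal step could look like in Java with SHA-256 (the 16-byte salt size is an arbitrary choice):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class CardCommitment {
    private static final SecureRandom RNG = new SecureRandom();

    // Create a fresh random salt (16 bytes is an illustrative size).
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        RNG.nextBytes(salt);
        return salt;
    }

    // Commitment = SHA-256(cardValue || salt); pass the hash on, keep the salt secret.
    public static byte[] commit(String cardValue, byte[] salt) {
        try {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            sha256.update(cardValue.getBytes(StandardCharsets.UTF_8));
            sha256.update(salt);
            return sha256.digest();
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-256 is always present
        }
    }

    // At reveal time the other side recomputes the hash and compares.
    public static boolean verify(String cardValue, byte[] salt, byte[] commitment) {
        return MessageDigest.isEqual(commit(cardValue, salt), commitment);
    }
}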
Your problem is the famous Mental Poker problem in cryptography (this was one of my favorite parts of my crypto class in college). It is possible, and it has been solved (in part, like all things crypto, by Ron Rivest), as long as you don't mind a huge performance hit.
Check the wiki page for more details.
I want to describe one idea which came into my mind rather quickly. I don't know if it serves all your needs, so please feel free to comment on this.
Assume player A has cards A1, A2, A3, ... from a set represented by the numbers 0, 1, ..., N-1. These cards might also be separated into piles, but this does not change the following.
Instead of handling these cards with a single list [A1, A2, A3, ...], you can use two of them, [A1_a, A2_a, A3_a, ...] and [A1_b, A2_b, A3_b, ...], where one is held by player A and the other by player B. They are generated in such a way that each one is random (in range 0...N-1) but both are correlated such that A1_a + A1_b = A1, A2_a + A2_b = A2, ... (all operations modulo N).
Thus no player actually knows the cards without access to the complementary pile (i.e. he cannot reasonably change his cards).
Whenever you need to know a card, both players show their corresponding value and you add those modulo N.
You can implement things like "draw a card" quite easily; both piles just have to be treated the same way.
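A small Java sketch of the share-splitting and recombination (the encoding of cards as integers 0...N-1 follows the description above):

import java.security.SecureRandom;

public class SplitDeck {
    private static final SecureRandom RNG = new SecureRandom();

    // Split card value c (0 <= c < n) into two shares with shareA + shareB = c (mod n).
    public static int[] split(int c, int n) {
        int shareA = RNG.nextInt(n);               // uniformly random; reveals nothing alone
        int shareB = Math.floorMod(c - shareA, n); // complementary share
        return new int[] { shareA, shareB };
    }

    // Recombine the two shares to recover the card.
    public static int combine(int shareA, int shareB, int n) {
        return Math.floorMod(shareA + shareB, n);
    }
}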
The traditional Mental Poker scheme is overkill. Your situation is actually much easier, since there is no shared deck. You could do something like this:
Player A takes his cards and encrypts them with key Ka. He sends them to B, who shuffles them and encrypts them with key Kb. Each time A wants to play a card, he asks B for the (Kb) decryption of the next one. A decrypts this to find what card he drew. At the end, A and B reveal Ka and Kb and compare them with the game log, which should prevent cheating.
Updated: On reflection, here's a simpler solution: As above, player A encrypts his cards and sends them to B. B shuffles them. Whenever A wants a card, he tells B which deck he's drawing from. B returns the (encrypted) card from the appropriate pile.
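A rough Java sketch of the first variant's layered encryption (AES is an illustrative choice; ECB keeps the sketch short, but it leaks equal blocks, so a real protocol would need more care):

import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class LayeredDeck {
    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey ka = keyGen.generateKey(); // A's key, revealed after the game
        SecretKey kb = keyGen.generateKey(); // B's key, revealed after the game

        // A encrypts a card with Ka and sends it to B.
        byte[] card = "ace of spades".getBytes(StandardCharsets.UTF_8);
        byte[] encA = crypt(Cipher.ENCRYPT_MODE, ka, card);

        // B adds his own layer (and would shuffle the whole pile at this point).
        byte[] encAB = crypt(Cipher.ENCRYPT_MODE, kb, encA);

        // Draw: B peels off his layer and returns the card; A removes his own layer.
        byte[] backToA = crypt(Cipher.DECRYPT_MODE, kb, encAB);
        byte[] drawn = crypt(Cipher.DECRYPT_MODE, ka, backToA);
        System.out.println(new String(drawn, StandardCharsets.UTF_8)); // "ace of spades"
    }

    private static byte[] crypt(int mode, SecretKey key, byte[] data) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(mode, key);
        return cipher.doFinal(data);
    }
}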
While a central server would probably be the easiest way, if you want to avoid it you could use a distributed system, wherein everybody playing the game is a storage host for other players' decks (not for himself or his opponent, but anybody else). I believe LimeWire and BitTorrent both work this way, so you might gain some ideas by studying their sources.
Maybe the players could exchange hashes of their piles at the start of the game.
Then when the game is complete, the players exchange the actual composition their piles had at the beginning of the game. Now each player's client can check that it matches the hash received earlier, and then check that all the moves that were made would work with those decks.
The only problem is how the decks are shuffled initially. You would need to make sure a player cannot just start his deck in any order he chooses.
Maybe have the players generate their initial deck order by getting randomness from some specific source, but in a way that the other player cannot work out the deck order until the end of the game, while still being able to check that the player didn't fiddle their deck. I'm not sure how you could accomplish that.
Edit:
Another idea: maybe instead of generating the random decks at the beginning of the game, they could be generated as the game goes on.
Each time a player needs to take a new card from the pile, their opponent sends them a random seed that is used to choose what the next card will be.
That way the player has no way of knowing in what order the cards will follow in their deck.
I'm not sure how you would stop the other player from working out which card they are picking for their opponent. If the seed were used to select a card from some randomly ordered list of the cards that the opponent holds a hash of for verification, they could check all the possible combinations of cards, find out which one matched the hash, and thereby dictate which card the other player draws by sending a seed that selects it.
this is an adaptation of Acorn's approach:
setup:
each player has their deck (totally ordered set(s) of cards) ... those decks are public and are in natural order
each player needs an asymmetric (signing) keypair; public keys are exchanged
now to the randomness problem:
since a player must not be able to see in advance what cards they will draw, the opponent will be the source of our randomness:
when we need some random value, we ask our opponent ... random values are stored along with the moves to be checked later
since our opponent must not be able to manipulate the order of our cards, we take that received random value and sign it with our private key ... the value of that signature will be our RNG seed (unpredictable for the opponent), and later the opponent can verify the signature against the random number he/she generated (so we store the signatures too, and exchange them after the game)
since we can do this random number exchange every round, the random values are not known to the player in advance -> no peeking at the order of our stack
and since we now have a seeded RNG, we can get random values derived from that "shared" random value ...
that way we can draw "random" (completely deterministic, with repeatable, verifiable results) cards from our deck ... the deck/pile won't need to be shuffled, since we can get random positions from our RNG
since we have all the exchanged random values, the signatures, the initial decks (in total order), and all moves, we can replay the whole game afterwards and check for irregularities
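For illustration, the sign-the-opponent's-nonce step might look like this in Java (RSA PKCS#1 signatures are deterministic, which is what makes the seed reproducible and verifiable; the seed-to-position derivation via java.util.Random is an arbitrary illustrative choice):

import java.nio.ByteBuffer;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Signature;
import java.util.Random;

public class SeededDraw {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair keys = kpg.generateKeyPair();

        byte[] opponentNonce = new byte[16];
        new SecureRandom().nextBytes(opponentNonce); // in the game: received from the opponent

        // Sign the opponent's random value; the signature is our unpredictable seed.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(opponentNonce);
        byte[] sig = signer.sign(); // stored, and exchanged after the game

        // Derive a deterministic, repeatable card position from the signature.
        Random rng = new Random(ByteBuffer.wrap(sig).getLong());
        int deckSize = 40; // illustrative
        System.out.println("draw card at position " + rng.nextInt(deckSize));

        // Later, the opponent verifies the signature against the nonce they sent.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(opponentNonce);
        System.out.println("seed verified: " + verifier.verify(sig));
    }
}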