I was watching the state of the WETH-DAI pool on Ropsten, which was (10435607102899899961933, 422393388012675303130025) with its 0.3% fee.
So its market-maker constant k = 4407931440163027853157049123245748733249338325.
Then I sent 1000000000000000 WETH to it, which changed the pool state to (10435617102899899961933, 422388295347420373432664) and k to 4407882518992274291481223238881530978138779512.
Can anyone tell me why k does not keep its value unchanged, as the definition implies?
The so-called "constant-product" formula is deceiving, because the k value actually changes on every transaction due to Uniswap's 0.3% fee. It is only a constant product if no fee is involved.
The fee is deposited as liquidity in the pool, and adding liquidity to the pool changes the k value, which is what you're seeing.
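To illustrate the mechanism, here is a minimal Java sketch of a Uniswap-V2-style swap. It is not the pool contract itself, just the standard getAmountOut formula with the 997/1000 fee factor; the point is that the 0.3% kept in the reserves makes the product grow slightly with each trade (your exact on-chain numbers will also reflect any other activity in the pool).

import java.math.BigInteger;

public class ConstantProductDemo {

    // Uniswap V2 swap math: 0.3% of the input stays in the pool as a fee,
    // so reserveIn * reserveOut is slightly larger after the trade.
    static BigInteger getAmountOut(BigInteger amountIn, BigInteger reserveIn, BigInteger reserveOut) {
        BigInteger amountInWithFee = amountIn.multiply(BigInteger.valueOf(997));
        BigInteger numerator = amountInWithFee.multiply(reserveOut);
        BigInteger denominator = reserveIn.multiply(BigInteger.valueOf(1000)).add(amountInWithFee);
        return numerator.divide(denominator);
    }

    public static void main(String[] args) {
        BigInteger wethReserve = new BigInteger("10435607102899899961933");
        BigInteger daiReserve = new BigInteger("422393388012675303130025");
        BigInteger wethIn = new BigInteger("1000000000000000");

        BigInteger kBefore = wethReserve.multiply(daiReserve);
        BigInteger daiOut = getAmountOut(wethIn, wethReserve, daiReserve);
        BigInteger kAfter = wethReserve.add(wethIn).multiply(daiReserve.subtract(daiOut));

        System.out.println("k before: " + kBefore);
        System.out.println("k after:  " + kAfter); // a bit larger, because the fee stayed in the reserves
    }
}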
I found this article useful in explaining how the k value is affected in different scenarios.
I want to distribute a fee from every transaction to a mapping(address=>uint) of 3000 addresses, evenly.
That is an issue because the function would run out of gas, so I've heard that instead of a push approach, a pull approach should be used.
So instead, pool all fees together under one single uint and then let each of the 3000 addresses pull its own share.
That brings new issues, because the pooled uint is forever increasing and decreasing (people withdraw their shares while new fees keep coming in from transactions), so how can I make sure each address can only take its share once, while the distribution stays continuous and even?
Any direction on how to solve these distribution issues would be greatly appreciated, because math is far from my strongest asset.
Solved it by keeping a mapping that stores, for every user, what the total of incoming deposits was the last time they claimed their share; deduct that from the current total of incoming deposits and pay out their percentage cut of the difference, if any.
Pseudo-Solidity (excluding proper checks & interactions):
uint256 public totalRevShare;                          // increased by a payable onDeposit(): totalRevShare += msg.value
mapping(address => uint256) public lastTotalRevShare;  // running total at the time of each user's last claim

// ...in the withdraw function
uint256 unclaimedScope = totalRevShare - lastTotalRevShare[msg.sender];
lastTotalRevShare[msg.sender] = totalRevShare;
uint256 _userUnclaimedCut = unclaimedScope / totalReceivers;
...
msg.sender.call{value: _userUnclaimedCut}("");
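In case it helps, here is a hypothetical plain-Java model of the same bookkeeping (class and method names are made up; integer division leaves the rounding dust in the pool):

import java.util.HashMap;
import java.util.Map;

public class RevShareLedger {
    private final long totalReceivers;                                    // e.g. 3000
    private long totalRevShare;                                           // sum of all fees ever received
    private final Map<String, Long> lastTotalRevShare = new HashMap<>();  // snapshot at each user's last claim

    public RevShareLedger(long totalReceivers) {
        this.totalReceivers = totalReceivers;
    }

    public void deposit(long amount) {
        totalRevShare += amount;          // fees only ever accumulate in one counter
    }

    public long claim(String user) {
        long unclaimedScope = totalRevShare - lastTotalRevShare.getOrDefault(user, 0L);
        lastTotalRevShare.put(user, totalRevShare);   // the same scope cannot be claimed twice
        return unclaimedScope / totalReceivers;       // even cut of everything received since the last claim
    }
}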
Hope this helps you move from push to pull functionality.
I have the TTL set to 60 minutes. For the past month or so it was working fine: records were deleted within less than 20 minutes of TTL expiration. But lately (since this week) some items (not all) take up to 3 hours to be deleted after the TTL has expired.
I understand it can take a maximum of 48 hours, but my customer is asking for proof of, or justification for, the current TTL behavior. Just saying that IO workload influences TTL is not enough.
What metric can I use or look at to provide concrete evidence of the current TTL behavior? Is there any benchmark, e.g. an IO load of N will cause N hours of delay?
Unfortunately there is no publicly available way to determine the time it will take for TTL to delete your items. The 48 hours isn't guaranteed either.
You'll find anecdotal evidence online regarding the behavior of TTL under different scenarios (e.g. large tables vs small, other processing happening in your account, etc.), but no official guidance that will answer the question your client is asking.
If your client is unsatisfied with the ambiguity around TTL, perhaps they should explore other solutions, such as implementing the deletes in your application logic.
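As a rough illustration of that last option (assuming the AWS SDK for Java v1, a made-up table name, a numeric TTL attribute called expiresAt, and a partition key called pk; pagination and error handling omitted), an application-level sweeper could look something like this:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;

import java.util.Collections;
import java.util.Map;

public class ManualTtlSweeper {
    public static void main(String[] args) {
        AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.defaultClient();
        String now = Long.toString(System.currentTimeMillis() / 1000L);

        // Find items whose TTL attribute is already in the past.
        ScanRequest scan = new ScanRequest()
                .withTableName("my-table")                    // hypothetical table name
                .withFilterExpression("expiresAt <= :now")    // hypothetical TTL attribute
                .withExpressionAttributeValues(
                        Collections.singletonMap(":now", new AttributeValue().withN(now)))
                .withProjectionExpression("pk");              // hypothetical partition key

        // Delete them ourselves instead of waiting for the TTL background process.
        for (Map<String, AttributeValue> item : ddb.scan(scan).getItems()) {
            ddb.deleteItem("my-table", Collections.singletonMap("pk", item.get("pk")));
        }
    }
}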
I need to find the peak read capacity units consumed in the last 20 seconds in one of my DynamoDB tables. I need to find this programmatically in Java and set an auto-scaling action based on the usage.
Can you please share a sample Java program to find the peak read capacity units consumed in the last 20 seconds for a particular DynamoDB table?
Note: there are unusual spikes in the DynamoDB requests on the database, hence the need for dynamic auto-scaling.
I've tried this:
result = DYNAMODB_CLIENT.describeTable(recomtableName);
readCapacityUnits = result.getTable()
.getProvisionedThroughput().getReadCapacityUnits();
but this gives the provisioned capacity, whereas I need the consumed capacity over the last 20 seconds.
You could use the CloudWatch API's getMetricStatistics method to get a reading for the capacity metric you require. A hint about the kinds of parameters you need to set can be found here.
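For example, with the AWS SDK for Java v1 CloudWatch client it could look roughly like this (the table name is a placeholder; note that, as far as I know, DynamoDB publishes this metric at one-minute granularity, so you can't get a true 20-second peak out of CloudWatch):

import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.Datapoint;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.GetMetricStatisticsRequest;
import com.amazonaws.services.cloudwatch.model.GetMetricStatisticsResult;

import java.util.Date;

public class ConsumedCapacityReader {
    public static void main(String[] args) {
        AmazonCloudWatch cloudWatch = AmazonCloudWatchClientBuilder.defaultClient();
        Date end = new Date();
        Date start = new Date(end.getTime() - 5 * 60 * 1000);   // look back a few minutes

        GetMetricStatisticsRequest request = new GetMetricStatisticsRequest()
                .withNamespace("AWS/DynamoDB")
                .withMetricName("ConsumedReadCapacityUnits")
                .withDimensions(new Dimension().withName("TableName").withValue("my-table")) // hypothetical table name
                .withStartTime(start)
                .withEndTime(end)
                .withPeriod(60)                                  // one minute is the smallest period for this metric
                .withStatistics("Sum", "Maximum");

        GetMetricStatisticsResult result = cloudWatch.getMetricStatistics(request);
        for (Datapoint dp : result.getDatapoints()) {
            // Sum / 60 ~ average consumed read units per second within that minute
            System.out.println(dp.getTimestamp() + " sum=" + dp.getSum() + " max=" + dp.getMaximum());
        }
    }
}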
For that you have to use CloudWatch.
GetMetricStatisticsRequest metricStatisticsRequest = new GetMetricStatisticsRequest()
metricStatisticsRequest.setStartTime(startDate)
metricStatisticsRequest.setEndTime(endDate)
metricStatisticsRequest.setNamespace("AWS/DynamoDB")
metricStatisticsRequest.setMetricName('ConsumedWriteCapacityUnits')
metricStatisticsRequest.setPeriod(60)
metricStatisticsRequest.setStatistics([
    'SampleCount',
    'Average',
    'Sum',
    'Minimum',
    'Maximum'
])

List<Dimension> dimensions = []
Dimension dimension = new Dimension()
dimension.setName('TableName')
dimension.setValue(dynamoTableHelperService.campaignPkToTableName(campaignPk)) // your own table-name lookup
dimensions << dimension
metricStatisticsRequest.setDimensions(dimensions)

client.getMetricStatistics(metricStatisticsRequest)
But I bet you'll get results older than 5 minutes.
Actually, the current off-the-shelf auto-scaling uses CloudWatch, and that has a drawback which is unacceptable for some applications.
When a spike load hits your table, the table does not have enough capacity to respond with. Reserving some overhead is not enough, and the table starts throttling. If records are kept in memory while waiting for the table to respond, that can simply blow the memory up. CloudWatch, on the other hand, reacts only after some time, often when the spike is already gone; based on our tests it was at least 5 minutes, and it raised capacity gradually even when it was needed straight up at the max.
Long story short: we created a custom solution with our own speedometers. It counts whatever it has to count and changes the table's capacity accordingly. There is still a delay, because:
the app itself takes a bit of time to figure out what to do, and
the DynamoDB table takes ~30 seconds to be updated with the new capacity details.
On top of that we also have a throttling detector: if a write/read request gets throttled, we immediately raise the capacity accordingly. Sometimes the capacity level looks all right, but requests still throttle because of a hot-key issue.
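A rough Java sketch of that "react to throttling immediately" idea (class, method, and capacity values are made up; it assumes the AWS SDK for Java v1 updateTable call):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;
import com.amazonaws.services.dynamodbv2.model.UpdateTableRequest;

public class ThrottleReactor {
    private final AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.defaultClient();

    // Run a read/write action; if DynamoDB throttles it, raise the provisioned
    // capacity right away instead of waiting for CloudWatch-driven auto scaling.
    public void runWithThrottleGuard(String tableName, Runnable action,
                                     long newReadCapacity, long newWriteCapacity) {
        try {
            action.run();
        } catch (ProvisionedThroughputExceededException e) {
            ddb.updateTable(new UpdateTableRequest()
                    .withTableName(tableName)
                    .withProvisionedThroughput(
                            new ProvisionedThroughput(newReadCapacity, newWriteCapacity)));
            // The table still takes ~30 seconds to pick up the new capacity,
            // and a hot partition key can keep throttling even with plenty of capacity.
        }
    }
}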
Does the average data and instruction access time of the CPU depend on the execution time of an instruction?
For example, if the miss ratio is 0.1, 50% of instructions need memory access, the L1 access time is 3 clock cycles, the miss penalty is 20 cycles, and instructions execute in 1 cycle, what is the average memory access time?
I'm assuming you're talking about a CISC architecture where compute instructions can have memory references. If you have a sequence of ADDs that access memory, then memory requests will come more often than in a sequence of the same number of DIVs, because the DIVs take longer. This won't affect the time of a memory access; only locality of reference affects the average memory access time.
If you're talking about a RISC architecture, then we have separate memory access instructions. If memory instructions have a miss rate of 10%, then the average access latency will be the L1 access time (3 cycles for hit or miss) plus the L1 miss penalty times the miss rate (0.1 * 20), for an average access time of 5 cycles.
If half of your instructions are memory instructions, then that would factor into clocks per instruction (CPI), which would depend on miss rate and also dependency stalls. CPI will also be affected by the extent to which memory access time can overlap computation, which would be the case in an out-of-order processor.
I can't answer your question much better because you're not being very specific. To do well in a computer architecture class, you will have to learn how to compute average access times and CPI.
Well, I'll go ahead and answer your question, but then, please read my comments below to put things into a modern perspective:
Time = Cycles * (1/Clock_Speed) [ unit check: seconds = clocks * seconds/clocks ]
So, to get the exact time you'll need to know the clock speed of your machine; for now, my answer will be in terms of cycles.
Avg_mem_access_time_in_cycles = cache_hit_time + miss_rate*miss_penalty
= 3 + 0.1*20
= 5 cycles
Remember, here I'm assuming your miss rate of 0.1 means 10% of cache accesses miss the cache. If you mean 10% of instructions, then you need to halve that (because only 50% of instructions are memory ops).
Now, if you want the average CPI (cycles per instr)
CPI = instr% * Avg_mem_access_time + instr% * Avg_instr_access_time
= 0.5*5 + 0.5*1 = 3 cycles per instruction
Finally, if you want the average instr execution time, you need to multiply 3 by the reciprocal of the frequency (clock speed) of your machine.
Comments:
Comp. arch. classes basically teach you a very simplified version of what the hardware is doing. Current architectures are much, much more complex, and such a model (i.e. the equations above) is very unrealistic. For one thing, the access time to the various levels of cache can be variable (depending on where the responding cache physically sits on a multi- or many-core CPU); the access time to memory (typically hundreds of cycles) is also variable, depending on contention for resources (e.g. bandwidth), etc. Finally, in modern CPUs instructions typically execute in parallel (ILP), depending on the width of the processor pipeline. This means that adding up instruction execution latencies is basically wrong (unless your processor is a single-issue processor that only executes one instruction at a time and blocks other instructions on miss events such as cache misses and branch mispredicts). However, for educational purposes and for "average" results, the equations are okay.
One more thing: if you have a multi-level cache hierarchy, then the miss_penalty of the level-1 cache is as follows:
L1$ miss penalty = L2 access time + L2_miss_rate*L2_miss_penalty
If you have an L3 cache, you do a similar thing for L2_miss_penalty, and so on.
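To make that concrete, here is a tiny Java calculation for a two-level hierarchy; the L2 access time, L2 local miss rate, and memory penalty are assumed numbers, just for illustration:

public class AmatExample {
    public static void main(String[] args) {
        double l1HitTime = 3;        // cycles (from the question)
        double l1MissRate = 0.1;     // from the question
        double l2AccessTime = 12;    // cycles, assumed
        double l2MissRate = 0.05;    // local miss rate of L2, assumed
        double memoryPenalty = 100;  // cycles to main memory, assumed

        // L1 miss penalty = L2 access time + L2 miss rate * L2 miss penalty
        double l1MissPenalty = l2AccessTime + l2MissRate * memoryPenalty;   // 12 + 0.05*100 = 17
        // AMAT = L1 hit time + L1 miss rate * L1 miss penalty
        double amat = l1HitTime + l1MissRate * l1MissPenalty;               // 3 + 0.1*17 = 4.7

        System.out.println("L1 miss penalty = " + l1MissPenalty + " cycles");
        System.out.println("Average memory access time = " + amat + " cycles");
    }
}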
I've implemented a sensor scheduling problem using OptaPlanner 6.2 that has 1 hard constraint, 1 medium constraint, and 1 soft constraint. The trouble I'm having is that the solver satisfies only some of the hard constraints after 30 seconds or so, and then makes very little progress on the remaining ones in the additional minutes before termination. I don't think the problem is over-constrained; I also don't know how to help the local search process significantly increase the score.
My problem is a scheduling one, where I precalculate, before solving, all possible times (intervals) at which a sensor can observe objects. I've modeled the problem as follows:
Hard constraint - no intervals can overlap
when
$s1: A( interval!=null,$id: id, $doy : interval.doy, $interval: interval, $sensor: interval.getSensor())
exists A( id > $id, interval!=null, $interval2: interval, $interval2.getSensor() == $sensor, $interval2.getDoy() == $doy, $interval.getStartSlot() <= $interval2.getEndSlot(), $interval.getEndSlot() >= $interval2.getStartSlot() )
then
scoreHolder.addHardConstraintMatch(kcontext,-10000);
Medium constraint - every assignment should have an Interval
when
A(interval==null)
then
scoreHolder.addMediumConstraintMatch(kcontext,-100);
Soft constraint - maximize a property/value in the Interval class
when
$s1: A( interval!=null)
then
scoreHolder.addSoftConstraintMatch(kcontext, -1 * $s1.getInterval().getSomeProperty());
A: the planning entity class; each instance is an assignment for a particular object (i.e. it has a member objectid that corresponds to one in the Interval class)
Interval: planning variable class, all possible intervals (start time, stop time) for the sensor and objects
In A, I restrict the Interval instances to just those for the object associated with that assignment. For my test case, there are 40000 or so Intervals, but only dozens for each object. There are about 1100 instances of A (so dozens of possible Intervals for each one).
@PlanningVariable(valueRangeProviderRefs = {"intervalRange"}, strengthComparatorClass = IntervalStrengthComparator.class, nullable = true)
public Interval getInterval() {
return interval;
}
@ValueRangeProvider(id = "intervalRange")
public List<Interval> getPossibleIntervalList() {
return task.getAvailableIntervals();
}
In my solution class:
//have tried commenting this out since the overall interval list does not apply to all A
@ValueRangeProvider(id = "intervalRangeAll")
public List getIntervalList() {
return intervals;
}
@PlanningEntityCollectionProperty
public List<A> getAList() {
return AList;
}
I've reviewed the documentation and tried a lot of things. My problem is somewhat of a cross between the nurserostering and course scheduling examples, which I have looked at. I am using the FIRST_FIT_DECREASING construction heuristic.
What I have tried:
Turning on and off nullable in the planning variable annotation for A.getInterval()
Late acceptance, Tabu, both.
Benchmarking. I wasn't seeing any problems and average
Adding an IntervalChangeFactory as a moveListFactory. Restricting the custom ChangeMove to whether the interval can be accepted or not (i.e. enforcing or not the hard constraint in the IntervalChangeMove.isDoable).
Here is an example run, where most of the hard constraints are not satisfied, but the medium ones are:
[main] INFO org.optaplanner.core.impl.solver.DefaultSolver - Solving started: time spent (202), best score (0hard/-112500medium/0soft), environment mode (REPRODUCIBLE), random (WELL44497B with seed 987654321).
[main] INFO org.optaplanner.core.impl.constructionheuristic.DefaultConstructionHeuristicPhase - Construction Heuristic phase (0) ended: step total (1125), time spent (2296), best score (-9100000hard/0medium/-72608soft).
[main] INFO org.optaplanner.core.impl.localsearch.DefaultLocalSearchPhase - Local Search phase (1) ended: step total (92507), time spent (30000), best score (-8850000hard/0medium/-74721soft).
[main] INFO org.optaplanner.core.impl.solver.DefaultSolver - Solving ended: time spent (30000), best score (-8850000hard/0medium/-74721soft), average calculate count per second (5643), environment mode (REPRODUCIBLE).
So what I don't understand is why the hard constraints can't be dealt with by the search process. Also, my calculate count per second has dropped below 10000 due to all the tinkering I've done.
If it's not due to a score trap (see the docs; this is the first thing to fix, e.g. penalize each overlap proportionally to its size instead of a flat -10000, so the solver can distinguish nearly-feasible solutions from badly infeasible ones), it's probably because it gets stuck in a local optimum and there are no moves that go from one feasible solution to another feasible solution, except those that don't really change much. There are several ways to deal with that:
Add coarse-grained moves (but still leave the fine-grained moves such as ChangeMove in!). You can add generic coarse-grained moves (such as the pillar moves) or a custom Move. Don't start making smarter selectors that try to only select feasible moves; that's a bad idea (as it will kill your average calculate count (ACC) or limit your search space). Just mix in coarse-grained moves (= diversification) to complement the fine-grained moves (= intensification).
A better domain model might help too. The Project Job Scheduling and Cheap Time scheduling examples have a domain model which naturally leads to a smaller search space while still allowing all feasible solutions.
Increase the Tabu list size or the LA size, or include a hard constraint weight in the SA starting temperature. But I presume you've tried that with the benchmarker.
Turn on TRACE logging to see optaplanner's decision making. Only look at the part after it reaches the local optimum.
In the future, I'll also add Ruin&Recreate moves, which will be far less code than custom moves or a better domain model (but it will be less efficient).