Realtime networking for games, interpolation issue

I've been putting some thought lately into networking for real-time, fast-paced games. It seems that if you interpolate correctly, then the hit calculation is done on a state that happened more than (p1.interp + p2.interp + p1.ping/2 + p2.ping/2) ago from player2's (p2's) point of view, when the shooter is player1 (p1).
The packet first goes to the server, which takes p1.ping/2; the server then runs the calculation against the game state from p1.interp + p1.ping/2 ago. The result of that calculation is sent to player2 (another p2.ping/2), who only displays it p2.interp later. It adds up even further because of the time it takes for all three sides to process things.
player1                  server                  player2
   |                       |                       |
 1_|.............. actual game state ..............|
   |\                      |                       |
 2_| interp                |                       |
   |/                      |                       |
 3_|------ ping/2 ------>4_|                       |
   |                       |------ ping/2 ------>5_|
   |                       |                       |\
   |                       |                    6_ | interp
   |                       |                       |/
   |                       |                    7_ |
Excuse my poor ASCII art skills, but I couldn't resist.
Assuming 50ms ping and 100ms interpolation for both players, it adds up to more than 250ms. This means that, from player2's point of view, a hit against him is resolved on a state roughly 250ms old, while (assuming client prediction) he sees himself in real time. Is my logic flawed, or is this just not a big deal?
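To make the arithmetic concrete, here is a minimal sketch (the function name is just my shorthand; it only adds up the terms from the formula above):
# Rough end-to-end delay from the state player1 shot at to the moment
# player2 sees the result, using the p1/p2 notation from above.
def hit_feedback_delay(p1_ping, p1_interp, p2_ping, p2_interp):
    # player1 aims at remote state that is p1_interp old,
    # the shot takes p1_ping / 2 to reach the server,
    # the result takes p2_ping / 2 to reach player2,
    # and player2's interpolation buffer adds another p2_interp on top.
    return p1_interp + p1_ping / 2 + p2_ping / 2 + p2_interp

print(hit_feedback_delay(50, 100, 50, 100))  # 250.0 ms, before any processing time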


Related

Azure Application Insights KQL - How to accumulate values?

I would like to accumulate values by timestamp in an ever-growing manner.
The following query
ContainerLog
| extend logsize = string_size(LogEntry)
| summarize sum(logsize) by bin(TimeGenerated, 1m)
| render timechart
generates a graph that goes up and down.
I would like to add the previous value to the current value, generating an always-growing graph that represents the total log volume up to that moment.
// Sample data generation. Not part of the solution.
let ContainerLog = materialize(range i from 1 to 15 step 1 | extend TimeGenerated = ago(7d*rand()), logsize = rand(1000));
// Solution starts here.
ContainerLog
| summarize sum(logsize) by bin(TimeGenerated, 1m)
| order by TimeGenerated asc
| extend accumulated_sum_logsize = row_cumsum(sum_logsize)
| render timechart
Fiddle
P.S.
I kept sum_logsize for learning purposes; in your scenario it can be removed.
Note that the order by TimeGenerated asc step also serializes the rows, which row_cumsum requires.

Optimize table in MariaDB for large ibd file

We have a WordPress website using MariaDB, where the wp_options table keeps growing due to a rogue plugin writing thousands of records to it. The issue has not been resolved by the plugin maintainer yet, and I keep having to remove these 'transient' (temp) records manually via DELETE statements. The problem is that the ibd file keeps growing and is now 35GB in size. Once this is resolved, I plan to run OPTIMIZE TABLE on the table to clean up. Is that the best approach to reclaim all that space? I assume I'll need as much as 40GB of free space to do this; how long should the OPTIMIZE TABLE take? Since this table is used quite a bit by WordPress, it seems best to take the website offline while optimizing, to avoid locks. I'm looking for the quickest way to resolve this.
At least, I think these rogue records are the cause of the table growing. Below is a list of the top 10 types of entries in the table:
MariaDB [wmnf_www]> SELECT substr(`wp_options`.`option_name`, 1, 18) AS `option_name`, count(`wp_options`.`option_value`) AS `cnt` FROM `wp_options` GROUP BY substr(`wp_options`.`option_name`, 1, 18) ORDER BY `cnt` DESC LIMIT 10;
+--------------------+-------+
| option_name        | cnt   |
+--------------------+-------+
| _transient_timeout | 21186 |
| _transient_ee_ssn_ | 12628 |
| _transient_jpp_li_ |   222 |
| _transient_externa |   125 |
| _transient_wc_rela |    63 |
| jpsq_sync-14716436 |    50 |
| wpmf_current_folde |    35 |
| _wc_session_expire |    34 |
| jpsq_sync-14716465 |    29 |
| jpsq_sync-14716417 |    25 |
+--------------------+-------+
10 rows in set (0.17 sec)
The _transient_ee_ssn_ and _transient_timeout_ee_ entries are the issue and keep growing; they are the only ones in the set above that have grown since last night, and they were initially found at 800K records. I keep removing the records, as the plugin maintainer said it was safe to do. But is this the cause of the ibd file growing?
---UPDATE---
Oddly enough, the issue is not resolved and transient records keep getting generated by the thousands, but this ibd index file has stopped growing for the moment. After steadily growing over the weekend from 20GB to now 39GB, it has not grown in a couple of hours. Perhaps there's a limit or this file was growing for other reasons?
I think it would be a better solution to recreate the table using Percona's pt-online-schema-change tool. This recreates the table, copies all the data into the new table, and then drops the old one, which avoids locking the table for a long time.
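A rough sketch of the kind of invocation meant here (assuming the wmnf_www database name from the prompt above; do a --dry-run first and check the tool's documentation before running with --execute):
pt-online-schema-change --alter "ENGINE=InnoDB" D=wmnf_www,t=wp_options --execute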

Use one physical monitor as two

I have a big monitor on which I would like to simulate more than one monitor,
i.e.:
------------
|          |
|    M1    |
------------
should be treated as:
-------------
| M1   | M2 |
|      |    |
-------------
I'm running AwesomeWM version 3.5.9 on X11 1.18.3. I don't care whether I achieve this behaviour by changing the settings of my window manager or of the X server, whichever way is easiest.
Cheers
Edit: June 2019. A new feature currently under review for AwesomeWM v4.4 will add a :split() method to the screen API. It does what is done below, but better.
I just answered this question here:
Let awesome wm use only a part of the screen
This is the code:
-- Shrink the real screen to the left half of the display...
local wdh = screen[1].geometry.width/2
local x, y = screen[1].geometry.x, screen[1].geometry.y
screen[1]:fake_resize(x, y, wdh, screen[1].geometry.height)
-- ...and add a fake screen covering the right half.
screen.fake_add(x+wdh+1, y, wdh, screen[1].geometry.height)
I will add this to the documentation soon.

React native firebase 'average' query?

I come from a PHP/MySQL background, and JSON data and Firebase queries are still pretty new to me. Now I am working in React Native, I have a collection of data, and one of the keys stores an integer. I want to get the average of all the integers. Where do I even start?
I have used Firebase's snapshot.numChildren function before, so I am getting a little more familiar in this JSON world, but any sort of help would be appreciated. Thanks!
So, you know that you can return all of the data and determine the average. I'm guessing this is for a large set of data, where it would be ideal not to return the entire node every time you would like to retrieve and average it.
It depends on what this data is and how it's being updated, but I think one option is to simply have a separate node that is updated every time the collection is added to or is changed.
Here is some really rough pseudo code. For example, if your database looks like this:
database
|
+--collection
|  |
|  +--item_one (probably a uid like -k2jduwi5j5j5)
|  |  |
|  |  +--number: 90
|  |
|  +--item_two
|     |
|     +--number: 70
|
+--collection_metadata
   |
   +--average: 80
   |
   +--number_of_items: 2
Then when a new item is added, you run a metadata calculation:
var numerator = average * number_of_items + newItem.number;
number_of_items = number_of_items + 1;   // this is your new number_of_items
average = numerator / number_of_items;   // this is your new average
Then when an item is updated, you run a metadata calculation:
var numerator = average * number_of_items - changedItem.oldNumber + changedItem.newNumber;
average = numerator / number_of_items;   // this is your new average
Now when you want this data, you always have this data on hand.

Downloading the entire Bitcoin transaction chain with R

I'm pretty new here, so thank you in advance for the help. I'm trying to do some analysis of the entire Bitcoin transaction chain. In order to do that, I'm trying to create two tables:
1) A full list of all Bitcoin addresses and their balance, i.e.,:
| ID | Address    | Balance |
-----------------------------
| 1  | 7d4kExk... | 32      |
| 2  | 9Eckjes... | 0       |
| .  | ...        | ...     |
2) A record of the number of transactions that have ever occurred between any two addresses in the Bitcoin network
| ID | Sender     | Receiver   | Transactions |
-----------------------------------------------
| 1  | 7d4kExk... | klDk39D... | 2            |
| 2  | 9Eckjes... | 7d4kExk... | 3            |
| .  | ...        | ...        | ..           |
To do this I've written a (probably very inefficient) script in R that loops through every block and scrapes blockexplorer.com to compile the tables. I've tried running it a couple of times so far, but I'm running into two main issues:
1 - It's very slow... I can imagine it's going to take at least a week at the rate that it's going
2 - I haven't been able to run it for more than a day or two without it hanging. It seems to just freeze RStudio.
I'd really appreciate your help in two areas:
1 - Is there a better way to do this in R to make the code run significantly faster?
2 - Should I stop using R altogether for this and try a different approach?
Thanks in advance for the help! Please see below for the relevant chunks of code I'm using:
url_start <- "http://blockexplorer.com/b/"
url_end <- ""

readUrl <- function(url) {
  table <- try(readHTMLTable(url)[[1]])
  if (inherits(table, "try-error")) {
    message(paste("URL does not seem to exist:", url))
    errors <- errors + 1
    return(NA)
  } else {
    processed <- processed + 1
    return(table)
  }
}

block_loop <- function(end, start = 0) {
  ...
  addr_row <- 1   # starting row to fill out table
  links_row <- 1  # starting row to fill out table

  for (i in start:end) {
    print(paste0("Reading block: ", i))
    url <- paste(url_start, i, url_end, sep = "")
    table <- readUrl(url)
    if (is.na(table)) { next }
    ....
There are very close to 250,000 blocks on the site you mentioned (at least, 260,000 gives a 404). Curling from my connection (1 MB/s down) takes about half a second per page on average. Try it yourself from the command line (just copy and paste) to see what you get:
curl -s -w "%{time_total}\n" -o /dev/null http://blockexplorer.com/b/220000
I'll assume your requests are about as fast as mine. Half a second times 250,000 is 125,000 seconds, or a day and a half. This is the absolute best you can get using any methods because you have to request the page.
Now, after doing an install.packages("XML"), I saw that running readHTMLTable("http://blockexplorer.com/b/220000") takes about five seconds on average. Five seconds times 250,000 is 1.25 million seconds, which is about two weeks. So your estimates were correct; this is really, really slow. For reference, I'm running a 2011 MacBook Pro with a 2.2 GHz Intel Core i7 and 8GB of memory (1333 MHz).
Next, table merges in R are quite slow. Assuming about 100 records per block page (which seems about average), you'll have 25 million rows, and some of these rows contain a kilobyte of data. Assuming you can fit this table in memory, concatenating tables will be a problem.
The solution to these problems that I'm most familiar with is to use Python instead of R, BeautifulSoup4 instead of readHTMLTable, and Pandas to replace R's dataframe. BeautifulSoup is fast (install lxml, a parser written in C) and easy to use, and Pandas is very quick too. Its dataframe class is modeled after R's, so you probably can work with it just fine. If you need something to request URLs and return the HTML for BeautifulSoup to parse, I'd suggest Requests. It's lean and simple, and the documentation is good. All of these are pip installable.
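For what it's worth, here is a minimal sketch of that stack (requests to fetch, BeautifulSoup4 with lxml to parse, pandas for the tables). The table and column handling is an assumption about the page layout, so treat it as a starting point rather than a drop-in replacement:
# Sketch only: fetch one block page and turn its first HTML table into a DataFrame.
import requests
import pandas as pd
from bs4 import BeautifulSoup

def read_block(block_number):
    url = "http://blockexplorer.com/b/{}".format(block_number)
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "lxml")  # lxml is the fast C parser
    table = soup.find("table")
    if table is None:
        return None  # page exists but has no table we recognise
    rows = [[cell.get_text(strip=True) for cell in tr.find_all(["td", "th"])]
            for tr in table.find_all("tr")]
    if not rows:
        return None
    # Assumption: the first row is the header, the rest are data rows.
    return pd.DataFrame(rows[1:], columns=rows[0])

# Collect a small test range, then concatenate once instead of growing a table in a loop.
frames = [read_block(i) for i in range(0, 100)]
blocks = pd.concat([f for f in frames if f is not None], ignore_index=True)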
If you still run into problems the only thing I can think of is to get maybe 1% of the data in memory at a time, statistically reduce it, and move on to the next 1%. If you're on a machine similar to mine, you might not have another option.
