I have a data.frame of moderate size, and I tested storing it on and retrieving it from a network drive using both RDS (uncompressed) and the feather format. The results show that while write_feather is much faster than saveRDS, read_feather is much slower than readRDS.
Question(s): Does this have something to do with my workplace's particular network configuration (i.e. is it just me), or with how read_feather and readRDS inherently handle remote files? Should I stick to RDS for now?
> print(object.size(impdata),unit="auto")
364.4 Mb
## SAVING
> system.time(feather::write_feather(impData,path="M:/waangData/test.feather"))
user system elapsed
0.52 0.16 4.80
> system.time(saveRDS(impData,file="M:/waangData/Data4predictImp.rds",compress=F))
user system elapsed
4.23 2.35 28.61
## READING
> system.time({t2=feather::read_feather("M:/waangData/test.feather")})
user system elapsed
0.59 1.54 134.39
> system.time({t=readRDS("M:/waangData/Data4predictImp.rds")})
user system elapsed
2.36 0.61 19.59
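To separate the cost of the network transfer from the cost of parsing, my next step would be something like the sketch below: copy the file to a local temporary path first and time the two steps on their own (the temp path and copy step are just for illustration).

## Rough diagnostic sketch: is the time spent on the network or in read_feather?
local_copy <- tempfile(fileext = ".feather")

## Step 1: plain file copy, measures the network drive only
system.time(file.copy("M:/waangData/test.feather", local_copy))

## Step 2: parse from local disk, measures feather itself
system.time(t2 <- feather::read_feather(local_copy))

unlink(local_copy)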
I have a recently updated MariaDB installation that suddenly hangs on an every-minute cron job after my computer resumes from sleep. The configuration worked fine before the latest apt upgrade.
The process list shows an INSERT query from a batch operation of roughly 100 inserts.
I tried cancelling all running scripts before sending the computer to sleep, but at the next wakeup it still fails. I have to kill -9 the database just to get it to restart.
Id | User   | Host      | db     | Command | Time | State  | Info                | Progress
52 | xxxxxx | localhost | xxxxxx | Execute | 4    | Update | INSERT INTO xxxxxxx |
and it never finishes. This also happens if I wait ~30 seconds after wakeup and start the script manually, so I suspect MariaDB doesn't handle the sleep/resume transition well.
Is this a known bug? Is there any way around this other than leaving my PC running 24/7?
The full set of server variables is too long for this post; the only values changed from the defaults are:
innodb_buffer_pool_size = 64G
innodb_log_file_size = 16G
As suggested, I checked SHOW ENGINE INNODB STATUS while the hang is happening; I cannot post it as a comment due to its length:
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 0 srv_active, 0 srv_shutdown, 4664 srv_idle
srv_master_thread log flush and writes: 4663
----------
SEMAPHORES
----------
------------
TRANSACTIONS
------------
Trx id counter 314781977
Purge done for trx's n:o < 314781976 undo n:o < 0 state: running
History list length 142
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 314781976, ACTIVE 106 sec inserting
mysql tables in use 1, locked 1
3 lock struct(s), heap size 1128, 1 row lock(s), undo log entries 1
MariaDB thread id 57, OS thread handle 140131630528064, query id 273212 localhost xxx Update
INSERT INTO xyzxyz (`xyz`, `xyz`, `time`, `xyz`, `xyz`) VALUES (?,?,?,?,?)
--------
FILE I/O
--------
Pending flushes (fsync) log: 0; buffer pool: 0
19307 OS file reads, 24298 OS file writes, 24302 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 932, free list len 2162, seg size 3095, 1 merges
merged operations:
insert 11, delete mark 0, delete 0
discarded operations:
insert 0, delete mark 0, delete 0
0.00 hash searches/s, 0.00 non-hash searches/s
---
LOG
---
Log sequence number 44207022268
Log flushed up to 44207022268
Pages flushed up to 44197540796
Last checkpoint at 44197540784
0 pending log flushes, 0 pending chkp writes
24306 log i/o's done, 0.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total large memory allocated 68753031168
Dictionary memory allocated 424873192
Buffer pool size 4153344
Free buffers 4133333
Database pages 20011
Old database pages 7367
Modified db pages 3024
Percent of dirty pages(LRU & free pages): 0.073
Max dirty pages percent: 90.000
Pending reads 1
Pending writes: LRU 0, flush list 0
Pages made young 32, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 19243, created 741, written 0
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
No buffer pool page gets since the last printout
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 20011, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
--------------
ROW OPERATIONS
--------------
0 read views open inside InnoDB
Process ID=0, Main thread ID=0, state: sleeping
Number of rows inserted 24161, updated 113, deleted 0, read 623966
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
Number of system rows inserted 0, updated 0, deleted 0, read 0
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================
I checked whether it's a deadlock caused by a bad UPDATE query, but it's an INSERT query whose State is 'Update', which just means MariaDB is about to update the table (yet it never does). I use MariaDB 10.6.7 on Debian testing; I may need to switch to a stable release to sort this out. The sleep periods range from a few seconds (the problem still occurs) to hours.
I tried:
Removing the PRIMARY and UNIQUE keys that use the same indexes. Same problem.
Changing innodb_flush_log_at_trx_commit to 0 so the log is not flushed after every insert. Still stuck on the first insert.
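The next thing I plan to check while it hangs is whether InnoDB itself thinks the transaction is waiting on a lock, along the lines of this sketch (generic diagnostic queries, nothing specific to my schema; output omitted):

-- Transactions InnoDB currently knows about, with their state and age
SELECT trx_id, trx_state, trx_started, trx_wait_started, trx_query
FROM information_schema.INNODB_TRX;

-- Cross-check against the server-level thread list
SHOW FULL PROCESSLIST;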
I've come across a weird problem with a piece of CUDA code. It's compiled into a DLL using MSVC Community 2015 and nvcc on Windows 10, with CUDA 8. The application calling the DLL is being developed with Qt 5.
The application is fairly large and complicated, using Qt, CUDA, VTK, and HDF5. It all seems to work: the app runs and does what it's supposed to, but then fails in a reproducible manner that doesn't seem to make any sense. The example function below reproduces a similar error.
I'm compiling the DLL with:
nvcc -m64 -arch=sm_20 -o fdm1_cuda.dll -Xcompiler "/LD /D_USRDLL /D_WINDLL" fdm1_cuda.cu
This function seems to exhibit the same problem as the main code:
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

extern "C" __declspec(dllexport) void fdm1_funnyproblemchecker() {
    cudaError_t errorcode;
    float *a_host;
    float *b_host;
    float *a_device;
    int num, i;

    num = 10;

    /* Allocate and fill a host buffer. */
    a_host = (float *)malloc(sizeof(float) * num);
    if ( a_host) printf("Result check, allocate host memory a: success\n");
    if (!a_host) printf("Result check, allocate host memory a: failed!\n");
    for (i = 0; i < num; i++) a_host[i] = (float)i;
    for (i = 0; i < num; i++) printf("%6.3f ", a_host[i]);
    printf("\n");

    /* Second host buffer to receive the round-tripped data. */
    b_host = (float *)malloc(sizeof(float) * num);
    if ( b_host) printf("Result check, allocate host memory b: success\n");
    if (!b_host) printf("Result check, allocate host memory b: failed!\n");

    /* Device allocation, copy up, copy back; check the error state after each call. */
    errorcode = cudaSuccess;
    cudaMalloc((void **) &a_device, sizeof(float) * num);
    errorcode = cudaGetLastError();
    printf("Result check, allocate device memory: %s\n", cudaGetErrorString(errorcode));

    errorcode = cudaSuccess;
    cudaMemcpy(a_device, a_host, num * sizeof(float), cudaMemcpyHostToDevice);
    errorcode = cudaGetLastError();
    printf("Result check, copy host to device : %s\n", cudaGetErrorString(errorcode));

    errorcode = cudaSuccess;
    cudaMemcpy(b_host, a_device, num * sizeof(float), cudaMemcpyDeviceToHost);
    errorcode = cudaGetLastError();
    printf("Result check, copy device to host : %s\n", cudaGetErrorString(errorcode));

    for (i = 0; i < num; i++) printf("%6.3f ", b_host[i]);
    printf("\n");
    fflush(stdout);

    cudaFree(a_device);
    free(a_host);
    free(b_host);
}
Sometimes the output from this is:
Result check, allocate host memory a: success
0.000 1.000 2.000 3.000 4.000 5.000 6.000 7.000 8.000 9.000
Result check, allocate host memory b: success
Result check, allocate device memory: no error
Result check, copy host to device : no error
Result check, copy device to host : no error
0.000 1.000 2.000 3.000 4.000 5.000 6.000 7.000 8.000 9.000
If I change something elsewhere in the application that I don't think is related (changing the size of a model at runtime), I get this:
Result check, allocate host memory a: success
0.000 1.000 2.000 3.000 4.000 5.000 6.000 7.000 8.000 9.000
Result check, allocate host memory b: success
Result check, allocate device memory: no error
Result check, copy host to device : an illegal memory access was encountered
Result check, copy device to host : an illegal memory access was encountered
0.000 0.000 0.000 0.000 0.000 0.000 0.000 270355481144287188484096.000 74936693461279934656588472647680.000 0.000
So, there is a cudaMemcpy failure.
I can't tell if it is a host malloc issue, a cudaMalloc issue, or something related to running from a DLL.
Can anyone see what I'm missing here?
I've had this application running on Linux and macOS, using dynamic libraries, without any major problems. I'm now trying to get it going under Windows.
Problem solved.
It was a kernel elsewhere in the code accessing an array out of bounds at index -1.
My error checking with cudaGetLastError was inadequate. If I call cudaDeviceSynchronize before each cudaGetLastError, it reports the errors I was missing.
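A minimal sketch of what that stricter check can look like (the CHECK_CUDA name is just illustrative, not from my code):

#include <cstdio>
#include <cuda_runtime.h>

/* Synchronize first so asynchronous kernel errors surface, then read the error state. */
#define CHECK_CUDA(msg)                                               \
    do {                                                              \
        cudaError_t err_ = cudaDeviceSynchronize();                   \
        if (err_ == cudaSuccess) err_ = cudaGetLastError();           \
        if (err_ != cudaSuccess)                                      \
            printf("%s: %s\n", (msg), cudaGetErrorString(err_));      \
    } while (0)

Dropping CHECK_CUDA("after kernel X") after each kernel launch and cudaMemcpy would have flagged the out-of-bounds access at its source instead of at the next copy.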
Thanks.
I am developing a couple of websites, but I have only paid for an EC2 nano instance on AWS. How many websites could I possibly host there, assuming each site will see only minimal traffic? Most of the websites are for personal use only.
Only one way to find out ;)
No definite answer is possible, because it depends on a lot of factors.
But if traffic is really low, you will only be limited by disk space, and since the t2.nano runs on EBS storage, that can be as large as you want. So you could fit a lot of websites!
The t2.nano has only 512 MB of memory, so it's best to pick a not-so-memory-hungry web server such as nginx.
I run five very-low-traffic websites on my t2.nano: four of them WordPress, one custom PHP. I run Nginx, PHP 5.6, and MySQL 5.6 on the same instance. Traffic is extremely light, in the region of 2000 pages a day, which averages out to roughly one page every 43 seconds; if you include static resources it will be higher. CloudFlare runs as the CDN, which reduces static resource consumption significantly, but it doesn't cache pages.
I have MySQL on the instance configured to use very little memory, currently 141 MB of physical RAM. Nginx takes around 10 MB of RAM. I have four PHP workers, each taking 150 MB of RAM, but 130 MB of that is shared, so it's really about 20 MB per worker after the first.
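For anyone wondering, low-memory MySQL tuning generally comes down to settings like these (illustrative values, not my exact config):

[mysqld]
# Shrink the InnoDB buffer pool; the default is far too large for 512 MB of RAM
innodb_buffer_pool_size = 64M
# The performance schema alone can cost tens of MB on a small instance
performance_schema = OFF
# Fewer connections means fewer per-thread buffers
max_connections = 30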
Here's the output of a quick performance test on the t2.nano. Note that the Nginx page cache will be serving all of the pages.
siege -c 50 -t10s https://www.example.com -i -q -b
Lifting the server siege... done.
Transactions: 2399 hits
Availability: 100.00 %
Elapsed time: 9.60 secs
Data transferred: 14.82 MB
Response time: 0.20 secs
Transaction rate: 249.90 trans/sec ***
Throughput: 1.54 MB/sec
Concurrency: 49.42
Successful transactions: 2399
Failed transactions: 0
Longest transaction: 0.36
Shortest transaction: 0.14
Here it is with the Nginx page cache turned off:
siege -c 5 -t10s https://www.example.com -i -q -b
Lifting the server siege... done.
Transactions: 113 hits
Availability: 100.00 %
Elapsed time: 9.99 secs
Data transferred: 0.70 MB
Response time: 0.44 secs
Transaction rate: 11.31 trans/sec ***
Throughput: 0.07 MB/sec
Concurrency: 4.95
Successful transactions: 113
Failed transactions: 0
Longest transaction: 0.70
Shortest transaction: 0.33
Here is the session I am using:
<sessions>
<session type="ts_http" name="Test" probability="100">
<for var="i" to="1" from="1">
<request subst="true">
<http version="1.1" contents="%%autoupload:readdata%%" method="POST" url="/UploadFile">
<http_header name="key" value="testkey"/>
<http_header name="Filename" value="test.zip"/>
</http>
</request>
</for>
</session>
The session has only one POST request, so the mean page response time and the mean request response time are the same in the Tsung report, as expected.
But I was expecting the mean session time to also be nearly the same, deviating only by the connection time.
Below is a snapshot of the Tsung report:
Name    | highest-10sec-mean | lowest-10sec-mean | Highest-Rate | Mean-Rate  | Mean      | Count
connect | 1.55 sec           | 4.11 msec         | 0.5 / sec    | 0.24 / sec | 0.50 sec  | 47
page    | 26.35 sec          | 2.50 sec          | 0.9 / sec    | 0.24 / sec | 12.83 sec | 43
request | 26.35 sec          | 2.50 sec          | 0.9 / sec    | 0.24 / sec | 12.83 sec | 43
session | 30.83 sec          | 6.91 sec          | 0.9 / sec    | 0.25 / sec | 17.73 sec | 44
I want to understand what is being added to the session mean time such that it ends up higher than the page/request time.
IIRC, a page is a consecutive sequence of requests within a session, without thinktimes/waits. Depending on the load you configure, the session time also includes the work required to get the session started. Since starting new sessions is not free, you could try launching a tenth of the users and letting each user perform 10 requests, as sketched below; page and session should then be almost identical.
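As a rough illustration (reusing your session above, with only the loop bound changed so that each user issues 10 requests):

<session type="ts_http" name="Test" probability="100">
  <for var="i" from="1" to="10">
    <request subst="true">
      <http version="1.1" contents="%%autoupload:readdata%%" method="POST" url="/UploadFile">
        <http_header name="key" value="testkey"/>
        <http_header name="Filename" value="test.zip"/>
      </http>
    </request>
  </for>
</session>

Combined with a proportionally lower arrival rate in the load section, the total number of requests stays the same while far fewer sessions have to be started.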
It's a bit strange, though, that you see an almost 5-second difference in the mean values. Could you provide more details about your environment (OS/Tsung/Erlang versions, the entire configuration, ...)?
I've run into an ugly snag. I developed a website for a client a few years back, and since then they've transferred their site to a different domain name provider and host. Now they want some updates, but when I try to access their site I get a network timeout (the page tries to load for a few minutes, then Firefox shows a Network Timeout error). I can access the site via a proxy, but proxies are limited and don't support everything, plus I'm a little paranoid about sending sensitive data through one, and in any case I don't see how that would help me with FTP access and the like. I'm not sure where along the line the problem occurs: is my ISP blocking it, is the web server blocking me, is it my router, or is it something else? I know of two sites that do this, and I think they're hosted by the same people.
The sites are http://fvringette.com/ and http://damngoodtimes.com/
@MarkusQ: here is the traceroute for fvringette.com (which turns out to be the same as for damngoodtimes.com):
traceroute: Warning: Multiple interfaces found; using 134.117.14.35 # hme0
traceroute to 76.74.225.90 (76.74.225.90), 30 hops max, 40 byte packets
1 unix-gate.physics.carleton.ca (134.117.14.1) 0.973 ms 0.513 ms 0.514 ms
2 10.50.254.3 (10.50.254.3) 0.437 ms 0.385 ms 0.351 ms
3 10.30.33.1 (10.30.33.1) 0.488 ms 0.394 ms 0.370 ms
4 10.30.55.1 (10.30.55.1) 0.396 ms 0.416 ms 0.391 ms
5 134.117.254.242 (134.117.254.242) 0.708 ms 0.720 ms 0.704 ms
6 10.30.57.1 (10.30.57.1) 1.338 ms 1.221 ms 1.237 ms
7 kolker.fcican.com (207.34.252.249) 1.464 ms 1.544 ms 1.459 ms
8 * * *
9 154.11.3.17 (154.11.3.17) 7.355 ms 7.393 ms 7.426 ms
10 oc48.so-2-0-3.van-hc21e-cor-1.peer1.net (216.187.114.137) 62.762 ms 62.838 ms 62.625 ms
11 oc48.pos4-0.van-hc21e-dis-1.peer1.net (216.187.89.253) 62.795 ms 63.238 ms 62.893 ms
12 64.69.91.245 (64.69.91.245) 64.103 ms 62.908 ms 63.266 ms
13 64.69.91.245 (64.69.91.245) 63.094 ms !H 63.072 ms !H 63.173 ms !H
64.69.91.245 - Geo Information
IP Address 64.69.91.245
Host 64.69.91.245
Location CA CA, Canada
City Vancouver, BC v6b4n5
Organization Peer1 Internet Bandwidth & Server Co-Location Faci
ISP Peer 1 Network
AS Number AS13768
Latitude 49°25'00" North
Longitude 123°13'33" West
Distance 9281.29 km (5767.13 miles)
64.69.91.245 - Whois Information
OrgName: Peer 1 Network Inc.
OrgID: PER1
Address: 75 Broad Street
Address: 2nd Floor
City: New York
StateProv: NY
PostalCode: 10004
Country: US
NetRange: 64.69.64.0 - 64.69.95.255
CIDR: 64.69.64.0/19
NetName: PEER1-BLK-01
NetHandle: NET-64-69-64-0-1
Parent: NET-64-0-0-0-0
NetType: Direct Allocation
NameServer: NS1.PEER1.NET
NameServer: NS2.PEER1.NET
Comment:
RegDate: 2000-04-12
Updated: 2007-08-29
RTechHandle: ZP55-ARIN
RTechName: Peer1 Network Inc.
RTechPhone: +1-604-683-7747
RTechEmail: net-admin@peer1.net
OrgAbuseHandle: NSA-ARIN
OrgAbuseName: Peer 1 Network AUP Enforcement
OrgAbusePhone: +1-604-484-2588
OrgAbuseEmail: abuse@peer1.net
OrgTechHandle: ZP55-ARIN
OrgTechName: Peer1 Network Inc.
OrgTechPhone: +1-604-683-7747
OrgTechEmail: net-admin@peer1.net
OrgName: Peer1 Internet Bandwidth & Server Co-Location Facilities
OrgID: PIBSCF
Address: 2100-555 W. hastings St.
City: Vancouver
StateProv: BC
PostalCode: V6B 4N5
Country: CA
NetRange: 64.69.91.240 - 64.69.91.255
CIDR: 64.69.91.240/28
NetName: PEER1-GVLANPRI-01
NetHandle: NET-64-69-91-240-1
Parent: NET-64-69-64-0-1
NetType: Reassigned
Comment:
RegDate: 2002-03-14
Updated: 2002-03-14
RTechHandle: MT1763-ARIN
RTechName: Teolis, Mark
RTechPhone: +1-604-683-7747
RTechEmail: net-admin@peer1.net
# ARIN WHOIS database, last updated 2009-04-06 19:10
# Enter ? for additional hints on searching ARIN's WHOIS database.
It occurred to me that it may be more useful to do a trace from my home computer to fvringette.com, rather than from some random computer which may actually be able to connect. Output of tracert:
1 <1 ms <1 ms <1 ms <snip>
2 24 ms 24 ms 23 ms 70.71.106.1
3 24 ms 24 ms 24 ms rd1bb-ge5-0-0-1.vc.shawcable.net [64.59.149.2]
4 25 ms 29 ms 29 ms rc2bb-tge0-15-0-0.vc.shawcable.net [66.163.69.137]
5 28 ms 30 ms 29 ms rc2wh-tge0-15-1-0.vc.shawcable.net [66.163.69.121]
6 25 ms 24 ms 24 ms 204.239.129.213
7 26 ms 29 ms 29 ms oc48.pos3-0.van-spenc-dis-1.peer1.net [216.187.89.250]
8 27 ms 29 ms 30 ms 64.69.91.245
9 * * * Request timed out.
10 * * * Request timed out.
After this, all requests just keep timing out. It's the same IP, 64.69.91.245, that seems to be causing the problem... does this mean I'm just unlucky and hit a dead server that won't forward my requests? I have no idea how these things work.
I can ping 64.69.91.245, but I can't telnet to it. It says 'connect failed', not 'connection refused'... I can't think of why it would fail for me but for no one else.
Try doing a traceroute for starters.
The "!H" flags on hop 13 mean that the host (64.69.91.245) is unreachable, though not all the time (see hop 12). At a glance this looks to be beyond your ISP, possibly at theirs.
The next step would be to figure out who that is...
My guess is that either their hardware is faulty or they are blocking you inadvertently.
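One quick check is to probe the web port directly instead of relying on ping, for example (a sketch, assuming a Linux traceroute built with TCP support, which needs root):

# TCP SYN traceroute to port 80: shows whether the HTTP port is filtered somewhere along the path
traceroute -T -p 80 fvringette.com

# A plain HTTP request with a timeout: 'timed out' vs. 'refused' tells you different things
curl -v --max-time 15 http://fvringette.com/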
Some networks have policies that block telnet connections/packets on the premise that they are unencrypted, allowing only SSH.
Of course, they may be blocking you for another reason.
Perhaps you should discuss it with them. It may be that they have blocked your IP address because you've been trying to telnet into their servers all day. :)