I want to test the system's reaction to a process that tries to consume more memory than is available.
I run stress-ng with the following command (on a 6G RAM machine):
stress-ng --vm-bytes 8G --vm-keep -m 1 --aggressive
but I get this error:
stress-ng: error: [5035] stress-ng-vm: gave up trying to mmap, no available memory
Is it possible to force the program to ignore its own safety mechanism?
Try adding the parameter --vm 4.
I was having the same problem, and it went away after that.
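For reference, a minimal sketch of how this might be combined (hedged: stress-ng documents --vm-bytes as a per-worker amount, so splitting the target across several workers keeps each individual mmap small enough to succeed while still over-committing a 6G machine):
# hypothetical variant: 4 vm workers of 2G each request 8G in total on a 6G box
stress-ng --vm 4 --vm-bytes 2G --vm-keep --aggressive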
To monitor a log file I have to connect over SSH and redirect the output of the log file (let's call it RemoteLog.txt) to a local machine so it can be read by a Java program and shown in a GUI.
Right now I have the output redirected out of the ssh connection and onto the local machine with the command:
ssh remote@ip.address tail logs/RemoteLog.txt -f > ~/Log/LocalLog.txt
and everything works fine technically with one exception: for some reason LocalLog.txt only gets updated with the changes to RemoteLog.txt every 35 seconds to the millisecond.
It doesn't matter how many changes are made to RemoteLog.txt, how many lines I specify with the tail command, or whether I use the >> operator or the > operator; there is always a 35-second delay between updates of LocalLog.txt while RemoteLog.txt is constantly updating.
Does anyone have any clue why this might be?
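Not an answer from the thread, but for comparison, one variant worth trying is forcing line-buffered output on the remote side, assuming GNU coreutils' stdbuf is available on that host:
# hedged sketch: run tail line-buffered on the remote end, then redirect locally as before
ssh remote@ip.address "stdbuf -oL tail -f logs/RemoteLog.txt" > ~/Log/LocalLog.txt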
I have a problem getting IIS to allow a file download with Bit Rate Throttling. I set the limit for the file to 100 kb/s. Without the bitrate limitation there is no problem, but with the limit there is.
I'm using a code similar to described in this article:
Securing Large Downloads Using C# and IIS 7
I also tried switching off IIS Bit Rate Throttling and controlling the bitrate "by hand", calculating the rate with TimeSpan and calling Thread.Sleep(10) in a while loop...
But all my attempts were useless, and I don't get any exceptions.
To test the download I use wget, like this:
wget -t 1 http://db.realestate.ru/yrl/RealEstateExportToYandex.xml
(you can try it with wget for Windows)
This is a 240 MB text file; wget always stops at a random point in the download (5%-60%) and throws this error message:
Read error at byte ... (Connection reset by peer).
Maybe the problem is not with IIS, because on my localhost it works well; it fails only online, on a highly loaded server.
Solved with these parameters specified in the wget command:
wget -t 1 --header="Keep-Alive: 30000" -nv http://db.realestate.ru/yrl/RealEstateExportToYandex.xml
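As a side check (not part of the fix above), curl can report the average transfer rate, which makes it easy to confirm the 100 kb/s throttle is actually being applied; %{speed_download} prints the measured average in bytes per second:
# hedged sketch: download to /dev/null and print only the measured average speed
curl -s -o /dev/null -w 'avg speed: %{speed_download} bytes/s\n' http://db.realestate.ru/yrl/RealEstateExportToYandex.xml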
Environment:
Unix client and unix server.
Tool used: curl.
The client/server should ignore the TIME_WAIT period (2 * MSL) when establishing a connection.
This is done by executing the following commands:
sysctl net.ipv4.tcp_tw_reuse=1
sysctl net.ipv4.tcp_tw_recycle=1
The local port must be specified so that it can be re-used.
Start the connection.
Example: while [ 1 ]; do curl --local-port 9056 192.168.40.2; sleep 30; done
I am still seeing the error even though it should have ignored the TIME_WAIT period.
Any idea why this is happening?
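As a hedged diagnostic sketch (not from the original post), it can help to confirm that the sysctls really took effect and that the fixed local port is in fact sitting in TIME_WAIT between iterations:
# verify the current values of the two settings
sysctl net.ipv4.tcp_tw_reuse net.ipv4.tcp_tw_recycle
# look for the fixed local port among existing sockets and note its state
ss -tan | grep ':9056'
# or, on systems without ss:
netstat -ant | grep ':9056'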
Say my program attempts a read of a byte in a file on a ZFS filesystem. ZFS can locate a copy of the necessary block, but cannot locate any copy with a valid checksum (they're all corrupted, or the only disks present have corrupted copies). What does my program see, in terms of the return value from the read, and the byte it tried to read? And is there a way to influence the behavior (under Solaris, or any other ZFS-implementing OS), that is, force failure, or force success, with potentially corrupt data?
EIO is indeed the only answer with current ZFS implementations.
An open ZFS "bug" asks for some way to read corrupted data:
http://bugs.opensolaris.org/bugdatabase/printableBug.do?bug_id=6186106
I believe this is already doable using the undocumented but open source zdb utility.
Have a look at http://www.cuddletech.com/blog/pivot/entry.php?id=980 for an explanation of how to dump a file's contents using zdb's -R option and the "r" flag.
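As a rough sketch only (the exact zdb syntax differs between releases, and the object number, offset and size below are placeholders, so check the linked post and zdb's own documentation on your build): the usual workflow is to locate the file's block pointers and then dump a block raw from the vdev.
# dump object metadata (including block pointers) for the file's object number
zdb -ddddd prueba <object-number>
# then read one block raw ("r" flag) from vdev 0 at the given offset/size
zdb -R prueba 0:<offset>:<size>:r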
Solaris 10:
# Create a test pool
[root@tesalia z]# cd /tmp
[root@tesalia tmp]# mkfile 100M zz
[root@tesalia tmp]# zpool create prueba /tmp/zz
# Fill the pool
[root@tesalia /]# dd if=/dev/zero of=/prueba/dummy_file
dd: writing to `/prueba/dummy_file': No space left on device
129537+0 records in
129536+0 records out
66322432 bytes (66 MB) copied, 1.6093 s, 41.2 MB/s
# Umount the pool
[root@tesalia /]# zpool export prueba
# Corrupt the pool on purpose
[root@tesalia /]# dd if=/dev/urandom of=/tmp/zz seek=100000 count=1 conv=notrunc
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0715209 s, 7.2 kB/s
# Mount the pool again
zpool import -d /tmp prueba
# Try to read the corrupted data
[root@tesalia tmp]# md5sum /prueba/dummy_file
md5sum: /prueba/dummy_file: I/O error
# Read the manual
[root@tesalia tmp]# man -s2 read
[...]
RETURN VALUES
Upon successful completion, read() and readv() return a
non-negative integer indicating the number of bytes actually
read. Otherwise, the functions return -1 and set errno to
indicate the error.
ERRORS
The read(), readv(), and pread() functions will fail if:
[...]
EIO A physical I/O error has occurred, [...]
You must export/import the test pool because otherwise the direct overwrite (the pool corruption) would go unnoticed: the file would still be cached in OS memory.
And no, currently ZFS will refuse to give you corrupted data. As it should.
How would returning anything but an EIO error from read() make sense outside a file system specific low level data rescue utility?
The low level data rescue utility would need to use an OS- and FS-specific API other than open/read/write/close to access the file. The semantics it would need are fundamentally different from reading normal files, so it would need a specialized API.
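A small sketch reusing the example pool above: from a script, the EIO simply surfaces as a failed read with a non-zero exit status, and zpool itself can name the damaged files.
# any ordinary read of the corrupted file fails with an I/O error
if ! cat /prueba/dummy_file > /dev/null; then
    echo "read failed (expected: Input/output error)"
fi
# list the files with permanent (uncorrectable) errors in the pool
zpool status -v prueba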