I get huge .cap files from iptrace on AIX. The file is about 800MB. I'm on macOS, and tshark has been running for a whole day parsing it.
The CPU on my host stays at 99%. I really need to speed this up.
I've already added tshark's -n flag to disable name resolution.
I'm thinking about adding a frame number range to the filter, which should narrow down the number of packets to analyze. But I don't know the total number of frames, so I can't really add that parameter.
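For illustration, this is the kind of command I have in mind (huge.cap stands in for my file, the endpoint is made up since I don't know the real frame count, and -Y needs a reasonably recent tshark):

# dissect only the first 100,000 frames via a display filter
tshark -n -r huge.cap -Y "frame.number <= 100000"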
Can I browse some general info about the .cap file before fully opening it?
Is there anything else I can do to speed up tshark significantly?
Thanks.
Perhaps TShark is stuck in an infinite loop, in which case the problem isn't "the file is too big" (I have a 776MB capture, and it takes only a few minutes to run through tshark -V, albeit on a 2.8 GHz Core i7 MBP with 16GB of main memory); the problem is "TShark/Wireshark have a bug".
File a bug on the Wireshark Bugzilla, specifying the TShark command that's been running for a day (all command-line arguments). You may have to either provide the capture file for testing by the Wireshark developers or run test versions of TShark.
I can't seem to figure out what's wrong with my iPerf setup. I am trying to automate the execution of iPerf using a Telnet script (this is the one I am using: https://github.com/ngharo/Random-PHP-Classes/blob/master/Telnet.class.php). I'd like to know what I can do to find the reason for the bottleneck, assuming the PHP script works as expected. Basically, if I run iPerf manually on the command line, I get the desired rates; however, if I run it remotely using the script, throughput is capped.
What I have tried is using tcpdump to capture the traffic while iperf is running and then reading the capture in Wireshark (roughly as sketched below). All I can observe is that the time gaps between fragments are larger when using the script, which explains the lower rates. I've also tried changing the kernel buffer sizes via sysctl, but that has no effect; running iperf manually always works anyway. I don't know what to do next. Any ideas what else I can look at or try?
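The capture setup, for reference (interface, port, and server address are placeholders; 5001 is iperf's default port):

# capture the iperf traffic while the test runs, then open run.pcap in Wireshark
tcpdump -i eth0 -w run.pcap port 5001 &
iperf -c 192.0.2.10 -t 30   # 192.0.2.10 stands in for the real server
kill %1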
Note that I have tried playing around with all the iperf configuration options, such as -w, -l and -b (I haven't tried burst mode). No success.
I have a 3-disk RAID-Z1 pool, encrypted with AES-128 via GEOM_ELI, that I have been using in FreeNAS since version 8.
There have been many zpool upgrades, and overall I am very happy with ZFS.
Lately, however, I have been growing frustrated with FreeNAS. Largely it's the many bugs that haven't been fixed over the years, but above all it's the insistence on using a flash drive for the OS, even though most of it is read-only.
It's still a single point of failure, and it has always extended boot times by several minutes. Bottom line: I just want to use vanilla FreeBSD with this pool.
I am looking for more flexibility, and I wish to educate myself about this awesome operating system.
Doing some more extended research, I found many tutorials on installing FreeBSD natively to a ZFS volume and mounting it as /.
It wasn't until I did more research that I found an article on mounting an encrypted ZFS volume as root. Later I found that FreeBSD 10 does this during installation, which is awesome to say the least.
Tutorial I used
I made a VM with VMware Workstation, with three 2TB drives passed through as physical disks, and followed every step to a T; everything worked out very well. Now that I had a better grasp of the commands I was running and why, I wanted to do this to an already existing pool that has a lot of data on it.
By default, FreeNAS creates a 2GB swap partition at the front of every data disk. I removed that swap space and turned it into a 1.5GB partition on each drive, with 512MB left over for swap. I followed through every step, changing things as needed. (I have 3 disks while the tutorial uses 4, and my pool is named foxhole whereas the tutorial's is zroot.) I successfully decrypted my volume with geom_eli and mounted it.
I did not skip any of the steps. I even copied every command I was given into a text file and altered it to suit my case.
Here is my problem now.
After finally restarting to test everything, the kernel begins booting, and then I am dropped at a mountroot prompt. It seems that geom_eli didn't even attempt to decrypt my root volume. I have a suspicion why; correct me if I am wrong.
At the start of the tutorial, I am given commands to create new geoms for the encrypted volume:
geli init -b -B /boot/zfs/bootdir/da0p4.eli -e AES-XTS -K /boot/zfs/bootdir/encryption.key -l 256 -s 4096 /dev/da0p4
geli init -b -B /boot/zfs/bootdir/da1p4.eli -e AES-XTS -K /boot/zfs/bootdir/encryption.key -l 256 -s 4096 /dev/da1p4
geli init -b -B /boot/zfs/bootdir/da2p4.eli -e AES-XTS -K /boot/zfs/bootdir/encryption.key -l 256 -s 4096 /dev/da2p4
Since my volume already exists, I can't perform those commands, which would have created the /boot/zfs/bootdir/daXp4.eli files.
I am really just guessing at this being the cause.
I noticed this when I attempted to perform:
mv bootdir/*.eli bootdir/boot/
It gave me "No match."
I assumed those would have been created when the pool was decrypted.
I apologize for the length of this post; I am trying to give as much info as I can without rambling. I have been working on this for the last 18 hours, and I would really love someone with a clear head to take a peek at it.
If I missed any useful information, let me know.
Turns out I was correct. The daXp4.eli files are necessary, as they hold each disk's GELI metadata. A reference point, if you will.
By performing:
geli backup /dev/daXp4 /boot/daXp4.eli
it creates the metadata files required for GEOM to attempt decryption at boot time.
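Spelled out for my three disks (using the same daXp4 numbering as the init commands above):

# back up the GELI metadata of each member disk so it can be found at boot
geli backup /dev/da0p4 /boot/da0p4.eli
geli backup /dev/da1p4 /boot/da1p4.eli
geli backup /dev/da2p4 /boot/da2p4.eli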
I hope this helps someone else interested in this stuff.
I now have a NAS with 23 disks in 3 ZFS volumes, all encrypted with geom_eli.
I've been having problems transferring files over a pretty bad connection via rsync (I'm trying to upload a file to a cloud server).
The rsync transfer essentially hangs after a minute or so. This is how I'm performing the upload:
rsync -avz --progress -e "ssh" ~/target.war root@my-remote-server:~/
There is no error message or other info. It simply hangs displaying something like:
7307264 14% 92.47kB/s 0:07:59
Pinging the remote endpoint doesn't seem to be losing packets, as far as I can see.
Any help on how to overcome this problem would be nice. Thank you.
The --partial option keeps partially transferred files if the transfer is interrupted, so you can retry without having to send the whole file from scratch.
In fact, there is the -P option, which is equivalent to --partial --progress. According to rsync's man page, "Its purpose is to make it much easier to specify these two options for a long transfer that may be interrupted."
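Combined with the command from the question, a resumable invocation could look like this (the retry loop is my own sketch, not from the man page):

# -P = --partial --progress; keep retrying until rsync exits cleanly
until rsync -avzP -e ssh ~/target.war root@my-remote-server:~/; do
    echo "transfer interrupted, retrying in 5 seconds..." >&2
    sleep 5
done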
My application is very heavy (it downloads some data from the internet and puts it into a zip file), and sometimes it takes more than a minute to respond (please note, this is a proof of concept). The CPU has 2 cores, and internet bandwidth is at 10% utilization during a request. I launch uWSGI like this:
uwsgi --processes=2 --http=:8001 --wsgi-file=app.py
When I start two requests, they queue up. How do I make them be handled simultaneously instead? I tried adding --lazy, --master and --enable-threads in all combinations; none of them helped. Creating two separate instances does work, but that doesn't seem like good practice.
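One of the combinations I tried, for reference:

uwsgi --master --enable-threads --lazy --processes=2 --http=:8001 --wsgi-file=app.py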
Are you sure you are not trying to make two simultaneous connections from the same browser (that is generally blocked)? Try with curl or wget.
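For example (the URL is a placeholder for the app's endpoint; if both requests finish in roughly the same total time, they were handled concurrently):

# fire two requests in parallel from the shell, bypassing the browser
curl -s -o /dev/null -w "first:  %{time_total}s\n" http://localhost:8001/ &
curl -s -o /dev/null -w "second: %{time_total}s\n" http://localhost:8001/ &
wait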
I've been trying to track down why I have 100% iowait on my box. If I do something like a MySQL SELECT query, the system goes to 100% iowait (on more than one CPU on my server), which kills my watchdogs and sometimes kills httpd itself.
In vmstat I see that every 8 seconds or so there's a 5MB disk write, and that causes at least one CPU (out of 4) to block for one or two seconds.
I should mention that there are a few million files on my ext3 filesystem (I also tried ext2; atime is off and journaling is disabled). There is a hardware RAID mirroring two 300GB IDE drives.
I'm missing dtrace here. Is there any way to find out what causes these writes? And how do I speed up my filesystem?
Ideas are welcome!
Thank you!
Use iotop.
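For example (the -o flag shows only processes currently doing I/O, and -a shows accumulated totals; it typically needs root):

iotop -o -a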
OK, possible diagnosis steps (for posterity):
Have you confirmed that you're not actually running out of virtual memory and therefore swapping processes out to disk?
If it's not the kernel swapping, you may be able to use strace (since you don't have dtrace) to prove whether it's MySQL doing the writes (see the sketch after these steps).
Can you please provide more details of the hardware and OS configuration?
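A sketch of the first two steps on a Linux box (the exact syscall list and the pidof lookup are my assumptions, not from the question):

# step 1: watch the si/so columns; sustained non-zero values mean the box is swapping
vmstat 1 5

# step 2: attach to the running mysqld and log write-family syscalls to a file
strace -f -e trace=write,pwrite64,fsync,fdatasync -p "$(pidof mysqld)" -o /tmp/mysqld.strace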