Dear Community Members,
I have a situation here! One of the VMs running on OVS 3.2.11 is stuck. I tried to kill it from OVMM, but that didn't work, and the xm destroy command hangs indefinitely.
Could you please share your thoughts on how to debug this and where else to look for logs? I have checked all the default log locations but found no clue. What could have gone wrong with this VM? All the other VMs respond normally.
Thanks in advance.
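In case it helps anyone debugging the same thing, here is a minimal set of checks to run on the OVS host itself. This is a generic sketch assuming a standard Xen-based OVS 3.x layout; the log paths may differ on your install:

# How Xen itself sees the domain; state flags such as 'c' (crashed) or 'd' (dying) are telling
xm list

# Live per-domain view; a domain stuck mid-teardown often still shows up here
xentop

# Xen daemon and OVM agent logs, beyond the default OVMM locations
tail -n 100 /var/log/xen/xend.log
tail -n 100 /var/log/ovs-agent.log

# Leftover device-model processes tied to the domain can block xm destroy
ps -ef | grep qemu

When xm destroy hangs, the domain is often stuck tearing down a device; the xend.log entries from around the destroy attempt tend to be the most telling.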
I use CheckPoint VPN to log in to my workplace's servers to work remotely. The VPN has been working (mostly) fine all year, and I haven't changed any of the settings, but this morning when I tried to log in, it gave me an "Arg_NullReferenceException." I can't seem to find anything on this particular error on Google.
I have tried restarting my computer, because it's not the first issue I've had with CheckPoint VPN (though it is the first time I've seen that error message), and a restart usually resolves whatever issue I'm having. I've also tried creating a new connection with the same settings, but I'm getting the same error with that one, too.
I'm not entirely sure what other information I would need to provide. I'm also not sure if it's a problem on my end, or on the company servers. I have already emailed tech support, but I thought I should be thorough.
This is a known issue. I have been jumping through hoops trying to get the Capsule client to work. Raise a ticket with TAC if you have support. If not, you can download the E86 Endpoint Connect client and run it. That has been my workaround for this issue.
They just issued an update to the Capsule via the Microsoft Store. It seems one of the recent Windows security updates broke the L2TP protocol within Windows.
I set up a small Zeek cluster and had it working fine. Here's my rough setup:
Proxy/Manager/Logger - 192.168.1.10
Worker-1 - 192.168.1.10 (em1)
Worker-2 - 192.168.1.15 (em1)
Worker-3 - 192.168.1.15 (p1p1)
Worker-4 - 192.168.1.15 (p1p2)
Worker-5 - 192.168.1.16 (em1)
Worker-6 - 192.168.1.16 (p1p1)
Worker-7 - 192.168.1.16 (p1p2)
Everything was going swell. However, now nothing gets brokered except for worker-1, which is local to the proxy/manager/logger. I can deploy, start, and stop workers through zeekctl. However, peerstatus hangs indefinitely when checking a remote worker.
I've even set up a new cluster on brand-new systems, starting from scratch, with the same issues. This leads me to believe it's in the network, but for the life of me I can't figure out what it could be. I know this is vague, but does anyone have at least some troubleshooting ideas for me to try?
Let me know what else I can give you. I appreciate any kind of help you can send my way!
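One thing I can say I've started checking is plain reachability of the broker ports between the hosts. A rough sketch of those checks, assuming zeekctl's default ZeekPort of 47760 and the IPs from my layout above (adjust if you've changed the port):

# From a remote worker host: can we reach the broker port on the manager node?
nc -zv 192.168.1.10 47760

# On the manager node: is Zeek actually listening, and is anything filtering the port?
ss -tlnp | grep 47760
iptables -L -n | grep 47760

If the nc test fails from the remote hosts while the socket shows as listening locally, a host firewall or an ACL between the machines is the usual culprit.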
Seth Hall nailed it. I messed up the rules without knowing. Thankfully an easy fix. Thanks.
We have a process (written in C++/managed) which receives network data via TCP/IP.
After running the process for a while and tracking network load, it seems that the network gets into a frozen state and the process stops receiving data, while other processes on the system that use the network (on the same NIC) operate normally.
The process gets out of this frozen state by itself after several minutes.
Any idea what is happening?
Is there any counter I can track to see if my process is reaching some limit?
It is going to be very difficult to answer specifically:
-- without knowing what exactly your process/application does (a network chat application, a file server/client, or something else),
-- without other details about how your process is implemented and what libraries it uses, where relevant to the problem.
Also, you haven't mentioned what OS and environment you are running this process under, so there is very little anyone can do to help. It could be anything: a busy-wait loop in your code, locking problems if it's multi-threaded code, and so on.
Nonetheless, here are some options to check.
If it's Linux, try the commands below to debug and monitor the behaviour of the process and see what the problem could be (a consolidated example follows the list):
top
Check top to see how much CPU and memory your process is using and whether any of those values are abnormally high.
pstack
This prints the stack frames the process is executing at the time of the problem.
netstat
Run this with the necessary options (TCP/UDP) to check the state of the network sockets opened by your process.
gcore
This dumps a core of your process while the mentioned problem is happening, without killing it; you can then analyze that core file using gdb.
gdb
Then use the where command at the gdb prompt to get a full backtrace of the process (the function it was executing last and the previous function calls).
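Putting those together, a session for your frozen-process case might look like the following; pid 1234 and the file paths are placeholders:

top -p 1234                  # watch CPU/memory for just this process
pstack 1234                  # snapshot the thread stacks while it is frozen
netstat -tanp | grep 1234    # socket states; a growing Recv-Q means data arrives but is not being read
gcore -o /tmp/frozen 1234    # dump a core without killing the process (writes /tmp/frozen.1234)
gdb /path/to/your/binary /tmp/frozen.1234
(gdb) where                  # full backtrace from the core

The Recv-Q column from netstat is also a direct answer to your counter question: if it keeps growing while the process is frozen, the kernel is delivering data that your process is not reading.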
I am using the makeCluster function from the R package snow on a Linux machine to start a SOCK cluster on a remote Linux machine. Everything seems in place for the two machines to communicate successfully (I am able to establish ssh connections between the two). But:
makeCluster("192.168.128.24",type="SOCK")
does not return any result; it just hangs indefinitely.
What am I doing wrong?
Thanks a lot
Unfortunately, there are a lot of things that can go wrong when creating a snow (or parallel) cluster object, and the most common failure mode is to hang indefinitely. The problem is that makeSOCKcluster launches the cluster workers one by one, and each worker (if successfully started) must make a socket connection back to the master before the master proceeds to launch the next worker. If any of the workers fail to connect back to the master, makeSOCKcluster will hang without any error message. The worker may issue an error message, but by default any error message is redirected to /dev/null.
In addition to ssh problems, makeSOCKcluster could hang because:
R not installed on a worker machine
snow not installed on the worker machine
R or snow not installed in the same location as on the local machine
current user doesn't exist on a worker machine
networking problem
firewall problem
and there are many more possibilities.
In other words, no one can diagnose this problem without further information, so you have to do some troubleshooting in order to get that information.
In my experience, the single most useful troubleshooting technique is manual mode, which you enable by specifying manual=TRUE when creating the cluster object. It's also a good idea to set outfile="" so that error messages from the workers aren't redirected to /dev/null:
cl <- makeSOCKcluster("192.168.128.24", manual=TRUE, outfile="")
makeSOCKcluster will display an Rscript command to execute in a terminal on the specified machine, and then it will wait for you to execute that command. In other words, makeSOCKcluster will hang until you manually start the worker on host 192.168.128.24, in your case. Remember that this is a troubleshooting technique, not a solution to the problem, and the hope is to get more information about why the workers aren't starting by trying to start them manually.
Obviously, the use of manual mode bypasses any ssh issues (since you're not using ssh), so if you can create a SOCK cluster successfully in manual mode, then probably ssh is your problem. If the Rscript command isn't found, then either R isn't installed, or it's installed in a different location. But hopefully you'll get some error message that will lead you to the solution.
If makeSOCKcluster still just hangs after you've executed the specified Rscript command on the specified machine, then you probably have a networking or firewall issue.
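Most of the failure modes listed above can also be checked by hand from the master over ssh. A rough sketch, using the worker IP from the question; the port test assumes snow's default master port of 10187, which will differ if you pass port= to makeCluster, and <master-ip> is a placeholder for your master's address:

# Does the user exist on the worker, and is Rscript on the non-interactive PATH?
ssh 192.168.128.24 'id; which Rscript'

# Is snow installed in a library that the worker's R can find?
ssh 192.168.128.24 'Rscript -e "library(snow)"'

# While makeCluster is hanging on the master, can the worker reach back to it?
ssh 192.168.128.24 'nc -zv <master-ip> 10187'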
For more troubleshooting advice, see my answer for making cluster in doParallel / snowfall hangs.
I am currently using pfiles to find the process occupying a certain port on Solaris 10, but it causes a problem when run in parallel: pfiles can't be run in parallel for the same pid. The second invocation returns with an error message:
pfiles: process is traced
Is there any alternative to pfiles for finding the process occupying a port on Solaris? Alternatively, any information on OS APIs to get port/process information on Solaris would help.
A workaround would be to use some lock mechanism to avoid this, as in the sketch below.
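For example, a crude serializing wrapper; mkdir is atomic, so only one invocation at a time gets to run pfiles (the script name and lock path are arbitrary):

#!/bin/sh
# pfiles-locked.sh: run pfiles on the pid in $1, serialized per pid
LOCK=/tmp/pfiles.lock.$1
until mkdir "$LOCK" 2>/dev/null; do
    sleep 1    # another invocation is inspecting this pid; wait our turn
done
trap 'rmdir "$LOCK"' 0
pfiles "$1"

This makes the "process is traced" collision impossible; the cost is that concurrent lookups for the same pid queue behind each other.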
Alternatively, you might install lsof from a freeware repository and see if it supports concurrency (I think it does).
I just tested Solaris 11 Express pfiles and it doesn't seem to exhibit this issue.