Not able to use the package Plots in Julia. GKS error - julia

Good morning everyone. I am running Julia 1.6.1 on a Windows 10 machine.
I cannot use the plot function from the Plots package. Any time I try to use it I run into the following error:
connect: No error
GKS: can't connect to GKS socket application
GKS: Open failed in routine OPEN_WS
GKS: GKS not in proper state. GKS must be either in the state WSOP or WSAC in routine ACTIVATE_WS
GKS: GKS not in proper state. GKS must be either in the state WSAC or SGOP in routine FILLAREA
GKS: GKS not in proper state. GKS must be either in the state WSAC or SGOP in routine POLYLINE
GKS: GKS not in proper state. GKS must be either in the state WSAC or SGOP in routine TEXT
(the FILLAREA, POLYLINE and TEXT lines repeat many more times)
Do you have any idea how to solve this? I have tried some of the suggestions from other threads, but none of them helped.
Thanks in advance

I had the same issue on Windows 10. As Bill mentioned before, the antivirus was blocking gksqt.exe. After allowing it through, plotting worked fine!

Related

R: make the connection timeout longer

When I connect to a database, API, web service, etc. in R, I often get this error:
Error in open.connection(con, "rb") : Timeout was reached
Sometimes there is a problem with the network or the code, but sometimes I just need to wait longer.
In the second case, is there a way to make the connection time limit/threshold longer, so that the connection can finish?

When will a signal be processed in Unix?

When exactly does a signal start being processed in Unix? Is the signal processed when the system switches into kernel mode, or immediately when the process receives it? I assume it is processed immediately when it is received.
A signal is the Unix mechanism for allowing a user space process to receive asynchronous notifications. As such, signals are always "delivered by" the kernel. And hence, it is impossible for a signal to be delivered without a transition into kernel mode. Therefore it doesn't make sense to talk of a process "receiving" a signal (or sending one) without the involvement of the kernel.
Signals can be generated in different ways.
They can be generated by a device driver within the kernel (for example, tty driver in response to the interrupt, kill, or stop keys or in response to input or output by a backgrounded process).
They can be generated by the kernel in response to an emergent out-of-memory condition.
They can be generated by a processor exception in response to something the process itself does during its execution (illegal instruction, divide by zero, reference an illegal address).
They can be generated directly by another process (or by the receiving process itself) via kill(2).
SIGPIPE can be generated as a result of writing to a pipe that has no reader.
But in every case, the signal is delivered to the receiving process by the kernel and hence through a kernel-mode transition.
The kernel might need to force that transition -- pre-empt the receiving process -- in order to deliver the signal (for example, in the case of a CPU-bound process running on processor A being sent a signal by a different process running on processor B).
In some cases, the signal may be handled for the process by the kernel itself (for example, with SIGKILL -- or several others when no signal handler is configured).
Actually invoking a process' signal handler is done by manipulating the process' user space stack so that the signal handler is invoked on return from kernel-mode and then, if/when the signal handler procedure returns, the originally executing code can be resumed.
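That delivery path can be illustrated with a minimal, self-contained sketch (my own illustration, not part of the answer above; SIGUSR1 is an arbitrary choice): the process installs a handler with sigaction() and then generates the signal itself via kill(2); the kernel runs the handler before normal user-mode execution resumes.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    /* Keep handler work minimal and async-signal-safe: just set a flag. */
    static void handler(int signo)
    {
        (void)signo;
        got_signal = 1;
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGUSR1, &sa, NULL);

        kill(getpid(), SIGUSR1);   /* generate the signal via kill(2) */

        /* In a single-threaded process, the handler has already run here. */
        printf("got_signal = %d\n", (int)got_signal);
        return 0;
    }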
As to when it is processed, that is subject to a number of different factors.
There are operating system (i.e. kernel) operations that are never interrupted by signals (these are generally relatively short duration operations), in which case the signal will be processed after their completion.
The process may have temporarily blocked signal delivery, in which case the signal will be "pending" until it is unblocked (see the sketch after this list).
The process could be swapped out or non-runnable for any of a number of reasons -- in which case, its signal handler cannot be invoked until the process is runnable again.
Resuming the process in order to deliver the signal might be delayed by interrupts and higher priority tasks.
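The "temporarily blocked" case above can be sketched as follows (illustrative only; SIGINT is an arbitrary choice): while the signal is blocked it stays pending, and the handler only runs once sigprocmask() unblocks it.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_sigint(int signo)
    {
        (void)signo;
        write(STDOUT_FILENO, "handled\n", 8);   /* async-signal-safe output */
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = on_sigint;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGINT, &sa, NULL);

        sigset_t block;
        sigemptyset(&block);
        sigaddset(&block, SIGINT);

        sigprocmask(SIG_BLOCK, &block, NULL);    /* delivery now deferred */
        raise(SIGINT);                           /* the signal becomes pending */
        printf("SIGINT is pending, not yet handled\n");
        sigprocmask(SIG_UNBLOCK, &block, NULL);  /* the handler runs here */
        return 0;
    }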
A signal will be immediately detected by the process which receives it.
Depending on the signal type, the process might treat it with the default handler, might ignore it, or might execute a custom handler. It depends a lot on what the process is and what signal it receives. The exception is SIGKILL (signal 9), which is handled by the kernel and terminates the process that was supposed to receive it.

What happens to the rest of the stack during a signal handler?

I've set up a signal handler in my main thread. A separate thread then sends my main thread this signal. My signal handler is being called appropriately, but I'm not sure what the state of the main thread is at this point, and whether it can be recovered. Basically, my main thread was blocked on a read() call, and a different thread sent it a signal because of an extraordinary event. I therefore want the read() call to fail (EINTR?), hence the other thread sending the main thread this signal.
It depends on how you installed the signal handler. If the signal handler was installed using sigaction() and without specifying the SA_RESTART flag, then the read() will fail with EINTR if it has not transferred any data yet.
In general, the thread that has handled a signal can continue normally after the signal handler returns. That's really the whole point.
Remember, though, that the signal might also have arrived just after the read() had successfully returned, or worse, just before you called read() (in which case the read() will still block).
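A hedged sketch of that setup (the use of SIGUSR1, pthread_kill() and the helper thread are my own assumptions, not details from the question): the handler is installed with sigaction() and sa_flags left without SA_RESTART, so the blocked read() fails with EINTR when the other thread signals the main thread. Compile with -pthread.

    #include <errno.h>
    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* The handler does nothing; its only job is to interrupt the read(). */
    static void wake_handler(int signo) { (void)signo; }

    /* Helper thread standing in for the "extraordinary event". */
    static void *interrupter(void *arg)
    {
        pthread_t main_thread = *(pthread_t *)arg;
        sleep(1);
        pthread_kill(main_thread, SIGUSR1);   /* interrupt the blocked read() */
        return NULL;
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = wake_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;                      /* note: no SA_RESTART */
        sigaction(SIGUSR1, &sa, NULL);

        pthread_t self = pthread_self(), helper;
        pthread_create(&helper, NULL, interrupter, &self);

        char buf[64];
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);   /* blocks until signalled */
        if (n < 0 && errno == EINTR)
            printf("read() was interrupted: EINTR\n");

        pthread_join(helper, NULL);
        return 0;
    }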

process state in Unix

I need to understand what happens to a process in Unix when it calls the pause() function.
Consider a simple state diagram with three states: ready, run, and wait. If my program only prints its pid and then calls pause(), will it stay in the "wait" state indefinitely?
If it does while(1) { pause(); }, will it also stay in the "wait" state indefinitely?
From the manpage:
pause() causes the calling process (or thread) to sleep until a signal is delivered that either terminates the process or causes the invocation of a signal-catching function.
So the program may not be indefinitely in sleep state ("wait", to use your word). It will leave that state if a signal is received. However, if you enclose the pause() call in a tight infinite loop as per your example, the program will run again when a signal is received but promptly go back to sleep.
When signals are received during pause(), the signal handler (if any) will run, and control will return to the point right after the pause() syscall as soon as the handler returns.
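A minimal sketch of that behaviour (SIGUSR1 chosen purely for illustration): the program prints its pid, then loops on pause(); each delivered signal runs the handler, pause() returns, and the loop goes straight back to sleep.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void noop(int signo) { (void)signo; }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = noop;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGUSR1, &sa, NULL);

        printf("pid: %d\n", (int)getpid());   /* print the pid, as in the question */
        while (1)
            pause();                          /* sleep; wake briefly on each delivered signal */
    }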

Behaviour of a Unix process when a signal arrives and the process is already in a signal handler?

I have a process which is already in a signal handler and has made a blocking call. What will happen if another signal arrives for this process?
By default, signals don't block each other. A signal only blocks itself during its own delivery. So, in general, handler code can be interrupted by the delivery of another signal.
You can control this behavior by setting the signal mask applied during each signal's delivery. This means that you can block (or serialize) signal delivery. For instance, you can declare that you accept being interrupted by signal S1 while handling signal S2, but not the converse...
Remember that signal delivery introduces some concurrency into your code, so controlling the blocking is needed.
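As a hedged sketch of that idea (mapping S1 to SIGUSR1 and S2 to SIGUSR2 is my own choice): sa_mask declares which other signals stay blocked while a given handler runs.

    #include <signal.h>
    #include <unistd.h>

    static void on_usr1(int signo) { (void)signo; write(STDOUT_FILENO, "USR1\n", 5); }
    static void on_usr2(int signo) { (void)signo; write(STDOUT_FILENO, "USR2\n", 5); }

    int main(void)
    {
        struct sigaction sa1, sa2;

        sa1.sa_handler = on_usr1;
        sigemptyset(&sa1.sa_mask);         /* nothing extra blocked: SIGUSR2 may interrupt this handler */
        sa1.sa_flags = 0;
        sigaction(SIGUSR1, &sa1, NULL);

        sa2.sa_handler = on_usr2;
        sigemptyset(&sa2.sa_mask);
        sigaddset(&sa2.sa_mask, SIGUSR1);  /* SIGUSR1 stays blocked while this handler runs */
        sa2.sa_flags = 0;
        sigaction(SIGUSR2, &sa2, NULL);

        for (;;)
            pause();                       /* wait for signals to arrive */
    }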
I'm pretty sure signals are blocked while a handler is being executed, but I'm having a hard time finding something that says that definitively.
Also, you may wish to see this question - some of the answers talk about what functions you should and shouldn't call from a signal handler.
In general, you should consider a signal handler like an interrupt handler - do the very least you can in the handler, and return quickly.
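A small sketch of that advice (SIGUSR1 is an arbitrary choice): the handler only sets a sig_atomic_t flag, and the main loop does the real work outside signal context.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t pending = 0;

    /* The handler does the very least it can: it only sets a flag. */
    static void on_signal(int signo) { (void)signo; pending = 1; }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = on_signal;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGUSR1, &sa, NULL);

        for (;;) {
            pause();                      /* wait for a signal */
            if (pending) {
                pending = 0;
                printf("doing the real work in main, not in the handler\n");
            }
        }
    }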
