Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I learned how you can list all the processes and their id's using either of these commands:
ps
tasklist
But so far all I have seen people use a process id is to kill the process. Given a pid, is there any other purpose?
ps -> process status: lists currently executing processes by owner and PID (process ID) on Linux.
Uses of ps:
To display system processes
To find the PID needed for actions such as forcibly logging off a user or killing a particular process
The /proc directory contains subdirectories with unusual numerical names. Every one of these names maps to the process ID of a currently running process. Within each of these subdirectories, there are a number of files that hold useful information about the corresponding process.
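For example, here is a quick sketch of reading a process's /proc entry (this assumes a Linux-style procfs; the shell's own PID, $$, is used so the commands work against a process that is guaranteed to exist):

```shell
# Use the current shell's PID ($$) to explore its /proc entry.
pid=$$
# The command line the process was started with (NUL-separated on disk):
tr '\0' ' ' < /proc/$pid/cmdline; echo
# Name, state, and resident memory of the process:
grep -E '^(Name|State|VmRSS):' /proc/$pid/status
# File descriptors the process currently holds open:
ls /proc/$pid/fd
```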
To find a painfully slow process
To identify the top processes by CPU and memory usage, etc.
To see the parent/child hierarchy between processes
Combining ps with the watch command turns it into a realtime reporting tool
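The items above can be sketched with a few ps invocations (the --sort and --forest options assumed here are from GNU procps; BSD-flavoured ps spells these differently):

```shell
# Top 5 processes by CPU, then by memory:
ps aux --sort=-%cpu | head -6
ps aux --sort=-%mem | head -6
# Parent/child hierarchy of all processes:
ps -ef --forest
# Re-run the CPU report every 2 seconds as a crude realtime monitor
# (commented out because watch runs until interrupted):
# watch -n 2 'ps aux --sort=-%cpu | head'
```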
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
The H2O R package is a great resource for building predictive models, but I am concerned about the security aspect of it.
Is it safe to use patient data with H2O in terms of security vulnerabilities?
After data ingestion into H2O-3, the data lives in memory inside the Java server process. Once the H2O process is stopped, the in-memory data vanishes.
Probably the main thing to be aware of is that your data is not sent to a SaaS cloud service or anything like that. The H2O-3 Java instance itself handles your data. You can create models in a totally air-gapped, no-internet environment.
So the short answer is, it’s perfectly safe if you know what threats you are trying to secure against and do the right things to avoid the relevant vulnerabilities (including data vulnerabilities like leaking PII and software vulnerabilities like not enabling passwords or SSL).
You can read about how to secure H2O instances and the corresponding R client here:
http://docs.h2o.ai/h2o/latest-stable/h2o-docs/security.html
(Note if you have a high-value use case and want detailed personal help with this kind of thing, H2O.ai the company offers paid enterprise support.)
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
My question is basically whether it is safe, when using parallel processing in R, to have multiple threads accessing an SQLite database simultaneously.
I understand that SQLite is a file-level database, so every connection gets access to the whole db. So it is possible to have multiple connections open simultaneously (e.g., via the sqlite3 front end and, in R, via RSQLite's dbConnect() and dplyr's src_sqlite()). I guess that this is OK so long as there is a single user who can ensure that commands submitted one way are completed before other commands are submitted.
But with multithreading, it would seem possible that one thread might submit a command to an SQLite db while a command submitted by another thread might not have completed.
Does the underlying SQLite engine serialize received commands so that it is assured that one command is completed before the next one is processed, so as to avoid creating an inconsistent status of the database?
I have read the SQLite documentation on locking and "ACID," and as I understand this documentation, the answer appears to be "Yes."
But I want to be sure that I have understood things correctly.
Another question is whether it is safe to have separate threads submitting commands simultaneously that actually change the database.
Since one can't control the exact timing by which the two threads submit their commands, I assume that using parallel processes that might change an SQLite data table in an inconsistent way would not be a good idea -- e.g., having one thread insert a record into a table and another thread doing a SELECT on the same table.
Concurrent reads are okay, but writing locks the whole database for at least a few milliseconds. If you try to read while it is writing (or write while it is writing), an error (SQLITE_BUSY) is returned, which you can use to decide whether to retry the read/write operation. If this is for a relatively simple process, you should be fine with sqlite3. Source
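A small sketch using the sqlite3 command-line front end (assuming it is installed; the .timeout setting is the per-connection knob that makes SQLite retry the lock instead of failing immediately with "database is locked"):

```shell
db=/tmp/sqlite_demo_$$.db
sqlite3 "$db" 'CREATE TABLE t(x INTEGER);'
# Each invocation takes the write lock, commits, and releases it,
# so writers that run one after another never conflict:
for i in 1 2 3; do sqlite3 "$db" "INSERT INTO t VALUES ($i);"; done
# A reader sees a consistent committed snapshot:
sqlite3 "$db" 'SELECT count(*) FROM t;'
# Give this connection a 2000 ms busy timeout, so it retries the
# lock rather than returning SQLITE_BUSY right away:
sqlite3 -cmd '.timeout 2000' "$db" 'INSERT INTO t VALUES (4);'
rm -f "$db"
```

The equivalent from R would be setting a busy timeout on the RSQLite connection, but the locking behavior is the same engine-level mechanism either way.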
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
Hello, I am working on a web page in Drupal. One piece of content is about a scholarship, and only certain zip codes are eligible for it. I was wondering if there is a way to have a search box within that page where the user types in their zip code and is then told whether they are eligible. I was thinking of some JavaScript, but I was wondering if there are any better ideas. Thanks!
Sure, you could use JavaScript on the client side or PHP (as Drupal is written in PHP) on the server side. The trade-off with the JavaScript approach is that you'll have to send all the valid zip codes (or some rule that computes them) to the client every time your page is loaded. But the upside is that it will then be very fast for the client to try various zip codes (since no server communication is needed), and this may be easier for you to code.
For your use case, you'd probably get better overall performance doing this in PHP on the server. But then you'll need to be familiar with some form of client-server communication (Ajax, for instance) so that you can send the zip code back to the server and listen for a response.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
We have come across 2 ways to do cache breaking for our CSS files.
Cache breaker passed as a query parameter:
http://your1337site.com/styles/cool.css?v=123
Cache breaker as part of the name:
http://your1337site.com/styles/123.cool.css
Which way is better? And why?
I feel that the second way is more verbose, because the file name matches the name in the folder structure, whereas the first way is good if you want to reference "cool.css" from other parts of the site that don't have access to the unique name you generate each time.
Steve Souders's article Revving Filenames: don't use querystring makes a good argument for changing the filename as the better of the two.
...a co-worker, Jacob Hoffman-Andrews, mentioned that Squid, a popular proxy, doesn’t cache resources with a querystring. This hurts performance when multiple users behind a proxy cache request the same file – rather than using the cached version everybody would have to send a request to the origin server.
As an aside, Squid 2.7 and above does cache dynamic content with the default configuration
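A minimal sketch of filename revving (the paths are hypothetical; here the version token is derived from the file's content hash, so it changes only when the file's contents do, and md5sum is the GNU coreutils spelling — macOS uses md5 instead):

```shell
# Work in a throwaway directory with a sample stylesheet:
dir=$(mktemp -d)
printf 'body { color: #333; }\n' > "$dir/cool.css"
# Derive a short version token from the content:
ver=$(md5sum "$dir/cool.css" | cut -c1-8)
# Publish the revved copy, e.g. 1a2b3c4d.cool.css, and
# reference that name in your HTML:
cp "$dir/cool.css" "$dir/$ver.cool.css"
ls "$dir"
rm -rf "$dir"
```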
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I am working on an AIX 6.1 box, but my question probably applies to all Unix platforms. Unfortunately I haven't found any satisfactory answers on the web.
My question:
Whenever I log in to my AIX box (or, say, any Unix machine), I see a message like,
You have mail in /usr/spool/mail/root
Can anyone give a good explanation of this message: what is its purpose, and on what events is it displayed to the user?
It's the shell's mail checking. If the file named by the shell variable MAIL (defaulting to something like /var/spool/mail/username) is larger than it was the last time the shell checked, the shell echoes that message to let you know that new mail has arrived.
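A sketch of the mechanism (MAIL and MAILCHECK are the real shell variables; the mailbox path here is a stand-in, and the notification itself only fires at the prompt of an interactive shell):

```shell
# The shell polls the file named by $MAIL every $MAILCHECK seconds
# (60 by default) and prints the notice when the file has grown.
mailbox=/tmp/fake_mailbox_$$
: > "$mailbox"            # empty mailbox: nothing to report yet
MAIL=$mailbox
MAILCHECK=60
echo 'From someone@example.com' >> "$mailbox"
# In an interactive shell, the next prompt after the file grows
# would print something like: You have new mail in /tmp/fake_mailbox_...
rm -f "$mailbox"
```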
This is a holdover from the old CLI-only Unix days. When there were no GUI email clients yet and the user logged in to an account on a computer with an associated mailbox (at our school we still have these), it came in handy to notify the user of unread emails.