I see that the Logstash 1.4.2 tar install via the curl command below is around 140 MB, and I am wondering if there is a way to get a smaller-footprint download without the extra baggage of Kibana, Elasticsearch, and some of the filters, inputs, and outputs. Is it safe to purge the vendor directory?
The latest version, Logstash 1.5.0, appears to have grown even bigger and is about 160 MB.
I would appreciate any recommendations or input on this.
curl -s https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz | tar xz
Instead of manually deleting things from the Logstash distribution that you don't think you need in order to save a few tens of megabytes, just use a more lightweight shipper and do all processing on a machine that isn't so low on disk space. Some of your choices are logstash-forwarder, Log Courier, and NXLog. These are just a handful of megabytes each (and use far less RAM, since they don't run through the JVM).
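For example (a minimal sketch, not a drop-in config: the host, port, certificate path, and log paths below are placeholders you would adapt to your environment), a logstash-forwarder setup on a leaf node can be as small as:

cat > forwarder.conf <<'EOF'
{
  "network": {
    "servers": [ "logstash.example.com:5043" ],
    "ssl ca": "/etc/pki/logstash-forwarder/ca.crt"
  },
  "files": [
    { "paths": [ "/var/log/*.log" ], "fields": { "type": "syslog" } }
  ]
}
EOF
./logstash-forwarder -config forwarder.conf   # the binary itself is only a few megabytes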
Alternatively, NXLog's configuration language is quite rich, and you can probably use it to do the processing you need on your leaf nodes without a separate machine for log processing. NXLog's overhead is quite small.
I searched and there doesn't seem to be a core Julia way to limit the RAM used, so I looked for a Linux way instead.
According to this question, I can limit the RAM used by my command to 64 GB with:
ulimit -v 64000000
I am wondering if I do:
$ ulimit -v 64000000
$ julia
julia>
Am I doing this right, i.e., will everything I do in that Julia console, like launching a JuMP model, be limited to 64 GB of RAM?
ulimit sets resource limits for the shell in which it is run and for every process started from that shell, so a Julia session launched afterwards from the same shell (and anything it spawns, such as a JuMP solve) will inherit the 64 GB cap. Note that -v limits virtual address space rather than resident RAM, but in practice it bounds what Julia can allocate.
The one exception that comes to mind would be if you are running Julia processes on multiple nodes of a Linux cluster with, e.g., the Distributed stdlib: the limit only applies on the node (and shell) where you set it. But that is a rather niche case.
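As a quick sanity check (a sketch, assuming bash or a similar shell), you can confirm the limit is inherited by anything Julia spawns:

ulimit -v 64000000                     # caps virtual address space for this shell, in KiB under bash
julia -e 'run(`sh -c "ulimit -v"`)'    # a child shell started from Julia should print 64000000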
I am trying to perform a revision cleanup (tar compaction) on an AEM repository to reduce its size. The repository is 730 GB and the AEM version is 6.1, which is old. The estimated time for the activity was 7-8 hours, but we ran it for 24 hours straight and it is still running with no output. We have also tried all the recommended commands to speed up the process, but it is still taking too long.
Kindly suggest an alternative to reduce the size of the repository.
Adobe does not provide support for older versions, so we cannot raise a ticket.
Try checking the memory assigned to your machine; I mean the RAM given to the JVM. If you increase it, the compaction may take less time and finish.
The repository size is not big at all. Mine is more than 1 TB and it is working fine.
In order to clean your repo, you can try running the revision garbage collector directly from the AEM JMX console.
The only ways to reduce the data storage are to compact the repository or to delete content such as large assets or large packages. Run some queries to see which assets/packages are huge and delete them.
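If the online revision cleanup never finishes, offline tar compaction with oak-run is another option. A sketch, assuming AEM is stopped first, that you use the oak-run version matching the Oak release bundled with your 6.1 instance, and that the segmentstore sits in the default location:

java -Xmx8g -jar oak-run.jar checkpoints crx-quickstart/repository/segmentstore
java -Xmx8g -jar oak-run.jar checkpoints crx-quickstart/repository/segmentstore rm-unreferenced
java -Xmx8g -jar oak-run.jar compact crx-quickstart/repository/segmentstore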
Hope you can fix your issue.
Regards,
I'm currently in the process of transferring what should have been about 2.7TB of data to a 5TB backup disk. Everything looks like it has been going smoothly, except for one thing: my three 1TB source disks have apparently transferred 3.7TB of data so far, and it's still going...
That doesn't add up. All three sources and the destination are Mac OS Extended (one of the sources is non-journaled, but its finished destination folder still shows an amount of data equivalent to the source disk).
Does anyone know a potential cause of this, or what could be going on? Even if the sources were full to the brim, how am I ending up with almost a whole TB of extra data when copying between the same filesystems?
This last source disk is sitting at about 300/899 GB transferred, so there is still another 600 GB to move, pushing the eventual total above 4 TB from 3 x 1 TB source disks. I'm so confused...
This seems to have been caused by rsync making full copies of what the symlinks point to. Adding --no-links (or its short form --no-l) on top of -a solves the issue.
ref: https://serverfault.com/a/233682
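A sketch of what that looks like (the source and destination paths are placeholders):

rsync -a --no-links /Volumes/Source1/ /Volumes/Backup/Source1/   # --no-links makes rsync skip symlinks entirely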
Is this possible to do?
Conceptually, a solution should apply across a lot of possible configurations, ranging from two Vim instances running in separate virtual terminals in panes of a tmux window, to instances in separate terminals on separate machines in separate geographical regions, one or both connected over a network (in other words, the Vims are hosted by two separate shell processes, which they already would be under tmux anyhow).
The case that prompted me to ponder this:
I have two tmux panes, both with Vim open, and I want to use Vim yank/paste to copy content between the files.
But it only works if I've got them both running in the same instance of Vim, so I am forced to either:
use tmux's copy/paste feature to get the content over (which is somewhat tedious and finicky), or
use the terminal's (PuTTY, iTerm2) copy/paste feature to get the content over (which is similarly tedious but not subject to network latency; however, it only works up to a certain size of text payload, since the terminal does not know the contents of the parts of the file that are not currently visible), or
lose Vim buffer history/context, and possibly shell history/context, by reopening the file manually in one of the Vim instances in a split or tab and then closing the other terminal context (much less tedious than option 1 for large payloads, but more so for small ones).
This is a bit of a PITA and could all be avoided if I had the foresight to open my files from a terminal already running Vim, but workflow and habit rarely line up with what would have been convenient.
So the question is: does there exist a command, or the possibility of a straightforwardly constructed (shell) script, that allows me to join buffers across independently running Vim instances? I am having a hard time getting Google to answer that adequately.
In the absence of an adequate answer (or if it is determined with reasonable certainty that Vim does not possess the features to accomplish the transfer of buffers across its instances), a good implementation (bindable to keys) for approach 3 above is acceptable.
Meanwhile I'll go back to customizing my vim config further and forcing myself to use as few instances of vim as possible.
No, Vim can't share a session between multiple instances. This is how it's designed and it doesn't provide any session-sharing facility. Registers, on-the-fly mappings/settings, command history, etc. are local to a Vim session and you can't realistically do anything about that.
But your title is a bit misleading: you wrote "buffer" but it looks like you are only after copying/pasting (which involves "registers", not "buffers") from one Vim instance to another. Is that right? If so, why don't you simply get yourself a proper build with clipboard support?
Copying/yanking across instances is as easy as "+y in one and "+p in another.
Obviously, this won't work if your Vim instances are on different systems. In such a situation, "+y in the source Vim and system-provided paste in the destination Vim (possibly with :set paste) is the most common solution.
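You can check whether your current build has clipboard support from the shell:

vim --version | grep clipboard   # +clipboard means yes; a leading - means the feature is missing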
If you are on a Mac, install MacVim and move the accompanying mvim shell script somewhere in your path. You can use the MacVim executable in your terminal with mvim -v.
If you are on Linux, install the vim-gnome package from your package manager.
If you are on Windows, install the latest "Vim without Cream".
But the whole thing looks like an XY problem to me. Using Vim's built-in :e[dit] command efficiently is probably the best solution to what appears to be your underlying problem: editing many files from many different shells.
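Related to that, if your build has +clientserver (GVim/MacVim, or terminal Vim with X11 on Linux), you can push files from any shell into one already-running instance, which takes some of the foresight out of the single-instance workflow. A sketch:

vim --servername SHARED file1.txt            # the first invocation becomes the server
vim --servername SHARED --remote file2.txt   # later invocations open files in that same instance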
I'm working on an assignment where I need to remote-connect to my company's UNIX boxes and parse out a particular set of log entries. I've figured out a method for doing so with grep and the -C flag, but the version of UNIX installed on these machines doesn't support that functionality. One alternative I've considered is doing the work on my local machine through Cygwin and using the local version of grep to handle this task. However, these logs are especially large, upwards of 50 megabytes, and the connection to the boxes is very slow, so it would take several hours to complete the downloads.
My main question, is it possible to remote connect through SSH to a remote server, but be able to invoke the locally installed versions of certain programs? For example, if I SSH into the server, can I make use of the local version of Grep instead of the remote system's version of Grep?
I've attempted to do something similar using Awk and Sed but I haven't had much success. At this point, aside from a long period of downloading, I'm not sure what other options I have. Any advice? Thanks in advance. :)
Even if you could use a remote file with a local application, you'd still be downloading the entirety of the log files: ssh lets input/output pass between the boxes, but the remote machine still has to send the whole file over the wire to your local grep.
One alternative is to gzip the logfiles before sending them through ssh, e.g.
ssh user@remotebox 'gzip -c -9 logfile' | gzcat - | grep whatever
You'd still be sending the entirety of the log files, but log files tend to compress very well, so you'd only be sending a small fraction of the original data (e.g., a couple of megabytes vs. 50 uncompressed).
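Putting it together for your case (host, pattern, and path are placeholders), both the decompression and the -C context matching happen on your local machine:

ssh user@remotebox 'gzip -c -9 /path/to/logfile' | gzcat - | grep -C 5 'pattern'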
Or, in the alternative, you could try compiling gnu grep from source on the remote machines, assuming there's an appropriate compiler toolchain on those machines.
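A sketch of that, assuming a working cc/make and some way to fetch the tarball on the remote box (the version shown is just an example):

curl -O https://ftp.gnu.org/gnu/grep/grep-2.5.4.tar.gz    # example version; pick one the old toolchain can build
gunzip -c grep-2.5.4.tar.gz | tar xf -                    # portable extraction for older tar implementations
cd grep-2.5.4
./configure --prefix=$HOME/local && make && make install
$HOME/local/bin/grep -C 5 'pattern' /path/to/logfile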