I deploy applications to Unix boxes; we work with around 100 boxes. Say application A is deployed on 5 of them: Box1, Box2, Box3, Box4, Box5. Every time we deploy application A, we go to each of Box1-5 and check whether the deployed application started properly by looking in the Box1/A/B/C/logs folder on each and every box, and we do this for every single application.
Is there a way to pull the logs from all of Boxes 1-5 to my local machine, in a way that lets me search the logs by application A's name?
Thanks for your help in advance ...
Of course, your question is not specific enough for me to tell you exactly what to do in your particular case, but something very like this will nonetheless aggregate the data on your local stdout, after which you can process it locally as you like:
for I in $(seq 1 5); do echo "box$I:"; ssh username@box$I 'cat /var/log/mylog'; echo; done
Many variations on the theme are possible, but if you can get this one to work, you should soon be able to see how to adapt it to your own needs.
Note that for ssh to do its work without requiring a manual login on each machine, some setup is needed on both the local and remote boxes: see man 1 ssh and review the AUTHENTICATION section, especially the paragraph that speaks of the authorized_keys file.
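For instance, here is a minimal sketch that pulls each box's logs to a local folder and then searches them; the username, box names, and remote log path are placeholders you would replace with your own:

#!/bin/bash
# pull the application log folder from each of the five boxes into ./logs/boxN/
mkdir -p logs
for I in $(seq 1 5); do
  mkdir -p "logs/box$I"
  scp "username@box$I:/path/to/A/B/C/logs/*" "logs/box$I/"
done
# then search everything you pulled for application A by name
grep -ri "ApplicationA" logs/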
I'm looking for some software (web-based, ideally) that will allow me to spin up quick clones of my live website(s) onto a temporary URL, for client approval/testing purposes.
Ideally I'd be able to specify a Git feature branch to deploy to it, so my client only sees the exact features they are approving (as there could be lots of work in progress on the current test site).
As an example:
My client wants to see what a new module looks like on mysite.com.
Currently I would add the module to my local project and test it (WAMP), commit to a Git feature branch, push, then create a new staging/test server by dumping/tarring, uploading, creating hosting space, etc. etc.
This is really time consuming and expensive.
I don't even know if anything like this even exists but if it does I'd love to hear about it! Thanks.
Three options, from fastest/easiest to longest:
1. Create a skeleton of the website, only the basics.
1a. Use Node.js to deploy a temporary webserver on localhost (for example, you can deploy a Linux machine, reachable by IP with no domain name, where you can display your PoCs); see the sketch after this list.
1b. Display the PoC for your modules.
2. If that's not good enough, create a separate folder on the target (client) server where you can upload the testing website, and password-lock it (mydomain.com/testsite/).
3. Use a snapshot to make a snap of the client server (if you want to display the PoC on his server/website).
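For option 1, a minimal sketch of serving a single Git feature branch on a temporary local URL; the branch name, port, and choice of http-server are assumptions, and it presumes a static or pre-built site:

# check the feature branch out into a throwaway working tree
git worktree add /tmp/preview feature/new-module
# serve it at http://localhost:8080 (requires Node.js; http-server is one common choice)
npx http-server /tmp/preview -p 8080
# once the client has approved, clean up
git worktree remove /tmp/preview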
So, yeah, the form is saying:
Please contact the system administrator to change your publisher name
and I'm the system administrator for the project, and I don't know what the hell I'm doing. :)
I can change the name of the publisher, but not the URL. I guess this also changes some URLs in the dataset? Do I need to make the change via the API, or is there some GUI?
BTW, I'm using the DGU package.
You should ssh to your server and run commands like these to change the name, and optionally the title too with -t:
source /home/co/ckan/bin/activate && cd /vagrant/src/ckanext-dgu
python ../ckanext-dgu/ckanext/dgu/bin/publisher_rename.py $CKAN_INI highways-agency highways-england -t 'Highways England'
From: http://guidance.data.gov.uk/publisher_editing.html#rename-a-publisher
The problem is that it can take a few minutes to rename and reindex all the datasets, so this often can't be done during a web request. Ideally someone would code it as a background task so it could still be done from the form; in the meantime, DGU implemented this command-line script to do it, hence the need for a sysadmin.
I have a shared host with ASP.NET MVC. My worker process times out after 5 minutes, causing the site to take up to 30 seconds to restart. I can't edit these settings on shared hosting. I found some info online saying I can use a scheduled task that keeps hitting the site every few minutes, keeping it from going idle.
Executable C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Argument -c "(new-object system.net.webclient).downloadstring('http://[domain.tld][path][file_name]')"
I'm not sure what to put for the Executable and the Argument. Should I point it at the home page, or at a page with few views, like the privacy page?
What's a good-practice setup, using a scheduled task, to keep the site from going idle?
The executable and argument together form the command that the scheduler executes to make a request to a webpage and print out the data returned. For example, if you run this from a command-line terminal (assuming you have PowerShell), you should see a whole bunch of the JavaScript and HTML present on google.com:
powershell -c "(new-object system.net.webclient).downloadstring('https://google.com')"
I am not sure whether or not this is an acceptable practice to keep websites from going idle on shared hosting spaces.
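If your host lets you register the task yourself with the Windows Task Scheduler rather than through a control-panel form, a roughly equivalent one-liner would look like the following; the task name, the 4-minute interval (chosen to stay under your 5-minute timeout), and the URL are all placeholders:

schtasks /Create /SC MINUTE /MO 4 /TN "KeepSiteAlive" /TR "powershell -c \"(new-object system.net.webclient).downloadstring('http://example.com/')\""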
I want to make a new directory on the application server in an SAP system, and send my file into it.
For sending a file to an existing directory, I found and used this transaction:
CG3Z: /usr/sap/R3D/exe
But I cannot find a solution for creating a directory, neither with a transaction nor with ABAP code.
I know that we can see directories with AL11, but I want to make my own directory.
I searched SAP SCN and Stack Overflow but have not been able to find any similar problem.
Usually this is NOT done by application code but by a system administrator - otherwise you would have to add provisions for all supported operating systems. Also, there are a lot of other issues to take care of, like setting the proper file system permissions or making sure that a DFS is available on all application servers (writing stuff to application servers randomly depending on which server the user was logged on to usually won't do you any good). Have your system administrator setup a logical file name for you and use that.
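For completeness, the administrator's side of that setup is ordinary OS work on the application server. A minimal sketch, assuming a Unix host, SID R3D, and the standard <sid>adm user and sapsys group (the directory name is made up):

# run as the sysadmin on each application server (or on a share all of them mount)
mkdir -p /usr/sap/R3D/my_interface_files
chown r3dadm:sapsys /usr/sap/R3D/my_interface_files
chmod 775 /usr/sap/R3D/my_interface_files

Your ABAP code would then reach the directory only through the logical file name the administrator defines in transaction FILE, so the physical path can differ per system without code changes.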
I've found several rsync commands for moving my WordPress site from one local machine to a remote server. I've successfully used the following command, suggested by another Stack Overflow user:
rsync -aHvz /path/to/sfolder name#remote.server:/path/to/remote/dfolder
Would you say that it's enough, or would you suggest other attributes?
It's my understanding that an SSH connection would be safer. How does this command change if I want to make it over SSH? Also, are there other things to be done besides including the SSH command (like generating/installing the keys, etc.)? I'm just starting, so a detailed explanation for a noob would be very much appreciated.
Pakal
There are thousands of ways in which you can customize this powerful command. Don't worry about SSH; it uses SSH by default. The rest of the options depend on your requirements. You could consider '--perms' to preserve permissions, and similarly '-t' to preserve modification times, but note that the '-a' in your command already implies both.
Also, '-n' would show you a dry run of the transfer scenario, and the '-B' switch lets you override the checksum block size.
You should probably have a look at the options yourself and find the appropriate ones by running 'info rsync'.
The above command will use ssh, and I see no problems with its general usage.
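As for generating/installing the keys, here is a minimal sketch with OpenSSH, reusing the names from your command (skip the ssh-keygen step if you already have a key):

# generate a local key pair (accept the defaults; a passphrase is optional)
ssh-keygen -t ed25519
# install the public key into authorized_keys on the server
ssh-copy-id name@remote.server
# preview the transfer with -n (dry run), then run it for real
rsync -aHvzn /path/to/sfolder name@remote.server:/path/to/remote/dfolder
rsync -aHvz /path/to/sfolder name@remote.server:/path/to/remote/dfolder

Since rsync here already runs over ssh by default, the command itself doesn't need to change.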