I've found several rsync commands for moving my WordPress site from one local machine to a remote server. I've successfully used the following command, suggested by another Stack Overflow user:
rsync -aHvz /path/to/sfolder name@remote.server:/path/to/remote/dfolder
Would you say that it's enough, or would you suggest other attributes?
It's my understanding that an SSH connection would be safer. How does this command change if I want to make it over SSH? Also, are there other things to be done besides including the SSH option (like generating/installing the keys, etc.)? I'm just starting out, so a detailed explanation for a noob would be very much appreciated.
There are thousands of ways in which you can customize this powerful command. Don't worry about SSH; by default it's using SSH. The rest of the options depend on your requirements. You can consider '--perms' to preserve permissions, and similarly '-t' preserves times, though note that '-a' already implies both. I don't know if that's relevant to transferring a site.
Also, '-n' will show you a dry run of the transfer scenario. The '-B' switch allows you to override the default block size.
You should probably have a look at the options yourself and find the appropriate ones by running 'info rsync'.
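For example, a cautious first run over SSH might look like this (user, host, and paths are placeholders for your own):

# preview what would be transferred, without copying anything
$ rsync -aHvzn -e ssh /path/to/sfolder user@remote.server:/path/to/remote/dfolder
# if the preview looks right, run it for real (drop the -n)
$ rsync -aHvz -e ssh /path/to/sfolder user@remote.server:/path/to/remote/dfolder

As for keys: you don't strictly need them (rsync will prompt for your SSH password), but if you want passwordless transfers, generate a pair once with 'ssh-keygen' and copy the public half over with 'ssh-copy-id user@remote.server'.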
The above command will use SSH, and I see no problems with its general usage.
I'm new to Git and I'm working with WordPress themes.
I always used an FTP client to push every small change to my remote server... I mean, sometimes it was just one line of code, to check a CSS change. It was easy and nice, but there will always be problems with reverting changes, and since I'm learning Git, I want to change that.
I've found two ways to do it:
git-ftp
I've tried to connect my local repository to GitHub, with the intention of automatically pulling changes to my remote server from GitHub (it's not working yet; I need to configure it better)
BUT, do I have to commit every single small change? Because I can't just save a file and check the change with Browsersync on a second monitor; I would have to commit so many times. Also, which way will be better for me? Maybe there are other, better ways?
I really want to improve my workflow, but it looks like that's not easy, or I'm doing something wrong. I know about the existence of things like WP-CLI, webpack, and gulp, but I often create small websites and would probably spend more time configuring those tools than creating the theme. I also thought about working on localhost, but I really think that would overcomplicate things for me.
Really sorry if this is the wrong section, but I'm new on Stack Overflow - hey! I will be really thankful if you can help me, because I think I need the knowledge of someone experienced.
I'm not sure I'll be helpful, but I'll try:
First, even for a small project, I always prefer to install a local environment for testing. It avoids risks on your remote server!
You can take a look here: https://make.wordpress.org/core/handbook/tutorials/installing-a-local-server/
Then, if you have SSH access to your server, maybe you can try to configure it to push directly from your local environment to your remote server. Here is a simple tutorial: https://gist.github.com/noelboss/3fe13927025b89757f8fb12e9066f2fa
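If I remember right, that tutorial boils down to a bare repository on the server plus a post-receive hook; a rough sketch (user, host, and paths are placeholders):

# on the server: create a bare repository and a deployment hook
$ git init --bare ~/site.git
$ printf '#!/bin/sh\ngit --work-tree=/var/www/html --git-dir=$HOME/site.git checkout -f\n' > ~/site.git/hooks/post-receive
$ chmod +x ~/site.git/hooks/post-receive
# on your machine: add the server as a remote, then deploy with a push
$ git remote add production user@yourserver:site.git
$ git push production master

The single quotes keep $HOME from expanding locally, so it resolves to the server user's home when the hook runs.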
It depends on what remote system or VPS you are using.
It could be GCP, AWS, DigitalOcean, or WP itself.
It looks like you are using WordPress hosting for your website.
If so, you might use WP-CLI to log in to the server.
① As for the frequent testing and updating, it is a good idea to copy the remote project to your localhost. Run your web app using WampServer, then create a new repository on GitHub and connect it with your local folder.
Then you can use git to version-control your code, do pull and push, stash, or whatever.
And after testing, you can periodically upload the specific files or folders to the remote server via FTP or SFTP.
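A minimal sketch of that local git setup (the repository name is just an example):

$ cd /path/to/your/local/copy
$ git init
$ git add .
$ git commit -m "Initial import"
$ git remote add origin git@github.com:yourname/yoursite.git
$ git push -u origin master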
② Another way is to install Git Bash or the git software on your server side.
How depends on the OS you are using, whether it is Windows or Linux. On Ubuntu, for example:
$ add-apt-repository ppa:git-core/ppa
$ apt update; apt install git
Then create a new user and add it to the sudo group.
Create a repository on your server side and link it to the GitHub remote repository.
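Something like this on the server, assuming the theme lives in a GitHub repository (names and paths are placeholders):

$ cd /var/www/html/wp-content/themes
$ git clone git@github.com:yourname/yourtheme.git
# after pushing changes from your machine, update the live copy:
$ cd yourtheme
$ git pull origin master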
I am not sure whether the second way would work.
I recommend you try the first method.
Hope this could help. Happy coding.
Is there a simple way to have someone ssh into a machine and edit a text file while I view the text they are writing?
I am interested in using this for phone screen interviews. Basically, I would tell the candidate to ssh into a particular machine and edit/compile a source file to test programming ability. Ideally, I would like to see what they are typing in real time rather than only when they save.
Is there a simple solution for this?
screen is the solution to this conundrum; see the -x option for creating a shared session. You can create the session beforehand and have the dial-in [ :) ] account invoke screen -x -R to attach to it. (I habitually use screen -A -U -R -x.)
You could use GNU Screen's multiuser feature (also called session sharing). http://aperiodic.net/screen/multiuser
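A minimal sketch of such a shared session (account names are placeholders; note that on many systems the screen binary must be setuid root for cross-account sharing to work):

# interviewer starts a named session
$ screen -S interview
# inside that session, press Ctrl-a then : and enter these commands
:multiuser on
:acladd candidate
# the candidate, after sshing in, attaches to the same session
$ screen -x interviewer/interview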
It might be easier to set up VNC and watch them that way. TightVNC allows you to run a 'web' server that they can connect to via a browser. http://www.tightvnc.com/winst.php#start_java
I've had nothing but good experiences using a Google Doc for this. Realtime updating and collaboration for both parties, realtime cursor positions and highlighting for both parties, and it's stored somewhere really easy to get to afterwards, with revision history.
There's no syntax highlighting or auto-formatting, but honestly it's fine without them.
I want to dynamically edit/update the hosts file (/etc/hosts) to add a domain.
Editing the hosts file requires admin privileges. On Linux I can do this with the command
sudo gedit /etc/hosts
But I am trying to do this from a programming language.
How can I do it?
Your best bet is to use something like SSH and connect to the computer as root (or run sudo via system()), modify the file, then disconnect. The added advantage of this is the convenience of prompting the user for the password.
To do this without prompting, the user would have to set up some means of accomplishing it as root, i.e. setuid'ing a helper application, installing a passwordless key, modifying an LDAP tree, or various other ways. That's a little 'icky', for lack of a better term.
There's no way to make this work for a user who does not normally have privilege-escalation capabilities.
Your program will need to be running with appropriate privileges. One classic technique is to make the executable owned by root and set its setuid bit: when your program is run, it becomes root and can modify /etc/hosts.
That said, setuid code is risky. A bug in the code can cause the program to do something so bad that your system becomes unusable. Certain bugs can be exploited by malicious users to run arbitrary programs as root and take over your system.
You still must have the right permissions to edit the file.
To change the file, open it in append mode (i.e. mode "a" with fopen()) and write the new text to the end of the file.
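If shelling out is acceptable, the same append idea is a one-liner (the hosts entry is just an example):

$ echo "127.0.0.1 mydomain.local" | sudo tee -a /etc/hosts

sudo tee is used rather than a plain >> redirect because the redirect would be performed by your unprivileged shell, not by sudo.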
I'm assuming you are at a command prompt, where you could issue that sudo command...
Provided you have the access rights, as you claim you do, any programming language that can append a line of text to an existing text file (or create it if it doesn't exist, which is unlikely to be necessary) will work. You might have to give that program some additional rights, but that's a different topic!
Summary: what language do you know? => use that!
Amazon's official tools for interacting with EC2 are kind of clunky and a pain to deal with. I have to set up a bunch of environment variables, store separate private keys just for EC2, add extra items to my PATH, and so on. They all output tab delimited lines that are hundreds of characters long with no headings, so it's a bit of a pain to interpret them. Their instructions for setting up an SSH keypair give you one that isn't protected by a passphrase, rather than letting you use an existing keypair that you already have. The programs are all just a bit clunky and aren't very good Unix programs.
So, are there any easier to use command line tools for accessing EC2? I know there is ElasticFox, and there is their web based console, which do make the process easier, but I'm wondering if anyone else has written better command line tools for interacting with EC2.
I'm a bit late but I have a solution!
I found the same problems with the Amazon AMI tools. They're a decent reference implementation but very difficult to use, particularly when you have more than a couple of instances. I wrote a replacement command-line tool as part of another project, called Rudy, that addresses most of your concerns.
The commands are more intuitive than Amazon's AMI tools:
rudy-ec2 instances -C
rudy-ec2 groups -A -p 8080 -a 11.22.33.44 group-name
rudy-ec2 volumes -C -s 100
rudy-ec2 images
...
All configuration is in a single file (~/.rudy/config).
It can output in several formats (yaml, json, csv, tsv, and of course regular text):
rudy-ec2 -f yaml snapshots
---
:awsid: snap-2457b24d
:progress: 100%
:created: "2009-05-08T15:24:17.000Z"
:volid: vol-4ee10427
:status: completed
Regarding the private keys: there are no EC2 tools that let you create private keys with a passphrase for booting a public instance, because the API doesn't support it. However, if you create your own image, you can use your own private keys.
Here's more info:
GitHub Project
An introduction to rudy-ec2
ElasticFox is handy for most tasks. There are occasions, though, when a command-line tool is better suited. I personally use the boto library for Python. It is very easy to script all the required operations. You can also use it to upload/download files from S3. In general, I would say that a scripting language like Python or Ruby, together with an AWS library, is the best solution.
I personally use Tim Kay's Perl command-line tools and haven't used the original Java-based API for quite some time. Excellent for a UNIX environment.
Not command line, but take a look at what a free RightScale account will give you - much, much easier than command line or ElasticFox IMO.
About ec2-api-tools:
I agree that they are a bit too clunky; in particular I dislike the output of ec2-describe-instances.
I recently switched to python-boto which offers a very clean and easy to use interface to ec2.
About not being able to specify a passphrase for the ssh key generated by EC2:
That's not the case. You can change the passphrase of any ssh private key anytime, using:
ssh-keygen -p -f /path/to/keyfile
E.g.
ssh-keygen -p -f ~/.ssh/id_rsa
About uploading your own ssh key pair:
You can use ec2-import-keypair, like this:
for i in $(ec2-describe-regions | cut -f 2); do
    ec2-import-keypair --region $i mykey --public-key-file ~/.ssh/id_rsa.pub
done
The example above will upload the public key in ~/.ssh/id_rsa.pub to every region under the name "mykey". Remember that each region has its own keypairs.
In order to have the key installed in your ec2 instances, you'll have to pass the "-k mykey" option to ec2-run-instances.
Incidentally, uploading your own keypair is the only way to log in with the same key to all the instances in all regions. If you create the keypair from the web interface, you'll get a different key in every region.
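For example (the AMI ID here is only a placeholder; pick a region you've imported the key into):

$ ec2-run-instances ami-xxxxxxxx --region eu-west-1 -k mykey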
I have an open-source graphical system admin tool called EC2Dream that replaces the command-line tool. It installs on Windows, Linux and Mac OS clients and is written in Ruby and FXRuby. See www.ec2dream.com.
If you use Windows, try the tool linked below (part of the O2 Platform), which gives you an easy way to start and stop Amazon EC2 images. If you need to extend the tool, you can easily add new features, since it is just a C# script that is dynamically compiled and executed.
O2 Tool – Amazon EC2 Browser
Amazon EC2 Browser – Timer to Stop Instances
The problem with alternative libraries is that they are not always kept up to date, so if new features for AWS are released, you need to wait. You posted that your main problems are the bunch of environment variables, the extra items in your PATH, etc. We had this issue at BitNami, and it is the main reason we created BitNami Cloud Tools, which ships all of the AWS command-line tools together with preconfigured Java and Ruby language runtimes. You only have to download it, and everything you need will be installed in a folder without modifying your system configuration. We keep it regularly up to date.
There is an entire industry, called Cloud Management, that tries to solve this type of problem. Scalr and RightScale are the leaders in this sector (disclaimer: I work at Scalr).
Cloud management software is built on top of the Amazon EC2 API (and usually of other public IaaS like Rackspace) and provides an improved user interface along with automation tools like backups or SSH management, as you mentioned. They don't provide easier command-line tools stricto sensu; their goal is to make interaction with Amazon EC2 easier.
Different options are available in the market:
Scalr: available as a hosted service with a trial version. Otherwise you can download and install the source code yourself, as it is released under the Apache 2 license.
RightScale: while they are usually considered expensive for small businesses, they do offer a free account.
enStratus: they offer a freemium model like RightScale.
Like most *nix people, I tend to play with my tools and get them configured just the way that I like them. This was all well and good until recently. As I do more and more work, I tend to log onto more and more machines, and have more and more stuff that's configured great on my home machine, but not necessarily on my work machine, or my web server, or any of my work servers...
How do you keep these config files updated? Do you just manually copy them over? Do you have them stored somewhere public?
I've had pretty good luck keeping my files under a revision control system. It's not for everyone, but most programmers should be able to appreciate the benefits.
Read Keeping Your Life in Subversion for an excellent description, including how to handle non-dotfile configuration (like cron jobs, via the svnfix script) on multiple machines.
I also use subversion to manage my dotfiles. When I log in to a box, my confs are automagically updated for me. I also use GitHub to store my confs publicly, and git-svn to keep the two in sync.
Getting up and running on a new server is just a matter of running a few commands. The create_links script just creates the symlinks from the .dotfiles folder items into my $HOME, and also touches some files that don't need to be checked in.
$ cd
# checkout the files
$ svn co https://path/to/my/dotfiles/trunk .dotfiles
# remove any files that might be in the way
$ .dotfiles/create_links.sh unlink
# create the symlinks and other random tasks needed for setup
$ .dotfiles/create_links.sh
It seems like everywhere I look these days I find a new thing that makes me say "Hey, that'd be a good thing to use Dropbox for".
Rsync is about your best solution. Examples can be found here:
http://troy.jdmz.net/rsync/index.html
I use git for this.
There is a wiki/mailing list dedicated to the topic.
vcs-home
I would definitely recommend homesick. It uses git and automatically symlinks your files. homesick track tracks a new dotfile, while homesick symlink symlinks new dotfiles from the repository into your home folder. This way you can even have more than one repository.
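A rough usage sketch (the castle name "dotfiles" is just an example):

$ gem install homesick
# clone an existing dotfiles repository from GitHub and symlink it
$ homesick clone yourname/dotfiles
$ homesick symlink dotfiles
# start tracking another dotfile in that castle
$ homesick track .vimrc dotfiles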
You could use rsync. It works over ssh, which I've found useful since I only set up new servers with ssh access.
Or, create a tar file that you move around everywhere and unpack.
I store them in my version control system.
I use svn ... having a public and a private repository ... so as soon as I get on a server I just
svn co http://my.rep/home/public
and have all my dot files ...
I store mine in a git repository, which lets me easily merge everything apart from the system-dependent changes, yet still share the changes that I want.
I keep master versions of the files under CM control on my main machine, and where I need to, arrange to copy the updates around. Fortunately, we have NFS mounts for home directories on most of our machines, so I actually don't have to copy all that often. My profile, on the other hand, is rather complex - and has provision for different PATH settings, etc, on different machines. Roughly, the machines I have administrative control over tend to have more open source software installed than machines I use occasionally without administrative control.
So, I have a random mix of manual and semi-automatic process.
There is netskel where you put your common files on a web server, and then the client program maintains the dot-files on any number of client machines. It's designed to run on any level of client machine, so the shell scripts are proper sh scripts and have a minimal amount of dependencies.
Svn here, too. Rsync or unison would be a good idea, except that sometimes stuff stops working and I wonder what was in my .bashrc file last week. Svn is a lifesaver in that case.
Now I use Live Mesh which keeps all my files synchronized across multiple machines.
I put all my dotfiles into a folder on Dropbox and then symlink them on each machine. Changes made on one machine are available to all the others almost immediately. It just works.
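For example, something like this per machine (the folder layout is just one possibility):

$ mkdir -p ~/Dropbox/dotfiles
$ mv ~/.vimrc ~/Dropbox/dotfiles/vimrc
$ ln -s ~/Dropbox/dotfiles/vimrc ~/.vimrc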
Depending on your environment, you can also use (fully backed-up) NFS shares...
Speaking of storing dotfiles in public, there are
http://www.dotfiles.com/
and
http://dotfiles.org/
But it would be really painful to update your files manually, as (AFAIK) neither of these services provides any API.
The latter is really minimalistic (no contact form, no information about who made/owns it, etc.).
briefcase is a tool to facilitate keeping dotfiles in git, including those with private information (such as .gitconfig).
By keeping your configuration files in a public git repository, you can share your settings with others. Any secret information is kept in a single file outside the repository (it's up to you to back up and transport this file).
-- http://jim.github.com/briefcase
mackup
https://github.com/lra/mackup
lra/mackup is a utility for Linux & Mac systems that will sync application preferences using almost any popular shared storage provider (Dropbox, iCloud, Google Drive). It works by replacing the dotfiles with symlinks.
It also has a large library of hundreds of supported applications: https://github.com/lra/mackup/tree/master/mackup/applications