How to automate MAMP Pro virtual host creation through a command-line shell script?

I'm writing a script to create virtual hosts in MAMP Pro. I want them to be created programmatically and to appear in the GUI next to the ones I've created manually. I've found the following questions on SO:
Automatic Virtual Hosts with MAMP Pro?
Add MAMP Pro Vhosts with script
Here are my findings, so far:
I've found out that the hosts appearing in the MAMP Pro GUI are stored in ~/Library/Application\ Support/appsolute/MAMP\ PRO/settings3.plist. I've tried editing it, but I can't seem to get the entries right: the command PlistBuddy -c 'print ":virtualHosts"' settings3.plist fails with Print: Entry, ":virtualHosts", Does Not Exist
From the second question I've listed above, I found out that I can edit the httpd.conf files (one found in user library and one in the root library) through the GUI.
The hosts file including all of the IP addressing is in /private/etc/hosts
The questions are dead, even though I commented on the more recent one asking for an update on how the author solved his scripting problem in the end.
In the end, I can easily add the values into the hosts file and the vhosts.conf files to make the website work. My only problem is getting it to show up in the list with the other virtual hosts in the MAMP Pro GUI.
Update: After further investigation and experimentation, I worked out the process by which the virtual hosts are created: when I first create a host through the GUI, the settings3.plist file gets updated; when I hit "save", the hosts and httpd.conf files are updated accordingly. I understand that settings3.plist can be converted to XML through plutil -convert xml1 -o - settings3.plist > test.txt, edited, and converted back to binary through plutil -convert binary1 -o - test.txt > settings3.plist.
My problem with that is that, even though I got the gist of how the CF$UID references work in the XML format, I cannot write a script that understands the concept, checks the position of the values in the list, and then inserts the values accordingly. I even asked a question about it here: https://stackoverflow.com/q/33775025/1934402

The solution provided in Automatic Virtual Hosts with MAMP Pro? refers to MAMP PRO version 2.x, where host configuration is saved in settings.plist, an XML-format property list file; in MAMP PRO version 3.x, host settings are stored in settings3.plist, a binary-format property list file.
Even in this format you should be able to do:
/usr/libexec/PlistBuddy -c Print settings3.plist
and still get the contents of the file. Your problem arises from the fact that virtualHosts is no longer there, as you will see by running the above command. Its output, however, is not very helpful for understanding the structure of your plist file, so you could run:
plutil -convert xml1 -o - settings3.plist > ~/settings3.plist.xml
and then work out the structure of ~/settings3.plist.xml in order to find out which keys to use in PlistBuddy commands. It is a good idea to check the manual pages for plist and PlistBuddy. Do note that key names have changed and the structure is not that clear even in the xml file.
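Putting the pieces together, a minimal inspection workflow might look like this (a sketch assuming the default MAMP PRO 3.x location; back up the file first, since a bad write here can break the GUI):
cd ~/Library/Application\ Support/appsolute/MAMP\ PRO/
cp settings3.plist settings3.plist.bak                             # keep a backup before editing anything
plutil -convert xml1 -o ~/settings3.plist.xml settings3.plist      # readable copy to study the structure
grep -n 'CF\$UID' ~/settings3.plist.xml | head                     # locate the keyed-archive UID references
plutil -convert binary1 -o settings3.plist ~/settings3.plist.xml   # after editing, convert back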
I hope this helped. I will investigate further and modify this answer if I arrive at a full recipe for editing host details.

NGINX - /etc/nginx/conf.d is empty

I'm working with Nginx From Beginner to Pro. The book states that:
The /etc/nginx/conf.d folder contains two files, default.conf and
example_ssl.conf.
but when I open /etc/nginx/conf.d and run the list command, I see nothing but the dot entries . and .., which I can neither read nor open. So I'm wondering what is happening and why my files don't appear in the directory. My web searches have turned up nothing relevant.
It's also impossible to create a folder or a file in this directory.
For now I have created another folder and will probably use it for some custom configuration.
Today I ran into the same issue.
When I was trying to fetch this file using FileZilla, it was not showing up.
But then I logged in through PuTTY and ran the command
sudo nano /etc/nginx/conf.d
The file opened up with all its contents.
I hope this helps others.
PS: beginners probably shouldn't try to alter this file.
I had to find it only to increase the hash_bucket_limit inside the http object (because my domain name regex was longer than the default limit).
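For that last tweak: in stock nginx the relevant directive is presumably server_names_hash_bucket_size (the name above looks like a paraphrase), and it belongs in the http block of /etc/nginx/nginx.conf:
http {
    server_names_hash_bucket_size 128;   # raise from the platform default when server names are long
}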

How to recover deleted IPython notebooks

I have IPython Notebook through Anaconda. I accidentally deleted an important notebook, and I can't seem to find it in the trash (I don't think IPython notebooks go to the trash).
Does anyone know how I can recover the notebook? I am using Mac OS X.
Thanks!
This is a bit of additional info on the answer by Thuener.
I did the following to recover my deleted .ipynb file:
The cache is in ~/.cache/chromium/Default/Cache/ (I use Chromium).
Use grep in binary-as-text mode: grep -a 'import math' (replace the search string with a keyword specific to your code).
Edit the matching binary file in vim (it doesn't open in gedit).
The notebook JSON should start with '{ "cells":' and end with '"nbformat": 4, "nbformat_minor": 2}'; remove everything outside these start and end points.
Rename the file with an .ipynb extension and open it in your jupyter-notebook; it works.
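A condensed sketch of those steps, assuming Chromium's default cache path and 'import math' as a stand-in search string:
cd ~/.cache/chromium/Default/Cache/
grep -rla 'import math' .                  # -a treats binaries as text, -l lists only matching file names
vim <matching-cache-file>                  # trim to the JSON payload described above
mv <matching-cache-file> recovered.ipynb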
The "delete" functionality now sends the file to OS trash rather than permanently deleting it, see this PR: https://github.com/jupyter/notebook/pull/1968. So you can just open your Trash (wherever that is on your system) and restore it.
I think the easiest way (until the developers handle this issue) to retrieve your IPython history is to write it all into a file.
You need to check by the date you created your last script; obviously, it is going to be the last part of your IPython history.
To write your Ipython history into a file:
%history -g -f anyfilename
On Linux:
I made the same error and finally found the deleted file in the trash:
/home/$USER/.local/share/Trash/files
If you deleted it through the OS (rm file.ipynb), then you can probably get it back from the .ipynb_checkpoints/ directory next to the notebook. However, if you deleted it from the browser menu option, it is gone (by design!).
See discussion here: https://github.com/jupyter/notebook/issues/405
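A quick way to check for a checkpoint copy (the folder and file names follow Jupyter's defaults; the notebook path is a placeholder):
ls -a /path/to/notebook/dir/.ipynb_checkpoints/
cp /path/to/notebook/dir/.ipynb_checkpoints/mynotebook-checkpoint.ipynb ./mynotebook.ipynb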
If you use PyCharm, you can do the following.
Open the Local History view.
Select the version you want to roll back to.
On the context menu of the selection, choose Revert.
Worked for me!
Source: here
For the unlucky ones like me who deleted some files on JuliaBox (Jupyter for Julia), there is a solution. I successfully recovered all my deleted files.
Browsers store cache information about the pages you visit. You have to find your browser's cache folder (on Ubuntu with Chrome it was ~/.cache/google-chrome/Default/Cache) and grep for some text from your notebook in the binaries. Then cut out the part of the file that corresponds to your .ipynb.
https://groups.google.com/forum/#!searchin/julia-box/delete%7Csort:relevance/julia-box/Rt9LG9RldrU/3s_vVSrivJEJ
If you're using Windows, it sends the notebook to the Recycle Bin, thankfully. Clearly, it's a good idea to make checkpoints.
As long as your kernel is active, the code of each executed cell is stored in the input history list. This comes in handy when you accidentally delete a cell and want to retrieve its content.
_ih[-10:]  # code of the 10 most recently run cells (even if those cells have since been deleted)
If you are running JupyterLab on Linux like me: what I did was go into a command prompt and then into my trash folder.
Trash directories on Linux are typically
/home/$USER/.local/share/Trash
or, if you deleted something as root (e.g. deleted a file using Nautilus invoked via gksu), it is at
/root/.local/share/Trash
I ended up changing directories to /home/$USER/.local/share/Trash/files and my deleted notebook was there! Depending on how you access your backend, you could also try /home/jupyter/.local/share/Trash/.
PS: if you are having issues changing directories from Trash to files due to permissions, don't forget to become root:
sudo -i
then, after sudo -i, go up with:
cd ..
and then
cd home/jupyter/.local/share/Trash
cd files
Best of luck!
Sadly my file was neither in the checkpoints directory nor in Chromium's cache. Fortunately, I had an ext4-formatted file system and was able to recover my file using extundelete:
Figure out the drive your missing deleted file was stored on:
df /your/deleted/file/directory/
Switch to a folder located on another drive that you have write access to:
cd /your/alternate/location/
It is preferred to run extundelete on an unmounted partition. Thus, if your deleted file wasn't stored on the same drive as your operating system, it's recommended you unmount the partition of the deleted file (though you may want to ensure extundelete is already installed before proceeding):
sudo umount /dev/sdax
where sdax is the partition returned by your df command earlier.
Use extundelete to restore your file:
sudo extundelete --restore-file /your/deleted/file/directory/deleted.file /dev/sdax
If successful, your recovered file will be located at:
/your/alternate/location/your/deleted/file/directory/deleted.file
I had the very same problem and I ended up solving it this way. It might be the case for some of the folks too.

Testing plugins live with Varying Vagrant Vagrants

I'm currently trying to use VVV to develop and test my plugins. My host OS is Win10.
My plugins are in D:\Workshop\projects\vendor\module. I've used this folder structure for a long time, and it is really convenient, especially for use with Composer and friends.
Now I've installed VVV and created a site with VV. I want to test a plugin whose source code is in D:\Workshop\projects\XedinUnknown\my-project. So I create a symlink in D:\Workshop\projects\XedinUnknown\vvv-local\www\my-test-site\htdocs\wp-content\plugins that points to that project's folder. Alas, it doesn't work. If I SSH into VVV and ls /srv/www/my-test-site/htdocs/wp-content/plugins, I can see my-project there, but it points to ../../../../../../../XedinUnknown/my-project, which, of course, doesn't exist. If instead of a symlink I create a junction, it's just an empty file.
I suspect this has to do with how the Linux environment handles Windows symlinks, but I'm not entirely sure. Is it possible to make this work somehow? I really don't want to copy the whole project folder into VVV.
This is also addressed here.
So, it would seem I've found somewhat of a solution. I added a synced folder which maps to my projects home, and then created a symlink to that folder from the WP plugins directory, inside the VM.
Step 1 - Add Shared Folder
This should be done in a Customfile, as explained here. This file goes into the same directory as the Vagrantfile, i.e. it becomes the Vagrantfile's sibling. In my case, if you're following along from my question, it is in D:\Workshop\projects\XedinUnknown\vvv-local. Anything put here becomes global for the whole of VVV, which also gives you the ability to use different combinations of your projects on different websites. Add these contents to your Customfile, creating it if it does not exist:
config.vm.synced_folder "D:/Workshop/projects", "/srv/projects", :owner => "www-data", :mount_options => [ "dmode=775", "fmode=774" ]
Of course, you should replace D:/Workshop/projects with the path where you store your projects. Note the forward slashes (/); this works on both Windows and *nix. For a Windows-only configuration, I suspect you'd have to replace them with \\, because backslash is an escape character.
Step 2 - Add Link to Project
This should be done in your site's vvv-init.sh file. In my case this file was in D:\Workshop\projects\XedinUnknown\vvv-local\www\my-test-site\, because I want to create this symlink specifically for the my-test-site site. Note that your VVV path will probably be different, and it doesn't have to be inside the projects directory; it's wherever you cloned VVV into. Add the lines below to your site's vvv-init.sh file.
if [ ! -L "htdocs/wp-content/plugins/my-project" ]; then   # -L tests the symlink itself; -f would follow it and always fail for a directory target
    echo 'Creating symlink to plugin project...'
    cd ./htdocs/wp-content/plugins
    ln -s /srv/projects/XedinUnknown/my-project my-project
    cd -
fi
In the above snippet, change the path to your desired project path, keeping in mind that /srv/projects/ now maps live to the projects root on your host OS. You can also replace the second occurrence (last word) of my-project in ln -s /srv/projects/XedinUnknown/my-project my-project with whatever you want. As long as you don't change it later, your plugin should not suddenly get deactivated.
Also, from what I understand, vvv-init.sh runs during provisioning, not every time the machine is brought up. So if you want the code in there to run, you have to run vagrant up --provision from the VVV directory. If you don't want to re-provision, you can run it manually: SSH into VVV with vagrant ssh, then cd /srv/www/my-test-site (replace my-test-site with the name of your site), and run . vvv-init.sh.
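Side by side, the two ways to get the script to run (the site name is the placeholder from the example above):
# Option 1: re-run provisioning from the VVV directory on the host
vagrant up --provision
# Option 2: run the script by hand inside the VM
vagrant ssh
cd /srv/www/my-test-site
. vvv-init.sh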
Afterword
I am quite new to Bash scripting and don't know if my solution is the best one, so please feel free to suggest better versions of the Bash script. I also don't know Ruby and am new to Vagrant, so please feel free to suggest improvements to the Customfile; this is in essence the same as the Vagrantfile.
One possible issue I can anticipate with this solution (and this is inherent to the design of the filesystem architecture) is that if WordPress decides to make changes to your plugin, e.g. if you run a WP update, it will effectively delete all files in your project, including the repository. So, on the testing site, I would recommend using something like this. I am in no way associated with this plugin.

XenServer: can't find the /etc/xend/xend-config.sxp file

I am trying to create a bridge network. My problem is that I cannot find the /etc/xend/xend-config.sxp file on my XenServer.
Can someone give me a hand? Does someone know where I can find the xend-config.sxp file?
Note: I'm using XenServer 6.5.
Firstly, I assume you have connected to your server through a terminal, likely using ssh.
Use the command cd /etc/xend/; this will change into that folder.
To see the contents of the folder, type ls. If there is anything in the folder, such as the xend-config.sxp file, it will show up.
If the file does not show up, you can create it. It often seems to be the case that configuration files need to be created by oneself the first time.
You can do this by running the command touch xend-config.sxp.
Running ls should now show you the file, and running pwd will confirm that it was created at /etc/xend/.
To edit the file you can use an editor such as nano or vim or whatever your personal choice is, e.g. sudo nano xend-config.sxp will open xend-config.sxp in the nano text editor.
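Put together, the steps above amount to this minimal sketch (assuming the /etc/xend directory itself may also be missing on 6.5):
sudo mkdir -p /etc/xend            # create the directory if it does not already exist
cd /etc/xend
ls                                 # check whether xend-config.sxp is present
sudo touch xend-config.sxp         # create it if it is not
sudo nano xend-config.sxp          # then edit it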
I hope this helps

rsync error: failed to set times on "/foo/bar": Operation not permitted

I'm getting a confusing error from rsync and the initial things I'm finding from web searches (as well as all the usual chmod'ing) are not solving it:
rsync: failed to set times on "/foo/bar": Operation not permitted (1)
rsync error: some files could not be transferred (code 23)
at /SourceCache/rsync/rsync-35.2/rsync/main.c(992) [sender=2.6.9]
It seems to be working despite that error, but it would be nice to get rid of that.
If /foo/bar is on NFS (or possibly some FUSE filesystem), that might be the problem.
Either way, adding -O / --omit-dir-times to your command line will avoid it trying to set modification times on directories.
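For example, a minimal invocation with that flag (source, destination, and host are placeholders):
rsync -av --omit-dir-times /src/dir/ user@host:/foo/bar/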
The issue is probably due to /foo/bar not being owned by the writing process on a remote Darwin (OS X) system.
A solution is to set an adequate owner on the remote side.
Since this answer has been upvoted, and therefore has hopefully been useful to someone, I'm extending it to make it clearer.
The reason why this happens is that rsync is probably trying to set an arbitrary modification time (mtime) when copying files.
In order to do this, Darwin's utime() system function requires that the writing process's effective UID is either the same as the file's UID or the superuser's; see the opengroup utime page.
Check this discussion on rsync mailing list as reference.
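You can reproduce the underlying rule locally. Per POSIX utime(), setting an arbitrary mtime requires owning the file (or being root), while write permission alone only allows setting it to the current time; a hedged sketch:
sudo touch /tmp/demo && sudo chown root /tmp/demo && sudo chmod 666 /tmp/demo
touch -m -t 202001010000 /tmp/demo    # explicit time as a non-owner: 'Operation not permitted'
touch -m /tmp/demo                    # setting mtime to 'now' is allowed with write access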
As @racl101 commented on an answer, this problem might be related to the folder's owner. The rsync command should be run by the same user that owns the folder. If it's not the same, you can change it:
chown -R userCorrect /remote/path/to/foo/bar
I had the same problem. For me the solution was to delete the remote file and let rsync create it again.
The problem in my case was that the "receiver mountpoint" was incorrectly mounted: it was in read-only mode (for some strange reason).
It looked like rsync was copying the files, but it was not.
I checked my fstab file, changed the mount options to defaults, re-mounted the file system, and executed rsync again. All fine then.
I've seen that problem when I'm writing to a filesystem which doesn't (properly) handle times -- I think SMB shares or FAT or something.
What is your target filesystem?
This happened to me on a partition of type xfs (rw,relatime,seclabel,attr2,inode64,noquota), where the directories were owned by another user in a group we were both members of. The group membership was already established before login, and the whole directory structure was group-writeable. I had manually run sudo chown -R otheruser.group directory and sudo chmod -R g+rw directory to confirm this.
I still have no idea why it didn't work originally, but taking ownership with sudo chown -R myuser.group directory fixed it. Perhaps SELinux-related?
I came across this problem as well; the issue I was having was a permissions issue with the root folder that contained the files I was trying to send over. I don't care about that root folder being included in the rsync, I just care about what's in it. The error was coming from my command, where I needed to specify an additional / at the end. If you do not have that trailing slash, rsync will attempt to set times on the folder itself.
Example:
This will attempt to set times on html
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html
This will not
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html/
This error might also pop up if you run the rsync process on files that were recently modified in the source or destination, because rsync can't set the time on recently modified files.
I ran into this error trying to fix timestamps on a new macOS Monterey install, after the Migration Assistant decided to set all of them to the time the copy operation occurred instead of the original files' times.
anddam's answer did not help me, as the remote user used in the rsync command did match the owner of the directories and files.
After further research, I realised that I had no access to the Mac's Documents directory over SSH (error: ls: Documents: Operation not permitted).
I managed to fix the problem by opening System Preferences on the Mac, selecting Security & Privacy, going to the Privacy tab, selecting Full Disk Access, and checking the box next to sshd-keygen-wrapper.
It could be that you don't have privileges to some of the files. From an administrator account, try "sudo rsync -av ...". Alternately, enable the root account and sign in as root. That should allow you to completely hose your system and brute-force your rsync! ;-) I'm not sure if the above-mentioned --extended-attributes flag will help, but I threw it in too, just for good measure.
