I'm trying to evaluate CoreOS. It looks like an interesting product, and I wanted to start with something simple: bringing up networking. I got a static configuration to work by doing the following:
Create a static network file in the /etc/systemd/network/ folder.
It is my understanding that the important parts of the file name I drop into this directory are the number at the beginning (when there are multiple network files, it determines the order in which they are applied) and the ".network" suffix, which declares that this is a network configuration file.
The contents of /etc/systemd/network/10-static.network is as follows (yes, this is a very simple configuration):
[Network]
Address=192.168.1.102/24
Gateway=192.168.1.2
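For completeness, I understand a [Match] section is normally included as well, to pin the settings to a specific interface; something like this (the interface name is just an example):
[Match]
Name=eth0

[Network]
Address=192.168.1.102/24
Gateway=192.168.1.2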
I then tried starting the service: sudo systemctl start systemd-networkd
This actually worked and assigned a static IP address that was visible when running ifconfig.
Here is my problem: I rebooted the CoreOS virtual machine and noticed that the networking was no longer set up after the reboot. When I check the /etc/systemd/network/ folder, it is empty; my configuration file apparently disappeared on reboot.
Does anyone know why this would have happened?
Thanks in advance for any help on this!
You may need to remove the ISO image; CoreOS might be booting from the same ISO again instead of from the installed system. If you remove the ISO, the machine can boot from the new system on disk.
I ran into the same situation before.
Files on disk shouldn't disappear on you like that. Did you happen to PXE-boot this VM or somehow use a file system in RAM?
A better way to do this config is with cloud-config, which CoreOS uses to configure machines at boot. It's intended to provide a repeatable way to set up networking, mount disks and things like that. The steps that you completed manually can be done with cloud-config like this: https://coreos.com/docs/cluster-management/setup/network-config-with-networkd/
More info about cloud-config in general: https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/
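For example, a cloud-config along these lines (a sketch based on the linked docs; the interface name and addresses are placeholders) would recreate your static setup on every boot:

#cloud-config

coreos:
  units:
    - name: 10-static.network
      runtime: true
      content: |
        [Match]
        Name=eth0

        [Network]
        Address=192.168.1.102/24
        Gateway=192.168.1.2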
Related
I'm seeing some strange behaviour and I need some ideas.
I have a Red Hat 7 server with a standard Apache and PHP 7 installation.
The application running on this server uses PHPExcel to export data from a database to an Excel file.
Because the application is very old and not written by me, I don't want to migrate from PHPExcel to PhpSpreadsheet if there is any other option.
The server was also not installed by me, but I have sudo access.
What happens:
It looks like the server changes something about the httpd service every night, because every morning when I try to use the application I get an error saying PHPExcel cannot create the folder /tmp/xxxxxx. All I have to do is restart the httpd service with sudo and the application works fine until the next day.
I know this is not much information so if you need something more please ask.
I also know it's hard to help here, but perhaps someone has run into a similar problem in the past?
Thanks
Claus
In the end I have a solution, which is not my favorite, but it works.
I have changed the
sys_temp_dir
upload_tmp_dir
settings in php.ini to a different, self-created folder. It looks like the /tmp folder (which is the default) is changed every night by some other application running on the server, in such a way that PHP no longer has write access to it.
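For reference, the change in php.ini looks roughly like this (the folder path is just my example; any directory the apache user can write to works):

; point PHP's temp paths at a directory the web server owns
sys_temp_dir = "/var/lib/php/apptmp"
upload_tmp_dir = "/var/lib/php/apptmp"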
I am not very satisfied not knowing exactly what is going wrong with the /tmp folder, but since I am not the admin of the server, I think I have to be OK with it.
I work on a Symfony project using Vagrant. The host machine runs Windows. Due to the fact that the request time was very high, I decided to install the vendor files inside the VM, while the entire "rest" of the project remains inside the synced folder (project root => /vagrant).
Everything is working fine and the request time is under 100ms now. But there is one issue left: I have to install the vendor on my Windows machine first and then again in the VM, otherwise PhpStorm is not able to index the files correctly (I know, this is a logical consequence).
So my question is whether it is possible to host a project on the Windows machine, with the files under "C:\Users\SampleUser\Project\ProjectX" and the vendor installed under "/home/vagrant/vendor", and let PhpStorm index the files of both directories.
Otherwise I will have to live with this one, and code completion won't work.
Or I will have to install the libraries on both machines to improve the request time and have a more or less "good" workflow.
I hope I could explain well enough what my actual problem is.
Thank you very much for your time.
Had the same exact problem. Indeed a bummer.
One possible solution is to leave the vendor folder on the VM and manually copy it to your host machine.
Pros:
PHPStorm is able to index files
Cons:
If you add a dependency, you have to copy some parts of the vendor folder manually to the host machine
To those facing the same problem, I might advise SFTP (Tools -> Deployment -> Configuration in PHPStorm) - files can be transferred without leaving the IDE window. The only thing to do is get the VM box password, which is located at
%USERNAME%/.vagrant.d/boxes/your box/box version/virtualbox/Vagrantfile
Second solution: if you are using VirtualBox, you can use vm.synced_folder with type: "virtualbox" (the sync works both ways, host<->guest) and leave the vendor folder in your project so it syncs all the time; see the sketch after the pros/cons below.
Pros:
vendor folder always up to date, no manual work
Cons:
Horrible performance (tested myself)
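The relevant Vagrantfile line would be something like this (the paths are just placeholders):

config.vm.synced_folder ".", "/vagrant", type: "virtualbox"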
If you want to use non-virtualbox rsync (type: "rsync"), you will not get the ability to sync back from the guest (someone, please correct me if I'm wrong!), so you are left with the 1st solution.
It would be great if we could include the vendor folder directly from the VM (using some kind of rsync/symlink magic) to the "Languages & Frameworks -> PHP -> include path" list, at least when using VirtualBox, but oh well...
From my research I understand that VirtualBox synced folders have their permissions set up during the mounting process. I am unable to change them later; the permissions MUST therefore be the same for every single file/folder in the shared folder. When I try to change them, with or without superuser permissions, the changes are reverted straight away.
How can this work with, for example, the Symfony PHP framework, where different files/folders need different permissions? (i.e. app/console needs execute rights, but I don't want to have 7XX everywhere).
I have found in a different but similar question (Vagrant and symfony2) that I could set the permissions to 777 for everything in the Vagrantfile; however, this is not desirable, as I use Git behind my source code, which is then deployed to the live environment. Running everything under 777 in production is, nicely put, not correct.
How do you people cope with this? What are your permissions setups?
A possible solution could be using the rsync synced folder strategy, along with the vagrant rsync and vagrant rsync-auto commands.
In this way you'll lose bidirectional sync, but you can manage file permissions and ownership.
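A rough sketch of what that could look like in the Vagrantfile (the paths, owner, group and excludes are assumptions for illustration):

config.vm.synced_folder ".", "/vagrant",
  type: "rsync",
  owner: "vagrant",
  group: "www-data",
  rsync__exclude: [".git/"]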
I am in a similar situation. I started using Vagrant mount options, and found out that as I upgraded parts of my tech stack (Kernel, Virtualbox, Vagrant, Guest Additions) I started getting different behavior while trying to set permissions in synced folders.
At some point, I was perfectly fine updating a few of the permissions in my shell provisioner. At first, the changes were being reflected in the guest and the host. At another point in time, it was being done the way I expected, with the changes being reflected only in the guest and not the host file-system. After updating the kernel and VB on my host, I noticed that permission changes in the guest are being reflected on the host only.
I was trying to use DKMS to compile VBOX against an older version of my Kernel. No luck yet.
Now that I have a little more experience, I can actually answer this question.
There are 3 solutions to this problem:
Use Git on your host system, because Vagrant's basic shared folder setup somehow forces 777 (at least on Windows hosts).
Use Vagrant's NFS shared folders option (not available on Windows hosts out of the box); see the sketch after this list.
Configure a more complex rsync setup, as mentioned in Emyl's answer (slower sync speeds).
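For the NFS option, a minimal sketch (NFS requires a private network; the IP is a placeholder):

config.vm.network "private_network", ip: "192.168.56.10"
config.vm.synced_folder ".", "/vagrant", type: "nfs"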
I've set up a Vagrant box that runs my web server to host my Symfony2 application.
Everything works fine except the folder synchronization.
I tried 2 things:
config.vm.synced_folder LOCALFOLDER, HOSTFOLDER
config.vm.synced_folder LOCALFOLDER, HOSTFOLDER, type: "rsync"
Option 1: The first option works; I actually don't know how the files are shared, but it works.
Files are copied both ways, but the application is SUPER slow.
Symfony is generating cache files which might be the issue, but I don't really know how to troubleshoot this and see what is happening.
Option 2: The sync is only done one way (from my local machine to the Vagrant box), which covers most of the cases and is fast.
The issue is that when I use the Symfony command line on the Vagrant box to generate some files, they are not copied over to my local machine.
My question is:
What is the best way to get two-way syncing? With option 1, how can I exclude some files from syncing (as that might be the issue)?
With option 2, how can I make sure changes on the remote are copied to my local machine?
If the default synced folder strategy (VirtualBox shared folders, I imagine) is slow for your use case, you can choose a different one and, if you need to, maintain the two-way sync:
If your host OS is Linux or Mac OS X, you can go with NFS.
If your host OS is Windows you can instead choose SMB.
Rsync is very fast but, as you've pointed out, is one-way only.
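For a Windows host, the SMB variant looks roughly like this (the paths are placeholders; Vagrant will ask for your credentials and needs an elevated prompt):

config.vm.synced_folder ".", "/vagrant", type: "smb"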
As it doesn't seem Vagrant offers a "built-in" way to do this, here is what I did:
Configure a Vagrant rsync folder for the folders that will contain application-generated files (in Symfony2, that is your Bundle/Entity folder); see the sketch below. Note that I didn't sync the root folder, because some folders don't have to be rsynced (cache/logs...) and also because it was taking way too much time for the rsync process to parse all the folders/subfolders, when I know that only the Entity folder will be generated.
As the rsync has to be done from the Vagrant box to the host, I use the vagrant-rsync-back plugin and run it manually every time I use a command that generates code.
https://github.com/smerrill/vagrant-rsync-back#getting-started
Create a watcher on my local machine that tracks any change in the code and rsyncs it to the Vagrant box.
https://gist.github.com/laurentlemaire/e423b4994c7452cddbd2
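For step 1, the Vagrantfile entry could look something like this (the bundle path is just a placeholder for your own layout):

config.vm.synced_folder "src/Acme/DemoBundle/Entity", "/vagrant/src/Acme/DemoBundle/Entity",
  type: "rsync"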
Vagrant mounts your project root as the /vagrant folder inside the box as a two-way share.
You can run your commands there to get the required files synced. Any I/O will be damn slow (as you already mentioned), but you will get your files. For other stuff, use your one-way synced folder.
While developing websites (using Win7) I find myself changing the IP addresses in my hosts file quite often. I have a development environment on my machine, code on the testing server, code on the staging server and code on the live server. I toggle between these servers a bunch of times throughout the day. I normally manage this by having a slew of host entries commented out, uncommenting the one I want to use, then running ipconfig /flushdns and finally restarting my browser. Such a PITA!!
I've read that installing a proxy server locally would take several steps out of this process. What's the best proxy server (on win7) for this scenario?
You can set up your hosts file to point the domain to a LAN IP (just to prevent packets going somewhere in case you break something); then you would need a proxy that dynamically changes the destination. What you're looking for is a reverse proxy, such as Squid or Varnish. Unfortunately the setup of such a server is beyond my knowledge.
What you could do is create three hosts files (hosts_1, hosts_2 and hosts_3) in the hosts directory C:\Windows\System32\Drivers\Etc, containing the test, staging and live settings respectively.
You could then write a batch file (*.bat) which overwrites the real hosts file with the hosts setting of your choice, for instance with a user prompt.
@echo off
rem Ask which hosts variant to activate
set /p UserInputPath= Which hosts file? (1=test, 2=staging, 3=live)
cd C:\Windows\System32\Drivers\Etc
rem Overwrite the live hosts file with the chosen variant
xcopy /y hosts_%UserInputPath% hosts
rem Flush the DNS cache so the change takes effect
ipconfig /flushdns
Combine this example with some commands for killing the browser process, restarting it, etc.
Googling 'batch + processes' can help you.
Copy your normal hosts file into hosts.normal, hosts.testing, hosts.staging and hosts.live. For each file, have a .bat file which deletes the current hosts and copies the appropriate hosts.* file into hosts. Then you can just run the chosen batch file to switch configurations.
Additionally, pin the Command Prompt to the taskbar. This way you can right-click the icon and you should see the batch files in the context menu. Pin them and you'll have a quick-access menu for these configs in your taskbar.
But otherwise I'd just use web.config transforms to handle this kind of situation.
You can use http://hostprofiles.codeplex.com/, it's pretty handy.
Greetings Gijs