While developing websites (on Windows 7) I find myself changing the IP addresses in my hosts file quite often. I have a development environment on my machine, code on the testing server, code on the staging server and code on the live server. I toggle through these servers a bunch of times throughout the day. I normally manage this by keeping a slew of host entries commented out, uncommenting the one I want to use, running ipconfig /flushdns and finally restarting my browser. Such a PITA!!
I've read that installing a proxy server locally would take several steps out of this process. What's the best proxy server (on win7) for this scenario?
You can set up your hosts file to point the domain to a LAN IP (just to prevent packets going somewhere in case you break something), then you would need a proxy that dynamically changes the destination. What you're looking for is a reverse proxy, such as Squid or Varnish. Unfortunately the setup of such a server is beyond my knowledge.
What you could do is create three hosts files (hosts_1, hosts_2 and hosts_3) in the hosts directory, C:\Windows\System32\Drivers\Etc, containing the test, staging and live settings respectively.
You could then write a batch file (*.bat) which overwrites the real hosts file with the hosts setting of your choice, for instance with a user prompt.
@echo off
rem Ask which environment to switch to
set /p UserInputPath= Which hosts file? (1=test, 2=staging, 3=live)
cd C:\Windows\System32\Drivers\Etc
rem Overwrite the real hosts file with the chosen one
xcopy /y hosts_%UserInputPath% hosts
rem Clear the DNS cache so the change takes effect
ipconfig /flushdns
Combine this example with some commands for killing the browser process, restarting it, etc.
Googling 'batch + processes' can help you.
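For instance, a minimal sketch assuming Firefox (the process and executable names are placeholders for whatever browser you use):
taskkill /F /IM firefox.exe
start "" "C:\Program Files\Mozilla Firefox\firefox.exe"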
Copy your normal hosts file into hosts.normal, hosts.testing, hosts.staging and hosts.live. For each file, have a bat file which deletes the current hosts and copies the appropriate hosts.* file into hosts. Then you can just run the chosen batch file to switch configuration, as in the sketch below.
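For example, hosts.staging.bat could look like this (a sketch; it has to run from an elevated prompt, since the hosts file is protected):
@echo off
cd C:\Windows\System32\Drivers\Etc
del hosts
copy hosts.staging hosts
ipconfig /flushdns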
Additionally, pin Command Prompt to the taskbar. That way you can just right-click the icon and you should see the batch files in the jump list. Pin them and you'll have a quick-access menu for these configs in your taskbar.
But otherwise I'd just use web.config transforms to handle this kind of situation.
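With transforms, a file like Web.Staging.config overrides values per environment at publish time; a minimal sketch (the appSettings key here is made up):
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="ServerUrl" value="http://staging.example.com"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>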
You can use http://hostprofiles.codeplex.com/, pretty handy.
Greetings, Gijs
Am I overlooking something in the DrupalVM documentation? To run multiple instances, what steps do people follow?
I’ve seen mentions that after you’ve launched DrupalVM once, it’s quick to launch another instance.
Various approaches have had the same result, including some of the helpers on http://docs.drupalvm.com/en/latest/other/management-tools as well as the Vagrant plugin vagrant-cachier. With each, starting a new instance takes the same (very long) length of time….
First, do you really need to launch two identical machines at once? You can have multiple websites (vhosts) on one VM; that way you would save some computer resources (memory). Edit the hosts file on your (host) machine to match the web server settings where you defined your website.
But if you insist, it should be possible to copy the whole project dir, change the IP of one of those two machines (config.vm.network "private_network", ip: "192.168.something.something" in the Vagrantfile) and run them simultaneously.
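In the copied project's Vagrantfile that means changing just that one line; a sketch (the address is an arbitrary example, pick any unused IP on the same private subnet):
Vagrant.configure("2") do |config|
  # ...the rest of the copied configuration stays the same...
  config.vm.network "private_network", ip: "192.168.88.11"
end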
Trying to evaluate CoreOS. It really looks like it is an interesting product and I was trying to see about simply starting up networking. I got a static configuration to work by doing the following:
Create a static network file in the /etc/systemd/network/ folder.
It is my understanding that the important parts of the file name I drop into this directory are the number at the beginning (when I have multiple network files, it determines the order in which they are applied) and the ".network" suffix, which declares that this is a network configuration file.
The contents of /etc/systemd/network/10-static.network are as follows (yes, this is a very simple configuration):
[Network]
Address=192.168.1.102/24
Gateway=192.168.1.2
I then tried starting the service: sudo systemctl start systemd-networkd
This actually worked and assigned a static IP address that was visible when running ifconfig.
Here is my problem. I rebooted the CoreOS virtual machine and noticed that the networking was no longer set on reboot. When I check the /etc/systemd/network/ folder it is empty and my configuration file apparently disappeared on reboot.
Does anyone know why this would have happened?
Thanks in advance for any help on this!
You may need to remove the ISO image: CoreOS may be booting from the same live ISO every time, in which case changes made to the filesystem are lost on reboot. If you remove the ISO image, the system can boot from the installed copy instead.
I experienced the same situation before.
Files on disk shouldn't disappear on you like that. Did you happen to PXE-boot this VM or somehow use a file system in RAM?
A better way to do this config is with cloud-config, which CoreOS uses to configure machines at boot. It's intended to provide a repeatable way to set up networking, mount disks and things like that. The steps that you completed manually can be done with cloud-config like this: https://coreos.com/docs/cluster-management/setup/network-config-with-networkd/
More info about cloud-config in general: https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/
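A minimal sketch of the networkd part of such a cloud-config, mirroring the static configuration above (the interface name enp0s3 is an assumption; see the linked doc for the authoritative format):
#cloud-config
coreos:
  units:
    - name: 10-static.network
      content: |
        [Match]
        Name=enp0s3
        [Network]
        Address=192.168.1.102/24
        Gateway=192.168.1.2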
I've set up a Vagrant box that runs my web server to host my Symfony2 application.
Everything works fine except the folder synchronization.
I tried 2 things:
config.vm.synced_folder LOCALFOLDER, HOSTFOLDER
config.vm.synced_folder LOCALFOLDER, HOSTFOLDER, type: "rsync"
Option 1: The first option works; I actually don't know how the files are shared, but it works.
Files are copied both ways, but the application is SUPER slow.
Symfony is generating cache files which might be the issue, but I don't really know how to troubleshoot this and see what is happening.
Option 2: Sync is only done one way (from my local machine to the Vagrant box), which covers most cases and is fast.
The issue is that when I use the Symfony command line on the Vagrant box to generate some files, they are not copied over to my local machine.
My question is:
What is the best way to proceed with two-way syncing? With option 1, how can I exclude some files from syncing (as that might be the issue)?
With option 2, how can I make sure changes on the remote are copied to my local machine?
If the default synced folder strategy (VirtualBox shared folders, I imagine) is slow for your use case, you can choose a different one and, if you need, maintain the two-way sync:
If your host OS is Linux or Mac OS X, you can go with NFS.
If your host OS is Windows you can instead choose SMB.
Rsync is very fast but, as you've pointed out, is one-way only.
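For example, in the Vagrantfile (a sketch; NFS requires the box to have a private-network address, and the IP here is an arbitrary example):
config.vm.network "private_network", ip: "192.168.50.4"
config.vm.synced_folder ".", "/vagrant", type: "nfs"
# or, on a Windows host:
# config.vm.synced_folder ".", "/vagrant", type: "smb"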
As it doesn't seem Vagrant offers a "built-in" way to do this, here is what I did:
Configure a Vagrant rsync folder on the folders that will contain application-generated files (in Symfony2 that is your Bundle/Entity folder). Note that I didn't sync the root folder, because some folders don't have to be rsynced (cache/logs...) and also because it was taking way too much time for the rsync process to parse all the folders/subfolders when I know that only the Entity folder will be generated.
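If you do rsync a bigger folder, Vagrant's rsync__exclude option covers the exclusion half of the question; a sketch (the guest path is made up):
config.vm.synced_folder ".", "/var/www/app", type: "rsync",
  rsync__exclude: ["app/cache/", "app/logs/", ".git/"]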
As the rsync has to be done from the Vagrant box to the host, I use the vagrant-rsync-back plugin and run it manually every time I use a command that generates code.
https://github.com/smerrill/vagrant-rsync-back#getting-started
Create a watcher on my local machine that will track any change in the code and rsync it to the Vagrant box.
https://gist.github.com/laurentlemaire/e423b4994c7452cddbd2
Vagrant mounts your project root as the /vagrant folder inside the box as a two-way share.
You can run your commands there to get the required files synced. Any I/O will be damn slow (as you already mentioned), but you will get your files. For other stuff use your one-way synced folder.
I have a file on my desktop and I need to get it onto another server, but I have no means of getting it there (e.g. email/USB or anything like that).
The server is on the same network as me.
I have heard of a way that the file can be copied via the command line.
Would anyone have any information on this and if so could you please help me?
I'm not sure whether you have command line access to that server or not. If yes, are you accessing it via telnet or via SSH?
If SSH, you should be able to transfer the file via scp (secure copy), since it uses the same SSH connection you use to get your CLI. If you want to transfer your file from a Windows environment, you may want to look at WinSCP; otherwise do a man scp on your Linux or Unix server and, assuming you have it, you'll get the hang of it... it's not complicated.
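For instance, from the machine holding the file (username, address and destination path are placeholders for your own):
scp ~/Desktop/myfile.txt username@192.168.1.50:/home/username/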
If ssh is not an option, then you depend on the server having some service available for you to transfer the file, most obvious one being FTP.
Does that help?
How can I read a text file that resides on a remote machine? No share exists on that machine and I am not allowed to create any share or file on it. I am also not allowed to run any client program on the remote machine. My program is an ASP.NET application in C# running on an IIS web server. For Linux machines we used SSH connections and file reads were easy. Is there something similar available by default in Windows?
Thanks,
Sreejith
The first question to ask is if there's a good business reason to read that file. If yes, the IT people will have to allow you a reasonable solution to the problem.
I have frequently used SFTP (secure FTP) for this kind of problem. Unfortunately SFTP is not part of Windows, but there are free and low-cost SFTP servers available. Here's a list from Wikipedia.
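Once an SFTP server is in place, reading the file from C# is a few lines with a third-party client library; a sketch using the SSH.NET library (host, credentials and remote path are placeholders):
using Renci.SshNet;

// Connect to the SFTP server and read the remote file's contents.
using (var client = new SftpClient("remotehost", "myuser", "mypassword"))
{
    client.Connect();
    string contents = client.ReadAllText("/data/report.txt");
    client.Disconnect();
}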
Explain to IT why you need access to that file and discuss options including SFTP. If you have a valid business reason for this and they will "not let you because of policy", it's the job of your project manager or boss to clear out that roadblock. Ask them to help.
Finally, consider whether it's practical for the file on the remote machine to be pushed to you instead of you pulling it. If you can setup a file share on your PC, ask them to setup a job on the remote server that copies the file to your file share every time it is changed.
You could try accessing the admin share of the machine. Windows by default creates a share for each disk (named C$, D$, etc.). But in that case the application you write should be running with the credentials of a user with rights to that share ((local) administrators have sufficient rights to do that).
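Reading through the admin share is then an ordinary UNC path read; a sketch (the machine name and file path are placeholders, and the IIS application pool identity needs rights on the share):
using System.IO;

// Read straight from the remote machine's administrative share.
string text = File.ReadAllText(@"\\REMOTEMACHINE\c$\logs\app.txt");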
If that doesn't work, you need to create a share or install software to get files from that machine (like FTP). This is all because of security; it's a good thing you are not able to just read a file from any machine...
I have done this many times with the Remote File port 34.
http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers