I am working on a Raspberry Pi for a POC demo.
My Raspberry Pi needs to be set up as a hotspot, and that went fine following this tutorial: https://www.raspberrypi.org/documentation/configuration/wireless/access-point.md
However, I can't easily switch between "normal" Wi-Fi and the hotspot. I need to get back to normal behaviour to download packages to the Raspberry Pi, for instance.
I found http://sirlagz.net/2013/01/22/script-starting-hostapd-when-wifi-goes-down/, but typing the same commands does not seem to do the job.
What I've tried:
Raspberry Pi set up as a hotspot
Stop dnsmasq and hostapd
Edit /etc/dhcpcd.conf (to remove the static IP configuration)
Restart dhcpcd
I can see the Raspberry Pi is connected to the correct Wi-Fi, but apparently I have no internet connection and can't download any packages.
Maybe there is something more to do with iptables, but I really don't know much about that and I'd prefer not to wreck my whole configuration.
Any idea about the procedure to switch between the two "modes"?
Cheers
I have found a solution that works perfectly.
Disable the access point:
sudo systemctl disable hostapd dnsmasq
Comment out the static IP config in /etc/dhcpcd.conf
sudo reboot
Enable the access point:
sudo systemctl enable hostapd dnsmasq
Uncomment the static IP config in /etc/dhcpcd.conf
sudo reboot
The difference is that instead of just stopping the two services, I completely disable them.
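For reference, a minimal sketch of the two procedures, assuming the wlan0 interface and the 192.168.4.1/24 address from the tutorial linked above (adjust the block to match your own /etc/dhcpcd.conf):
# Switch to normal Wi-Fi client mode
sudo systemctl disable hostapd dnsmasq
# In /etc/dhcpcd.conf, comment out the static block, e.g.:
#interface wlan0
#    static ip_address=192.168.4.1/24
#    nohook wpa_supplicant
sudo reboot

# Switch back to access point mode
sudo systemctl enable hostapd dnsmasq
# In /etc/dhcpcd.conf, uncomment the same block again
sudo reboot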
I have installed it with the command "sudo yum install nginx", and it is visible from the host computer using its own IP in the browser, but from another computer on the same LAN (which can ping it) it doesn't work and answers with a timeout error. I know there is an /etc/nginx/nginx.conf file, but I didn't see any configuration in it that would resolve this (or I didn't search very well).
The machine has internet access and can ping other machines on the LAN.
Could somebody guide me?
I use VirtualBox to run Fedora.
Thank you, here is my nginx.conf.
First of all, are you using bridged mode in VirtualBox? If so and it is still not working, check whether Fedora has the firewall enabled by typing in a shell:
systemctl status firewalld.service
If it is active, check the zone where the main adapter is configured:
firewall-cmd --get-active-zones
Add ports 80 and 443 to the zone related to your interface (for instance FedoraWorkstation):
firewall-cmd --zone=FedoraWorkstation --permanent --add-port=80/tcp
firewall-cmd --zone=FedoraWorkstation --permanent --add-port=443/tcp
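Note that --permanent rules only take effect after a reload (or a restart of firewalld), so you may also need:
firewall-cmd --reload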
This should do the trick
In the end, the problem was that Apache is installed by default on Fedora Workstation, and the main page was showing Apache as the current web server, so none of the changes made in nginx were being loaded. Solution: remove Apache from the system and reboot. Now nginx loads as the main web server and the changes made are applied.
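On Fedora the Apache package is httpd, so a sketch of that removal (assuming the stock package name) would be:
sudo dnf remove httpd
sudo reboot
Alternatively, stopping and disabling it (sudo systemctl disable --now httpd) frees port 80 without removing the package.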
I have created a virtual machine instance from a snapshot taken from the production server. The SSH key is set, but I am unable to SSH into the instance, either from PuTTY or from the Google Cloud SSH option in the browser.
I have searched around and found out that the issue is a new release which does not set the default IP gateway for the instance. I have set the IP gateway and restarted the instance, but it still shows the same error.
I have also checked the firewall rules, and port 22 traffic is allowed to the instance.
All other instances in the same zone work over SSH; only the instance newly created from the snapshot does not.
Looking into the logs from the serial port shows: ifup: failed to bring up lo
Image of the error
@Patrick's answer helped me get to the solution; here are the explanatory steps.
1) Serial Console.
Go to your instance details and enable the serial port.
Connect to your instance using the serial port and log in with your user and password.
If you do not have a user, create one with the following script as a startup script:
#!/bin/bash
# Create a user in the sudo group and set its password
sudo useradd -G sudo user
echo 'user:password' | sudo chpasswd
Run sudo systemctl status networking.service to check the networking status.
Remove the /etc/network/interfaces.d/setup file, then edit your /etc/network/interfaces:
auto lo
iface lo inet loopback
Restart the networking service by running sudo systemctl restart networking.service
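For reference, a complete minimal /etc/network/interfaces could look like the following; the eth0 stanza is an assumption based on a typical single-NIC GCE instance using DHCP, so adapt it if your interface is named differently:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp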
2) The following startup script also worked for me:
#!/bin/bash
sudo dhclient eth0
It seems the issue here is that the network interface of your new instance is not coming up. You can try one of two steps:
1) Try connecting through the serial console. This does not connect through port 22 or use SSH. However, if the network card is not coming up at all, this may also fail.
2) Add a startup script to the instance which will run the commands you need to configure the network card.
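If you manage the instance with the gcloud CLI, one way to attach such a startup script (and to enable the serial console mentioned in option 1) is roughly the following, where my-instance and startup.sh are placeholders:
# Attach a startup script; it runs on the next boot, so reset the instance afterwards
gcloud compute instances add-metadata my-instance --metadata-from-file startup-script=startup.sh
# Enable interactive access to the serial console
gcloud compute instances add-metadata my-instance --metadata serial-port-enable=TRUE
gcloud compute instances reset my-instance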
I could connect to CoreOS through PuTTY on Windows 10.
But after changing from DHCP to a static IP in CoreOS,
I suddenly became unable to use SSH through PuTTY (I cannot connect to CoreOS through PuTTY on Windows 10).
I wonder why this happened, and how I could solve this problem.
I investigated the status of ssh in CoreOS, and it says inactive.
What should I do to solve this problem?
If anyone knows please help me.
I have no clue... TT
If your sshd is inactive, you might be able to restart it. I'd be interested whether you used networkd (as documented here) when you changed from DHCP to static IP, as I think that should have been picked up automagically by CoreOS.
If you are seeing that the following command shows sshd as "inactive (dead)":
sudo systemctl status sshd
You can start sshd with:
sudo systemctl start sshd
And just in case you need it here is documentation on how to customize the ssh daemon.
Are you sure that your network unit was formatted correctly and is being accepted?
Did you restart networkd afterwards if you added the network unit manually? sudo systemctl restart systemd-networkd
Are you using cloudconfig to add the network unit? See if there is anything in the journal: journalctl _EXE=/usr/bin/coreos-cloudinit
You can also validate your cloud-config here: https://coreos.com/validate/
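For comparison, a minimal static-IP network unit for systemd-networkd might look like the following; eth0 and the addresses are placeholders, and the file would go in /etc/systemd/network/ (or in a cloud-config units section) before restarting systemd-networkd as above:
# /etc/systemd/network/10-static.network
[Match]
Name=eth0

[Network]
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=8.8.8.8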
How do I run Meteor on a different port, for example port 80?
I tried to use meteor --port 80, but I get this error: Error: listen EACCES
Help me please.
Sounds like it might be an access issue on your machine.
Check out the following answer, which might be related to your question. Quoting:
"As a general rule, processes running without root privileges cannot bind to ports below 1024.
So try a higher port, or run with elevated privileges via sudo."
So sudo meteor with your port number will work, but it is better to address the root cause rather than running Node with root privileges.
Node.js EACCES error when listening on most ports
You can't bind to ports < 1024 on Linux/Unix operating systems with a non-privileged account.
You could get around this by running meteor as root, but this is a really bad idea. In development mode, running as root will modify the permissions of the files under your application directory. In production, it's just a giant security hole. Never run a meteor app as root.
Listed below are the best practices depending on your environment.
Development
Run meteor on a high port number. The default is 3000 when you don't give a --port argument. Connect to it via the URL printed in the console - e.g. http://localhost:3000/.
Production
Here you have two choices:
Run meteor on a high port number and connect it to the outside world via a reverse proxy like nginx or HAProxy (a minimal nginx sketch follows this list).
Start the webserver as root but step down the permissions once it's running using something like userdown. This is how mup works which, incidentally, is probably what you should be using to deploy your app.
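As a rough illustration of the first option, here is a minimal nginx server block that proxies port 80 to a Meteor app listening on port 3000; example.com is a placeholder, and the Upgrade/Connection headers are needed because Meteor uses websockets:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}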
Run it with sudo:
sudo meteor --port 80
The meteor run --port 8080 terminal command can be used.
When I run meteor, it just says killed.
Did DigitalOcean install a new firewall or block some ports recently? I'm pretty sure these used to work. ping www.google.com seems to work fine.