As you may have guessed, I need a little assistance here.
I have four routers (TP-Link WDR3600) that I need to use to create an ad-hoc network. For simplicity, I am currently only dealing with two of the four routers. All of the routers have OpenWRT Chaos Calmer 15.05 installed, and all of them are running the OLSR routing protocol. My question is super simple, but the answer eludes me, and I would love some direction on the matter.
How do I get these two (and eventually four) routers to talk to each other using HNA (Host and Network Association) and the setup specified above?
Edit: they need to be connected to each other wirelessly too. End edit.
I have followed this specific guide to a T, but as soon as it gets to "HOW TO Step 4" the guide breaks down in practice, because the file it points to (/etc/olsrd.conf) does not exist in my setup. When I continue anyway and run "olsrd start", it spits out an error. Notice how it says "Could not find specific config file /etc/olsrd/olsrd.conf", and how that differs from earlier, when the guide asked me to modify "/etc/olsrd.conf".
In addition, the folder "/etc/olsrd" does not exist either, in case you are wondering. I'm at a loss here. Does anybody have any input on the matter? I'm certain that I'm missing something simple.
Thanks in advance.
I had to create /etc/olsrd.conf using the template provided and uncomment the third line of /etc/config/olsrd. I would also recommend installing olsrd-mod-httpinfo using opkg like he recommends; see the sketch below.
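For reference, the plugin install plus an HNA announcement in /etc/olsrd.conf looked roughly like this for me (a sketch; the 192.168.1.0/24 subnet is only an example, substitute whatever LAN each router should announce to the mesh):

opkg update
opkg install olsrd-mod-httpinfo
cat >> /etc/olsrd.conf <<'EOF'
Hna4
{
    # announce this router's LAN subnet to the rest of the mesh
    192.168.1.0   255.255.255.0
}
EOF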
One thing I noticed is that he never specifies giving the wireless interface (wlan0 in my case) an IP address to communicate with the mesh. Since I believe that is required, I had to use LuCI to give the interface an IP. I think I have my setup working, but right now I am trying to get my new OpenWRT node to communicate with my previous DD-WRT nodes. I might just have to change them all to OpenWRT, since it offers more "customization" due to its bare-bones style of configuration.
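For anyone who prefers the shell over LuCI, the equivalent UCI commands look something like this (a sketch, assuming the ad-hoc interface is wlan0 and a made-up 10.0.0.0/24 mesh addressing plan):

uci set network.mesh=interface
uci set network.mesh.ifname='wlan0'
uci set network.mesh.proto='static'
uci set network.mesh.ipaddr='10.0.0.1'
uci set network.mesh.netmask='255.255.255.0'
uci commit network
/etc/init.d/network restart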
Can you try running:
/usr/sbin/olsrd -d 2 -f /etc/olsrd.conf
I am looking for a way to programmatically get the name of the vendor that owns a MAC address within a block/range that they purchased. Preferably by querying some API or database, language agnostic. Or if there is some other way that applications do it that I am unaware of.
For example, running nmap -sn 192.168.1.0/24 with root privileges yields
...
Nmap scan report for 192.168.1.111
Host is up (0.35s latency).
MAC Address: B8:27:EB:96:E0:0E (Raspberry Pi Foundation)
...
... and that tells me that the Raspberry Pi Foundation "owns" that MAC Address, within the prefix range that they own: B8:27:EB.
However, I am not sure how nmap knows this, nor how I could find this out myself. Parsing nmap output is not an ideal solution for me. Here's what I found from digging online:
This stackoverflow question references a site that appears to do this; however, it does not seem to have been updated since 2013, nor does it expose any API endpoints. Most notably, it does not have the newer block of MAC addresses that the Raspberry Pi Foundation reserved for their newer models (under Raspberry Pi Team, or something along those lines).
I found that the IEEE handles these registrations through their site, however it appears to be for their customers and I could not find an exposed endpoint for their search function.
On that same IEEE page linked above, it looks like I can get a CSV file of their entire database. However, that seems large and would have to be actively kept up to date. Does nmap ship with a local database generated from those files?
If a public-facing API like I'm envisioning doesn't exist, I'll make one myself for fun. I'd first like to know if I'm thinking about this wrong and if there is an official, "canonical" way that I have not found. Any help would be appreciated, and thank you.
The maintainers of nmap keep a list of prefixes as part of the tool. You can see it here:
https://github.com/nmap/nmap/blob/master/nmap-mac-prefixes
They keep this up to date by periodically importing the public registry on this site:
https://regauth.standards.ieee.org/standards-ra-web/pub/view.html#registries
Note that those files are rate-limited so you should not be querying those csv files ad hoc as part of a software package; rather you should do what nmap does and keep an internal list that you synchronize periodically.
I'm not aware of a publicly available tool to query them as an API; however, creating one that works the same way nmap does would be fairly trivial. nmap does not update that file more than once or twice a year, which suggests the list doesn't change often enough for maintaining your own copy to be onerous (you could even just download nmap's list every so often).
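In fact, a lookup against nmap's bundled list is nearly a one-liner; a minimal sketch, assuming the file lives at /usr/share/nmap/nmap-mac-prefixes (the path varies by distribution):

mac_vendor() {
    # keep the first three octets, strip separators, uppercase,
    # then match the start of a line in nmap's prefix list
    prefix=$(echo "$1" | tr -d ':-' | cut -c1-6 | tr 'a-f' 'A-F')
    grep "^$prefix" /usr/share/nmap/nmap-mac-prefixes
}

mac_vendor b8:27:eb:96:e0:0e    # prints "B827EB Raspberry Pi Foundation"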
I am looking for something basic, yet I have not found even a single piece of good documentation on getting it done.
I want to allocate a floating IP, then associate it to a network interface of a droplet other than eth0.
The reason is that I want the ability to switch from one IP to the other very easily from a programming language.
In a few words, I want to be able to run these two commands and have each return a different response.
curl --interface eth0 https://icanhazip.com
curl --interface eth1 https://icanhazip.com
Also, I want to know what to do once I release the floating IP: how do I roll back to the starting point?
All the documentation I have read relies heavily on "ip route" and "route". Most of it did not work at all; some of it worked but completely replaced the old IP with the floating one, which is not what I want, and none of it showed how to roll back the introduced configuration changes.
Please help; I have now spent a whole day trying to get this to work for a project, with no results so far.
I guess there is no real need to know DigitalOcean specifically; how to make this work on other cloud providers would likely apply here too.
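For reference, the recipe I keep running into is source-based policy routing along these lines (a sketch with made-up addresses, assuming eth1 has 10.0.0.2/24 and a gateway of 10.0.0.1); it at least shows how the changes could be rolled back:

# route traffic sourced from eth1's address out through eth1
ip route add default via 10.0.0.1 dev eth1 table 100
ip rule add from 10.0.0.2/32 lookup 100

# rollback: remove the rule and the route again
ip rule del from 10.0.0.2/32 lookup 100
ip route del default via 10.0.0.1 dev eth1 table 100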
Update
After asking this on the DigitalOcean community forum (https://www.digitalocean.com/community/questions/clear-guide-on-outbound-network-through-floating-ip), they claim that this is not supported, although there may be some solutions. If somebody can provide such a "hacky" solution, I would take it too. Thanks
In the cloud (AWS, GCP, etc.) ARP is emulated by the virtual network layer, meaning that only IPs assigned to VMs by the cloud platform can be resolved. Most of the L2 failover protocols break for that reason. Even if ARP worked, the IP allocation process for these IPs (often called "floating IPs") would not integrate with the virtual network in a standard way, so your OS can't just "grab" the IP using ARP and route the packets to itself.
I have not personally done this on DigitalOcean, but I assume you can call the cloud's proprietary API for this functionality if you would like to go this route; a sketch follows.
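If it helps, DigitalOcean's doctl CLI wraps that API; a sketch (the floating IP and droplet ID are placeholders):

# move a floating IP onto a given droplet
doctl compute floating-ip-action assign 203.0.113.10 123456
# detach it again
doctl compute floating-ip-action unassign 203.0.113.10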
See this link on GCP about floating IPs and their implementation. Hope this is helpful.
Here's an idea that needs to be tested:
Let's say you have Node1 (10.1.1.1/24) and Node2 (10.1.1.2/24)
Create a loopback interface on both VMs and set the same IP address on both, e.g. 10.2.1.1/32
Start a heartbeat send/receive between them
When Node1 starts, it automatically makes an API call to create a route for 10.2.1.1/32 pointing to itself with preference 2
When Node2 starts, it automatically makes an API call to create a route for 10.2.1.1/32 pointing to itself with preference 1
The nodes could monitor each other and withdraw the static routes if the other fails. Ideally you would need a third node to reach quorum and prevent split-brain scenarios, but you get the idea, right?
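A very rough sketch of what each node could run, using ping as the heartbeat (cloudctl is a stand-in for whatever CLI or API your provider offers for manipulating routes; it is not a real tool):

# both nodes carry the shared service IP on the loopback
ip addr add 10.2.1.1/32 dev lo

# poor man's heartbeat: if the peer stops answering, claim the route
PEER=10.1.1.2
while true; do
    if ! ping -c 3 -W 2 "$PEER" >/dev/null 2>&1; then
        cloudctl route replace 10.2.1.1/32 --next-hop self   # hypothetical API call
    fi
    sleep 5
done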
I work for a company that provides file-sharing software for all sorts of protocols, such as FTP, SFTP, FTPS and so on. One of our customers is facing an issue with key authentication and sporadic login problems.
Going through the code, I am pretty certain that the server collapses under too many requests at the same time. What I need right now is a simple tool to test exactly that situation: a simple SFTP fuzzer or stresser that sends invalid or broken auth attempts to the SFTP server.
I am not a developer but a technician, and instead of writing something myself (which would take forever) I would love to have a simple script or toolset ready to go... if there is one.
Ok, found one faster than I thought.
Steps:
Download Kali Linux (or any Distro that contains Metasploit)
Fire up Kali Linux and put it in the same subnet as your SFTP-Server
Start Metasploit and use the SSH fuzzer auxiliary/fuzzers/ssh/ssh_version_2
Set RHOST and RPORT to the IP and port your server is listening on
Exploit and see what will happen
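In the console the session looks something like this (the IP and port are placeholders; newer Metasploit versions use RHOSTS instead of RHOST):

msfconsole
use auxiliary/fuzzers/ssh/ssh_version_2
set RHOST 192.0.2.10
set RPORT 22
run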
I want to preface this by saying that I've never taken a networking class, but I'm learning on the job. I have a pretty basic grasp of things like TCP/IP networking, so if you think that will hinder my attempt at this, let me know.
The task I have at hand is this: I have an OpenStack network with a bunch of nodes that can communicate with each other, all running CentOS virtual machines (just for simplicity's sake) with applications running on top of them. The task is basically to find a way to monitor the latency to every node and, whenever something happens, send some kind of message (probably over HTTP) that reports it. The logic of checking for the actual latency problems isn't what I'm struggling with; it's the best structure for completing this task.
I'm thinking of using Nagios and setting up a distributed monitoring system. Basically, my plan is to install Nagios on each node after writing my plugin (unless one is already offered or exists); once a node is set up, it would simply ping everything else in the network, and the other nodes would ping it once they detect that it has joined the network. I'm not sure exactly how scalable this is: if the number of nodes grows a lot, would having every node ping every other node actually be a good thing? Could it actually end up putting a lot of stress on the network? (A sketch of the kind of plugin I have in mind is below.)
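For what it's worth, a Nagios-style latency check is only a few lines of shell; a minimal sketch (the plugin name and thresholds are made up):

#!/bin/sh
# check_ping_rtt <host> [warn_ms] [crit_ms]
# exits 0/1/2 (OK/WARNING/CRITICAL), following the Nagios plugin convention
host=$1; warn=${2:-100}; crit=${3:-300}
rtt=$(ping -c 3 -q "$host" | awk -F/ '/^(rtt|round-trip)/ {print int($5)}')
[ -z "$rtt" ] && { echo "CRITICAL - $host unreachable"; exit 2; }
[ "$rtt" -ge "$crit" ] && { echo "CRITICAL - avg rtt ${rtt}ms"; exit 2; }
[ "$rtt" -ge "$warn" ] && { echo "WARNING - avg rtt ${rtt}ms"; exit 1; }
echo "OK - avg rtt ${rtt}ms"; exit 0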
Is this a bad idea? I know a more efficient solution would be one where every node still gets checked without every node having to connect to every other node. Visualizing it as a graph, that would be a connected graph with just one path between each pair of points, rather than every possible pair of points having an edge between them. But I don't know if this is the level I should be thinking about it at.
In short, what I'm asking is: how would one go about setting up a ping monitoring system between a bunch of OpenStack nodes?
Let me know if this question makes sense. Thanks.
Still not entirely sure what you're trying to accomplish with this setup, but the Nagios setup you're describing sounds messy and likely won't cover what you need. I'd look at building packetbeat into the provisioning of each of your hosts, and then shipping that data off to Elasticsearch. That way you can watch your actual application-level traffic and response times. https://www.elastic.co/products/beats/packetbeat
When developing an app that will listen on a TCP/IP port, how should one go about selecting a default port? Assume that this app will be installed on many computers, and that avoiding port conflicts is desired.
Go to the IANA port registry and pick a port with the description Unassigned
First step: look at the IANA listing.
There you will see, at the tail of the list:
"The Dynamic and/or Private Ports are those from 49152 through 65535"
so those would be your better bets; but once you pick one, you can always google it to see whether a popular enough app has already "claimed" it.
If by widely-used you mean that you want to protect against other people using it in the future, you can apply to IANA here to have it marked as reserved for your app.
The most comprehensive list of official IANA port numbers and non-official port numbers that I know of is nmap-services.
You probably want to avoid using any ports from this list (Wikipedia).
I would just pick one, and once the app is used by the masses, the port number will become recognized and included in such lists.
Choosing an unassigned one from the IANA list is usually sufficient, but if you are talking about a commercially-released product, you really should apply to the IANA to get one assigned to you. Note that the process of doing this is simple but slow; the last time I applied for one, it took a year.
As others mention, check IANA.
Then check your local system's /etc/services to see if there are custom ports already in use.
And please, don't hardcode it. Make sure it's configurable, someway, somehow -- if for no other reason than that you want multiple developers to be able to run their own localized builds at the same time.
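Both the local check and the configurability are one-liners in practice; a sketch (the 49200 port, MYAPP_PORT variable, and binary name are all made up):

# does this machine already have a claim on the candidate port?
grep -wE '49200/(tcp|udp)' /etc/services || echo "49200 looks unused locally"

# read the port from the environment, falling back to a default
PORT="${MYAPP_PORT:-49200}"
./myapp --port "$PORT"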
If this is for an application that you expect to be used widely, then register a number here so no one else uses it.
Otherwise, just pick an unused one randomly.
The problem with using one in the dynamic range is that it may not be available: it could already be in use as a dynamically allocated port.
Well, you can reference some commonly used port numbers here and try not to use anyone else's.
If by "open to the public at large" you mean you're opening ports on your own systems, I'd have a chat with your system administrators about which ports they feel comfortable with doing that with.
Choose a number that is not very common
Choose a default port that doesn't interfere with the most common daemons and servers. Also make sure that the port number isn't listed as an attack vector for some virus -- some companies have strict policies where they block such ports no matter what. Last but not least, make sure the port number is configurable.
Use the IANA list. Download the CSV file from:
https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.csv
and use this shell script to search for unregistered ports:
for port in {N..M}; do if ! grep -q ",$port," service-names-port-numbers.csv; then echo "$port"; fi; done
replacing N and M with the bounds of the range you want to check. (Matching ",$port," keeps, say, port 80 from also matching 8080 or other text elsewhere in the file.)