Very slow SMB over VPN

Access to our server via VPN is extremely slow when using the Mac Finder. It takes 20-30 seconds just to list the contents of a directory. File transfers are also excruciatingly slow.
The configuration:
Server: Asustor NAS AS6302T (ADM version: 3.5.2.RAG2)
Router: Telekom Digitalisierungsbox Premium (firmware version 11.01.03.103), which is the evil twin of the bintec elmeg be.ip plus
Internet connection: Telekom DeutschlandLAN SIP trunk (100 Mbit/s download, 40 Mbit/s upload)
VPN connection via the Digibox using IKEv1, following these instructions: https://archive.bintec-elmeg.com/Files/Weiter_Downloads/Documentation/workshops/current_de/ws_be_IP ...
VPN client: Several MacBook Pros with different OS (e.g. 10.15.7, 10.14.6) - Mac's own VPN client (Cisco IPSec IKEv1)
Client routers: various routers, e.g. Fritz boxes, Vodafone (Unity Media) boxes, mobile phone hotspots
Once the VPN is established, the connection to the NAS is made via SMB and is painfully slow. If I access the NAS via HTTP over the same VPN, the connection is fast, and the same goes for FTP.
My guess is that this is somehow related to SMB itself. I had already read that SMB signing can have a major impact on performance; however, the SMB connection within the LAN is fast.
Any ideas where to even look?

A couple of things to consider based on my experience with slightly older systems:
Depending on the age of the NAS, it may be defaulting to SMB version 1. This was usually done for backward compatibility a number of years ago. I believe SMB v3 is the default on Windows and Mac systems these days.
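To check what the Mac is actually negotiating (SMB version and whether signing is on), something like the following should work from Terminal. Treat the nsmb.conf lines as a hedged starting point rather than a guaranteed fix; the exact options available depend on your macOS version.
smbutil statshares -a    # shows the negotiated SMB_VERSION and signing state for each mounted share
# optionally force SMB 2/3 and turn off client-side signing (assumption: your macOS version supports these options)
printf '[default]\nsigning_required=no\nprotocol_vers_map=6\n' | sudo tee /etc/nsmb.conf
Unmount and remount the share afterwards so the new settings take effect.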
The macOS Finder by default scans folder contents and writes .DS_Store files before displaying a folder. On a network share with lots of items this can be painful. Writing these files to network shares can be disabled with something like the command below; the Apple support article is included.
https://support.apple.com/en-au/HT208209
defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool TRUE
In general, I rarely use SMB over a VPN, because of the same behaviour you have observed. Once you have a VPN connection, it is often better to remote into a machine on the far side and access the NAS from there. Alternatively, use a different protocol such as FTP or HTTP, as you mention, to get files onto your local machine.

Related

Is there something I should be concerned about before port-forwarding my server?

I'm setting up my first server on a Raspberry Pi 4, but after reading some articles online I was wondering whether my server is ready to be opened to the internet or not. I should say up front that I'm just an individual who would like to publish some programming projects on a site accessible from a browser.
Out of caution I designed a PHP page which checks the client IP and returns a 403 header until I give that user permission to access the site. Is it enough? Is it even necessary?
Also, are there ports that are safer to open than others?
You "can" open ports 80 and/or 443 for serving web pages (80 for plain HTTP, 443 for HTTPS, depending on whether you have an SSL certificate).
I do it myself (not for web hosting) and restrict the open ports to certain IPs - my friends (not smart enough to levy an attack 😂). Though IPs are likely to change every so often and your firewall will need updating.
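For what it's worth, on a Linux host (such as the Raspberry Pi) with ufw installed, that kind of per-IP restriction is only a few commands; the addresses below are made-up examples:
sudo ufw default deny incoming
sudo ufw allow from 203.0.113.17 to any port 443 proto tcp   # one friend's (example) IP
sudo ufw allow from 203.0.113.42 to any port 443 proto tcp
sudo ufw enable
You still need the matching port forward on the router, and these rules need updating whenever those IPs change.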
The key thing to remember is that anything is open to exploitation if it's not properly maintained and set up. Also, returning a 403 isn't a silver bullet.
An open file-transfer port such as 21 (FTP) would give a user access to the files on your device if proper authorisation isn't set up. Opening ports 80 and 443 will give users access to web pages but exposes your device/network to DoS attacks or platform-level attacks. If there's a known exploit for your version of PHP, for your firewall/router, or possibly for the device itself, then an attacker will exploit it.
Hosting providers have layers upon layers of security and are constantly updating devices throughout their network. Keeping your device and platform up to date will help - but it may be worth instead investing a little in a host (from about £4 a month).
There are loads more things I can touch on but will leave it at that for now
Edit after comment:
my website is just a little project i mean who could casually target it?
Strictly speaking, anyone. "Who would want to?" Again, anyone. Sure you're a small target that wouldn't provide any useful data. But your device, once hacked, can be used as a DoS zombie or as a crypto-miner and you probably wouldn't even realise.
And also can't I use whatever port like 6969 or 45688?
Yes, strictly speaking, you can. You could tell your device to listen on that port and reply with the website data. To do this you would also need to provide the port number on the end of the URL in the format www.example.com:6969. Though, again, this isn't a silver bullet. Most security issues aren't with port-forwarding but with poor management/security and bugs in the components themselves. All a port forwarder is doing is saying "oh, device X wants data on this port... here you go".
Another point: data sent to "well-known ports" (1-1023) tends to have its headers checked for irregularities by the firewall, which can drop any irregular packets. With a custom port the firewall doesn't really know what to expect, so it passes the traffic through anyway. Also, steer away from the "private/dynamic ports" (49152-65535); these are typically used as ephemeral source ports, not destination ports.
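To make the custom-port idea concrete: you can have the site listen on an arbitrary high port and forward only that port on the router. A quick sketch using PHP's built-in development server (fine for experimenting, not intended for production; the document root path is made up):
php -S 0.0.0.0:6969 -t /var/www/mysite   # serve the site on port 6969 on all interfaces
# forward external port 6969 to this machine on the router, then browse to http://www.example.com:6969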

beginner backend web programming questions about SSH

So, I've taken a handful of programming courses (object-oriented, web) but never had a "hands-on" project that goes beyond the coding itself.
Now I'm trying to figure out what this SSH stuff is about. I can't even figure out which client to use, so I've picked FileZilla for now.
My question is: where can I read more about terms like ports and so on, in a way that isn't just aimless learning?
Thanks!
Basically, SSH is a way to tell another computer exactly what to do over the Internet. You can execute any command the remote system has, as long as your user has permission to run it.
The Internet
The Internet runs on a series of protocols collectively named TCP/IP. TCP/IP defines a way to find and address individual computers (IP) and a way to communicate between them (TCP).
You can think of computers on the Internet as a large collection of office buildings all close together. Each office has the exact same number of windows: 65535. Offices (computers) communicate by stringing channels between windows (ports). Each channel has two ends, called sockets. Each socket is associated with a port on the respective computer. We send data back and forth, and then the connection is closed.
Client/Server
There are two types of computers on the Internet: clients and servers. Clients request information, and servers provide it. Ports 1-1023 (the "well-known ports") are reserved for servers, one port per protocol. The full list is easy to look up (the IANA registry of well-known ports), and as you can see, it is not without contention.
Let's say you visit a website
Your browser, the client program, sees that you typed "stackoverflow.com" and, using DNS, discovers that stackoverflow.com is computer number 64.34.119.12. This is its IP address. It allows your computer to find the network stackoverflow.com is located in, route to it, and establish a connection to the Stack Overflow web server. The web server is a program that accepts client requests from a browser like yours.
They speak in a protocol called HTTP - it allows your browser to request a page determined by a URL. The server sees the request, runs a program to construct a web page (or retrieves an HTML file, image, or any other file), and sends the result back to the browser. Port 80 has been reserved for HTTP. That means, your computer chooses a random port to connect from, and connects to port #80 on the server.
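You can watch both of those steps from a terminal; the IP address you get back today will not match the one quoted above:
dig +short stackoverflow.com        # DNS: turn the name into an IP address
curl -I http://stackoverflow.com/   # HTTP: connect to port 80 and ask for the page headers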
Unix and the shell
The majority of the Web (the Internet, even) runs on an OS called Linux (a Unix variant), instead of something like Windows. Unix systems provide a command-line interface running a program called a "shell", which is a direct interface to the system. The shell accepts input, one command at a time. You type text in, and it spits out the output of the command.
Secure Shell
SSH allows you to do this securely. All data traffic is encrypted using a well-studied published "public-key" cryptographic system. (In fact, it was major news when a vulnerability was discovered in a supporting encryption scheme, see these advisories).
SSH is a protocol commonly running on port 22. Anyone with a computer on the Internet (not behind a firewall) can run an SSH server, and allow users to connect to it and execute commands.
The majority of systems administrators and software developers using Unix on the server use SSH to configure, control, and upload programs to that server (located in some data center somewhere).
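In practice that looks like this (user name and host are placeholders):
ssh alice@server.example.com              # log in and get an interactive shell on the server
ssh alice@server.example.com 'uname -a'   # or run a single command and come straight back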
More
There are many many more details to all of this. Any term or acronym above can be typed into Wikipedia for pretty comprehensive information. There are plenty of books on Unix, Networking, and Web programming.
SSH was originally a secure replacement for telnet. The need for SSH arose from the fact that telnet does not support encryption, and therefore everything (commands, output and passwords) was plainly visible on the network for all to see.
Because SSH encryption (based on key exchange) was strong from the beginning (and it was indeed a marked improvement), and because it was open source, it took off rapidly and several extensions were added to the protocol, especially in the domain of remote file management and transfer.
In addition, SSH is used in tunnelling and port-forwarding configurations.
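As a small example of that, the following (with made-up names) forwards local port 8080 through the encrypted SSH connection to port 80 on the remote machine:
ssh -L 8080:localhost:80 alice@server.example.com
# while this session is open, http://localhost:8080 on your machine reaches the remote web server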
In the domain of file copying there are several options (typical usage of the first two is sketched after this list).
SCP: secure cp (copy), inspired by rcp, an early file-transfer extension to SSH.
SFTP: SSH File Transfer Protocol, a newer SSH extension supporting file copying and browsing (but not really like FTP with its two ports). It is more feature-rich than both scp and ftp; think of it as a remote file system protocol (though somewhat slower than scp).
FTPS: FTP over TLS/SSL. Needs two ports like FTP, one for commands and one for data. Both connections can be encrypted.
Secure FTP: real FTP tunnelled over SSH.
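Typical usage of the first two, with placeholder file names and hosts:
scp report.pdf alice@server.example.com:/home/alice/   # copy a single file to the server
sftp alice@server.example.com                          # interactive session: ls, cd, get, put, ...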
The site to which you will need to connect probably offers SFTP. You just need to declare the remote server connection in the FileZilla Site Manager: provide the server IP address or name, the SSH server port (usually 22, but there are other possibilities; you should have been given this information) and select SFTP as the server type. When the connection is established, accept the server's public key and that should be it.
You can then upload your work to the remote server.
OS choice
You first have to make a choice between two worlds (MS or Linux).
In my experience the Linux community is noticeably more willing to share explanations. You will also lose less time by committing to one or the other, rather than asking the same questions twice and getting different answers depending on which OS you chose.
I have tried both, starting my search for solutions in the MS world, which I knew. Big mistake and a waste of time. I then switched, too late, to the Linux world. So I would advise going straight to Linux for learning. There are really many distributions for this; I would suggest Debian (open, user friendly, simple, safe, huge community), but you'll get as many proposals as there are admins.
OS understanding
http://www.linuxfromscratch.org/lfs/
http://www.ibm.com/developerworks/library/l-bash.html
http://tldp.org/LDP/abs/html/
Specific Questions about SSH
It depends a lot on the system you choose, but you could easily set up a small client and a small server, configure both, and use SSH between them. The two could even be hosted on the same machine, locally, if you wish. You will then learn how to set up the SSH client side (configured in ssh_config) and the SSH server side (configured in sshd_config, with the "d" standing for daemon).
Here you can find explanations about SSH for both worlds:
http://support.suso.com/supki/SSH_Tutorial_for_Linux
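As a taste of the client side, an entry in ~/.ssh/config gives a connection a short alias; everything below is a made-up example:
cat >> ~/.ssh/config <<'EOF'
Host mybox
    HostName server.example.com
    Port 22
    User alice
EOF
# afterwards 'ssh mybox' is equivalent to 'ssh -p 22 alice@server.example.com'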
Some keywords for your Google searches (a short sketch of a few of them in action follows after this list):
List_of_TCP_and_UDP_port_numbers
ssh-keygen: generating encrypted key pairs (private/public)
ssh-add / ssh-agent
Gentoo keychain
and later (but soon, if you administer your server on your own):
The two main ones:
1) iptables
You may start with a basic guide and then go further with a more advanced one.
2) fail2ban
This is a complementary tool for which you'll easily find plenty of documentation.
...
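A rough sketch of a few of those keywords in action on a Debian-style system (the host name is made up, and the iptables rule is only an illustration):
ssh-keygen -t ed25519                     # generate a private/public key pair
ssh-copy-id alice@server.example.com      # install the public key on the server
eval "$(ssh-agent -s)" && ssh-add         # load the key into an agent for this session
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # example rule: accept incoming SSH
sudo apt install fail2ban                 # fail2ban's default jail already watches SSH logins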
Have fun :-)
EDIT: you can easily try out a Linux machine hosted inside a Windows OS, using virtualization (VirtualBox, VMware, ...). It's a safe start and offers a good payback for the time investment. It allows you to host as many machines as you wish (for example one Linux server and one Linux client), within the limits of your disk space.
I assume you need to learn shell scripting. I recommend this book.
FileZilla is an FTP client. Try PuTTY, a free SSH client. And of course you need a Linux server.
If you want to learn about SSH in depth, then may I recommend the book SSH: The Secure Shell: The Definitive Guide.
See here for more info: http://www.snailbook.com/
I've read the book and learned really a lot. It teaches you all about setting up servers, clients, key agents and various (practical) applications.

Use Synergy on a computer on a workgroup and a laptop on a different domain

So, I recently installed Synergy because I was tired of using two mice and keyboards. Problem is, the setup is not working. First, the setup:
Server:
Desktop
Windows 7 64 - on our home network, part of Workgroup: WORKGROUP
Client:
work issued laptop
Windows XP SP2 32 - on home network, part of work Domain: DOMAIN
Server is set up, all the computer names are correct. I'm a bit of a noob at networking things, and I don't want to mess up the configuration of my work laptop again (I already switched the domain to my workgroup, BAD). So, any suggestions that aren't too crazy please, since it's a company laptop.
I've tried putting in the IP on the client as well, and the firewall is allowing the port in use; I just can't get it to work. I think I'm SOL with the Workgroup/Domain difference though...
From what I remember, Synergy doesn't care about the workgroup and/or domain, it just needs to be able to communicate with the server/client IPs. Did you try to manually insert IPs of client/server?
In a very similar situation I discovered that when trying to ping my non-domain desktop by its workgroup name, the DNS resolver was appending the work domain to the desktop's name. When I tried Synergy with an IP address instead, I successfully connected the two computers.
The only caveat I can offer is that you may need to add the application to the Windows Firewall exception list on both machines. I would assume the port setting is the same on both computers (the default is 24800), in which case you should only need the IP address, because the application knows to use 24800 via that setting in the advanced configuration.
You can add the program to the whitelist, or specifically the port if you prefer, via the Windows Firewall. On a side note, I am also using an older version of Synergy (1.3.1) and not the latest as of this answer (1.4.2 Beta), which did not work for me, but I will assume that's because my server was running 1.3.1.
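For the firewall part, on the Windows 7 server you can also add the port rule from an elevated command prompt instead of the GUI; a sketch assuming the default port 24800 (the XP client uses the older netsh firewall syntax instead):
netsh advfirewall firewall add rule name="Synergy" dir=in action=allow protocol=TCP localport=24800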
I chose not to update all 6 machines and their respective horrific configuration constructs that synergy loves to enforce upon us. [caution... rant: x is left of y and y is right of x... really? are you sure about that Einstein? Synergy could at least INFER that bit of logic instead of REQUIRING it!]
Hope that helps.

How to retain one million simultaneous TCP connections?

I need to design a server that will serve millions of clients that are simultaneously connected to it via TCP.
The data traffic between the server and the clients will be sparse, so bandwidth issues can be ignored.
One important requirement is that whenever the server needs to send data to any client it should use the existing TCP connection instead of opening a new connection toward the client (because the client may be behind a firewall).
Does anybody know how to do this, and what hardware/software is needed (at the least cost)?
What operating systems are you considering for this?
If you're using a Windows OS later than Vista, then you shouldn't have a problem with many thousands of connections on a single machine. I've run tests (here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html) with a low-spec Windows Server 2003 machine and easily achieved more than 70,000 active TCP connections. Some of the resource limits that affect the number of possible connections have been lifted considerably on Vista (see here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html), so you could probably achieve your goal with a small cluster of machines. I don't know what you'd need in front of those to route the connections.
Windows provides a facility called I/O Completion Ports (see: http://msdn.microsoft.com/en-us/magazine/cc302334.aspx) which allows you to service many thousands of concurrent connections with very few threads (I was running tests yesterday with 5,000 connections saturating a link to a server, with 2 threads to process the I/O...). Thus the basic architecture is very scalable.
If you want to run some tests then I have some freely available tools on my blog that allow you to thrash a simple echo server using many thousands of connections (1) and (2) and some free code which you could use to get you started (3)
The second part of your question, from your comments, is more tricky. If the client's IP address keeps changing and there's nothing between you and them that is providing NAT to give you a consistent IP address then their connections will, no doubt, be terminated and need to be re-established. If the clients detect this connection tear down when their IP address changes then they can reconnect to the server, if they can't then I would suggest that the clients need to poll the server every so often so that they can detect the connection loss and reconnect. There's nothing the server can do here as it can't predict the new IP address and it will discover that the old connection has failed when it tries to send data.
And remember, your problems are only just beginning once you get your system to scale to this level...
This problem is related to the so-called C10K problem. The C10K page lists a large number of good resources for addressing the problems you will encounter when you try to allow thousands of clients to connect to the same server.
I came across the APE Project a while back. It seems like a dream come true. They can support up to 100k concurrent clients on a single node. Spread them across 10 or 20 nodes and you can serve millions. Perfect for RESTful applications; you might want to look deeper if you need any shared namespace. One drawback is that this is a standalone server, i.e. supplementary to a web server. The server is of course open source, so any cost is hardware/ISP related.
You cannot use UDP. If the client sends a request and you don't reply immediately, a router is going to forget the reverse route in 30 seconds or less, so your server will never be able to reply to the client.
TCP is the only option, and it, too, will give you headaches. Most routers are going to forget the route and/or drop the connection after a few minutes, so your client/server code is going to have to send "keep alives" fairly often.
I recommend setting up a "sniffer", to see how the phone companies are staying in touch with your smartphone for their "push" technology. Copy whatever they're doing, because that stuff works!
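On Linux, TCP keep-alive timers are kernel settings that default to two hours of idle time, far longer than most NAT routers will keep an idle mapping; a hedged sketch of tightening them (the values are examples, and each socket still has to opt in with SO_KEEPALIVE):
sudo sysctl -w net.ipv4.tcp_keepalive_time=120   # seconds of idle time before the first probe
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=30   # seconds between probes
sudo sysctl -w net.ipv4.tcp_keepalive_probes=4   # failed probes before the connection is dropped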
As Greg mentioned, the problem you are describing is C10K (or rather "C1M" in your case).
I recently made a simple TCP echo server on Linux that scales very well with the number of sessions (only tested up to 200,000 though) by using the epoll event queue. On BSD you have something similar called kqueue.
You can check out the code if you want to. Hope this helps and good luck!
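Whichever event mechanism you use, you will also have to raise the file-descriptor limits well above the usual default of 1024 before one box can hold hundreds of thousands of sockets; a rough Linux sketch (the numbers are illustrative):
ulimit -n 1048576                        # per-process open-file limit for this shell/service
sudo sysctl -w fs.file-max=2097152       # system-wide file handle ceiling
sudo sysctl -w net.core.somaxconn=4096   # deeper accept() backlog for connection bursts
# make the ulimit change permanent in /etc/security/limits.conf for the service user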
EDIT: As noted in the comments below, my original assertion that there is a 64K limit based on the number of ports is incorrect, however there is a 32K limit on the number of socket handles, so my suggested design is valid.
With a typical TCP/IP server design, you're limited in the number of simultaneous open connections you can have. The server has one listening port, and when a client connects the server makes an accept call, which creates a new socket handle for the rest of that connection.
To handle more than 64K simultaneous connections I think you need to use UDP instead. You only need one port for the server to listen on, and you need to manage the connections using a 32-bit client ID in the packet data instead of having a separate port for each client. The 32-bit client ID could be the client's IP address, and the client can listen on a known UDP port for messages coming back from the server. That port would be the only one that needs to be open on the firewall.
With this approach, your only limitation is how quickly you can handle and respond to UDP messages. With millions of clients, even sparse traffic could give you large spikes, and if you don't read the packets fast enough your input queue will fill up and you'll start dropping packets. The C10K page Greg points to will give you strategies for that.

Router to handle multiple public IP addresses

I am presently running several websites and a mail server from my home network. I have a business DSL account with 8 public IP addresses (1 by itself, and 7 in a block). To handle routing/firewall/gateway, I am presently using RRAS, DNS, & DHCP from Windows 2003 running on an ancient (circa 2001) PC, which I suspect is going to fail any time now.
What I would like to do is replace that with a simple router. I have a consumer-model Linksys WiFi router, which I'm presently just using as an access point (don't have the model number handy, but it's one of their standard models). It seems to be able to handle all the NAT/firewall/DHCP tasks, except for routing the multiple public addresses. (e.g., I need x.x.x.123, port 21 going to one machine, but port 80 of x.x.x.123 & x.x.x.124 going to another, and x.x.x.123, port 5000 to still another, etc.)
So my questions are:
Can this be done with a standard Linksys router, and they just don't explain it in the consumer manual?
Can this be done if I replace the firmware with a community/open-source version (and if so, which one)?
If neither of the above, can someone recommend a professional router (preferably with WiFi) that does do this, close to a consumer-level price point?
Alternately, is there a reliable OS/3rd-party replacement for RRAS which handles this (since RRAS is the part causing the most trouble)?
Alternate-alternately, can someone point me to a VERY simple HOWTO (i.e. follow these steps and forget about it) for installing a Linux system to do this (since I assume I can run Linux longer on the old machine)?
This can't be done on a Linksys router with stock firmware. It can be done if you load a third-party firmware, but there's no GUI (afaik) to accomplish it, so you'll be hacking system shell scripts which is pretty hairy. I would recommend getting a low-power or older PC and installing PFSense.
PFSense is an open-source router appliance OS distribution with a very easy to use web front end.
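If you instead put plain Linux on the old PC (your last alternative), the per-address/per-port routing you describe boils down to a handful of NAT rules; a hedged iptables sketch using the documentation addresses 203.0.113.123/.124 in place of your real public IPs, with made-up internal hosts:
# eth0 = the interface holding the public IPs; adjust addresses to your own
sudo iptables -t nat -A PREROUTING -i eth0 -d 203.0.113.123 -p tcp --dport 21 -j DNAT --to-destination 192.168.1.10
sudo iptables -t nat -A PREROUTING -i eth0 -d 203.0.113.123 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.20
sudo iptables -t nat -A PREROUTING -i eth0 -d 203.0.113.124 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.20
sudo iptables -t nat -A PREROUTING -i eth0 -d 203.0.113.123 -p tcp --dport 5000 -j DNAT --to-destination 192.168.1.30
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE    # outbound NAT for the LAN
# plus matching ACCEPT rules in the FORWARD chain if your default forward policy is DROP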
Install DD-WRT on your Linksys box. I believe this will have everything you need.
