Hyperlinking to a nonserver machine - dbase

My church uses management software that we would like several people to access at the same time, over the internet. We have a website, but it is hosted by another company. Is it possible to create a hyperlink on our website that opens this program on our office computer? The hyperlink would be set up so that it is not visible to those who don't have access to the program. We have tried several remote access programs such as TeamViewer and GoToMyPC, but these give access to the entire computer, and that is not something we want either. If we can't do the hyperlink, is it possible for us to turn this computer into a server and access it that way? Again, the goal is for at least 5 people to be able to use this software at the same time, should that need arise.
Our church management software is designed to run on a network. We have already set up user IDs and passwords for the group who currently have access to it. The problem is that we only have one office computer, and our whole group can't use it at one time because they each have access to different parts: the treasurer can only access the financial module, the clerk can only access the membership roster, and so on. The goal is to find the path of least resistance that will allow as many of these people as possible to use this software at the same time, remotely. I understand the security issues, so to that end I ask: does anyone think we should get another computer to make into a server, or turn the one we have into one?
Is there an advantage to hosting our own website on our machine where the management software is already located?

You actually have three different problems:
You are trying to use a "nonserver machine" as a server. I assume it doesn't have a static IP address, so a static hyperlink will fail. Since you have a web site, you can set up your Management Software Server (MSS, for lack of a better name) to check in with your web site's server, which will hold the latest IP address for your MSS. Your users then check in with the web server first and connect to your MSS from there.
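For illustration, a minimal sketch of that check-in loop in Python, assuming a hypothetical update endpoint on your hosted site (the URL and token here are made up):

    # Runs on the office machine (the MSS); the web host records the
    # source IP it sees for each request and serves it to your users.
    import time
    import urllib.request

    UPDATE_URL = "https://yourchurch.example/update-ip.php?token=SECRET"  # hypothetical

    while True:
        try:
            urllib.request.urlopen(UPDATE_URL, timeout=10)
        except OSError:
            pass  # transient network failure; retry on the next cycle
        time.sleep(300)  # re-register every five minutes

Dynamic DNS services do essentially the same thing, if you would rather not script it yourself.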
You are trying to keep most people out but let some people in. You need an authentication scheme. This usually means a login name and password, and a secure way to transmit them. SSL is probably what you want to look at: you'll need an SSL certificate (you can make your own) and a client program that can make SSL connections (web browsers work).
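As a minimal sketch of the transport side, here is a Python server that accepts SSL/TLS connections using a certificate you made yourself (the file names and port are assumptions):

    import socket
    import ssl

    # cert.pem/key.pem: a self-signed pair you generated yourself
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="cert.pem", keyfile="key.pem")

    with socket.create_server(("0.0.0.0", 8443)) as server:
        with ctx.wrap_socket(server, server_side=True) as tls:
            conn, addr = tls.accept()  # traffic on conn is now encrypted
            conn.sendall(b"login prompt would go here\n")
            conn.close()

The login/password exchange would then happen over this encrypted channel.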
You are trying to allow your users to do only some things and not others. You need an authorization scheme. This is provided by the server application, such as Windows Remote Desktop. Without knowing how much granularity you need, it's hard to say exactly what is required.

#bdares hit a bunch of your issues right on the head...
And then some... As a church, it sounds like you won't have the kind of funding needed to handle this properly, especially when you're opening up a channel to the church's management and possibly its accounting data. Even if you issue your own SSL certificate, getting hacked is getting easier and easier. If it's a low-budget operation with financial data ripe for the taking, I'd hate to see a church (or any other legitimate non-profit organization) get messed up.
There are a lot of security issues to deal with, and you can NOT take them lightly.

Related

Is there something I should be concerned about before port-forwarding my server?

I'm setting up my first server on a Raspberry Pi 4, but after reading some articles online I began wondering whether my server is ready to be opened to the internet. I should preface this by saying I'm just an individual who would like to publish some programming projects on a site accessible from a browser.
Out of caution, I wrote a PHP page which checks the client IP and returns a 403 header until I give that user permission to access the site. Is that enough? Is it even necessary?
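The page boils down to something like this (shown here as a Python/Flask sketch of the same logic; my real version is PHP):

    from flask import Flask, abort, request

    app = Flask(__name__)
    ALLOWED = {"203.0.113.7"}  # IPs I have explicitly approved

    @app.before_request
    def gate():
        # everyone not on the list gets the 403 described above
        if request.remote_addr not in ALLOWED:
            abort(403)

    @app.route("/")
    def home():
        return "project site"

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)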
Also, are there ports that are safer to open than others?
You "can" open ports 80 and/or 443 for displaying webpages - depending on SSL certificates
I do it myself (not for web hosting) and restrict the open ports to certain IPs - my friends (not smart enough to mount an attack 😂). Their IPs are likely to change every so often, though, and your firewall rules will need updating.
It's a key thing to remember that anything is open to exploitation if it's not properly maintained/set up. Also, displaying a 403 isn't a silver bullet.
An open file-transfer port such as FTP's 21, for example, would give a user access to the files on your device if proper authorisation isn't set up. Opening ports 80 and 443 will give users access to webpages but leaves your device/network exposed to DoS attacks or platform-level attacks. If there's a known exploit for your version of PHP, your firewall/router, or possibly the device itself, then an attacker will exploit it.
Hosting providers have layers upon layers of security and are constantly updating devices throughout their networks. Keeping your device and platform up to date will help - but it may be worth investing a little in a host instead (from about £4 a month).
There are loads more things I could touch on, but I'll leave it at that for now.
Edit after comment:
My website is just a little project, I mean, who would casually target it?
Strictly speaking, anyone. "Who would want to?" Again, anyone. Sure, you're a small target that wouldn't provide any useful data. But your device, once hacked, can be used as a DoS zombie or as a crypto-miner, and you probably wouldn't even realise it.
And also, can't I use whatever port I like, such as 6969 or 45688?
Yes, strictly speaking, you can. You could tell your device to listen on that port and reply with the website data. To do this you would also need to provide the port number on the end of the URL in the format www.example.com:6969. Though, again, this isn't a silver bullet. Most security issues aren't with port-forwarding but with poor management/security and bugs in the components themselves. All a port forwarder is doing is saying "oh, device X wants data on this port... here you go".
Another point: data sent to "well-known ports" (0-1023) tends to have its headers checked for irregularities by the firewall, which can dispose of any irregular packets. With a custom port the firewall doesn't really know what to expect, so it passes the data through anyway. Also, steer away from "private ports" (49152-65535); these are used as source ports, not destination ports.
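To make the custom-port point concrete, here is a minimal Python sketch of serving a site on a nonstandard port (6969 is just the example number from the question):

    # serves the current directory at http://www.example.com:6969/
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    HTTPServer(("0.0.0.0", 6969), SimpleHTTPRequestHandler).serve_forever()

The port number changes nothing about the security of whatever is listening; it only changes where the firewall has to forward traffic.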

Set a WiFi whitelist for specific apps/sites

Here's the issue...
I work in a remote area of Alaska where cell service can be... questionable. We do have decent WiFi; however, it is not openly available to staff, because it has a low enough data cap that we don't want to deal with people streaming Netflix and running out the company data.
The big issue is that we want to use an app like Slack or Discord to communicate more effectively across the business. Because cell service is spotty and the WiFi is currently off-limits, I was wondering if there is a way for me to create a WiFi network that is whitelisted to allow only Slack, for example. Then we get the benefits of using the WiFi without risking running out of data.
Thoughts? I was thinking about setting up a network proxy, but I wanted to get the internet's take on it before I dive down the rabbit hole.
The best way I can think of to handle something like this is to use a router that lets you configure the DNS server settings, and block all DNS entries that aren't on your allowlist. This doesn't strictly block traffic to everywhere else, but it will do a pretty good job.
You could also block all DNS traffic that isn't going to the local DNS server, which would help stop people working around it. To have a hard block you would need to block specific IP addresses, which, with services such as Slack or Discord, can change at any time and would be hard to keep up with.
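As a rough sketch of that allowlist idea, here is a toy DNS forwarder in Python that only resolves approved names and silently drops the rest (the domain list is a placeholder; real Slack or Discord traffic spans many domains, which is exactly the maintenance burden described above):

    import socket

    ALLOWLIST = {"slack.com", "slack-edge.com"}  # placeholder domains
    UPSTREAM = ("8.8.8.8", 53)                   # real resolver to forward to

    def qname(packet: bytes) -> str:
        # the queried name starts after the 12-byte DNS header,
        # encoded as length-prefixed labels
        labels, i = [], 12
        while packet[i] != 0:
            n = packet[i]
            labels.append(packet[i + 1:i + 1 + n].decode())
            i += n + 1
        return ".".join(labels)

    def allowed(name: str) -> bool:
        return any(name == d or name.endswith("." + d) for d in ALLOWLIST)

    # must run privileged to bind port 53
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 53))
    while True:
        data, client = sock.recvfrom(512)
        if allowed(qname(data)):
            up = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            up.sendto(data, UPSTREAM)
            resp, _ = up.recvfrom(512)
            sock.sendto(resp, client)
        # otherwise drop the query; the client's lookup just times out

Point the WiFi network's DHCP at this box as its only DNS server, and block outbound port 53 to anywhere else.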
Another option that would work well is to use your own self-hosted instance of Mattermost, Rocket.Chat, or Riot/Matrix, which you would control and whose IP address you would know, so you can allowlist only those IP addresses. The other advantage is that if the business only needs localized communication and you don't need to chat across long distances, you could run this entirely on a network with no internet access; then you wouldn't have to do any blocking at all, because the WiFi is completely separated from the internet.
A lot depends on your situation, but I hope this gives you a good place to start.

Localised IP assignment + DDoS prevention + Google billing

We are very new to Google Cloud and still learning.
I have a couple of questions in mind.
First:
Can I assign localised IP addresses to virtual instances? For example, serving one website from a German IP range and assigning another website to an Italian IP range.
Where is the best place to start, and is this even possible in the cloud?
Second:
We had a DDoS attack while on the cloud, and resource usage peaked during the attack. Will Google charge an extreme price for that peak period, or will billing be normal?
The second question leads to a third:
We use Cloudflare for our domains. Is there a reliable way to prevent DDoS attacks on Google Cloud?
I appreciate your time and answers.
To your first point: are you after finding the shortest path between your users and wherever you serve your content? If that's the case, you can simply put a load balancer in front of your backend services within Google Cloud, with a global public forwarding IP address, and the service itself will take care of directing the traffic to the nearest group of machines available. Here is an example of an HTTP(S) Load Balancer setup.
Or is localization what you are trying to achieve? In that case I'd rely on more standard ways of handling the language of choice, like using browser settings (or user account settings, if accounts exist) or the Accept-Language header. This is a valuable resource from LocalizeJS.
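Honoring Accept-Language is only a few lines on the server side; a minimal Python sketch (the supported-language list is an assumption):

    SUPPORTED = ["en", "de", "it"]  # assumed set of localized versions

    def pick_language(header: str, default: str = "en") -> str:
        # header looks like "de-DE,de;q=0.9,en;q=0.8"
        choices = []
        for part in header.split(","):
            lang, _, q = part.strip().partition(";q=")
            choices.append((float(q) if q else 1.0, lang.split("-")[0].lower()))
        for _, lang in sorted(choices, reverse=True):
            if lang in SUPPORTED:
                return lang
        return default

    print(pick_language("de-DE,de;q=0.9,en;q=0.8"))  # -> "de"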
Lastly, if you are determined to have multiple versions of your application deployed for the different languages you support, you could still have an intermediate service that determines the source of the request using IP-based lookups and redirects the user to the version of your choice. That said, my feeling is that this is a rather old-fashioned approach: in a world of client applications that are responsive and localized on the spot, the extra hop/redirect could annoy some users.
To your second point: there are a number of protections already built into some services within Google Cloud to help you protect your applications and machines in different ways. On the DDoS front, you can benefit from policies and protections on the CDN side, where you get cache- and scaling-based preventive measures.
In addition to that, if you have a load balancer in front of your content, you can benefit from protections at layers 3, 4, and 7 of the OSI model. That covers typical HTTP floods, SYN floods, port exhaustion, and NTP amplification attacks.
What this means is that in many of these situations your infrastructure will not even notice the potential attack, as it will be absorbed before it reaches you (and therefore you will not be billed for it). That said, I have heard of and experienced situations in which these protections did not act in a timely fashion, or were not triggered at all. In those scenarios your system may need to handle the extra load itself. However, especially in events where the attack was obviously malicious and documented as something Google Cloud should have handled, there is a chance to make your case with Google and get some support on the matter.
A bit more on that here.
Hope this is helpful.

Is there any reliable way to determine a user's location from their Internet connection?

I have created a Business Management System which is to be used by retailers with or without multiple sites.
It is important that a logged-in user identifies his/her location, or site, so that the system can perform site-related tasks automatically.
I currently have a database of locations which includes an IP Prefix field. When the user goes to the login page, it looks at the first 5 digits of the current IP address, then:
If the start of the current IP matches a stored record, it assumes the user is at that site.
If no IP matches, it asks the user which site they're at and prompts them to update the stored IP.
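In code terms, the check amounts to something like this (sketched in Python with made-up prefixes; my real code is in VB):

    # hypothetical stored site records: site name -> IP prefix
    SITES = {"Head Office": "203.0", "Warehouse": "198.5"}

    def site_for(client_ip: str):
        for name, prefix in SITES.items():
            if client_ip.startswith(prefix):  # "first 5 digits" match
                return name
        return None  # no match: ask the user and update the stored IP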
This basic, manual check works reliably when the sites are a fair distance apart or are on different ISPs; the update is usually only required after a router restart, and I've been using the system myself for about 4 years with no issues... BUT... I am not confident in it, so my question is: is there a better solution?
I realise the IP address is probably not the way, as the best it gives me is the location of their ISP, and that's not what I need.
In case it matters, I am using ASP.NET, coding in VB.
Also, I should mention I'm looking at a desktop-based application, not mobile.
I think you are going to have to rely on user input for this one. It's impossible (or at least, very very difficult) to know whether a user is using a proxy or not, and if they are you have no way of knowing where they really are. This is right and proper; would you trust every website you access with that kind of information? I sure as hell wouldn't.
You can't use the IP address to get 100% reliable location data if your clients connect over the internet (they could be going through a proxy, or, as you said, you might just get the ISP's address).
Your best bet is to use JavaScript to get the user's geolocation: W3 Schools Example
More complex example on html5demos
No, of course it is not possible to reliably locate a user by IP address.
That address can be faked, so the basis of your info is not reliable.

Do we really need the app server? [closed]

I'm about to start writing a web app (ASP.NET/IIS7) which will be accessible over the internet. It will be placed behind a firewall which accepts HTTP and HTTPS.
The previous system, which we are going to replace, doesn't let the web server talk directly to a database; instead it makes highly specialized web service calls (through a second firewall which only allows this kind of call) to a separate app server, which then goes to the DB to operate on the data.
I have worked on many systems in my day, but this is the first one that has taken security this seriously. Is this a common setup? My first thought was to use Windows Authentication in the connection string on the web server, have the user be a crippled DB user (able only to view and update its own data), and then allow DB access through the inner firewall as well.
Am I naïve? It seems like I will have to do a lot of data mapping if we use the current setup for the new system.
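For concreteness, the connection idea I mean, sketched in Python with pyodbc (driver name, server, and table are placeholders; the real app would do the equivalent in ASP.NET/VB):

    import pyodbc

    # Trusted_Connection=yes uses Windows Authentication: no password in
    # the string, and the DB user can be locked down to its own data.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=dbhost;Database=Orders;"
        "Trusted_Connection=yes;"
    )
    rows = conn.execute("SELECT TOP 5 * FROM orders").fetchall()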
Edit: The domain of this app is online ordering of goods (business to business). Users (businesses) log in, enter what they can deliver in any given time period, view previous transaction history, view projected demand for goods, etc. No actual money is exchanged through this system, but it provides the information on which goods are available for sale, which is input data for the ordering system.
This type of arrangement (DMZ with web server, communicating through a firewall with an app server, communicating through a firewall with the DB) is very common in certain types of environment, especially in large transactional systems (online corporate banking, for example).
There are very good security reasons for doing this, the main one being that it will slow down an attack on your systems. The traditional term for it is Defence in Depth (or Defense if you are over that side of the water)
Reasonable security assumption: your webserver will be continually under attack
So you stick it in a DMZ and limit the types of connection it can make by using a firewall. You also limit the webserver to just being a web server - this reduces the number of possible attacks (the attack surface)
2nd reasonable security assumption: at some point a zero-day exploit will be found that reaches your web server and allows it to be compromised, which could lead to an attack on your user/customer database.
So you have a firewall limiting the number of connections to the application server.
3rd reasonable security assumption: zero-days will be found for the app server, but the odds of finding zero-days for the web and app servers at the same time are reduced dramatically if you patch regularly.
So if the value of your data/transactions is high enough, adding that extra layer could be essential to protect yourself.
We have an app that is configured similarly. The interface layer lives on a web server in the DMZ, the DAL is on a server inside the firewall, and a web service bridges the gap between them. In conjunction with this we have an authorization manager inside the firewall which exposes another web service used to control what users are allowed to see and do within the app. This app is one of our main client data tracking systems and is accessible to our internal employees and outside contractors. It also deals with medical information, so it falls under the HIPAA rules. So while I don't think this setup is particularly common, it is not unheard of, particularly with highly sensitive data or in situations where you have to deal with audits by a regulatory body.
Any reasonably scalable, reasonably secure, conventional web application is going to abstract the database away from the web machine using one or more service and caching tiers. SQL injection is one of the leading vectors for penetration/hacking/cracking, and databases often tend to be one of the more complex, expensive pieces of the overall architecture/TCO. Using service tiers allows you to move logic out of the DB, to employ out-of-process caching, to shield the DB from injection attempts, and so on. You get better, cheaper, more secure performance this way. It also allows for greater flexibility when it comes to upgrades, redundancy, or maintenance.
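To make the injection-shielding point concrete: the service tier exposes narrow operations whose SQL is parameterized, so user input never becomes query text. A minimal sketch (sqlite3 as a stand-in for the real RDBMS; table and column names are invented):

    import sqlite3

    conn = sqlite3.connect("orders.db")

    def orders_for(business_id: int):
        # the parameter travels separately from the SQL text,
        # so a hostile value cannot rewrite the query
        cur = conn.execute(
            "SELECT id, goods, quantity FROM orders WHERE business_id = ?",
            (business_id,),
        )
        return cur.fetchall()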
Configuring the user's access rights seems like a more robust solution to me. Your data access layer should have some security built in too. Adding this additional layer could end up being a performance hit, but it really depends on the mechanism you're using to move data from "WebServer1" to "WebServer2." Without more specific information in that regard, it's not possible to give a more solid answer.
