I am currently building a web app that also uses websockets (Rails for the web server and Node.js for socket.io).
I have structured my application to use subdomains to separate connections to the Node.js server from connections to the Rails web server. I have "socket.mysite.com" pointed to the Node server and everything else to the web server.
I am able to test this functionality on localhost. I simply modified my /etc/hosts to include the following:
127.0.0.1 socket.mysite.com
127.0.0.1 mysite.com
I know that in production I simply have to create a CNAME record for socket.mysite.com, and this will also work on my users' computers.
However, I am accustomed to testing my application by passing an IP address around. Each member of my team typically sets up the server on their own machine for development. When we want to test our individual servers, we just pass around an IP like "http://123.45.123.45".
With the new subdomain hack, this is no longer possible without modifying each tester's /etc/hosts. I honestly don't expect my testers to modify their /etc/hosts on the spot. What I could do is have each member of my team get their own domain and create the appropriate CNAME records for each of them.
Is there an easier way to allow me to run my app on an IP and just pass that IP around?
It sounds like your needs have scaled beyond the days of simply editing a hosts file. While you could have everyone on your team continue to edit hosts files, there are two main risks that I see here:
If you just pass around IP addresses, you risk missing something in testing that you would only see in production, since the issue may depend on the domain configuration.
If you rely on hosts entries, you introduce a lot of complexity and unnecessary changes to each developer's and tester's configuration, which of course leaves the door open for mistakes, and it also takes time that adds up over the long term.
Setting up a DNS server may be helpful in your case. You could map a set of domains for each developer that match a certain pattern so that your application still runs correctly. This would allow you to share the URLs without having to constantly reconfigure each person's computer. Additionally, marketing and sales stakeholders can easily view product demos without needing to learn what the elusive hosts file is for.
If you have an IT department, they can help you set up the DNS. However, if you are a small team without a real IT department, some users have found success using DNS systems designed for home or small office networks.
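For illustration only, the records on such an internal DNS server might look something like this (the host names and addresses are made up, and the exact syntax depends on the DNS software you choose):

    dev1.mysite.com.          IN A     192.168.1.45
    socket.dev1.mysite.com.   IN CNAME dev1.mysite.com.
    dev2.mysite.com.          IN A     192.168.1.46
    socket.dev2.mysite.com.   IN CNAME dev2.mysite.com.

Each developer's machine gets a matching pair of names, so the websocket subdomain keeps working and the URLs can be passed around the way raw IPs used to be.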
Related
I have a web app set up with WordPress on a specific IP address (which also points to a custom domain).
The problem is, when I add a new web app (also with WordPress), it gets allocated the same IP address as the first web app, causing it to redirect to the first web app.
I have set up the second web app on the same subscription plan and am using the same database for both.
Also, the first time I ever made a second web app, it had its own separate IP, but due to some issue I deleted it and made a new web app with the same name. Now no matter what I do and no matter how many new web apps I make, they are all allocated the same IP as the first web app. Any solutions?
Thanks!
Azure Web Apps are created behind a set of load balancers that differentiate between Web Apps based on the incoming request.
If your two Web Apps are located at example.com and example.org and you have configured both in DNS to point to the same IP address, then the load balancers at the front should decide where to send each request based on what is requested.
This is going to be a problem with using the same backend database for two different WordPress sites. (Unfortunately I'm not a WordPress expert, so I can't comment on what exactly that might be, but this answer will hopefully help those who do know WordPress, and make clear that this is not likely to be an Azure issue.)
As Michael indicated, Azure Web Apps sit behind load balancers and ARR front ends. However, there is more to this.
When you create a site in Web Apps, you actually create an App Service Plan as well (this corresponds to a VM).
So when you create the second site, you get the option to choose either the same or a new App Service Plan.
If you choose the same App Service Plan, then both of your sites will sit on the same VM and as a result will be behind the same ARR front end.
If you choose to create a new App Service Plan, the VM may be allocated behind either the same or a different front end. This cannot be controlled; the Fabric Controller makes the allocation based on availability.
Either way, this shouldn't be a problem. It is okay for two sites to share an IP address. However, if you wish to have separate IP addresses for your sites, you can use one of these options:
Create the site in a different data centre
Create the second site under new app service plan. There is a high chance that the app service plan might be allocated under a different ARR FE.
Scale the site to the Standard or a higher tier and use IP-based SSL. This will allocate a dedicated IP for your site. There is an additional cost associated with getting a dedicated IP; refer to the Azure App Service pricing for details.
First, let me explain why. I've had some rough luck with third-party Meteor hosting providers, but I'd really rather not run my own servers (I have a Meteor app running with SSL on DigitalOcean, so I know how to do that; I just would rather have dedicated professionals run as much of my infrastructure as possible). From what I can see, meteor.com hosting is wonderful, with the caveat of not being able to have a custom domain with SSL.
So, would it make sense to put up an nginx server that just proxied https://example.com to https://example.meteor.com? For starters, would that work, and if it did, would it be performant?
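To be concrete, the kind of proxy configuration I have in mind is roughly this (example.com, example.meteor.com, and the certificate paths are just placeholders, not a tested setup):

    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        location / {
            # pass everything through to the meteor.com-hosted app
            proxy_pass https://example.meteor.com;
            proxy_set_header Host example.meteor.com;
            # keep websockets (DDP/sockjs) working through the proxy
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }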
For your info, Meteor has Galaxy (managed "meteor deploy" to your own servers) on its roadmap, listed under "Under consideration for 1.1+", and it should be a perfect choice for you. Here is their Trello description:
This is MDG's commercial product -- a managed cloud platform for deploying Meteor apps. You have control of the underlying hardware (you own the servers or the EC2 instances, and Galaxy manages them for you).
General Availability for Galaxy will be sometime after 1.0, since we want to focus on Meteor 1.0 and get it out as quickly as possible.
So in the meantime, if you just care about using your own domain, you can use something like domain name forwarding, which automatically directs your domain name's visitors to a different website. Masking prevents visitors from seeing that forwarding by keeping your domain name in the browser's address bar.
Also, in your case you don't necessarily need to add SSL, as Meteor already provides a certificate when you deploy your app. Just enter https://yourappnamehere.meteor.com in your browser and you can see that an SSL certificate is already in place.
We have a website deployed on AWS EC2 running Ubuntu, Apache, and MySQL. We have been getting continuous requests from the IPs below:
"195.154.105.219"
"88.150.242.243"
They request the xmlrpc.php file using POST. As a result our website has become really slow and our clients' work has been affected. For now we have blocked these IPs by dropping them in iptables. We would like to know how to safeguard our site from any future attacks like this.
The question is very general, and depending on your application's requirements, your budget, and other factors, there are several techniques you can use, separately or together, to mitigate DDoS and spam attacks.
Use Auto Scaling and an Elastic Load Balancer to let AWS scale your infrastructure depending on traffic: http://aws.amazon.com/autoscaling/
Use S3 to serve static content. S3 is designed to scale automatically with incoming traffic. Serving content directly from S3 offloads your EC2-based web server: http://aws.amazon.com/s3/
Use CloudFront to distribute and serve your content from AWS's edge locations. This mitigates DDoS by spreading attackers' requests across the network of edge locations instead of sending the traffic to your web server: http://aws.amazon.com/cloudfront/
All three of these options have an associated cost, so be sure to understand the pricing structure before deciding to implement any of them.
If you have a relatively short and stable list of IP addresses you want to block, you can customise either your EC2 instance's Security Group (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html) or your VPC subnet ACL (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html) to deny traffic from these IP addresses. This approach is not very scalable and, most of the time, you will play a cat-and-mouse game trying to catch up with whatever new addresses your attackers use.
Plain old Apache configuration to block certain URLs, or to restrict access to them by IP address, is very effective too (http://httpd.apache.org/docs/current/en/mod/mod_authz_core.html#require and the Files directive).
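As an illustration, a minimal mod_authz_core snippet along these lines (assuming Apache 2.4 and the two addresses from the question) could go in the vhost or .htaccess:

    <Files "xmlrpc.php">
        <RequireAll>
            # allow everyone except the offending addresses; swap the whole
            # block for "Require all denied" to block xmlrpc.php entirely
            Require all granted
            Require not ip 195.154.105.219
            Require not ip 88.150.242.243
        </RequireAll>
    </Files>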
Last but not least, I would encourage everyone to watch this re:Invent talk about DDoS resiliency on AWS: https://www.youtube.com/watch?v=V7vTPlV8P3U
Seb
xmlrpc.php comes from WordPress. Install the Disable XML-RPC Pingback plugin, or better yet, deny access to the xmlrpc.php file in the WordPress site's .htaccess ;) -- that will fix it. Also check wp-admin/scripts for any weird scripts, or just run find /var/www/ -type f -mtime -10 to list the most recently modified files and look for any weird PHP scripts.
I would like to know if it is possible using IIS and ASP.NET (and ideally something that might be employed on a shared hosting account, but this isn't required) to mimic WordPress.com's ability to allow end users to use their own domain names.
WordPress has users who own their own domains change the domain's DNS settings to point at WordPress's own DNS servers. My guess is this is not something that could be done on a shared hosting account, since it would involve adding an entry to the DNS server's table for each custom user domain.
However, for future reference, is this something that might be automated programmatically on perhaps a VPS?
My guess is this is not something that would be able to be done on a shared hosting account
You're nearly correct. The default site in IIS listens to all connections on port 80 for the default IP address.
You can add more sites in 3 ways:
Add new sites listening on different ports. This is not entirely practical if you want "ordinary" sites listening on port 80.
Add more IP addresses to the box (not too easily done) and set up new IIS sites to listen to the new IP addresses independently.
Add new sites to the server listening on different "host headers" (domain names to you and me) but on the same (default) IP address.
So-called "shared hosting" usually uses option 3, because a hosting company can get away with using only a single IP address for possibly hundreds of sites.
Therefore you would have to go through the tedious process of adding each host header to the box, and while I'm almost certain this could be done with WScript, I'm no expert in that area.
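For what it's worth, on IIS 7 and later this can also be scripted with appcmd; something along these lines should add a host-header binding to an existing site (the site name and domain are placeholders):

    appcmd set site /site.name:"MySite" /+bindings.[protocol='http',bindingInformation='*:80:customer1.example.com']

Each customer's domain would be added as one more binding on the site, so you could automate the "tedious process" rather than doing it by hand.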
If you really wanted to get into it, you could write an ISAPI module to intercept the calls and set up some clever(ish) database/hash table of domain names and target folders to serve as the different sites.
Bottom line is, there are various ways to achieve this on Windows. Probably none quite as easy as on a *nix platform where everything is super-scriptable.
What we do is have a wildcard DNS entry set up for our domain. That way, whatever domain the user types will resolve to our website as long as it ends with ".mydomain.com". Then our .NET code just looks at the incoming "Host" header and serves up the content that matches that domain name.
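The DNS side of that is a single wildcard record, roughly like this (mydomain.com and the address are placeholders):

    *.mydomain.com.   3600   IN   A   203.0.113.10

Assuming the IIS site is bound without a specific host header so it catches every name, the application alone decides what to render from the Host header.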
How do you efficiently create subdomains dynamically that resolve to a different IP than the original domain?
Most dynamic subdomain creation solutions I've found here would add a *.domain.com A-record to the DNS server (usually using BIND), but that's not what I want.
Does that mean the zone file needs to be set to always expire? Wouldn't that tax our DNS server heavily?
Also, what if the client's ISP doesn't go and fetch the new zone file I just dynamically changed? Wouldn't they be unable to resolve our new subdomain entry?
Would setting up DDNS in BIND be the logical path for implementing such a system? DDNS would allow me to dynamically insert A records without restarting BIND, right?
I'm sure there is some way to do this, since most large blogging services that don't point all accounts to the same IP as the blogging engine are doing something similar to what I need.
Thank you!
Yes, you could use dynamic DNS updates to push changes into your zone without having to put them into a text zone file and reload BIND each time.
Many large domain name registries use exactly that technique whenever a domain name is registered.
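For reference, a dynamic update via BIND's nsupdate tool looks roughly like this (the zone, record, and key path are made-up examples, and the zone must be configured to allow updates signed with that key):

    nsupdate -k /etc/bind/ddns.key <<EOF
    server ns1.example.com
    zone example.com
    update add newblog.example.com. 300 IN A 203.0.113.25
    send
    EOF

Once the update is accepted, the new name is served by your authoritative servers right away; there is nothing to expire and no need to reload the whole zone.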
That doesn't mean, though, that it's the right technique for your application. As recommended yesterday to your other question, there's really no reason not to go with the wildcard option.
A low-end server running Apache would be more than enough to front-end reverse proxy your first few thousand sites, and better still you don't even need to deploy it until you get enough users to set up your second partitioned cluster.
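As a sketch of that front end (the domain, backend address, and module setup are assumptions; mod_proxy and mod_proxy_http would need to be enabled):

    <VirtualHost *:80>
        ServerName  example.com
        ServerAlias *.example.com
        # keep the original Host header so the backend can pick the right site
        ProxyPreserveHost On
        ProxyPass        / http://10.0.0.10/
        ProxyPassReverse / http://10.0.0.10/
    </VirtualHost>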
I would imagine that most services that do this have their wildcard (*.) DNS entry set up for these accounts, and probably point it at a load balancer that distributes requests based on host name, etc. They then have the non-standard entries set up as normal A records in DNS.