This may be a general topic, but I came across the issue while working on some code using the Rook package.
Recent R versions include a built-in HTTP server. You may have seen it when looking up help topics from RGui: the help page opens in a browser at a local IP and port.
For example, if I enter ?paste, this brings up
http://127.0.0.1:31234/library/.../paste.html
But if I use my machine's IP, say 192.168.1.2, in place of 127.0.0.1, the page fails to load and I get an error:
While trying to retrieve the URL:http://192....
The following error was encountered:
We can not connect to the server you have requested
I have other apps with HTTP interfaces, and I can reach those apps' interfaces using both 127.0.0.1 and 192.168.1.2, etc. So as far as system/network permissions are concerned, I do not think that is the issue here.
Rather, is there something specific to the R httpd process that prevents it from being accessed via anything other than 127.0.0.1?
The above was tested on a corporate network. When I tried the same process from my home network it worked fine. However, since I already access the HTTP interfaces of many other locally installed apps from the corporate PC, I think there might be something specific to R's HTTP process that needs to be checked.
The workstation is running Windows XP.
Please let me know if you have any thoughts on the above,
Regards,
Raj.
Fixed it. The trick is to specify
library(Rook)                                # Rhttpd comes from the Rook package
s <- Rhttpd$new()
s$start(listen = "0.0.0.0", port = "20000")  # 0.0.0.0 = listen on all interfaces
when starting the Rook process. Specifying 0.0.0.0 makes it listen on all interfaces, and now I can access it using my external IP. Thanks a lot for your help nonetheless!
When opening a TCP port, the local IP address to bind to can be chosen. For servers accepting incoming connections, INADDR_ANY (0.0.0.0) is typically supplied to bind(), which opens the port on every available interface.
However, it is quite possible to open a port on just one interface (in this case, 127.0.0.1) simply by supplying that interface's IP address. It seems that R does just this.
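As an illustration (a minimal sketch in Python, not how R's server is actually implemented), here is the difference between binding a listening socket to the loopback interface only and binding to all interfaces; the port numbers are arbitrary:
import socket

# Reachable only as 127.0.0.1 from the same machine (apparently what R's help server does):
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 20000))
loopback_only.listen()

# Reachable via any of the machine's addresses (the INADDR_ANY / 0.0.0.0 case):
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(("0.0.0.0", 20001))
all_interfaces.listen()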
My guess is that you may have a proxy in place on your corporate network. Your browser is probably configured to use that proxy to access the Internet. Most browsers will exclude an address which they know to be local (127.0.0.1 or localhost) from using the proxy, but might not exclude any other IP.
Try disabling the proxy in your browser (even "Auto-Detect", completely turn the proxy off) and see if you're able to connect.
I had the same problem.
If you are using RStudio, this might be a bug in RStudio. Check out this link:
https://support.rstudio.com/hc/communities/public/questions/202656007-Cryptic-error-on-starting-RStudio-daily-with-R-devel
Updating to the latest version of RStudio with the latest version of R fixes the problem.
I have followed the guides at https://www.azerothcore.org/acore-docker/, and everything installs and works fine. Auth, worldserver, DB, etc. all work. However, when trying to play locally (over the LAN: the main computer runs the client, the server is on a different Windows machine on the same LAN), it consistently loops back to realm selection.
So, I searched here and found these two questions/answers:
Azerothcore: Looping on Realm Selection List
How to resolve sticking in "Realm Selection"?
I have followed the guide in the bottom one and changed the Address field in the database to my external IP address (assigned by my ISP). The LocalAddress is 127.0.0.1. The rest of the information appears to be correct.
When trying to connect via the external IP, it won't connect at all. But when I set my realmlist to 127.0.0.1 it connects and logs me in, yet continually loops back to the realm selection screen.
To make sure it was updating, I changed the name of the realm and it shows up correctly when I try and log in. So the data appears to be saved to the database, but I cannot get it to connect from the LAN.
I followed the official guides and changed the IP address in the DB to the external IP. Same result, except now it takes a few seconds to connect and try to log into the realm; then it fails, back to realm selection.
Help would be appreciated. Thanks.
It's 99.9% related to your networking. That's what it turns out to be for pretty much everyone asking this question.
Most likely either a port isn't forwarded correctly or your firewall prevents the connection. Try using an external service to verify whether the port is open (do a search for "port open check"). Also check that your firewall has the worldserver executable listed as an exception, pointing at the right folder.
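As a concrete way to do the port check from another machine on the LAN, here is a rough sketch in Python; 3724 and 8085 are the usual authserver/worldserver defaults, and the server IP is a placeholder for your setup:
import socket

SERVER_IP = "192.168.1.50"  # placeholder: the LAN IP of the machine running the servers

for port in (3724, 8085):   # default auth and world ports; adjust if you changed them
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        status = "open" if s.connect_ex((SERVER_IP, port)) == 0 else "closed or filtered"
        print(f"port {port}: {status}")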
Another common mistake, when using HeidiSQL, is to change the "Default" values of the realmlist table instead of the actual values in the 'Data' tab.
I changed my VM instance from "F1-micro" to "E2-micro". When I then restarted the machine, I couldn't access my webpage using the domain name; it just shows an "Error 521" page, indicating that my browser is working and the CDN is working but the host has an error. When I paste the VM's IP address into my browser, however, it shows the "Apache2 Debian Default Page".
Can somebody please help me with this?
The Error 521 message is caused by one of two situations:
First, check whether your WordPress site’s server is down. Even if everything else is configured properly, if your WordPress site’s server is offline, Cloudflare simply won’t be able to connect.
Second, your web server might be running fine but blocking Cloudflare’s requests. Because of how Cloudflare works, some server-side security solutions might inadvertently block Cloudflare’s IP addresses.
Cloudflare is a reverse proxy, so all the traffic coming to your origin server appears to come from a small range of Cloudflare IPs (rather than each individual visitor's unique IP address). Because of that, some security solutions will view high traffic from a limited number of IP addresses as an attack and block them.
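One rough way to tell those two cases apart (a sketch only, assuming an Apache origin on port 80; the IP and hostname below are placeholders) is to request the page straight from the origin's IP with your domain in the Host header, bypassing Cloudflare:
import http.client

ORIGIN_IP = "203.0.113.10"  # placeholder: the VM's external IP
HOSTNAME = "example.com"    # placeholder: the domain that Cloudflare fronts

conn = http.client.HTTPConnection(ORIGIN_IP, 80, timeout=5)
conn.request("GET", "/", headers={"Host": HOSTNAME})
response = conn.getresponse()
# A normal status here while Cloudflare keeps returning 521 points at blocked
# Cloudflare IPs; a refusal or timeout points at the origin itself being down.
print(response.status, response.reason)
conn.close()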
Please check this link out in order to fix error 521 for Cloudflare and WordPress.
It turns out this problem was caused by my having installed Debian's Apache package, which collides with the Apache shipped in the Bitnami stack. Bitnami stacks are completely self-contained and run independently of the rest of the software and libraries installed on the system.
So to fix this, all I had to do was run the following commands:
sudo systemctl stop apache2
sudo /opt/bitnami/ctlscript.sh restart
I have set up a server on one of my PCs and I am running ownCloud on it. Everything is working fine, but I wanted to ask a few things just to make the whole process more convenient.
How can I use a dynamic IP address with ownCloud? I have a DDNS, but since my external IP address is dynamic, I need to put the address in again and modify the account whenever the IP changes. Is there a way for ownCloud to work against the DDNS hostname rather than the external IP address? (I hope you get what I mean.) IMAGE: http://s27.postimg.org/r0224wfsz/Untitled_2.jpg
Also, is there a way to use the DDNS hostname (xyzz.co) from within the same home network my server is on, instead of the internal IP address (192.168.1.2)? Because again, I have to modify the account depending on whether I am on the home network or outside it.
My WAMP server shuts down on its own, as if I had exited it manually. Is there a solution to that too? I have set it to auto-start on OS boot-up, but I don't think that is the solution.
Thanks a lot!
Why not let the router your server is attached to handle this? Most recent routers let you configure (D)DNS settings.
You can also set up port mapping in your router to the external address. Then, when you are at home, you don't need to edit the sync client's settings, since you can always use your external IP address.
For that, I'd use cron. Run crontab --help in a terminal to see the options; on most distributions the default crontab ships with commented examples, so you can just edit it with crontab -e. There is plenty of documentation online, too.
Currently we have a system in place where multiple servers back up to a server in house. There are a total of 11 different servers backing up to this one storage server. Without any change (none that we are aware of), one of the servers stopped being able to connect to the storage server. It's strange, too, because the one that can't connect is actually our DNS server. It can ping the storage server, and nslookup returns the appropriate value. However, when I try to browse to the server in Windows Explorer via Network, I get the following message:
"Check the spelling of the name. Otherwise, there might be a problem with your network. To try to identify and resolve network problems, click Diagnose." - Error code: 0x80004005 Unspecified error.
If at all possible, I would like the solution not to require restarting the server (obviously that's a big request), but we run 24/7 and can't have the DNS server down for the next few weeks.
Thanks in advance!
I am completely guessing here, but let's start with this: does it work if you try to connect to the share using the IP address? (A quick check is sketched below.)
A few things to consider in the meantime: what OS is it?
-> Is network discovery off?
-> Have any firewalls been accidentally turned on?
-> We had a similar sort of problem when the server lost its trust relationship with AD (required a reboot, I'm afraid).
Unfortunately this error can relate to a range of problems including network devices, anti-virus, firewalls, shares, user accounts, etc.
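For the "connect using IP" check above, a quick sketch like this (Python; the hostname and IP are placeholders for your storage server) compares name-based and IP-based reachability of the SMB port, independent of Explorer:
import socket

TARGETS = ("storage-server", "192.168.1.20")  # placeholders: the storage server's name and IP

for target in TARGETS:
    try:
        with socket.create_connection((target, 445), timeout=3):  # 445 = SMB over TCP
            print(f"{target}: reachable")
    except OSError as err:
        print(f"{target}: failed ({err})")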
I am trying out a conferencing application (BigBlueButton).
For this I created an Ubuntu virtual machine that functions as the application server. On this machine I can test the application by navigating to the app URL (for example http://10.0.2.15).
I also created a second virtual machine that should function as a client. On this machine I want to be able to navigate to the server as well, but that doesn't seem to be working. If I try to navigate from the client to the server using the app URL, I get nothing, followed by a timeout.
To establish a network between the two machines I tried the following solutions:
Create a second network adapter on each virtual machine and attach to "Host-only Adapter" with name "vboxnet0"
Create a second adapter on each machine and attach to "Internal network" named "intnet".
I thought that either of the above options would be a good solution, but neither of them works.
Can anyone help me out here?
FYI I am using MacOS X as host system.
EDIT:
I created my second machine by cloning the first one (using the clone utility). Maybe this causes both machines to be identical, which makes them indistinguishable on a network. Would this cause a problem? (As a desktop developer, I'm a bit of a noob when it comes to IT.)
I just got this to work. What I did was set up the internal network with the tasteful name on both VMs, but THEN I went to Advanced and set Promiscuous Mode to "Allow All". I connect just fine now. Try it!
OK, just looked at the dates and it was last updated 2009, but for anyone looking for the answer, here you go!
If you cloned the machine and didn't change the IP, they will never connect...
Also, make sure there is something listening on the URL that you're trying to reach (a quick way to test that is sketched below).
Each machine should have a different IP (but on the same network, of course).
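An easy way to check the "something listening" part is to run a throwaway web server on the server VM and browse to it from the client VM; a minimal sketch using Python's standard library (port 8000 is arbitrary):
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serves the current directory on all interfaces; from the client VM, open
# http://<server VM's internal-network IP>:8000/ to confirm the two machines can reach each other.
HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()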
Set the interfaces you created to internal networking. Choose a tasteful and interesting name, like "mynet". Use that name as the network name for both of the virtual machines and they will automatically be able to talk to each other over those interfaces.
Sorry, I see you already did that. In that case, just give those two machines static IP addresses on the "internal networking" interfaces, e.g. 192.168.0.2 and 192.168.0.3.
Also, once you've changed the IPs make sure the server is listening on the right interface.
I realize this is long overdue... but I just got mine set up and am able to ping each virtual machine from the other.
Assuming you're running boot2docker like I am, simply right-click the boot2docker VM in VirtualBox and click clone. In the box that pops up, be sure to check the box that says "Reinitialize the MAC address of all network cards" so that the two virtual machines don't have the same MAC address.
That's it, seems to be working for me. I can ping, scan (via nmap) and even SSH into the virtual machines from one another or from my host machine.