I just started a new AWS EC2 instance. In the instance's security group I added a new rule to open port 8080. I also stopped the iptables service on the instance, per another post. So in theory this port should be wide open.
I started my RESTful service on 8080 and was able to access it locally via curl.
When I come in with curl remotely I get an error saying it couldn't connect to the host.
What else should I check to see if 8080 is truly open?
I started my RESTful service on 8080 and was able to access it locally via curl.
What kind of technology is your RESTful service based upon?
Many frameworks nowadays listen on localhost (127.0.0.1) only, whether by default or in their examples; see e.g. the canonical Node.js one (I realize that port 8080 hints at Java/Tomcat, but anyway):
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');
The log message generated by starting this is Server running at http://127.0.0.1:1337/ - the 127.0.0.1 part is key here, i.e. the server has been configured to listen on the IP address 127.0.0.1 only, whereas you are trying to connect to it via your public Amazon EC2 IP address.
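If that turns out to be the case for your service as well, the fix is to bind to all interfaces rather than the loopback address only - here is a minimal sketch of what that would look like (assuming a plain Node.js HTTP server on port 8080; adjust to whatever framework you are actually using):

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8080, '0.0.0.0'); // 0.0.0.0 binds to all interfaces, not just loopback
console.log('Server running at http://0.0.0.0:8080/');

With that in place, a remote curl against the instance's public IP on port 8080 should be able to connect, provided the security group rule is in effect.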
We have an internal load balancer, where requests come to an instance group, which is basically a set of VMs.
Load Balancer -> Instance Group (our service is running on port 8080)
We need to redirect one of the requests to some other domain; this is done in order to test our new service:
Load Balancer -> send request to VM 1 at port 8080 -> redirect it to newservice.com/filter=<with arguments>
-> VM 2 and VM 3: service is running on port 8080
What's the correct way to do that? Since the new service is not in the same GCP project, any help on how we should achieve it?
I have a simple nginx configuration:
server {
    listen 80;
    server_name MY-MACHINE-IP;  # (VM 1)
    return 301 https://newService.com$request_uri;
}
But no redirection is happening
There are only three ways to make cross-project connections possible: Shared VPC, Cloud VPN, or VPC Peering. You may choose the option that best suits your use case; I would try VPC Peering first.
Shared VPC
Cloud VPN
VPC Peering
I have a .NET Core 2.2 service running in OpenShift. My service uses SSH.NET to connect to my remote SFTP Server running outside the OpenShift cloud. The SFTP server is configured to provide only SFTP on port 22.
According to SSH.NET, the code to connect to an SFTP server is:
var connectionInfo = new ConnectionInfo("10.1.2.3",
    "guest",
    new PasswordAuthenticationMethod("guest", "pwd"),
    new PrivateKeyAuthenticationMethod("rsa.key"));

using (var client = new SftpClient(connectionInfo))
{
    client.Connect();
}
This code works fine when used inside my intranet.
To access a remote resource from OpenShift I have created an egress router that provides a fixed IP. All firewalls have been configured to allow access from OpenShift to my SFTP server.
My question:
What value should I use for the first parameter of the ConnectionInfo constructor above? The IP address "10.1.2.3" of my remote server will not work from inside OpenShift because outbound traffic must strictly go through the egress router service.
Note:
I can already access the remote server via HTTPS using an HTTP client from my pod, using a URL like this: https://x-myservice-egress.y-myproject-infra-test:4433.
In the ConnectionInfo class I must provide the egress router name and a mapping port, e.g. projectName-egress-xyz:2201. The 2201 is a mapping ID, not a physical port; it maps to my real SFTP server's host machine IP and port 22.
The code below worked!
var connectionInfo = new ConnectionInfo("projectName-egress-xyz:2201",
    "guest",
    new PasswordAuthenticationMethod("guest", "pwd"),
    new PrivateKeyAuthenticationMethod("rsa.key"));

using (var client = new SftpClient(connectionInfo))
{
    client.Connect();
}
Replace it with the egress router name and the source port that is mapped to the destination port you defined in your egress configuration.
Example: egress-xyz:source-port
I am trying to get the remote (client) IP address:
var ip = httpContext.Features.Get<IHttpConnectionFeature>()?.RemoteIpAddress;
But it works only for local requests (it returns the ::1 value).
When I load the page from a remote machine, the value is null. I investigated and found there is no IHttpConnectionFeature in the Features collection in this case.
Why? And how do I get the remote IP address correctly?
I know this post is old, but I came here looking for an answer to the same question and finally did this:
In project.json, add the dependency:
"Microsoft.AspNetCore.HttpOverrides": "1.0.0"
In Startup.cs, in the Configure method, add:
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor |
                       ForwardedHeaders.XForwardedProto
});
And, of course:
using Microsoft.AspNetCore.HttpOverrides;
Then I got the IP like this:
Request.HttpContext.Connection.RemoteIpAddress
In my case, when debugging in VS I always got the IPv6 localhost address, but when deployed on IIS I always got the remote IP.
Some useful links:
How do I get client IP address in ASP.NET CORE? and RemoteIpAddress is always null
The ::1 may be because connections terminate at IIS, which then forwards to Kestrel, the v.next web server, so connections to the web server are indeed from localhost.
(https://stackoverflow.com/a/35442401/5326387)
Just try this:
var ipAddress = HttpContext.Connection.RemoteIpAddress;
And if you have another computer on the same LAN, try to connect from that PC, but use the host machine's IP instead of localhost. Otherwise you will always get the ::1 result.
I've been trying to stream FLV content from my OpenShift cartridge using nginx + the RTMP module.
On my local machine, with the attached configuration, everything works just fine (I use ffplay for testing, e.g. ffplay rtmp://localhost:8080/test/streamkey).
When I try with the same configuration on OpenShift, I get the following error:
HandShake: Type mismatch: client sent 3, server answered 60 f=0/0
RTMP_Connect1, handshake failed.
However, if I enable port-forwarding and test the stream server using ffplay rtmp://127.0.0.1:8080/test/streamkey, everything works fine. Here are my port forwardings:
rhc port-forward myappname
Checking available ports ... done
Forwarding ports ...
To connect to a service running on OpenShift, use the Local address
Service Local OpenShift
------- -------------- ---- -----------------
nginx 127.0.0.1:8080 => 127.10.103.1:8080
My cartridge is a "diy-0.1" cartridge, with nginx 1.7.6 (also tested 1.4.4) + the rtmp-module.
I suspect there are some issues with a proxy (Apache?) that OpenShift uses for handling gears; maybe it does not allow RTMP headers(?).
NB: Configuring nginx as HTTP-only works fine.
Can anybody help? I'm stuck; I think this is the first time I've asked something on Stack Overflow :-)
The nginx configuration (NB: the "play" path and the IP:PORT are taken from the OpenShift environment variables):
rtmp {
    server {
        listen 127.10.103.1:8080;
        chunk_size 8192;

        application test {
            play /var/lib/openshift/54da37644382ece45c000139/app-root/runtime/repo/public;
        }
    }
}
There is an Apache proxy in front of your application on OpenShift Online, and it is possible that the content is being treated as HTTP traffic instead of RTMP traffic; that is why you are getting the type mismatch. If you go through the port-forward, you gain direct access to your application and bypass the proxy, which is why it works fine that way. There is currently no way to bypass the Apache reverse proxy through the public IP; please see this developer portal article for more information about how requests are routed to your application: https://developers.openshift.com/en/managing-port-binding-routing.html
I just installed my Node.js app on a Windows micro instance with the quick-start security group and with the HTTP port enabled.
I opened the firewall on the instance and opened ports 80 and 443 for both inbound and outbound.
In spite of that, my HTTP requests are not being honored by the Node.js app.
From the log I see that the app is connected to Redis and Mongo, and socket.io has also started.
What's wrong? Why are HTTP requests blocked?
Have you by chance built your app on top of the Example Webserver as currently shown on the Node.js home page as well? The sample currently looks like so:
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');
Either way, your Node.js server might simply not be listening on the correct port/hostname combination - for example, the Example Webserver listens on port 1337 (rather than the regular HTTP port 80) and on localhost only (rather than on the private/internal IP address assigned to your EC2 instance).
If these assumptions apply, you could achieve your goal by adjusting the listen() statement accordingly; see my answer to the related question Node.js Amazon EC2 example webserver - no result for an extended discussion, including a couple of variations on the flexible use of server.listen(port, [hostname], [callback]).
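For illustration, a minimal sketch of such an adjustment (assuming your app is based on the sample above; listening on port 80 and omitting the hostname are assumptions on my part):

var http = require('http');
var server = http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
});
// Omitting the hostname argument makes Node.js listen on all interfaces,
// and the callback fires once the server is actually bound.
server.listen(80, function () {
  console.log('Server listening on port 80');
});

Keep in mind that binding to port 80 may require elevated privileges on some systems, which is one reason ports like 8080 are commonly used instead.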
Good luck!
Finally found the problem.
I introduced log4js and integrated it with Express like so:
app.use(log4js.connectLogger(logger, { level: log4js.levels.ERROR }));
This created the problem; somehow this call was failing. It looks like it only works with the DEBUG level. After commenting this line out, it started working. Strange!
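For reference, here is a minimal sketch of what the adjusted setup might look like (assuming Express and log4js are already installed; the DEBUG level is based on the observation above that ERROR failed while DEBUG worked):

var express = require('express');
var log4js = require('log4js');

var app = express();
var logger = log4js.getLogger();

// Using the DEBUG level here instead of ERROR; with ERROR the middleware
// was failing, so either switch the level or drop the line entirely.
app.use(log4js.connectLogger(logger, { level: log4js.levels.DEBUG }));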