Uncaught CurlException: 7: couldn't connect to host thrown in /xxx/base_facebook.php on line 886 - facebook-php-sdk

I am getting the above error when using the Facebook PHP SDK to implement FB login for my website. I have tried changing the iptables rules to open ports 80 and 443. Currently my iptables output reads like this:
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere multiport dports www,https state NEW,ESTABLISHED
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:www
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:www
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https

Facebook recently started accepting requests to the Graph API servers over IPv6.
Check that your system's IPv6 interface is correctly configured, and disable it if it isn't.
This has been the cause of roughly two-thirds of the "couldn't connect to host" questions asked lately.
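If you want to confirm that IPv6 is the culprit before changing anything, here is a rough shell sketch (graph.facebook.com is the real Graph API host; the sysctl line is just one assumed way of disabling IPv6 temporarily):
# Force cURL to use each address family and compare the results.
curl -v -4 https://graph.facebook.com/   # IPv4 only - expected to work
curl -v -6 https://graph.facebook.com/   # IPv6 only - fails if IPv6 is misconfigured
# If only the IPv6 attempt fails, disable IPv6 temporarily and retest the SDK:
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
Alternatively, you could force the SDK's cURL handle to IPv4 by setting CURLOPT_IPRESOLVE to CURL_IPRESOLVE_V4 in its cURL options, though whether that is appropriate depends on your setup.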

Related

Access OpenStack API from a VM within the cluster

I am running VHI with OpenStack. I have configured the API to use a DNS name, as described in the VHI documentation. The change is applied, and when I make an API call to the catalog (/v3/auth/catalog), the public interface returns the correct value with my new DNS name.
However, I am currently unable to interact with the API from virtual machines created within the OpenStack cluster, either by the DNS name or by the public IP address.
curl https://dns.tld:5000/v3
returns
curl: (7) Failed connect to dns.tld:5000; Connection refused
This same command from outside the cluster returns an expected result:
{"version": {"status": "stable", "updated": "2019-01-22T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.12", "links": [{"href": "https://dns.tld:5000/v3/", "rel": "self"}]}}
This happens whether I use the DNS name or the public IP address. I can access other external network resources from the virtual machine, so the issue is not a lack of external connectivity; the primary issue seems to be internal networking. I've checked the iptables rules and don't see anything unusual, but they are managed internally and use Virtuozzo references that are somewhat opaque to me, such as:
Chain VZ_IN_f2466d11_d10af457 (1 references)
target prot opt source destination
VZ_IN_f2466d11_d10af457_F tcp -- anywhere anywhere tcp dpt:17514
VZ_IN_f2466d11_d10af457_F tcp -- anywhere anywhere tcp dpt:ddi-tcp-3
VZ_IN_f2466d11_d10af457_F tcp -- anywhere anywhere tcp dpt:domain
VZ_IN_f2466d11_d10af457_F udp -- anywhere anywhere udp dpt:domain
VZ_IN_f2466d11_d10af457_F tcp -- anywhere anywhere tcp dpt:commplex-link
VZ_IN_f2466d11_d10af457_F tcp -- anywhere anywhere tcp dpt:pgbouncer
VZ_IN_f2466d11_d10af457_F tcp -- anywhere anywhere tcp dpt:ddi-tcp-1
Even when I am SSH-ed in to one of the bare-metal servers (including the management node), I am not able to get the API to respond; I continue to get the same errors as above.
How can I get these API calls to work correctly internally?
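For what it's worth, a hedged diagnostic sketch, run on the controller node, that may help narrow down whether the refusal comes from the Keystone service itself or from a NAT/firewall layer in front of it (dns.tld is the placeholder from the question):
ss -tlnp | grep 5000                  # which address is port 5000 actually bound to?
curl -kv https://127.0.0.1:5000/v3    # does the service answer locally?
curl -kv https://dns.tld:5000/v3      # does it answer via the public name from the node?
iptables -t nat -L -n | grep 5000     # any DNAT rules for port 5000 that only match external traffic?
If the local call works but the DNS/public-IP call fails from inside, the usual suspect is hairpin NAT: a DNAT rule that forwards external traffic to the endpoint often does not match traffic originating inside the cluster.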

EC2 Nginx stuck in SYN_ACK for http requests from single ip [closed]

I have a single EC2 instance up and running with the default Amazon Linux 2 image.
I've got HTTP, HTTPS, and SSH inbound rules enabled from all IPs in the security groups, so it should be accessible from any IP.
I did
sudo iptables -I INPUT -p tcp --dport 80 -j LOG
and I can see my requests showing up in
sudo tail -f /var/log/messages
The server is running nginx, that is proxying requests to a node cluster via unix sockets.
All requests from other IP addresses seem to go through; only mine are affected, and only over HTTP/HTTPS.
So everything seems to work fine, except that I can't connect via HTTP/HTTPS from my local development machine, while SSH works.
iptables is empty
$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
I've got no clue what might be blocking my requests, and it's driving me insane; I've spent hours trying to figure it out and have no clue whatsoever. Anyone?
Update
It seems the TCP connections are stuck in the SYN_RECV state. No idea about the root cause.
$ netstat -atupen
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 vm.vm.vm.vm:80 my.my.my.my:8857 SYN_RECV 0 0 -
vm.vm.vm.vm -> aws internal ip address
my.my.my.my -> my current ip address
By any chance, is your instance in a VPC with custom network ACLs?
SYN_RECV means the initial SYN reached the server and the SYN/ACK was sent, so you might want to investigate why your client's final ACK never arrives.
Do you have some kind of firewall on your computer or router, or connectivity issues with your network?
Kind Regards
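One hedged way to see where the handshake stalls is to capture it on the instance and compare with a capture on your own machine (eth0 and my.my.my.my are placeholders for the instance's interface and your client IP):
sudo tcpdump -ni eth0 'tcp port 80 and host my.my.my.my'   # run on the EC2 instance
If the instance shows the SYN arriving and the SYN/ACK being retransmitted with no ACK ever coming back, something between AWS and your client is dropping the reply (a local firewall, a router, or an MTU problem are common causes); a simultaneous capture on your machine would show whether the SYN/ACK arrives at all.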

k8s external-ip SNAT?

I added an external IP, 1.1.1.1, for one of my pods, and I can reach the pod's UDP port 1234 via 1.1.1.1:1234 from the external network. However, I found that k8s does SNAT on the request, so from my pod's point of view the source is the k8s node IP, 10.244.0.1:1234. When my pod sends its UDP response to 10.244.0.1:1234, k8s does not DNAT it back, so the external network never receives the response. What IP and port should my pod respond to? Any ideas?
You don't need an explicit DNAT rule, since the SNAT rule creates a conntrack entry keyed on the 4-tuple (src IP, dst IP, src port, dst port), and that entry identifies the connection for the response packets. Do you send multiple UDP requests with the same 4-tuple? I do remember there is a conntrack race condition for UDP. You can take a look at the details of the race condition at the following link.
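Separately, a minimal sketch for checking that the reply path is covered, assuming the conntrack-tools package is installed on the node (the port is the one from your question):
conntrack -L -p udp | grep 1234
# Each entry lists both the original and the reply tuple. If an entry exists,
# the pod can simply send its response back to the exact source address and
# port it received the datagram from, and the node reverses the SNAT for it.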

iptables NAT not applied to packets from a TAP interface

The iptables MASQUERADE NAT rule is not being applied to packets that have come from a TAP interface.
I have an application tied to two TAP interfaces that is used for some packet manipulation during routing.
I am using iptables to apply a netfilter mark to packets received on one of two physical interfaces, and ip rules to route the marked packets into one of my TAP interfaces. When a packet comes back out of my application, it goes through the main routing table again and out the appropriate physical interface.
I have a MASQUERADE NAT rule on one of the two physical interfaces, but when the packet is transmitted the NAT is not applied. I think this is because the packet has already passed through iptables once.
Can you mark a packet as "new" in iptables so it traverses the full iptables chains a second time?
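I can't point to a clean way to re-mark a packet as NEW, but as a debugging sketch the TRACE target in the raw table shows exactly which tables and chains a packet traverses on each pass, which should confirm whether nat/POSTROUTING is skipped after the TAP re-injection (tap0 and eth0 are assumed interface names):
iptables -t raw -A PREROUTING -i tap0 -p tcp -j TRACE   # trace packets re-entering from the TAP
iptables -t raw -A PREROUTING -i eth0 -p tcp -j TRACE   # trace the original ingress for comparison
# Trace output goes to the kernel log (or to "xtables-monitor --trace" on nft-based systems):
dmesg | grep TRACE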

netstat -na : udp and state established?

In an application (a VoIP RTP media server), netstat -na on the server (172.16.226.3, bound to UDP port 1286) gives the following line:
udp 0 0 172.16.226.3:1286 172.25.14.11:10000 ESTABLISHED
Since a UDP connection cannot really be "established", it strikes me to see such a line. The netstat documentation says this field is used for TCP connection states, but I am sure this really is a UDP network flow. So, what does it mean? I know (from a Wireshark dump) that my server sends UDP packets back from 172.16.226.3:1286 to 172.25.14.11:10000, but I don't see why that should matter...
The OS is Debian 6.
A UDP socket can be connected via the connect(2) system call, so that the socket will only accept packets from the named peer.
I expect this is the source of the ESTABLISHED state you see.
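A minimal sketch that reproduces the effect, assuming socat is available (its UDP address type calls connect(2) on the socket); the addresses are the ones from the question:
socat - UDP:172.25.14.11:10000        # run in one terminal; creates a connected UDP socket
netstat -nau                          # run in another terminal
ss -una                               # newer equivalent; the connected socket shows ESTAB
The connected socket is the one that shows a remote address and the ESTABLISHED state, exactly like the RTP media server's socket in your output.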
