I'm trying to install packages in the Atom editor, but it always fails, as if I couldn't get a connection to the server.
For instance, apm install split-diff returns: Request for package information failed: getaddrinfo ENOTFOUND atom.io atom.io:443 (ENOTFOUND)
I'm running Atom 1.32.2 on Linux Mint 19.
I don't use a proxy.
Check your DNS servers.
I ran into this problem randomly this afternoon when initially everything was working on my Mac.
I can reach the Internet fine. Github is up and reporting no issues, Atom.io is up...
Clues re: /etc/hosts from other comments here pointed me to my network settings anyway.
I checked and found my DNS servers were configured for VPN access; once I added OpenDNS servers as well, Atom installs started working again.
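If you want a quick way to confirm DNS is the culprit (this check is mine, not part of the original answer), resolving atom.io from Python should fail with the same getaddrinfo error that apm reports:

# Quick DNS sanity check: a failing lookup here reproduces the ENOTFOUND error
# that apm shows for atom.io.
import socket

try:
    socket.getaddrinfo("atom.io", 443)
    print("DNS resolution for atom.io works")
except socket.gaierror as exc:
    print("DNS lookup failed:", exc)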
Finally, I found out where the bug was!
For reasons of personal convenience, I had replaced /etc/hosts with a symlink (pointing to somewhere in my ~/ folder). THIS is what apm didn't like. (No idea why. I'd be glad to know...) Switching back to a real file for /etc/hosts let me install packages again.
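A quick way to see whether your own /etc/hosts is a symlink (just a convenience check, not something apm itself runs):

# Convenience check: report whether /etc/hosts is a symlink and where it points.
import os

print(os.path.islink("/etc/hosts"))     # True if it is a symlink
print(os.path.realpath("/etc/hosts"))   # the actual file it resolves to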
I just installed split-diff and it loaded fine. Open Atom and, under the Atom menu item, select Preferences. This opens a new window; on the left side of the pane is a row of actions starting with Core and followed by Editor, URI Handling, and five other actions. Click on the Install action; this is where you can find and install extensions. Once you click Install, the pane changes and there is a search box at the top. Type split-diff in the search box and the name of the extension should appear, with a blue Install button next to it. Click Install and it should work.
I am on Ubuntu 16.04 and I was having this problem. I had a directory called /etc/hosts/ which was a cloned version of this repo.
Clearly, having a directory with the same name as the file isn't exactly a smart move, but I was able to solve the problem by moving the directory and running the install script for the repo again. The install script calls a function which flushes the DNS cache, found at line 1193 of this file here.
I extracted the script/function which should do the trick:
#!/usr/bin/env python3
# Extracted from the hosts updater script by Ben Limmer
# https://github.com/l1m5
#
# The full script combines all the host files you provide as sources into one,
# unique hosts file to keep your internet browsing happy. Only the parts needed
# for flush_dns_cache() are kept here; unused imports from the full script have
# been dropped.
import os
import platform
import subprocess

# Syntactic sugar for the "sudo" command on UNIX / Linux
if platform.system() == "OpenBSD":
    SUDO = ["/usr/bin/doas"]
else:
    SUDO = ["/usr/bin/env", "sudo"]

# Project Settings
BASEDIR_PATH = os.path.dirname(os.path.realpath(__file__))


def print_success(text):
    # Simplified stand-in for the helper defined elsewhere in the full script.
    print(text)


def print_failure(text):
    # Simplified stand-in for the helper defined elsewhere in the full script.
    print(text)


def path_join_robust(path, *paths):
    # Simplified stand-in: the full script also handles encoding edge cases.
    return os.path.join(path, *paths)

def flush_dns_cache():
    """
    Flush the DNS cache.
    """
    print("Flushing the DNS cache to utilize new hosts file...")
    print(
        "Flushing the DNS cache requires administrative privileges. You might need to enter your password."
    )
    dns_cache_found = False
    if platform.system() == "Darwin":
        if subprocess.call(SUDO + ["killall", "-HUP", "mDNSResponder"]):
            print_failure("Flushing the DNS cache failed.")
    elif os.name == "nt":
        print("Automatically flushing the DNS cache is not yet supported.")
        print(
            "Please copy and paste the command 'ipconfig /flushdns' in "
            "administrator command prompt after running this script."
        )
    else:
        nscd_prefixes = ["/etc", "/etc/rc.d"]
        nscd_msg = "Flushing the DNS cache by restarting nscd {result}"
        for nscd_prefix in nscd_prefixes:
            nscd_cache = nscd_prefix + "/init.d/nscd"
            if os.path.isfile(nscd_cache):
                dns_cache_found = True
                if subprocess.call(SUDO + [nscd_cache, "restart"]):
                    print_failure(nscd_msg.format(result="failed"))
                else:
                    print_success(nscd_msg.format(result="succeeded"))
        centos_file = "/etc/init.d/network"
        centos_msg = "Flushing the DNS cache by restarting network {result}"
        if os.path.isfile(centos_file):
            if subprocess.call(SUDO + [centos_file, "restart"]):
                print_failure(centos_msg.format(result="failed"))
            else:
                print_success(centos_msg.format(result="succeeded"))
        system_prefixes = ["/usr", ""]
        service_types = ["NetworkManager", "wicd", "dnsmasq", "networking"]
        for system_prefix in system_prefixes:
            systemctl = system_prefix + "/bin/systemctl"
            system_dir = system_prefix + "/lib/systemd/system"
            for service_type in service_types:
                service = service_type + ".service"
                service_file = path_join_robust(system_dir, service)
                service_msg = (
                    "Flushing the DNS cache by restarting " + service + " {result}"
                )
                if os.path.isfile(service_file):
                    dns_cache_found = True
                    if subprocess.call(SUDO + [systemctl, "restart", service]):
                        print_failure(service_msg.format(result="failed"))
                    else:
                        print_success(service_msg.format(result="succeeded"))
        dns_clean_file = "/etc/init.d/dns-clean"
        dns_clean_msg = "Flushing the DNS cache via dns-clean executable {result}"
        if os.path.isfile(dns_clean_file):
            dns_cache_found = True
            if subprocess.call(SUDO + [dns_clean_file, "start"]):
                print_failure(dns_clean_msg.format(result="failed"))
            else:
                print_success(dns_clean_msg.format(result="succeeded"))
    if not dns_cache_found:
        print_failure("Unable to determine DNS management tool.")
You need to use a filter breaker (an anti-censorship VPN tool). I had the same problem installing the file_icons package, and the package installed successfully once the Siphon filter breaker was connected.
Related
I'mma try and be careful; my first question on here got me blocked, lol. I'm testing out my scraping skills on a random church website and I keep getting the error in the title. Can someone see what I'm doing wrong? I've updated my OpenCV installs, installed like 10 packages (based on some past answers), and still nothing.
import subprocess
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
subprocess.call(['chromedriver_win32.zip'], shell=True)
website = "https://www.bethanyfga.org/"
path = "C:/Users/calde/OneDrive/Desktop/chromedriver_win32.zip"
service = Service(executable_path=path)
driver = webdriver.Chrome(service=service)
driver.get(website)
I installed several OpenCV packages, uninstalled and reinstalled other, unrelated packages, and imported subprocess.
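For what it's worth, the error most likely comes from pointing Selenium at the .zip archive instead of the chromedriver executable inside it. A minimal sketch, assuming chromedriver.exe has been extracted to a folder of the same name (the exact path is an assumption, adjust to your layout):

# Sketch only: assumes chromedriver_win32.zip has been extracted and that
# chromedriver.exe sits at the path below.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

website = "https://www.bethanyfga.org/"
path = "C:/Users/calde/OneDrive/Desktop/chromedriver_win32/chromedriver.exe"

service = Service(executable_path=path)   # point at the .exe, not the .zip
driver = webdriver.Chrome(service=service)
driver.get(website)

The subprocess.call on the zip file isn't needed at all; Selenium starts the driver process itself.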
Using Apache Airflow, how can I implement a DAG for the following Python code? The task in the code is to fetch a directory from a GPU server to the local system. The code works fine in a Jupyter notebook. Please help me implement it in Airflow... I'm very new to this. Thanks.
import pysftp
import os

myHostname = "hostname"
myUsername = "username"
myPassword = "pwd"

with pysftp.Connection(host=myHostname, username=myUsername, password=myPassword) as sftp:
    print("Connection successfully established ... ")
    src = '/path/src/'
    dst = '/home/path/path/destination'
    os.mkdir(dst)
    sftp.get_d(src, dst, preserve_mtime=True)
    print("Fetched source images from GPU server to local directory")
    # connection closed automatically at the end of the with-block
For SFTP duties, Airflow provides an SFTPOperator that you can use directly.
Alternatively, its corresponding SFTPHook can be used with a simple PythonOperator (a minimal sketch of the PythonOperator route follows below).
I acknowledge there aren't many examples, but this might be helpful.
For the SSH connection, see this.
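If the goal is simply to get the working snippet scheduled, one straightforward option is to wrap the pysftp code from the question in a PythonOperator. A minimal sketch, assuming Airflow 2.x; the hostname, credentials, paths, and dag_id are placeholders:

# Minimal sketch: the original pysftp transfer wrapped in a PythonOperator.
# Hostname, credentials, paths, and dag_id below are placeholders.
import os
from datetime import datetime

import pysftp
from airflow import DAG
from airflow.operators.python import PythonOperator


def fetch_gpu_directory():
    """Copy a directory from the GPU server to the local filesystem."""
    src = "/path/src/"
    dst = "/home/path/path/destination"
    os.makedirs(dst, exist_ok=True)
    with pysftp.Connection(host="hostname", username="username", password="pwd") as sftp:
        sftp.get_d(src, dst, preserve_mtime=True)


with DAG(
    dag_id="fetch_gpu_directory",     # placeholder name
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,           # trigger manually
    catchup=False,
) as dag:
    PythonOperator(
        task_id="fetch_from_gpu_server",
        python_callable=fetch_gpu_directory,
    )

In practice you would keep the credentials in an Airflow connection rather than hard-coding them in the callable.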
I want to reload a url with increasing numerical values so:
www.example.com/page/1/
turns into
www.example.com/page/2/ and then www.example.com/page/3/
and so on.
Using Python
You need to install requests first:
pip install requests
get_urls.py:
import requests

url = 'http://www.example.com/page'

# The pages in the question start at 1, so iterate over 1..10 inclusive.
for i in range(1, 11):
    r = requests.get(f'{url}/{i}/')
    print(r.text)
Run the file:
python3 ./get_urls.py
I'm trying to copy files from a remote desktop machine to my local machine.
Here is the code that I tried...
import os
import os.path
import shutil
import sys
import win32wnet


def netcopy(host, source, dest_dir, username=None, password=None, move=False):
    """Copies files or directories to a remote computer."""
    wnet_connect(host, username, password)
    dest_dir = covert_unc(host, dest_dir)
    # Pad a backslash to the destination directory if not provided.
    if not dest_dir[len(dest_dir) - 1] == '\\':
        dest_dir = ''.join([dest_dir, '\\'])
    # Create the destination dir if it's not there.
    if not os.path.exists(dest_dir):
        os.makedirs(dest_dir)
    else:
        # Create a directory anyway if a file exists so as to raise an error.
        if not os.path.isdir(dest_dir):
            os.makedirs(dest_dir)
    if move:
        shutil.move(source, dest_dir)
    else:
        shutil.copy(source, dest_dir)
I'm trying to figure out how to establish a connection and copy files over to my local machine.
I'm new to Python here...
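For reference, the snippet above calls wnet_connect and covert_unc without defining them. Based on the pywin32 win32wnet API, they would need to look roughly like this (a sketch only; the C$ admin-share assumption is mine and requires an account with admin-share access on the remote host):

# Rough sketch of the missing helpers, based on the pywin32 win32wnet API.
import win32netcon
import win32wnet


def wnet_connect(host, username, password):
    # Authenticate against the C$ admin share so UNC paths on this host work.
    nr = win32wnet.NETRESOURCE()
    nr.dwType = win32netcon.RESOURCETYPE_DISK
    nr.lpRemoteName = '\\\\' + host + '\\C$'
    win32wnet.WNetAddConnection2(nr, password, username, 0)


def covert_unc(host, path):
    # Convert a local drive path like "C:\temp" into "\\host\C$\temp".
    return ''.join(['\\\\', host, '\\', path.replace(':', '$')])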
Are you using an RDP client?
Is this Windows, Linux, or Mac?
Which app are you using?
Is this code you wrote?
Do you know what virtual channels are?
Is NLA on?
There is very little information provided here.
Can you even connect? Can you ping the server?
I have installed both requests and requests_ntlm modules using "sudo python3 -m pip install requests" (and requests_ntlm respectively) and both installs were successful.
When I then attempt to do "from requests import HttpNtlmAuth", I get an error stating "cannot import name 'HttpNtlmAuth'". I do not get this error on my "import requests" line.
When I do a "sudo python3 -m pip list", I see both are installed and are the latest versions.
I've not encountered this error before, only "cannot import module", so I'm unfamiliar with how to resolve this.
EDIT 1: Additional information. When I run this script from the command line as "sudo", it works. Because I am running my Python script from within a PHP file using "exec", I don't particularly want to run it as the root user. Is there a way around this, or possibly a way of running the exec statement with sudo?
The HttpNtlmAuth class is in the requests_ntlm package, so you'll need to have:
import requests
from requests_ntlm import HttpNtlmAuth
Then you'll be able to instantiate your authentication:
session = requests.Session()
session.auth = HttpNtlmAuth('domain\\username','password')
session.get(url)