Is there a way to reload web pages with increasing numerical values?

I want to reload a URL with increasing numerical values, so:
www.example.com/page/1/
turns into
www.example.com/page/2/ and then www.example.com/page/3/
and so on.

Using Python. You need to install requests first:
pip install requests
get_urls.py:
import requests

url = 'http://www.example.com/page'
# Pages are numbered from 1, so start the range at 1
for i in range(1, 11):
    r = requests.get(f'{url}/{i}/')
    print(r.text)
Run the file:
python3 ./get_urls.py
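If you don't know in advance how many pages exist, a variation that keeps incrementing until the server stops returning pages might look like this (a sketch; it assumes the site answers with a non-200 status past the last page):

import requests

url = 'http://www.example.com/page'
i = 1
while True:
    r = requests.get(f'{url}/{i}/')
    if r.status_code != 200:  # assumption: a non-200 status marks the last page
        break
    print(r.text)
    i += 1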

Related

VS Code Jupyter notebook error: Session cannot generate requests

Error message in VS Code when using the Jupyter extension connected to a remote server over SSH:
Error: Session cannot generate requests
at w.executeCodeCell (/root/.vscode-server/extensions/ms-toolsai.jupyter-2021.8.1236758218/out/client/extension.js:90:327199)
at w.execute (/root/.vscode-server/extensions/ms-toolsai.jupyter-2021.8.1236758218/out/client/extension.js:90:326520)
at w.start (/root/.vscode-server/extensions/ms-toolsai.jupyter-2021.8.1236758218/out/client/extension.js:90:322336)
at async t.CellExecutionQueue.executeQueuedCells (/root/.vscode-server/extensions/ms-toolsai.jupyter-2021.8.1236758218/out/client/extension.js:90:336863)
at async t.CellExecutionQueue.start (/root/.vscode-server/extensions/ms-toolsai.jupyter-2021.8.1236758218/out/client/extension.js:90:336403)
I got this error after running the code below.
import pandas as pd
from itertools import product
pd.DataFrame(product(item_table, user_table), columns = ['item_id', 'user_id'])
The product function outputs all combinations of the given tables.
item_table has 39,729 items (39729 × 1) and user_table has 251,350 users (251350 × 1), so the code above builds a 251350 × 39729 combination table.
I therefore guess this is caused by the sheer size of the computation, but I want to know what the error messages mean and how to solve the problem.
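For scale, a rough back-of-the-envelope estimate (my own numbers, assuming two int64 columns at about 16 bytes per row) shows why this table cannot be materialized in memory:

# Rough size estimate before materializing the full cross join
n_items = 39729
n_users = 251350
n_rows = n_items * n_users
print(f"{n_rows:,} rows")              # 9,985,884,150 rows
print(f"~{n_rows * 16 / 1e9:.0f} GB")  # roughly 160 GB for the raw integers alone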
I have encountered (more or less) the same problem.
It happened when I tried to import tensorflow.keras; I was no longer able to import packages.
I just switched conda environments and then went back to the one I was working in, and it worked (though trying to import keras still caused the same problem).

How can I save VoD recordings without stopping streaming?

How can I stop recording at intervals, without stopping the stream, to save the VoD in Ant Media Server for my stream sources and IP cameras?
You can achieve that with a Python script, assuming you have python3 and pip installed.
The following script stops and then restarts the recording at a user-defined interval:
import sys
import sched, time

# Install requests on the fly if it is missing
try:
    import requests
    print("requests library already installed!")
except ImportError:
    try:
        import pip
        print("requests library is being installed")
        pip.main(['install', '--user', 'requests'])
        import requests
    except ImportError:
        print("pip is not installed, so it needs to be installed first to proceed further.\n"
              "You can install pip with the following command:\nsudo apt install python3-pip")

slp = sched.scheduler(time.time, time.sleep)

# sys.argv[1] = server URL, sys.argv[2] = stream id, sys.argv[3] = interval in seconds
def startStopRecording(s):
    print("Stopping the recording of " + sys.argv[2])
    response = requests.put(sys.argv[1] + "/rest/v2/broadcasts/" + sys.argv[2] + "/recording/false")
    if response.json()["success"]:
        print("recording of " + sys.argv[2] + " stopped successfully")
        print(response.content)
        print("starting the recording of " + sys.argv[2])
        response = requests.put(sys.argv[1] + "/rest/v2/broadcasts/" + sys.argv[2] + "/recording/true")
        print(response.content)
        if response.json()["success"]:
            print("recording of " + sys.argv[2] + " started successfully")
            # Re-arm the scheduler so the stop/start cycle repeats after the interval
            s.enter(int(sys.argv[3]), 1, startStopRecording, (s,))
        else:
            print("Couldn't start the recording of " + sys.argv[2])
            print("content of the response:")
            print(response.content)
            sys.exit()
    else:
        print("Couldn't stop the recording of " + sys.argv[2])
        print("content of the response:")
        print(response.content)
        sys.exit()

slp.enter(int(sys.argv[3]), 1, startStopRecording, (slp,))
slp.run()
Example usage: python3 file.py https://domain/{Application} streamId interval
The first parameter is the server URL you are going to use, e.g. https://someexample.com:5443/WebRTCAppEE.
The second parameter is the stream id you want to use, e.g. stream123.
The third parameter is the interval after which the recording is restarted. Its unit is seconds, so 60 equals 1 minute.
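Putting the example values above together, the full invocation would be:
python3 file.py https://someexample.com:5443/WebRTCAppEE stream123 60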

E-mail (or similar) notification when code execution is finished

I am currently running several simulations in R that each take quite a long time to execute, and the time each one takes varies from case to case. To use the time in between more efficiently, I wondered if it would be possible to set up something (like an e-mail notification system or similar) that would notify me as soon as a chunk of simulation is completed.
Does somebody here have experience with setting up something similar, or does someone know a resource that could teach me to implement a notification system via R?
I recently saw an R package for this kind of thing: pushoverr. I haven't used it myself, so I can't vouch for how it works, but it seems like it might be useful in your case.
I assume you run the time-consuming simulations on a server, correct? If they run on your own PC, your PC will be slow as hell anyway, and I would not see much benefit in sending a mail to myself.
For long calculations, run them on a virtual machine. I use the following workflow for my own calculations.
Write your R script. Important: have it write a .txt file (e.g. file.create("/tmp/finished.txt")) when the calculation finishes at the end. The shell script below will poll in a loop until that file exists.
Copy the code below and save it as a Python script. I once tried to get mailR running on Linux and it did not work; this code worked on the first try.
#!/usr/bin/env python3
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.mime.base import MIMEBase
from email import encoders

email_user = 'youownmail@gmail.com'  # sender address (placeholder)
email_password = 'password'          # sender password (placeholder)
email_send = 'theothersmail.com'     # recipient address (placeholder)
subject = 'yourreport'

msg = MIMEMultipart()
msg['From'] = email_user
msg['To'] = email_send
msg['Subject'] = subject

body = 'Calculation is done'
msg.attach(MIMEText(body, 'plain'))

# Optional attachment; point this at the output file of your calculation
attachment = open('/path/to/report.txt', 'rb')
part = MIMEBase('application', 'octet-stream')
part.set_payload(attachment.read())
attachment.close()
encoders.encode_base64(part)
part.add_header('Content-Disposition', 'attachment', filename='report.txt')
msg.attach(part)

text = msg.as_string()
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(email_user, email_password)
server.sendmail(email_user, email_send, text)
server.quit()
Make sure you are allowed to run the scripts:
sudo chmod 777 /path/script.R
sudo chmod 777 /path/script.py
Run both your script.R and script.py inside a script.sh file. It looks like the following:
R < /path/script.R --no-save
while [ ! -f /tmp/finished.txt ]
do
sleep 2
done
python path/script.py
This may sound a bit overwhelming if you are not familiar with these technologies, but I think it is a pretty much fully automated workflow which relieves your own resources and can be used "in production". (I use this workflow to send myself my own stock reports.)

Atom editor fails installing packages

I'm trying to install packages in the Atom editor, but it always fails, as if I couldn't get a connection to the server.
For instance, apm install split-diff returns Request for package information failed: getaddrinfo ENOTFOUND atom.io atom.io:443 (ENOTFOUND)
I'm running Atom 1.32.2 on Linux Mint 19.
I don't use a proxy.
Check your DNS servers.
I ran into this problem randomly this afternoon when initially everything was working on my Mac.
I can reach the Internet fine. GitHub is up and reporting no issues, Atom.io is up...
Clues regarding /etc/hosts from other comments here pointed me to my network settings anyway.
I checked, and I had my DNS servers configured for VPN access; once I added OpenDNS servers as well, Atom installs started working again.
Finally, I found out where the bug was!
For reasons of personal convenience, I had replaced /etc/hosts with a symlink (towards some place in my ~/ folder). THIS is what apm didn't like. (No idea why; I'd be glad to know...) Switching back to a real file for /etc/hosts made me able to install packages again.
I just installed split-diff and it loaded fine. Open Atom and under the Atom menu item select Preferences. This opens a new window and on the left side of the pane is a row of actions starting with Core and followed by Editor, URI Handling, and 5 other actions. Click on the Install action. This is where you can find and install extensions. Once you clicked on Install, the pane changes and there is a search box at the top. In the search box type split-diff and the name of your extension should appear. There should be a blue install button for the script. Click install and it should work.
I am on Ubuntu 16.04 and I was having this problem. I had a directory called /etc/hosts/ which was a cloned version of this repo.
Clearly, having a directory with the same name as a file isn't exactly a smart move, but I was able to solve the problem by moving the directory and running the install script for the repo again. The install script calls a function which flushes the DNS cache, found on line 1193 of that file.
I extracted the script/function which should do the trick:
#!/usr/bin/env python3

# Script by Ben Limmer
# https://github.com/l1m5
#
# This Python script will combine all the host files you provide
# as sources into one, unique host file to keep you internet browsing happy.

import argparse
import fnmatch
import json
import locale
import os
import platform
import re
import shutil
import socket
import subprocess
import sys
import tempfile
import time
from glob import glob

import lxml  # noqa: F401
from bs4 import BeautifulSoup

# Detecting Python 3 for version-dependent implementations
PY3 = sys.version_info >= (3, 0)

if PY3:
    from urllib.request import urlopen
else:
    raise Exception("We do not support Python 2 anymore.")

# Syntactic sugar for "sudo" command in UNIX / Linux
if platform.system() == "OpenBSD":
    SUDO = ["/usr/bin/doas"]
else:
    SUDO = ["/usr/bin/env", "sudo"]

# Project Settings
BASEDIR_PATH = os.path.dirname(os.path.realpath(__file__))

def flush_dns_cache():
    """
    Flush the DNS cache.
    """
    print("Flushing the DNS cache to utilize new hosts file...")
    print(
        "Flushing the DNS cache requires administrative privileges. You might need to enter your password."
    )
    dns_cache_found = False
    if platform.system() == "Darwin":
        if subprocess.call(SUDO + ["killall", "-HUP", "mDNSResponder"]):
            print_failure("Flushing the DNS cache failed.")
    elif os.name == "nt":
        print("Automatically flushing the DNS cache is not yet supported.")
        print(
            "Please copy and paste the command 'ipconfig /flushdns' in "
            "administrator command prompt after running this script."
        )
    else:
        nscd_prefixes = ["/etc", "/etc/rc.d"]
        nscd_msg = "Flushing the DNS cache by restarting nscd {result}"

        for nscd_prefix in nscd_prefixes:
            nscd_cache = nscd_prefix + "/init.d/nscd"
            if os.path.isfile(nscd_cache):
                dns_cache_found = True
                if subprocess.call(SUDO + [nscd_cache, "restart"]):
                    print_failure(nscd_msg.format(result="failed"))
                else:
                    print_success(nscd_msg.format(result="succeeded"))

        centos_file = "/etc/init.d/network"
        centos_msg = "Flushing the DNS cache by restarting network {result}"

        if os.path.isfile(centos_file):
            if subprocess.call(SUDO + [centos_file, "restart"]):
                print_failure(centos_msg.format(result="failed"))
            else:
                print_success(centos_msg.format(result="succeeded"))

        system_prefixes = ["/usr", ""]
        service_types = ["NetworkManager", "wicd", "dnsmasq", "networking"]

        for system_prefix in system_prefixes:
            systemctl = system_prefix + "/bin/systemctl"
            system_dir = system_prefix + "/lib/systemd/system"

            for service_type in service_types:
                service = service_type + ".service"
                service_file = path_join_robust(system_dir, service)
                service_msg = (
                    "Flushing the DNS cache by restarting " + service + " {result}"
                )
                if os.path.isfile(service_file):
                    dns_cache_found = True
                    if subprocess.call(SUDO + [systemctl, "restart", service]):
                        print_failure(service_msg.format(result="failed"))
                    else:
                        print_success(service_msg.format(result="succeeded"))

        dns_clean_file = "/etc/init.d/dns-clean"
        dns_clean_msg = "Flushing the DNS cache via dns-clean executable {result}"

        if os.path.isfile(dns_clean_file):
            dns_cache_found = True
            if subprocess.call(SUDO + [dns_clean_file, "start"]):
                print_failure(dns_clean_msg.format(result="failed"))
            else:
                print_success(dns_clean_msg.format(result="succeeded"))

    if not dns_cache_found:
        print_failure("Unable to determine DNS management tool.")
You may need an anti-filtering tool (a VPN). I had the same problem installing the file_icons package, and the package installed once the Psiphon VPN was connected.

How to resolve: ImportError: cannot import name 'HttpNtlmAuth' in python3 script?

I have installed both the requests and requests_ntlm modules using "sudo python3 -m pip install requests" (and requests_ntlm respectively), and both installs were successful.
When I then attempt "from requests import HttpNtlmAuth", I get an error stating "cannot import name 'HttpNtlmAuth'". I do not get this error on my "import requests" line.
When I run "sudo python3 -m pip list", I see both are installed and are the latest versions.
I've not encountered this error before, only "cannot import module", so I'm unfamiliar with how to resolve it.
EDIT 1: Additional information. When I run this script from the command line with "sudo", it works. Because I am running my Python script from within a PHP file using "exec", I don't particularly want to run it as the root user. Is there a way around this, or possibly a way of running the exec statement with sudo?
The HttpNtlmAuth class is in the requests_ntlm package, not in requests itself, so you'll need:
import requests
from requests_ntlm import HttpNtlmAuth
Then you'll be able to instantiate your authenticated session:
session = requests.Session()
session.auth = HttpNtlmAuth('domain\\username','password')
session.get(url)
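Equivalently, for a one-off request you can pass the auth object directly instead of keeping a Session (a minimal sketch; the URL and credentials are placeholders):

import requests
from requests_ntlm import HttpNtlmAuth

# Single request with NTLM authentication; URL and credentials are placeholders
r = requests.get('https://example.com/protected',
                 auth=HttpNtlmAuth('domain\\username', 'password'))
print(r.status_code)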
