pip install on Jupyter kernel venv (virtual environment) is not updated - jupyter-notebook

I'm creating the Jupyter kernel within the virtual environment, but the package is still missing in the Jupyter notebook after pip install.

First create your virtual env using this post and activate it!
Create the kernel using:
ipython kernel install --name=gen-py3
The kernel details will now be in:
/usr/local/share/jupyter/kernels/gen-py3
Note: you can cd into this folder and open the file kernel.json to see the details of your env. For example, mine is:
"argv": [
  "/usr/local/opt/python/bin/python3.7",
  "-m",
  "ipykernel_launcher",
  "-f",
  "{connection_file}"
],
"display_name": "gen-py3",
"language": "python"
This means the kernel is taken from the global Python env, because of this path: "/usr/local/opt/python/bin/python3.7".
In some cases, even when you're running inside your venv, the kernel is still taken from your global env rather than your venv. Now try to run, inside your venv:
virtualenv -p python3 venv
source <env_name>/bin/activate ; in our case venv
pip install ipython
pip install ipykernel
and make sure you run
ipython kernel install --name=venv-test
from your venv ipython.
For example, if your virtual env is venv and its location is
~/venv
run:
/Users/nathanielkohn/venv/bin/ipython kernel install --name=venv-test
Now your kernel.json file points at the venv:
{
  "argv": [
    "/Users/<your_user_name>/venv/bin/python",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "venv-test",
  "language": "python",
  "metadata": {
    "debugger": true
  }
}
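Once the kernel is registered, a quick way to confirm that a notebook really uses the venv interpreter is to check from inside a cell. A minimal sketch (run it in a notebook cell using the new kernel):

```python
import sys

# The interpreter path shows which Python the kernel actually launched;
# for the venv-test kernel it should be under your venv, not /usr/local.
print(sys.executable)

# Inside a virtual environment, sys.prefix points at the venv while
# sys.base_prefix points at the global installation; they differ only in a venv.
in_venv = sys.prefix != getattr(sys, "base_prefix", sys.prefix)
print("running inside a venv:", in_venv)
```

If the path printed is still the global Python, the kernelspec was registered from the wrong interpreter.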


Sagemaker training job Fatal error: cannot open file 'train': No such file or directory

I am trying to work on bring-your-own-model. I have R code. When I try to run the job, it fails.
Training Image:
FROM r-base:3.6.3
MAINTAINER Amazon SageMaker Examples <amazon-sagemaker-examples@amazon.com>
RUN apt-get -y update && apt-get install -y --no-install-recommends \
wget \
r-base \
r-base-dev \
apt-transport-https \
ca-certificates \
python3 python3-dev python3-pip
ENV AWS_DEFAULT_REGION="us-east-2"
RUN R -e "install.packages('reticulate', dependencies = TRUE, warning = function(w) stop(w))"
RUN R -e "install.packages('readr', dependencies = TRUE, warning = function(w) stop(w))"
RUN R -e "install.packages('dplyr', dependencies = TRUE, warning = function(w) stop(w))"
RUN pip install --quiet --no-cache-dir \
'boto3>1.0,<2.0' \
'sagemaker>2.0,<3.0'
ENTRYPOINT ["/usr/bin/Rscript"]
Source code:
rcode
├── train.R
└── train.tar.gz
Build
- aws s3 cp $CODEBUILD_SRC_DIR/rcode/ s3://${self:custom.deploymentBucket}/${self:service}/code/training --recursive
Serverless.com yaml
SagemakerRCodeTrainingStep:
  Type: Task
  Resource: ${self:custom.sageMakerTrainingJob}
  Parameters:
    TrainingJobName.$: "$.sageMakerTrainingJobName"
    DebugHookConfig:
      S3OutputPath: "s3://${self:custom.deploymentBucket}/${self:service}/models/rmodel"
    AlgorithmSpecification:
      TrainingImage: ${self:custom.sagemakerRExecutionContainerURI}
      TrainingInputMode: "File"
    OutputDataConfig:
      S3OutputPath: "s3://${self:custom.deploymentBucket}/${self:service}/models/rmodel"
    StoppingCondition:
      MaxRuntimeInSeconds: ${self:custom.maxRuntime}
    ResourceConfig:
      InstanceCount: 1
      InstanceType: "ml.m5.xlarge"
      VolumeSizeInGB: 30
    RoleArn: ${self:custom.stateMachineRoleARN}
    InputDataConfig:
      - DataSource:
          S3DataSource:
            S3DataType: "S3Prefix"
            S3Uri: "s3://${self:custom.datasetsFilePath}/data/processed/train"
            S3DataDistributionType: "FullyReplicated"
        ChannelName: "train"
    HyperParameters:
      sagemaker_submit_directory: "s3://${self:custom.deploymentBucket}/${self:service}/code/training/train.tar.gz"
      sagemaker_program: "train.R"
      sagemaker_enable_cloudwatch_metrics: "false"
      sagemaker_container_log_level: "20"
      sagemaker_job_name: "sagemaker-r-learn-2022-02-28-09-56-33-234"
      sagemaker_region: ${self:provider.region}
I am not sure which TrainingImage you are using or which files are in your container.
That being said, I suspect you are using a custom container.
SageMaker Training Jobs look for a train file and run your container as follows:
docker run image train
You can change this behavior by setting the ENTRYPOINT in your Dockerfile. Please see this example Dockerfile from the r_byo_r_algo_hpo example.
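For illustration, a minimal sketch of that approach; the script path below is hypothetical, and the linked r_byo_r_algo_hpo example shows the full pattern:

```dockerfile
# Copy the R training script into the image (path is an example)
COPY rcode/train.R /opt/ml/code/train.R
# SageMaker starts the container as `docker run image train`; pointing
# ENTRYPOINT at Rscript plus an explicit script path means the trailing
# `train` argument no longer has to resolve to a file in the container.
ENTRYPOINT ["/usr/bin/Rscript", "/opt/ml/code/train.R"]
```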

SageMaker fails when trying to add Lifecycle Configuration for keeping custom environments persistent after restart

I want to create an environment in SageMaker on AWS with Miniconda and make it available as a kernel in Jupyter when I restart the session. But SageMaker keeps failing.
I followed the instructions found in here:
https://aws.amazon.com/premiumsupport/knowledge-center/sagemaker-lifecycle-script-timeout/
basically it says:
"Create a custom, persistent Conda installation on the notebook instance's Amazon Elastic Block Store (Amazon EBS) volume: Run the on-create script in the terminal of an existing notebook instance. This script uses Miniconda to create a separate Conda installation on the EBS volume (/home/ec2-user/SageMaker/). Then, run the on-start script as a lifecycle configuration to make the custom environment available as a kernel in Jupyter. This method is recommended for more technical users, and it is a better long-term solution."
I ran this on-create.sh script in the terminal in Jupyter:
on-create.sh:
#!/bin/bash
set -e
sudo -u ec2-user -i <<'EOF'
unset SUDO_UID
# Install a separate conda installation via Miniconda
WORKING_DIR=/home/ec2-user/SageMaker/custom-environments
mkdir -p "$WORKING_DIR"
wget https://repo.anaconda.com/miniconda/Miniconda3-4.6.14-Linux-x86_64.sh -O "$WORKING_DIR/miniconda.sh"
bash "$WORKING_DIR/miniconda.sh" -b -u -p "$WORKING_DIR/miniconda"
rm -rf "$WORKING_DIR/miniconda.sh"
# Create a custom conda environment
source "$WORKING_DIR/miniconda/bin/activate"
KERNEL_NAME="conda-test-env"
PYTHON="3.6"
conda create --yes --name "$KERNEL_NAME" python="$PYTHON"
conda activate "$KERNEL_NAME"
pip install --quiet ipykernel
# Customize these lines as necessary to install the required packages
conda install --yes numpy
pip install --quiet boto3
EOF
and it creates the "conda-test-env" environment as expected.
Then I add the on-start.sh as a lifecycle configuration:
#!/bin/bash
set -e
sudo -u ec2-user -i <<'EOF'
unset SUDO_UID
source "/home/ec2-user/SageMaker/custom-environments/miniconda/bin/activate"
conda activate conda-test-env
python -m ipykernel install --user --name "conda-test-env" --display-name "conda-test-env"
# Optionally, uncomment these lines to disable SageMaker-provided Conda functionality.
# echo "c.EnvironmentKernelSpecManager.use_conda_directly = False" >> /home/ec2-user/.jupyter/jupyter_notebook_config.py
# rm /home/ec2-user/.condarc
EOF
Then I update the instance with the new configuration, and when I start my notebook instance, it fails after a few minutes.
I'd appreciate any help.
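A common cause of this kind of failure is the five-minute limit on lifecycle configuration scripts: if on-start takes longer, the instance fails to start. The AWS article linked above works around this by pushing the slow steps into the background so the script returns immediately. A sketch of that pattern, reusing the paths from the on-start script above:

```shell
#!/bin/bash
set -e
# Run the kernel registration in the background (nohup + sudo -b) so the
# lifecycle script itself finishes well inside the 5-minute limit.
nohup sudo -b -u ec2-user -i <<'EOF'
unset SUDO_UID
source "/home/ec2-user/SageMaker/custom-environments/miniconda/bin/activate"
conda activate conda-test-env
python -m ipykernel install --user --name "conda-test-env" --display-name "conda-test-env"
EOF
```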

Application logs to stdout with Shiny Server and Docker

I have a Docker container running a shiny app (Dockerfile here).
Shiny Server's own logs go to stdout, while application logs are written to /var/log/shiny-server. I'm deploying this container to AWS Fargate, where logging only shows stdout, which makes debugging a deployed application challenging. I'd like to write the application logs to stdout as well.
I've tried a number of potential solutions:
I've tried the solution provided here, but have had no luck. I added exec xtail /var/log/shiny-server/ as the last line of my shiny-server.sh, but app logs are still not written to stdout.
I noticed that writing application logs to stdout is now the default behavior in rocker/shiny, but since I'm using rocker/verse:3.6.2 (upgraded from 3.6.0 today) along with RUN export ADD=shiny, I don't think this is standard behavior for the rocker/verse:3.6.2 container with the Shiny add-on. As a result, I don't get the default behavior out of the box.
This issue on GitHub suggests an alternative method of forcing application logging to stdout by way of an environment variable, SHINY_LOG_STDERR=1, set at runtime, but I'm not Linux-savvy enough to know where that env variable needs to be set to be effective. I found this documentation from Shiny Server v1.5.13, which suggests which file to set the environment variable in depending on the Linux distro; however, the output from my container when I run cat /etc/os-release is:
which doesn't really line up with any of the distributions in the Shiny Server documentation, thus making the documentation unhelpful.
I tried adding the environment variable from the GitHub issue above in the docker run command, i.e.,
docker run --rm -e SHINY_LOG_STDERR=1 -p 3838:3838 [my image]
as well as
docker run --rm -e APPLICATION_LOGS_TO_STDOUT=true -p 3838:3838 [my image]
and am still not getting the logs to stdout.
I must be missing something here. Can someone help me identify how to successfully get application logs to stdout?
You can add the line ENV SHINY_LOG_STDERR=1 to your Dockerfile (at least, this works with rocker/shiny; I'm not sure about rocker/verse). For example, with your Dockerfile:
FROM rocker/verse:3.6.2
## Add shiny capabilities to container
RUN export ADD=shiny && bash /etc/cont-init.d/add
## Install curl and xtail
RUN apt-get update && apt-get install -y \
curl \
xtail
## Add pip3 and other Python packages
RUN sudo apt-get update -y && apt-get install -y python3-pip
RUN pip3 install boto3
## Add R packages
RUN R -e "install.packages(c('shiny', 'tidyverse', 'tidyselect', 'knitr', 'rmarkdown', 'jsonlite', 'odbc', 'dbplyr', 'RMySQL', 'DBI', 'pander', 'sciplot', 'lubridate', 'zoo', 'stringr', 'stringi', 'openxlsx', 'promises', 'future', 'scales', 'ggplot2', 'zip', 'Cairo', 'tinytex', 'reticulate'), repos = 'https://cran.rstudio.com/')"
## Update and install
RUN tlmgr update --self --all
RUN tlmgr install ms
RUN tlmgr install beamer
RUN tlmgr install pgf
#Copy app dir and theme dirs to their respective locations
COPY iarr /srv/shiny-server/iarr
COPY iarr/reports/interim_annual_report/theme/SwCustom /opt/TinyTeX/texmf-dist/tex/latex/beamer/
#Force texlive to find my custom beamer theme
RUN texhash
EXPOSE 3838
## Add shiny-server information
COPY shiny-server.sh /usr/bin/shiny-server.sh
COPY shiny-customized.config /etc/shiny-server/shiny-server.conf
## Add dos2unix to eliminate Win-style line-endings and run
RUN apt-get update -y && apt-get install -y dos2unix
RUN dos2unix /usr/bin/shiny-server.sh && apt-get --purge remove -y dos2unix && rm -rf /var/lib/apt/lists/*
# Enable logging to stdout/stderr
ENV SHINY_LOG_STDERR=1
RUN ["chmod", "+x", "/usr/bin/shiny-server.sh"]
CMD ["/usr/bin/shiny-server.sh"]
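After rebuilding with that ENV line, a quick local check before deploying to Fargate (the image tag here is hypothetical):

```shell
docker build -t my-shiny-app .
docker run --rm -p 3838:3838 my-shiny-app
# application-level output (message(), warning(), print() from server code)
# should now appear inline in the terminal instead of /var/log/shiny-server
```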

jupyter not found even after creating .zshrc file and copying anaconda installer code from bash profile

I've recently updated to macOS 10.15.1 (Catalina) and switched to zsh. I then couldn't launch jupyter notebook from the command line; zsh couldn't find jupyter. There wasn't a .zshrc file either, so I created one, added the code below from my bash_profile, and saved it. After executing source .zshrc, I still get the error zsh: command not found: jupyter. Where am I going wrong?
# added by Anaconda3 2019.03 installer
# >>> conda init >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$(CONDA_REPORT_ERRORS=false '/anaconda3/bin/conda' shell.bash hook 2> /dev/null)"
if [ $? -eq 0 ]; then
\eval "$__conda_setup"
else
if [ -f "/anaconda3/etc/profile.d/conda.sh" ]; then
# . "/anaconda3/etc/profile.d/conda.sh" # commented out by conda initialize
CONDA_CHANGEPS1=false conda activate base
else
\export PATH="/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda init <<<
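One detail worth double-checking in the block above: it still invokes the bash hook (shell.bash). conda writes a zsh-specific block when you run conda init zsh from a working conda; a hand-edited equivalent would use the zsh hook instead. A sketch, reusing the /anaconda3 paths from the question (this is a shell profile fragment, not a standalone script):

```shell
# >>> conda init >>> (zsh variant: note shell.zsh instead of shell.bash)
__conda_setup="$('/anaconda3/bin/conda' 'shell.zsh' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/anaconda3/etc/profile.d/conda.sh" ]; then
        . "/anaconda3/etc/profile.d/conda.sh"
    else
        export PATH="/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda init <<<
```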

Docker Theme Jupyter

I recently installed the data science notebook for Jupyter, but I can't seem to install any themes on it.
Using the local version, I have installed a dark theme and I am used to it.
Following this guide and the install section, I tried making a /custom/ folder and added a cascading style sheet to the mounted volume, but it doesn't seem to work.
Is there any way I can install a custom theme on the Docker image?
My workaround is: add the custom.css file locally under '~/.jupyter/custom/'. When the docker container runs, it will automatically adopt the theme.
You can actually install jupyter-themes and opt for a theme in the Dockerfile itself. Here is an example for reference.
Install jupyter-themes using pip
RUN pip3 install jupyterthemes
Opt for a theme
CMD ["bash", "-c", "jt -t solarizedd -T -N && jupyter notebook --port=8888 --no-browser --ip=0.0.0.0 --allow-root --notebook-dir=/home/user/workdir"]
Here is a complete Dockerfile sample for reference:
FROM ubuntu:20.04
# Install Python and other dependencies
RUN apt-get update && apt-get install -y \
python3 \
python3-pip \
wget
# Install Jupyter
RUN pip3 install jupyter
RUN pip3 install jupyterthemes
# Create a user with a home directory
RUN useradd --create-home --home-dir /home/user user
USER user
# Mount a volume for the working directory
VOLUME /home/user/workdir
# Set the default command to launch Jupyter
CMD ["bash", "-c", "jt -t solarizedd -T -N && jupyter notebook --port=8888 --no-browser --ip=0.0.0.0 --allow-root --notebook-dir=/home/user/workdir"]
# CMD ["jupyter", "notebook", "--no-browser", "--ip=0.0.0.0", "--allow-root", "--notebook-dir=/home/user/workdir"]
You can also include said command in a docker-compose file if you are using a Docker image:
version: "3"
services:
  notebook:
    image: jupyter/datascience-notebook
    ports:
      - "8888:8888"
    environment:
      JUPYTER_ENABLE_LAB: "yes"
    volumes:
      - .:/home/user/workdir
    # after applying the theme, start the notebook server, otherwise the
    # overridden command exits and the container stops
    command: bash -c "pip install jupyterthemes && jt -t solarizedd -T -N && start-notebook.sh"
Here I have chosen the solarizedd theme; just tweak it and include the theme that best suits you.
Hope this helps.
