I've mounted a GCS bucket within a Vertex AI notebook using the following commands:
MY_BUCKET=cloud-ai-platform-a013866a-a18a-470f-9d35-f485abb17e82
cd ~/
mkdir -p gcs
gcsfuse --implicit-dirs --rename-dir-limit=100 --disable-http2 --max-conns-per-host=100 $MY_BUCKET "/home/jupyter/gcs"
Within the terminal I can run ls gcs/ and get a list of the directories within the mounted bucket (test, uncorrupted_split_heightmaps), but when I try to access these directories from within a Jupyter notebook, they cannot be found.
Running the following code within a Jupyter Notebook:
import os
print(os.listdir('../gcs'))
gives the output:
[]
instead of the expected output:
['test', 'uncorrupted_split_heightmaps']
And running:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
idg = ImageDataGenerator()
heightmap_iterator = idg.flow_from_directory('../gcs/test',
                                             target_size=(256, 256),
                                             batch_size=8,
                                             color_mode='grayscale',
                                             classes=[''])
gives the output:
Found 0 images belonging to 1 classes.
instead of the expected output:
Found 732458 images belonging to 1 classes.
How can I access the mounted GCS bucket from within a Jupyter Notebook?
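A minimal sanity check (a sketch, assuming the bucket really is mounted at /home/jupyter/gcs as in the gcsfuse command above) is to use the absolute mount path from the notebook rather than a path relative to the notebook's working directory:
# Sketch: verify the mount from the notebook kernel using absolute paths
# (assumes the gcsfuse mount point /home/jupyter/gcs from the commands above)
import os

mount_point = '/home/jupyter/gcs'
print(os.getcwd())                    # where the relative path '../gcs' actually resolves from
print(os.path.ismount(mount_point))   # True if the kernel can see the FUSE mount
print(os.listdir(mount_point))        # should list the bucket's top-level directories
If the listing is still empty, the kernel may be running in a different environment from the terminal where gcsfuse was run, in which case the mount would need to be created somewhere the kernel can see it.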
I'm trying to run code in a Jupyter Notebook and then use nbconvert to export the generated notebooks to PDF, but it does not work.
For reference, my code is like this (though I don't think the problem is directly related to the code itself):
# Import the necessary libraries
import pandas as pd
import matplotlib.pyplot as plt
from nbconvert import PDFExporter
from nbformat import v4 as nbf

# Read the Excel file using Pandas
excel_file = pd.ExcelFile('survey_responses.xlsx')

# Use a loop to iterate over the sheets and generate pie charts and PDF files for each one
for sheet_name in excel_file.sheet_names:
    # Read the sheet using Pandas
    df = excel_file.parse(sheet_name)
    # Create a new Jupyter notebook for the current sheet
    notebook = nbf.new_notebook()
    # Add a text cell to the Jupyter notebook
    text = 'The following pie charts show the results of the survey for Class {}.'.format(sheet_name)
    notebook['cells'].append(nbf.new_markdown_cell(text))
    # Use the subplots function to create a figure with multiple subplots
    fig, ax = plt.subplots(nrows=len(df.columns), ncols=1, figsize=(8, 6))
    # Use a loop to iterate over the columns and generate a pie chart for each one
    for i, question in enumerate(df.columns[1:], start=1):
        responses = df[question].value_counts()
        # Add a code cell to the Jupyter notebook
        code = 'ax[{}].pie(responses, labels=responses.index)\nax[{}].set_title("{}")'.format(i, i, question)
        notebook['cells'].append(nbf.new_code_cell(code))
    # Use nbconvert to convert the Jupyter notebook to a PDF file
    exporter = PDFExporter()
    pdf, _ = exporter.from_notebook_node(notebook)
    with open('{}.pdf'.format(sheet_name), 'wb') as f:
        f.write(pdf)
Jupyter Notebook shows me a pop-up window with the title "Package Installation" and the following content:
The required file tex\latex\pgf\basiclayer\pgf.sty is missing. It is a part of the following package: pgf. The package will be installed from.................
I click Install, and then it shows:
[I 14:00:51.966 NotebookApp] Kernel started: 4529aa41-ab04-45c9-ab04-a723aafffe41, name: python3
[IPKernelApp] CRITICAL | x failed: xelatex notebook.tex -quiet
Sorry, but C:\Users\EconUser\anaconda3\Library\miktex\texmfs\install\miktex\bin\xelatex.exe did not succeed.
The log file hopefully contains the information to get MiKTeX going again:
C:/Users/EconUser/AppData/Local/MiKTeX/2.9/miktex/log/xelatex.log
You may want to visit the MiKTeX project page, if you need help.
[I 14:02:52.109 NotebookApp] Saving file at /(Step By Step) Export PDF with texts, codes, and multiple plots.ipynb
I searched the web and followed the method in this guide: https://github.com/microsoft/vscode-jupyter/issues/10910, but it did not work either.
I also tried to install pandoc and MiKTeX again from the Jupyter Notebook at the beginning of the code:
!pip install --upgrade pip
!pip install pandoc
!pip install MikTex
It shows:
Requirement already satisfied: pip in c:\users\econuser\anaconda3\lib\site-packages (22.3.1)
Requirement already satisfied: pandoc in c:\users\econuser\anaconda3\lib\site-packages (2.3)
Requirement already satisfied: ply in c:\users\econuser\anaconda3\lib\site-packages (from pandoc) (3.11)
Requirement already satisfied: plumbum in c:\users\econuser\anaconda3\lib\site-packages (from pandoc) (1.8.0)
Requirement already satisfied: pywin32 in c:\users\econuser\anaconda3\lib\site-packages (from plumbum->pandoc) (227)
ERROR: Could not find a version that satisfies the requirement MikTex (from versions: none)
ERROR: No matching distribution found for MikTex
I have no idea at all. Is MiKTeX outdated, or what is causing this?
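One check that may help narrow this down (just a sketch, not a definitive fix) is to confirm from inside the notebook whether the external tools that nbconvert's PDF export relies on are actually visible to the kernel's environment:
# Sketch: check whether xelatex and pandoc are on the kernel's PATH
# (the paths printed will differ per machine; None means the tool is not found)
import shutil
for tool in ('xelatex', 'pandoc'):
    print(tool, '->', shutil.which(tool))
Note that MiKTeX is a standalone TeX distribution rather than a Python package, so pip cannot install it; that is why pip install MikTex fails with "No matching distribution found".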
I have an S3 bucket named "Temp-Bucket". Inside it there is a folder named "folder".
I want to read a file named file1.xlsx, which is inside the S3 bucket (Temp-Bucket) under that folder. How can I read that file?
If you are using the R kernel on the SageMaker Notebook Instance, you can do the following:
library("readxl")
system("aws s3 cp s3://Temp-Bucket/folder/file1.xlsx .", intern = TRUE)
my_data <- read_excel("file1.xlsx")
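If you are on a Python kernel instead, a comparable sketch (assuming boto3, pandas, and openpyxl are available on the instance) would be:
# Sketch: download the workbook from S3 and read it with pandas
# (bucket name and key are taken from the question; adjust as needed)
import boto3
import pandas as pd

s3 = boto3.client('s3')
s3.download_file('Temp-Bucket', 'folder/file1.xlsx', 'file1.xlsx')
my_data = pd.read_excel('file1.xlsx')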
Operating system
nbgrader --version: 0.6.1
jupyterhub --version (if used with JupyterHub): 1.0.0 (using The Littlest JupyterHub)
jupyter notebook --version
jupyter core : 4.6.3
jupyter-notebook : 6.0.3
qtconsole : 4.7.2
ipython : 7.13.0
ipykernel : 5.2.0
jupyter client : 6.1.2
jupyter lab : 1.2.8
nbconvert : 5.6.1
ipywidgets : 7.5.1
nbformat : 5.0.4
traitlets : 4.3.3
Expected behavior: When running
nbgrader release_assignment ps1 --force --debug
It should release the assignment to the shared folder /srv/nbgrader/exchange.
Actual behavior
I am facing an issue when I try to release the assignment:
nbgrader release_assignment ps1 --force --debug
It releases the assignment without errors, but to the location /home/jupyter-tljh-admin/course_id/outbound/ps1 rather than to the shared location /srv/nbgrader/exchange:
[ReleaseAssignmentApp | INFO] Overwriting files: /home/jupyter-tljh-admin/course_id ps1
[ReleaseAssignmentApp | INFO] Source: /home/jupyter-tljh-admin/course_id/release/./ps1
[ReleaseAssignmentApp | INFO] Destination: /home/jupyter-tljh-admin/course_id/outbound/ps1
[ReleaseAssignmentApp | INFO] Released as: /home/jupyter-tljh-admin/course_id ps1
The folder /srv/nbgrader/exchange has write permissions.
Please suggest what the issue could be.
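(For reference, a quick sketch of how the write-permission claim above can be checked from a Python session running as the same user as the notebook server; this is only a check, not a fix:)
# Sketch: confirm the exchange directory exists and is writable by the current user
import os
exchange = '/srv/nbgrader/exchange'
print(os.path.isdir(exchange))
print(os.access(exchange, os.W_OK))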
I faced a similar problem. When I opened the Formgrader, there was a notification saying that the directory /srv/nbgrader/exchange does not exist or could not be created. I simply created the directory myself, but not directly: first I created /srv/nbgrader with sudo, then I cd'd into that directory and created the exchange subdirectory, also with sudo. I also added an nbgrader_config.py in /etc/jupyter with the following content:
from nbgrader.auth import JupyterHubAuthPlugin
c = get_config()
c.Exchange.path_includes_course = True
c.Authenticator.plugin_class = JupyterHubAuthPlugin
This solved the issue for me.
Create the /srv/nbgrader/exchange directory and add the permissions like this: chmod ugo+rw /srv/nbgrader/exchange
Open the nbgrader_config.py that was created after running nbgrader quickstart <course-id>
Make sure these two lines are present and uncommented:
c.CourseDirectory.course_id = "<course-id>"
c.IncludeHeaderFooter.header = "source/header.ipynb"
Search for the specific line that says c.CourseDirectory.root = '', uncomment it, and set it to c.CourseDirectory.root = '/full/path/to/your/course-id/'
Search for the specific line that says: c.Exchange.assignment_dir = '.' and actually set it to c.Exchange.assignment_dir = '/srv/nbgrader/exchange'
Copy this exact nbgrader_config.py (see the consolidated sketch after these steps) into .jupyter or any other directory that appears in jupyter --paths
Stop and restart your server
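Putting those settings together, the resulting nbgrader_config.py would look roughly like this (a sketch only; the course id and root path are placeholders to replace with your own):
# Sketch of the consolidated nbgrader_config.py described in the steps above
# (<course-id> and the root path are placeholders)
c = get_config()

c.CourseDirectory.course_id = "<course-id>"
c.IncludeHeaderFooter.header = "source/header.ipynb"
c.CourseDirectory.root = "/full/path/to/your/course-id/"
c.Exchange.assignment_dir = "/srv/nbgrader/exchange"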
I'm currently working with Jupyter IPython Notebook, and I would like to put my notebook under version control.
That's why, when I save and checkpoint a notebook (.ipynb file), I would like the changes to also be saved and synchronized in a corresponding Python script (.py file) in the same folder (see picture below).
(picture: my_files)
Does it have something to do with the version of Jupyter I am using? Or do I have to edit a config_file?
Thanks
You need to create jupyter_notebook_config.py in your config directory.
If it doesn't exist, execute the following command from your home directory:
jupyter notebook --generate-config
Then paste the following code into that file:
import os
from subprocess import check_call

c = get_config()

def post_save(model, os_path, contents_manager):
    """Post-save hook for converting notebooks to .py and .html files."""
    if model['type'] != 'notebook':
        return  # only do this for notebooks
    d, fname = os.path.split(os_path)
    check_call(['ipython', 'nbconvert', '--to', 'script', fname], cwd=d)
    check_call(['ipython', 'nbconvert', '--to', 'html', fname], cwd=d)

c.FileContentsManager.post_save_hook = post_save
You will then need to restart your jupyter server and there you go!
The export_png() function in the R ggvis package requires the program vg2png to be installed, from the node.js module vega (source: the R documentation).
I have node and npm installed on Windows, and I ran npm install vega. This is the output that I got:
C:\Users\username>npm install -g trifacta/vega
\
> canvas#1.1.6 install C:\Users\username\AppData\Roaming\npm\node_modules\vega\node_modules\canvas
> node-gyp rebuild
|
C:\Users\username\AppData\Roaming\npm\node_modules\vega\node_modules\canvas>node "C:\Program Files\nodejs\node_modules\npm\bin\node-gyp-bin\\..\..\node_modules\node-gyp\bin\node-gyp.js" rebuild
Node Commands
Syntax:
node {operator} [options] [arguments]
Parameters:
/? or /help - Display this help message.
list - List nodes or node history or the cluster
listcores - List cores on the cluster
view - View properties of a node
online - Set nodes or node to online state
offline - Set nodes or node to offline state
pause - Pause node [deprecated]
resume - Resume node [deprecated]
For more information about HPC command-line tools,
see http://go.microsoft.com/fwlink/?LinkId=120724.
C:\Users\username\AppData\Roaming\npm\vg2svg -> C:\Users\username\AppData\Roaming\npm\node_modules\vega\bin\vg2svg
C:\Users\username\AppData\Roaming\npm\vg2png -> C:\Users\username\AppData\Roaming\npm\node_modules\vega\bin\vg2png
vega#1.5.0 C:\Users\username\AppData\Roaming\npm\node_modules\vega
├── yargs#2.3.0 (wordwrap#0.0.2)
├── canvas#1.1.6 (nan#1.2.0)
├── d3#3.5.5
├── request#2.53.0 (caseless#0.9.0, json-stringify-safe#5.0.0, forever-agent#0.5.2, aws-sign2#0.5.0, stringstream#0.0.4, oauth-sign#0.6.0, tunnel-agent#0.4.0, isstream#0.1.2, node-uuid#1.4.3, qs#2.3.3, combined-stream#0.0.7, form-data#0.2.0, tough-cookie#0.12.1, mime-types#2.0.10, http-signature#0.10.1, hawk#2.3.1, bl#0.9.4)
├── topojson#1.6.18 (queue-async#1.0.7, rw#0.1.4, optimist#0.3.7, shapefile#0.3.0)
└── d3-geo-projection#0.2.14 (brfs#1.4.0)
I'm not sure if it has installed correctly. Now, when I run:
mtcars %>%
ggvis(x = ~wt) %>%
export_png()
I get the following error:
Writing to file plot.png
Error in vega_file(vis, file = file, type = "png") :
Conversion program vg2pngnot found.
Is there a specific way in which vega has to be installed on Windows, which would resolve this error?