I was trying to find a way to install modules permanently in Colab. I came across a post that teaches how to install packages on Google Drive, mount the drive, and then use "sys.path.append" to tell Python where to look for the new packages.
This method works as expected when a module is imported directly in the notebook itself.
However, when I tried to run an existing project by executing its .py file (with "!python myCode.py"), the sys.path entries pointing at the modules installed on Google Drive had no effect.
In short, with the approach in the link above you can only import the packages when you code directly in the notebook itself. The approach did not work when I tried to use it with my .py files, i.e., when I used "!python myCode.py".
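The failure can be reproduced outside Colab. Here is a minimal sketch (the Drive directory is a placeholder path) showing that sys.path edits only affect the current interpreter, while the PYTHONPATH environment variable does reach a child process like the one "!python myCode.py" starts:

```python
import os
import subprocess
import sys

drive_libs = "/content/drive/fake"  # placeholder for the Drive install dir
sys.path.append(drive_libs)         # only affects THIS interpreter

# A child process (which is what "!python myCode.py" starts) begins
# with a fresh sys.path, so the append above never reaches it:
probe = "import sys; print('/content/drive/fake' in sys.path)"
plain = subprocess.run([sys.executable, "-c", probe],
                       capture_output=True, text=True).stdout.strip()
print(plain)  # False

# Exporting the directory via PYTHONPATH does reach the child:
env = dict(os.environ, PYTHONPATH=drive_libs)
with_env = subprocess.run([sys.executable, "-c", probe],
                          capture_output=True, text=True, env=env).stdout.strip()
print(with_env)  # True
```

So something like "!PYTHONPATH=/content/drive/MyDrive/libs python myCode.py" may be worth trying.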
Any suggestion on how to solve this problem? Do you have the same problem as well?
thanks,
Related
I forked a GitHub repo and added a module in a new branch.
Now I want to show my changes by adding a Jupyter notebook.
When I run Jupyter, though, the newly created module cannot be imported / found.
I am wondering what I am doing wrong?
Thanks in advance for any advice,
cheers,
Michael
Generally, there are two options. Either you keep the code in a .py file and import it via the import statement as usual, or you copy the contents to be shown into cells in the notebook itself and run those cells to make the classes available. The latter is either a workaround or a good way to present everything you have done nicely on the same page. But generally, if the paths to your import files are specified correctly, imports should work as usual.
I would start by checking whether the files are actually in the folders where they are supposed to be, and make sure that all required files (the ones providing the imports) are present on the same branch on which you also want to store and run your notebook.
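That check can be sketched as follows; a throwaway module stands in for your own, so swap in your real repo path and module name:

```python
import importlib
import os
import sys
import tempfile

# Throwaway stand-ins for your cloned repo and your module
repo_dir = tempfile.mkdtemp()
with open(os.path.join(repo_dir, "mymodule.py"), "w") as f:
    f.write("def greet():\n    return 'hello'\n")

# 1. Is the file actually where the notebook expects it?
print(os.path.exists(os.path.join(repo_dir, "mymodule.py")))  # True

# 2. Make the folder importable, then import as usual
sys.path.append(repo_dir)
mymodule = importlib.import_module("mymodule")
print(mymodule.greet())  # hello
```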
How to load CSV files into Google Colab for R?
For Python there are many answers, but can someone explain how a file can be imported in R on Google Colab?
Assuming you mean "get a CSV file from my local system into the Colab environment" and not just importing it from inside the Colab file paths as per Korakot's suggestion (your question wasn't very clear), I think you have two main options:
1. Upload a file directly through the shortcut in the side menu thingy.
Just click the icon there and upload your file to Drive. Then you can run the normal R import functions, following the internal path as Korakot put in this answer.
2. Connect your google drive
Assuming you're using a notebook like the one created by Thong Nguyen, you can use a python call to mount your own google drive, like this one:
cat(system('python3 -c "from google.colab import drive\ndrive.mount(\'/content/drive\')"', intern=TRUE), sep='\n')
... which will initiate the Google Drive login process and let you access your files from Google Drive as if they were folders in Colab. There's more info about this process here.
In case you use Colab with R as the runtime type (so Python code would not work), you could also simply upload the file as MAIAkoVSky suggested in step 1 and then import it with
data <- read.csv('/content/your-file-name-here.csv')
The filepath can also be accessed by right clicking on the file in the interface.
Please be aware that the files will disappear once you disconnect from Colab. You would need to upload them again for the next session.
You can call the read.csv function like
data = read.csv('sample_data/mnist_test.csv')
I am writing a user-friendly function to import Access tables using R. I have found that most of the steps have to be done outside of R, but I want to keep as much as possible within the script. The first step is to download a database driver from Microsoft: https://www.microsoft.com/en-US/download/details.aspx?id=13255.
I am wondering if it is possible to download software from inside R, and what function/package I can use. I have looked into download.file, but this seems to be for downloading data files rather than software.
Edit: I have tried
install_url("https://download.microsoft.com/download/2/4/3/24375141-E08D-4803-AB0E-10F2E3A07AAA/AccessDatabaseEngine_X64.exe")
But I get an error:
Downloading package from url: https://download.microsoft.com/download/2/4/3/24375141-E08D-4803-AB0E-10F2E3A07AAA/AccessDatabaseEngine_X64.exe
Installation failed: Don't know how to decompress files with extension exe
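Incidentally, a plain binary fetch of the installer is possible; install_url() only handles package archives. A sketch (Python here purely for illustration; in R, download.file(url, destfile, mode = "wb") plays the same role), with a local file:// URL standing in for the installer link so the sketch runs offline:

```python
import pathlib
import tempfile
import urllib.request

# A local file stands in for the real installer URL so this runs offline.
src = pathlib.Path(tempfile.mkdtemp()) / "fake_installer.exe"
src.write_bytes(b"\x00\x01binary payload")

dest = src.with_name("downloaded.exe")
urllib.request.urlretrieve(src.as_uri(), dest)  # fetch the "installer"

print(dest.read_bytes() == src.read_bytes())  # True: byte-for-byte copy
```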
I am using Google Colaboratory & github.
I create a new Google Colab notebook, and I clone my github project into it using a simple !git clone <github_link> in the notebook.
Now, I have a Jupyter notebook in my github project that I need to run on Google Colab. How do I do that?
There is no real need to download the notebook. If you already have your notebook in a GitHub repo, the only thing you need to do is:
Open your notebook file on GitHub in any browser (so the URL ends in .ipynb).
Change the URL from https://github.com/full_path_to_your_notebook to https://colab.research.google.com/github/full_path_to_your_notebook
And that should work.
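The URL rewrite can be sketched as a tiny helper (the function name is made up for illustration):

```python
def to_colab_url(github_url):
    """Rewrite a GitHub notebook URL into its Colab equivalent."""
    return github_url.replace(
        "https://github.com/",
        "https://colab.research.google.com/github/",
    )

print(to_colab_url("https://github.com/user/repo/blob/main/demo.ipynb"))
# https://colab.research.google.com/github/user/repo/blob/main/demo.ipynb
```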
You can upload the notebook to google drive first, then open it from there.
go to drive.google.com
go into directory “Colab Notebooks”
choose “New” > File upload
After uploading, click the new file
Choose “Open with Colaboratory” at the top
The two most practical ways are both through the Google Drive web interface.
The first method is what Korakot Choavavanich described.
The advantage of this method is that it provides a search window to find your file in your Google Drive storage.
The second method is even more convenient, and maybe more appropriate for your case:
In the Google Drive web interface, navigate to the folder where your file is located, in your case within the cloned GitHub repository.
Then (see screenshot):
right-click on the file | Open with | Colaboratory
Your file is then converted into a Colab notebook automatically (this takes at least half a minute).
The advantage of this method is that you can create the Colab file directly in that folder.
My tip is to create a copy of the original Jupyter file (I added "COLABO" to the file name), since a Colab notebook needs different code to sync your Google Drive and save files than a local Jupyter notebook.
One way could be to connect your Google Drive with the Colaboratory notebook using the following link:
Link to images within google drive from a colab notebook
After that, you can clone your GitHub repo into your Google Drive location, then browse through your Google Drive and open the notebook with Colaboratory itself.
import sys

# make the cloned repo's folders importable from the notebook
sys.path.append('models/research')
sys.path.append('models/research/object_detection')
This helped me. I was also looking for it, and found it in this Colab notebook:
https://colab.research.google.com/drive/1EQ3Lt_ez-oKTtVMebh6Tm3XSyPPOHAf3#scrollTo=oC-_mxCxCNP6
A better option I have found, if you clone the GitHub repo and it contains the .ipynb file, is to copy the code from each cell and execute it in Colab. By doing this you won't face any difficulties.
Upload the .ipynb file directly in Colab. Just go to Colab; in the tabs above there should be an Upload option. Choose the file and upload it there.
This may be a new feature not mentioned in other answers, but right now Colab allows running Jupyter notebooks directly from GitHub, even from private repos.
Log in to your Google account
Go to colab.research.google.com
Select the GitHub tab.
Choose "include private repositories" if needed.
Go through the authentication process in the newly opened window
Select from your repos and notebooks
Then clone your repo from inside the opened notebook.
I have an iPad Pro as my primary computer and am running RStudio from a cloud server for a class. I am trying to figure out how to import data sets, since my local working directory is on the server. I have been trying to install the package repmis, since I have read that it allows importing data sets from Dropbox. However, when I try to install the package, I get "Error: configuration failed for openssl" and a similar one for curl. I tried to install openssl, but it says I need to install a "deb" package for Ubuntu operating systems, which I can't find in the RStudio package database. (And I can't install curl without openssl either.) Any suggestions?
If it's a relatively straightforward data set like a CSV, XML, JSON or even an .RData file you can use a Dropbox sharing URL to read it. Here's an example (it's a live URL) for reading in a CSV directly from a shared Dropbox link:
read.csv("https://www.dropbox.com/s/7xg5u0z1gtjcuol/mtcars.csv?dl=1")
The dl=1 won't be the default from a "share this link" copy (it'll probably be dl=0).
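That one-character rewrite can be sketched as a tiny helper (Python here just to illustrate the URL edit; the rewritten link is then usable from read.csv):

```python
def direct_dropbox_url(share_url):
    """Turn a copied Dropbox share link (?dl=0) into a direct-download link."""
    return share_url.replace("?dl=0", "?dl=1")

print(direct_dropbox_url(
    "https://www.dropbox.com/s/7xg5u0z1gtjcuol/mtcars.csv?dl=0"))
# https://www.dropbox.com/s/7xg5u0z1gtjcuol/mtcars.csv?dl=1
```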
# read a tab-delimited data file straight from a Dropbox link (note raw=1)
urla <- 'https://www.dropbox.com/s/fakecode/xyz.txt?raw=1'
bpdata <- read.table(urla, header=TRUE)

# source an R script shared on Dropbox the same way
scode <- 'https://www.dropbox.com/s/lp6fleehcsb3wi9/protoBP.R?raw=1'
source(scode)
readAndPlotZoomForYear(urla, 2019)
Basically I wanted to source code from a file in Dropbox and to use the read.table() function on a tab-delimited data file. I found this could be done by replacing the string after the question mark in the Dropbox file links with raw=1.