How to access local files from AzureML File Share? - azure-machine-learning-studio

Earlier, when using AzureML from the Notebooks blade of the Azure ML UI, we could access local files in AzureML using simple relative paths.
For example, with the CSV sitting in the same folder as test.ipynb, we could just use the relative path:
import pandas
df = pandas.read_csv('WHO-COVID-19-global-data.csv')
However, we are not able to do that anymore.
Also when we run
import os
os.getcwd()
We see the output as
'/mnt/batch/tasks/shared/LS_root/mounts/clusters/<cluster-name>'.
Hence we are unable to access the files in the file share, which was not the case earlier.
When we run the same from the JupyterLab environment of the compute instance, we get:
'/mnt/batch/tasks/shared/LS_root/mounts/clusters/<cluster-name>/code/Users/<current-user-name>/temp'.
We can work around this by appending '/code/Users/<current-user-name>/temp' to the base path and using that instead. But this is not recommended: whenever the environment we are using changes, the code has to change too. How do we resolve this issue without resorting to this path-appending method?
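A more portable variant of that workaround (a sketch only, not an official AzureML API; the find_file helper below is our own invention) searches for the file under the current working directory instead of hard-coding the path, so the same code runs whether the session starts at the cluster mount root or in the user folder:
from pathlib import Path
import pandas

def find_file(name, start=None):
    # Search the current working directory and its subtree for `name`.
    # This avoids hard-coding '/code/Users/<current-user-name>/temp'.
    start = Path(start or Path.cwd())
    matches = list(start.rglob(name))
    if not matches:
        raise FileNotFoundError(f"{name} not found under {start}")
    return matches[0]

df = pandas.read_csv(find_file('WHO-COVID-19-global-data.csv'))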

I work on the Notebooks team in AzureML, and I just tried this. Did this just start happening today?
It seems like things are working as expected on my end.

Related

Using cronR on a GCP virtual machine doesn't work

I'm trying to import a CSV with a scheduled task using cronR. Unfortunately, even though I don't get any error in the log file, no data frame appears in the global environment.
Below is the (very simple) script:
#Test_cronR
require("readr")
setwd("~/projects/my_proj")
df <- read.csv("~/projects/my_proj/df.csv")
I also tried the absolute path, but it's still not working.
I'm working on RStudio Server in a VM on Google Cloud Platform.
Can somebody help me?
Searching the web, I was actually not able to find anything on importing this kind of file, only some tips regarding full vs. relative paths.
What I expect is simply to have the CSV in my global environment.
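One detail worth noting here: cronR executes the script through Rscript in a separate, non-interactive session, so any object it creates lives and dies in that session and will never appear in the RStudio global environment. A common remedy (a sketch using the paths from the question) is to persist the result to disk and load it back interactively:
# In the scheduled script: save the result to disk.
saveRDS(df, "~/projects/my_proj/df.rds")
# Later, in the interactive RStudio session: load it back.
df <- readRDS("~/projects/my_proj/df.rds")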

Azure Databricks: How do we access R Scripts present on DBFS?

I'm new to Databricks. I am trying to access a .R file that is present in DBFS storage, but I cannot figure out how to do so. Any help is really appreciated.
I can read data from the storage using the /dbfs file path, and I can also source the script, but I want to make edits to the script.
You need an editor to do that. For example, you can set up RStudio on your cluster and connect to it via the RStudio UI; in that case you can edit R files directly on DBFS.
But really, the simplest approach for you would be to use the Databricks CLI fs command to copy the file to your local machine, make changes in the editor of your choice, and upload the file back, as sketched below.
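The round trip might look like this (a sketch; the dbfs:/scripts path is a placeholder for wherever your script lives):
databricks fs cp dbfs:/scripts/my_script.R ./my_script.R
# ...edit my_script.R locally...
databricks fs cp ./my_script.R dbfs:/scripts/my_script.R --overwrite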

Unable to use correct file paths in R/RStudio

Disclaimer: I am very new here.
I am trying to learn R via RStudio through a tutorial, and very early on I have encountered an extremely frustrating issue: when I try to use the read.table function, the program consistently resolves my files (written as "~/Desktop/R/FILENAME") to the path "C:/Users/Chris/Documents/Desktop/R/FILENAME". Note that the program treats my Desktop folder as living inside my Documents folder, which is preventing me from reading any files. I have already set and re-set my working directory multiple times and even re-downloaded R and RStudio, and I still encounter this error.
When I enter the entire file path instead of using the "~" shortcut, the program is successfully able to access the files, but I don't want to have to type out the full file path every single time I need to access a file.
Does anyone know how to fix this issue? Is there any further internal issue with how my computer is viewing the desktop in relation to my other files?
Best,
Chris L.
The ~ tells R to look in your default directory, which on Windows is your Documents folder; this is why you are getting this error. You can change the default directory in the RStudio settings or in your R profile. It just depends on how you want to set up your project. For example:
Put all the files in the working directory (getwd() will tell you the working directory for the project). Then you can just call the files with the filename, and you will get tab completion (awesome!). You can change the working directory with setwd(), but remember to use the full path not just ~/XX. This might be the easiest for you if you want to minimise typing.
If you use a lot of scripts, or work on multiple computers or cross-platform, the above solution isn't quite as good. In this situation, you can keep all your files in a base directory, and then in your script use the file.path function to construct the paths:
base_dir <- 'C:/Desktop/R'
read.table(file.path(base_dir, "FILENAME"))
I actually keep the base_dir assignment as a code snippet in RStudio, so I can easily insert it into scripts and know explicitly what is going on, as opposed to configuring it in RStudio or in my R profile. There is a conditional in the code snippet which detects the platform and assigns the directory correctly.
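That platform-detecting conditional might look something like this (a sketch; the two directory paths are placeholders for your own locations):
# Pick the base directory according to the platform R is running on.
base_dir <- if (.Platform$OS.type == "windows") {
  "C:/Desktop/R"
} else {
  "~/Desktop/R"
}
read.table(file.path(base_dir, "FILENAME"))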
When R reports "cannot open the connection", it means one of two things:
The file does not exist at that location. You can verify whether the file is there by pasting the full path echoed back in the error message into the Windows file manager. Sometimes the error is as simple as an extra subdirectory. (This seems to be the problem with your current code: the Windows Desktop is never nested inside Documents.)
If the file exists at the location, then R does not have permission to access the folder. This requires changing Windows folder permissions to grant R read and write permission to the folder.
On Windows, if you launch RStudio from the folder you consider the "project workspace home", then all path references can use the dot as "relative to the workspace home", e.g. "./data/inputfile.csv".

How to modify the target folder for generated pact files while doing Contract Driven Testing using ScalaPact

I am using Scala-Pact for CDC testing.
My tests run fine and the pact file is generated under the target/pacts folder.
I have another folder, "files", where I want those pact files to be generated after running the pact tests.
Is there any way to configure the default path for pact files?
This is an area that needs some attention in Scala-Pact; however, someone kindly did a PR for us a while ago that lets you set an environment variable called pact.rootDir.
In practice, on Linux/Mac that variable is a bit tricky to set because of the ., so exporting it or just using -Dpact.rootDir="<my desired path>" in the command arguments doesn't seem to work. Instead, you need to do this:
env "pact.rootDir=<my desired path>" bash. I haven't tried this on Windows, so I don't know whether you'd have the same issue.
I've just raised an issue to try and make this easier in the future:
https://github.com/ITV/scala-pact/issues/101
As an alternative, note that the pact directory is really kind of a scratch/tmp area that allows Scala-Pact to compile its output. If you're running this as part of a build script, you may just want to add a step that copies the assets to a new location once they've been generated.
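Using the folder names from the question, that copy step could be as simple as (a sketch for a Linux/Mac build script):
mkdir -p files
cp target/pacts/* files/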
Also, for some reason we made reading from a directory way easier than writing to one. If you need to read from a dir such as during verification, you can just add --source <my desired path> on the command line.
Hope that helps.

Use a different setParameters.xml file?

So I've got my deploy working on a build, and I've set up the build to create a deployment package and execute that package on the target server. Great so far.
Now, however, the application is expanding and I need different configurations per machine (account names and suchlike).
Can I change the file name "SetParameters.xml" to, for example, "Server1.SetParameters.xml" or similar?
For now I have it copying the per-server file over SetParameters.xml before each deploy, but it seems inelegant, and should a file get locked for whatever reason, the wrong settings would be deployed to the wrong server.
Since you are using the WPP-generated deploy.cmd file, the simplest choice is to set %_DeploySetParametersFile% to the full path of your SetParameters file before you execute the deploy script:
SET _DeploySetParametersFile=c:\full\path\to\SetParameters.xml
call Website.deploy.cmd
Alternatively, if you want to use msdeploy directly, you can specify -setParamFile:c:\full\path\to\SetParameters.xml. For more information, see Web Deploy Operation Settings.
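Put together, a direct msdeploy invocation might look like this (a sketch; the package path and server name are placeholders):
msdeploy.exe -verb:sync ^
  -source:package="c:\builds\Website.zip" ^
  -dest:auto,computerName="Server1" ^
  -setParamFile:"c:\full\path\to\Server1.SetParameters.xml"
This keeps one SetParameters file per target server and selects it explicitly at deploy time.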
