Multiple organisation repos and keys - macOS Sierra - morea-framework

I am trying to create 3 Morea courses as individual organisation repos, e.g. course 1, course 2, course 3 in org x.
I do not have a server available to me; I am using a MacBook Pro.
How do I set up multiple keys, 1 for each repo, on macOS Sierra so that the morea-publish script works properly?

As you're probably aware, there is GitHub documentation on deploy keys here:
https://developer.github.com/guides/managing-deploy-keys/#deploy-keys
The Morea publish script simply invokes the 'git' command a few times:
https://raw.githubusercontent.com/morea-framework/scripts/master/morea-publish.sh
So, I don't think the script needs modification. I think your problem is somehow related to a mistake in the way you set up deploy keys. I looked around and here is some supplemental documentation on using deploy keys:
https://gist.github.com/zhujunsan/a0becf82ade50ed06115
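In case it helps, here is a minimal sketch of the usual one-deploy-key-per-repo setup on a Mac, using SSH host aliases (the key file names, aliases, and repo names are placeholders, not anything Morea-specific):

# Generate a separate key pair for each course repo,
# then add each .pub file as a deploy key on the matching repo
ssh-keygen -t rsa -f ~/.ssh/course1_deploy -N ""
ssh-keygen -t rsa -f ~/.ssh/course2_deploy -N ""
ssh-keygen -t rsa -f ~/.ssh/course3_deploy -N ""

# ~/.ssh/config: one alias per repo, each pinned to its own key
Host github-course1
  HostName github.com
  User git
  IdentityFile ~/.ssh/course1_deploy
# (repeat the Host stanza for course2 and course3)

# Point each clone's remote at its alias so the right key gets used
git remote set-url origin git@github-course1:orgx/course1.git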
Good luck!

Related

Can't connect to workspace

I'm trying to complete the very first training module offered by Microsoft. There is something I'm missing that isn't detailed in the training documentation.
These are the instructions I'm following
https://github.com/MicrosoftDocs/mslearn-aml-labs/blob/master/labdocs/Lab01.md
All good until I have to run the second command defined in the notebook called
"01-Getting_Started_with_Azure_ML.ipynb".
And yes, I entered the device login code as the instructions indicate.
See the attached screenshot of the error returned after running the notebook command.
I opened a case with Microsoft. They noted it is an issue affecting their VM servers.
This is their reply:
Hi Marbin,
Hope you are doing good. I had discussed this with our team as well. This was a known issue with workspace names with capital letters. So, the workspace name ‘ML_Battlefield’ was creating the issue. This is fixed in SDK version 1.3.0.
In the compute instance notebook, we can update the SDK version with: pip install --upgrade azureml-sdk
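For anyone hitting the same thing, upgrading and then checking that the installed SDK is 1.3.0 or later should confirm the fix; a quick sketch from the compute instance terminal:

# Upgrade the Azure ML SDK, then print the installed version (expect >= 1.3.0)
pip install --upgrade azureml-sdk
python -c "import azureml.core; print(azureml.core.VERSION)"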

Using Julia package manager offline

We recently decided to make the Julia language available on our cluster systems. The cluster system is not able to connect to the internet.
Is there any way to download all Julia packages and make them available for our different users to install and use them offline?
Another option we have is a system that can connect to the internet temporarily, but it is always connected to the main cluster system. Is there any way to use this system as a mirror for the Julia packages?
We want to use "Julia 1.0.1".
Our cluster operating system is CentOS 5.5.
Note: I have seen this question asked before here, but that one is for Julia 0.6 and a single package copied by hand. I want users to run the Pkg.add("<pkgName>") command as usual, but have the package manager fetch packages from our offline system instead of the internet.
Thank you for your help and time.
Caution:
Side effects are not known, so please test this properly before putting it into production!
a) Collect the required packages, along with their dependencies, in compiled form and put them in a stdlib folder (for example: /opt/julia/julia-1.1.0/shared/julia/stdlib/v1.1/)
b) Add that stdlib path to the JULIA_DEPOT_PATH and JULIA_LOAD_PATH environment variables
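A sketch of what step b) might look like in a shared shell profile, using the example path from a); per the caution above, this is untested and may need adjusting:

# Make the collected packages visible to every user's Julia session
export JULIA_DEPOT_PATH=/opt/julia/julia-1.1.0/shared/julia/stdlib/v1.1
export JULIA_LOAD_PATH=/opt/julia/julia-1.1.0/shared/julia/stdlib/v1.1:$JULIA_LOAD_PATH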
The following is a crosspost of https://stackoverflow.com/a/74800608/18431399
PackageCompiler.jl seems like the best tool for using modern Julia (v1.8) on secure systems. The following approach requires a build server with the same architecture as the deployment server, something your institution probably already uses for developing containers, etc.
Build a sysimage with PackageCompiler's create_sysimage() (a sketch of this step follows at the end of this answer)
Upload the build (sysimage and depot) along with the Julia binaries to the secure system
Alias a script to julia, similar to the following example:
#!/bin/bash
set -Eeu -o pipefail
# Ignore any system-wide load path; use only the shipped project and depot
unset JULIA_LOAD_PATH
export JULIA_PROJECT=/Path/To/Project
export JULIA_DEPOT_PATH=/Path/To/Depot
# Never attempt to reach the internet for packages
export JULIA_PKG_OFFLINE=true
# Launch Julia with the prebuilt sysimage, forwarding all arguments
/Path/To/julia -J/Path/To/sysimage.so "$@"
I've been able to run a research pipeline on my institution's secure system, for which there is a public version of the approach.
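For reference, the create_sysimage() step from the list above can be driven from the shell on the build server. A minimal sketch, with placeholder package names:

# On the build server (same architecture as the secure system):
# bake the project's packages into a sysimage to ship alongside the depot
julia --project=/Path/To/Project -e 'using PackageCompiler; create_sysimage(["DataFrames", "CSV"]; sysimage_path="sysimage.so")'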

Is there a convention for git version control between multiple operating systems in R?

Apologies if this isn't an appropriate question for SO - if not please let me know and I'll delete/move it. I just haven't found any resources on this myself. Anything I google related to "multiple operating systems git" gives me pages for applications that work on multiple OS's like GitHub or Tower.
I currently work regularly between two operating systems - PC at the office, Mac at home. I've been managing this with git by using my master branch for PC/Windows R code, while using an OSXversion branch for Mac R code. This is fine whenever I'm updating Windows- or Mac-specific code on each branch (such as package installation instructions in the comments). Where this gets tricky is with general improvements to my code that apply to both Mac and PC. What I've been doing is manually copy-pasting any general improvements between my Mac/PC code or cherry-picking my merges. Is there a better way to be doing this?
It's fine to store code that runs on different operating systems in a Git repository. Simply check out the repository on each of the operating systems you're going to be working on.
The only thing you need to watch out for is line endings. Unless you're dealing with files that specifically require native CRLF / LF end-of-line styles, you're best off turning the automatic conversion off.
This can be done with:
git config --global core.autocrlf false
Further notes on autocrlf can be found on the GitHub help page itself.
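If a handful of files genuinely do need fixed endings (say, Windows batch files or shell scripts), a .gitattributes entry can pin those explicitly while leaving the automatic conversion off; a small sketch:

# .gitattributes: pin endings only where a format demands them
*.bat text eol=crlf
*.sh  text eol=lf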
As for your actual committing, you'll want to follow Git Flow: have a develop branch that branches off of master, and create individual feature branches from there. These feature branches can be worked on from either Windows or Mac. The package installation instructions should really go in your README.md file.
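Concretely, the day-to-day flow looks something like this (the branch names are just illustrative Git Flow conventions):

# One shared history for both machines; features, not operating systems, get branches
git checkout -b develop master
git checkout -b feature/speed-up-model develop
# ...work on the feature from Windows or Mac, commit, then merge it once:
git checkout develop
git merge --no-ff feature/speed-up-model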

MVFS error in a snapshot view after upgrading ClearCase

I created a snapshot view using Rational ClearCase explorer.
After creating it, I set the config spec and environment variables, then tried compiling my code and got an MVFS error which says:
Unable to determine if the current working directory is in MVFS - no such device or address
When I searched the IBM website for a way to eliminate this error, I found out that a snapshot view does not use the MVFS!
Why am I getting this error when a snapshot view does not use MVFS?
When this issue got triggered: in our project we were using ClearCase 8.0.0.7. We never had problems building our code on 8.0.0.7; it was only after upgrading to 8.0.0.15 that the build issue arose. Both the old and new installations are base ClearCase.
Some more specifications regarding the issue:
The server we are using is a Windows 2003 server. I create a snapshot view on the H drive (an NTFS drive), as the C drive is not available for use in our project, clean the previously built files by running the shell script clean_view.sh, and then compile our C code with the ClearCase command clearmake.exe all. We used to follow the same procedure and the build would succeed, but now it fails.
This question is an extension of one I asked previously. I am re-posting it in full to give more clarity about the issue and so that more ClearCase experts can chime in. Kindly do not treat this as a duplicate or force-close it, as my issue has not yet been resolved. Also please note that this is the first time I am working with ClearCase.
LINK FOR THE PREVIOUS QUESTION: MVFS error in a snapshot view
Recently there was a development on this issue! We escalated it to IBM with the help of our client. They suggested that we use dynamic views, and we did. To our surprise it worked fine, and we are able to generate the executables. But the fact remains that we are still not able to use snapshot views!
NOTE: This comment is just to share my knowledge and experience regarding this issue. :)
While a snapshot view isn't in the MVFS, clearmake has MVFS-specific functionality for build auditing.
You mentioned that the "H" drive contains the snapshot view. Is H:
A local or network drive?
A drive letter created via SUBST? If so, is the parent drive local?
Do builds in dynamic views still work?
Does the C drive exist? Is it remapped in a Terminal Server/Citrix environment?
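To gather answers to those drive questions, something like the following from a Windows command prompt on the build host should do (using H: as in your description):

REM Report whether H: is a fixed, removable, or remote drive
fsutil fsinfo drivetype H:
REM List any SUBST drive aliases
subst
REM List mapped network drives
net use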
A caveat: Windows Server 2003 is nearly a year past Microsoft's end of extended support. I would recommend updating the server environment as soon as possible.
Truthfully, issues where a process fails and the ONLY change is the ClearCase version are usually best handled by contacting IBM instead of using this venue. Not trying to shill or anything, but if it's a clearmake bug, it has to go there anyway...
Additional questions:
If the C: drive is inaccessible on the system, which is what "can't even get the properties" in the comment seems to imply, where is the OS installed? Where does %SYSTEMROOT% point?
If it worked on a different drive, what's different between those two drives (H: failed and R: worked)?

Create R Windows Binary from .tar.gz linux

This is sort of related to a previous post of mine. I need to use the bigmemory library on my 32-bit Windows PC to do some ugly matrix calculations. Unfortunately, it appears that the maintainers have temporarily ceased production of Windows binaries. I have Ubuntu on my home PC. I would really like to take the .tar.gz file and build it into a Windows binary that I can actually run at work. I realize there are more efficient ways, like installing Rtools on the Windows device. However, our IT keeps our admin rights on lockdown, so I can never edit my PATH environment variable. Could anyone provide some general guidance for doing this? Are there any tools I need to install on my Ubuntu PC above and beyond R?
I found similar questions, but nothing that thoroughly answered my questions.
Unless the package source is incompatible with current versions of R, you could use the R project's win-builder site to build a Windows binary. Quoting from the linked site, win-builder is a service:
intended for useRs who do not have Windows available for checking and building Windows binary packages.
As a convenience, Hadley Wickham's devtools package includes a utility function, build_win(), that you can use for this purpose. From ?build_win:
Works by building source package, and then uploading to http://win-builder.r-project.org/. Once building is complete you'll receive a link to the built package in the email address listed in the maintainer field. It usually takes around 30 minutes.
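So, assuming devtools is installed, the whole process from the unpacked package source directory on the Ubuntu machine is roughly one line (a sketch; run it from wherever the bigmemory tarball unpacks to):

# From the package source directory: upload to win-builder, then wait for the email
Rscript -e 'devtools::build_win()'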
Windows has four sets of environment variables: system, user, volatile, and process. The first three sets are stored in the registry, but the process set is not, so even if IT has locked down the registry it is typically still possible to set process environment variables (including PATH) in a local process, i.e. on a temporary basis. So you might double-check your assumption that you can't modify anything: it's more likely that you can't modify the system variables and the registry but can still modify the set in your local process. To check this from the Windows command line, enter:
set mytest=123
set mytest
and if the second line shows that mytest has the value 123 then you likely have all the permissions you need.
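If it does, the same trick extends to PATH, so a session-local setup along these lines may be all you need (the install paths here are hypothetical; adjust them to wherever R and Rtools actually landed):

REM Prepend Rtools and R to PATH for this cmd session only; nothing touches the registry
set PATH=C:\Rtools\bin;C:\Program Files\R\R-3.1.0\bin;%PATH%
R CMD INSTALL mypackage.tar.gz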
Furthermore, anything you need to set is handled automatically for you by R.bat in the batchfiles distribution, so you don't have to set anything yourself.
Just ensure that Rtools and R are installed in their standard locations (you can tell the installers to skip setting any registry keys), make sure R.bat is on your path or in the current directory, and run:
R.bat CMD INSTALL mypackage.tar.gz
without setting any environment variables, registry keys, or path entries.
If that does not work, try Rpathset.bat, also from batchfiles. It is not automatic like R.bat, but it is extremely flexible, since you modify the SET statements in it to whatever you want.
There is a PDF document that comes with batchfiles which gives more information.
