How to execute the hg clone command from Windows (Google Code)

I have to download some source code from Google Code, but the source is only available through the hg clone command:
hg clone https://code.google.com/p/fr-belote/
How do I download this source code on Windows? The hg clone command is very new to me.

You need to install TortoiseHg Workbench, and then from the command line you can execute commands like hg clone https://code.google.com/p/ics-openvpn/
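For the repository in the question, that would be (a sketch, assuming the installer put hg.exe on your PATH):
hg clone https://code.google.com/p/fr-belote/
cd fr-belote
The clone creates a fr-belote folder containing the full source and history.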

Related

Cloning Pintos with Ubuntu

I am trying to start the Stanford Pintos project on Ubuntu. I downloaded the tar file that the Stanford website provides, but when I try to run
pintos -- run alarm-multiple
I get the following error:
Unrecognized character \x16; marked by <-- HERE after if ($<-- HERE near column 7 at /home/adambomb/src/pintos/src/utils/pintos line 911
I found in another Stack Overflow post that I should pull the latest version of Pintos:
git clone git://pintos-os.org/pintos-anon pintos
But doing this gets me an error:
Cloning into 'pintos'...
fatal: read error: Connection reset by peer
I'm not really sure where to go from here and could use some insight into fixing either of these problems.
I ran into the same issues as you, trying various guides on the internet (e.g. this guide) and looking through Stack Overflow. However, this YouTube video helped me the most.
The steps below can be found here. I'm using Ubuntu 18.04.
1. Run sudo apt-get install qemu
2. Get the latest Pintos source code from the Pintos public git repository, or download an older version with this link
2a. Under heads, find master and click the tree hyperlink
2b. Click snapshot and download the .tar.gz file to your directory
3. Run tar -xvzf pintos-anon-master-{value}.tar.gz, where {value} is the commit id
4. Open /utils/pintos-gdb with vim and edit the GDBMACROS variable to point to the full path of the pintos directory (see the sketch after this list)
5. Open Makefile with vim and rename the LOADLIBES variable to LDLIBS
6. Compile the utils directory by navigating to /src/utils and running make
7. Edit /src/threads/Make.vars (line 7): change bochs to qemu
8. Compile the threads directory by navigating to /src/threads and running make
9. Edit /utils/pintos (line 103): replace bochs with qemu
10. Edit /utils/pintos (~line 257): replace kernel.bin with the full path to kernel.bin
11. Edit /utils/pintos (~line 621): replace qemu with qemu-system-x86_64
12. Edit /utils/Pintos.pm (line 362): replace loader.bin with the full path to loader.bin
13. Open ~/.bashrc and add export PATH=/home/.../pintos/src/utils:$PATH as the last line
14. Reload the terminal by running source ~/.bashrc
15. Run Pintos with pintos run alarm-multiple
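As a concrete illustration of step 4 (a sketch; the path assumes the source tree sits at /home/adambomb/src/pintos, as in the error above), the edited line in utils/pintos-gdb would read:
GDBMACROS=/home/adambomb/src/pintos/src/misc/gdb-macros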

Prep line in rpm spec causes duplicate directory inside rpm

I have this spec file for my open-source shell scripting SDK: https://github.com/icasimpan/shcf/blob/packagebuilds/packagebuilds/rpm/shcf.spec
I build it as follows:
rpmbuild --target noarch -bb shcf.spec
Now, this builds fine; however, the output RPM's contents have a duplicated path, shcf/shcf, like:
/opt/icasimpan/shcf/shcf/***
This is the %prep section:
%prep
echo "BUILDROOT = $RPM_BUILD_ROOT"
mkdir -p $RPM_BUILD_ROOT/opt/icasimpan/shcf
cd $RPM_BUILD_ROOT/opt/icasimpan/shcf
git clone --branch 0.3.1 https://github.com/icasimpan/shcf.git
exit
At first sight, this is obviously due to the clone being done into "$RPM_BUILD_ROOT/opt/icasimpan/shcf". However, if I modify the clone line to
git clone --branch 0.3.1 https://github.com/icasimpan/shcf.git .
the RPM build fails due to unpackaged files.
Is there anything I'm missing?
Thanks in advance.
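One way around this (a sketch, not a tested spec) is to clone into the build directory in %prep and copy the contents into the buildroot in %install, so %files can own the target path without the extra level:
%prep
rm -rf shcf
git clone --branch 0.3.1 https://github.com/icasimpan/shcf.git shcf

%install
mkdir -p $RPM_BUILD_ROOT/opt/icasimpan/shcf
cp -a shcf/. $RPM_BUILD_ROOT/opt/icasimpan/shcf/

%files
/opt/icasimpan/shcf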

Unable to install Spark 2.2 in Cloudera QuickStart VM (5.10)

I have followed the blog mentioned below, downloaded the parcel, and placed it as required.
Please let me know if anyone has installed this, and the steps involved.
(https://www.cloudera.com/documentation/spark2/latest/topics/spark2_installing.html)
/opt/cloudera/csd/SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658-el5.parcel
But service cloudera-scm-server restart fails to run.
To use Cloudera Express (free), run:
sudo /home/cloudera/cloudera-manager --express
This requires at least 8 GB of RAM and at least 2 virtual CPUs.
SPARK 2.2 Installation Setup on Cloudera VM
Step 1: Download a quickstart_vm from the link:
Prefer a VMware image, as it is easy to use; in any case, all the options are viable.
The entire tar file is around 5.4 GB. You need to provide a business email address, as personal email addresses are not accepted.
Step 2: The virtual environment requires around 8 GB of RAM; please allocate sufficient memory to avoid performance glitches.
Step 3: Please open the terminal and switch to root user as:
su root
password: cloudera
Step 4: Cloudera ships Java version 1.7.0_67, which is old and does not match our needs. To avoid Java-related exceptions, install a newer Java with the following commands:
(a). Downloading Java:
wget -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz
(b). Switch to the /usr/java/ directory with the command cd /usr/java/.
(c). Copy the downloaded Java tar file into the /usr/java/ directory.
(d). Untar it with tar -zxvf jdk-8u131-linux-x64.tar.gz
(e). Open the profile file with the command “vi ~/.bash_profile”
(f). export JAVA_HOME to the new java directory.
“export JAVA_HOME=/usr/java/jdk1.8.0_131”
Save and Exit.
(g). For the above change to take effect, the following command needs to be executed in the shell:
source ~/.bash_profile
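A quick sanity check (not part of the original steps) is to ask the new JDK for its version:
$JAVA_HOME/bin/java -version
It should report 1.8.0_131 rather than the stock 1.7.0_67.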
Step 5: The Cloudera VM provides Spark 1.6 by default. However, the 1.6 APIs are old and do not match production environments, so we need to download and manually install Spark 2.2.
(a). Switch to /opt/ directory with the command:
“cd /opt/”
(b). Download spark with the command:
wget https://d3kbcqa49mib13.cloudfront.net/spark-2.2.0-bin-hadoop2.7.tgz
(c). Untar the spark tar with the following command:
tar -zxvf spark-2.2.0-bin-hadoop2.7.tgz
(d). We need to define some environment variables as default settings:
Please open a file with the following command:
vi /opt/spark-2.2.0-bin-hadoop2.7/conf/spark-env.sh
Paste the following configurations in the file:
SPARK_MASTER_IP=192.168.50.1
SPARK_EXECUTOR_MEMORY=512m
SPARK_DRIVER_MEMORY=512m
SPARK_WORKER_MEMORY=512m
SPARK_DAEMON_MEMORY=512m
Save and exit
(e). We need to start spark with the following command:
/opt/spark-2.2.0-bin-hadoop2.7/sbin/start-all.sh
Export SPARK_HOME:
export SPARK_HOME=/opt/spark-2.2.0-bin-hadoop2.7/
(f). Change the permissions of the directory:
chmod 777 -R /tmp/hive
(g). Try “spark-shell”, it should work.
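Inside the shell you can verify which version you got; with the paths above it should print something like:
scala> spark.version
res0: String = 2.2.0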
Please follow the video below; it has all the necessary steps required to install Spark 2 in the Cloudera VM.
YouTube link - https://www.youtube.com/watch?v=lQxlO3coMxM
Also, for starting Cloudera Express (free), your VM should have at least 8 GB of RAM allocated; if you have the default 4 GB, you can force the start using the command below and then follow the above video.
sudo /home/cloudera/cloudera-manager --force --express
Try this command
sudo /home/cloudera/cloudera-manager --express --force
I gave up on this; nothing worked well with either the parcel or non-parcel installation.
As soon as Cloudera Express started there were numerous errors, and Java 7 instead of Java 8.
I got a MapR VM install with Spark 2.x instead. No issues; it worked the first time.
That works well. This is my advice #1.
If you want Kudu, then I would install CentOS and set things up yourself. This is advice #2. You may miss Impala, but for pure research and development that is not much of an issue.
With the following command my Spark 2.2 was automatically updated to Spark 2.4:
(i) sudo yum update
It might be that your Java home path is broken; in that case, export the Java home path in your bash profile:
(a) vi ~/.bash_profile
(b) export JAVA_HOME=/usr/java/jdk1.8.0_131 (adjust to your actual JDK path)
(c) source ~/.bash_profile
Just download the right version of Spark that you need, say spark-2.2.0-bin-hadoop2.6.
Open ~/.bash_profile with the vi editor:
vi ~/.bash_profile
Paste the two lines below:
SPARK_HOME=/home/cloudera/Downloads/spark-2.2.0-bin-hadoop2.6
PATH=$PATH:$HOME/bin:$SPARK_HOME/bin
Save it.
Then run the command: source ~/.bash_profile
Now start spark-shell.
Note: Make sure you have JDK 1.8 installed.
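A quick way to confirm the PATH change took effect (assuming the paths above) is:
spark-submit --version
which should print a banner for version 2.2.0.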
Same answer as Swapnil Shashank's above, with a small modification: add the line below to spark-env.sh, and untar with tar -xvzf (the flag order is equivalent):
SPARK_LOCAL_IP=127.0.0.1
tar -xvzf spark-2.2.0-bin-hadoop2.7.tgz

How to execute SQLite test files?

I downloaded the SQLite3 source (not the amalgamation version).
There is a test folder with many test files (journal1.test, pager1.test, etc.).
How do I execute these test files?
Go to SQLite's download page and download the snapshot of the complete (raw) source tree from the Alternative Source Code Formats section.
Unzip it, cd into the folder and run:
sudo apt install tcl-dev zlib1g-dev
./configure
make test
You can run the quick test instead (less than 3 minutes):
make quicktest
Or just the tcl tests (aka veryquick):
make tcltest
On Mac
brew install tcl-tk
./configure --with-tcl=/usr/local/opt/tcl-tk/lib
make test
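If you want to run just one of the files the question mentions (say journal1.test), a common approach is to build the TCL test harness and point it at the file (a sketch based on the standard source-tree build):
make testfixture
./testfixture test/journal1.test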

'RM' is not recognized as an internal or external command while using Meteor on Windows

I am currently having a problem with Meteor, and I am new to this stuff. After installing Meteor, I opened a command prompt on Windows and typed:
meteor create goodboy
and then:
cd goodboy
Then, to delete the live and already running example app, I used:
rm goodboy.*
But the command prompt gave this error:
rm is not recognized as an internal or external command, operable
program or batch file.
Is there any way I can fix this error? Thank you.
Use del on Windows.
Also, this has nothing to do with Meteor. You can also delete a Meteor project by going to the folder and dragging it to the trash.
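For the question's example, the equivalent of the failing rm line would be:
del goodboy.*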
If you are on Windows, Git Bash can run such commands.
If you are using a Mac, then you can simply use
rm -f src/*
and for Windows the equivalent command is
del /f /q src\*
(note that cmd uses /-style switches and backslash path separators, so del -f "src/*" will not work). Hope this works fine for you.
Download and extract PortableGit.
It has most of the commonly used Linux tools ported to Windows.
Add [PortableGit Path]\usr\bin to the PATH variable of Windows.
You can also use your system's Git installation instead of PortableGit.
This should solve the problem.
I'm running the Git shell prompt and for some reason it doesn't have rm any more. I ended up using Cygwin to get it working:
https://www.cygwin.com/
My penny's worth.
You could potentially add rm to PowerShell, in your (or a) profile.ps1 (or another profile if your PowerShell is not Core):
function rm { del @args }
or as an alias:
Set-Alias rm del
or (and this is a tricky one), run WSL, bind the target folder, and run it via the Linux interface.
PS: running the command via the Git Bash (MINGW64) terminal as suggested above, did the trick for me.
I guess you are not using a Bash terminal. Try this:
1- Go to the folder whose contents you want to remove; let's call it the my-app folder.
2- Right-click in the empty space, then choose Git Bash Here.
3- Paste the command rm -f A_folder/* (this removes the content inside the A_folder folder, which is a sub-folder of my-app).
4- Hit Enter.
That should remove all content from the A_folder folder.
Hope that helps.
I guess you are not using the Git Bash terminal but the normal command prompt.
Do try the same on the Git Bash terminal and you would not face this error anymore.
First, install a Linux client for Windows (WSL); I use Ubuntu LTS.
Then install Node.js and run your command again.
Here you can find good instructions on how to do so, as well as how to install the cool new Windows Terminal.
You should add
"remove-build": "rmdir /s /q build",
"create-build": "mkdir build",
"clean": "npm run remove-build && npm run create-build",
to the scripts section of package.json.
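With those in place, a single command rebuilds the folder:
npm run clean
(rmdir /s /q and mkdir are Windows cmd syntax; on other shells you would swap in rm -rf and mkdir -p.)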
