I can't install curl on my MR-3020 because there is not enough space. This is a problem for all devices with very limited flash storage.
Even when I remove all the non-essential packages, there is still not enough space for curl's dependencies. A common solution is to use an external USB drive, but that's not an option for me because the USB port is taken by a modem.
After much struggle I found a solution here that I thought I would share with the community:
Add the following to /etc/rc.local:
opkg update
opkg install curl -d ram
rm /tmp/opkg-lists/*
This installs curl to RAM on every boot, so it is always available without using flash space. You may also need to edit your exports so the shell can find the binaries and libraries in the RAM destination:
export LD_LIBRARY_PATH='/lib:/usr/lib:/tmp/lib:/tmp/usr/lib'
export PATH='/bin:/sbin:/usr/bin:/usr/sbin:/tmp/usr/bin:/tmp/usr/sbin'
This way I can use it as if it were installed locally, even though it is really re-installed to RAM on every reboot.
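Putting it together, a minimal sketch of the relevant part of /etc/rc.local (assuming the default OpenWrt 'ram' destination, which usually points at /tmp in /etc/opkg.conf):

# re-install curl into RAM on every boot
opkg update
opkg install curl -d ram
rm /tmp/opkg-lists/*          # free the space taken by the downloaded package lists

Note that exports set in /etc/rc.local only affect processes started from that script; for interactive shells, the PATH and LD_LIBRARY_PATH lines above belong in something like /etc/profile.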
I am trying to remove all MacPorts packages related to Qt on macOS 11.2 Big Sur (I am switching to the Homebrew package manager).
A simple question:
Which MacPorts packages do I need to uninstall to get rid of the /opt/local/libexec/qt4/ and /opt/local/libexec/qt5/ directories?
Craig's answer explains how to uninstall all ports, but if you only want to uninstall the specific ports that installed certain files, and you don't know which ports those are, port provides answers that question. It doesn't operate on directories, so you have to pick a file, but for example:
$ port provides /opt/local/libexec/qt5/bin/moc
/opt/local/libexec/qt5/bin/moc is provided by: qt5-qtbase
Now we know that we can get rid of that file by running:
$ sudo port uninstall qt5-qtbase
If any further items remain in /opt/local/libexec/qt5 that we want to get rid of, we can run port provides on another file and repeat the process.
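To avoid doing that one file at a time, here is a rough sketch (assuming the qt5 tree from the question) that lists every port owning a file under that directory; it can be slow because it calls port once per file:

find /opt/local/libexec/qt5 -type f | while read -r f; do
    port provides "$f"
done | sed -n 's/.* is provided by: //p' | sort -u

Each unique port name it prints can then be passed to sudo port uninstall.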
MacPorts recommends selecting one package manager and installing only from that one. Having more than one active can lead to very difficult-to-debug problems.
Therefore if you wish to continue with Homebrew, remove all installed MacPorts ports with:
sudo port uninstall installed
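If the goal is to drop MacPorts entirely in favour of Homebrew, the MacPorts uninstall instructions go a step further: force-uninstall everything and then delete the MacPorts tree itself. A sketch (double-check the paths before running anything with rm -rf):

sudo port -fp uninstall installed
sudo rm -rf /opt/local /Applications/MacPorts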
I need my application to run from a USB stick and perform the installation from there.
The application is eventually installed on a Linux/Debian.
For the application installation I need a DB to be installed on that USB. I also need the DB data (tables, etc.) to be kept on that USB stick.
I read that SQLite is a good candidate for such a purpose. However, I could not find the steps needed to install it on the USB stick.
I downloaded sqlite-snapshot-202002271621.tar.gz from the sqlite.org site, placed it in one of my Debian directories, and used the usual three commands to build and install it (./configure, make, make install).
That installed SQLite on my hard disk.
What should I do in order to achieve the same on the USB stick?
Mount the USB stick on the Debian box, place the tar.gz file there, and run the commands from there?
Will that install SQLite on the USB stick?
Thanks
So the answer is indeed:
Mount the USB stick on the Debian box
Place the sqlite-snapshot-202002271621.tar.gz file there
Run the commands from there
./configure
make
make install
Note only that I had to extract the sqlite-snapshot-202002271621.tar.gz file using:
tar -zvxf sqlite-snapshot-202002271621.tar.gz -C /media/usbstick/ --no-same-owner
In order to avoid the error:
"Cannot change ownership to uid 1000, gid 1000: Operation not permitted"
I'm new to R and I decided to put R on a machine I have and see if I can remotely run code that is on my desktop computer.
While searching for how to do that, I came across the names "Rserve" and "RStudio". As far as I can tell, Rserve is a package (actually, it seems to be the package) which I can use to configure the server, while RStudio is an IDE.
My question is: does RStudio use RServe "under the hood"? And, if it doesn't, then how does RStudio compare to RServe? (I.e., which one is better and why?)
[I figured out that this question could possibly be a duplicate, but I couldn't find any similar question]
Rserve is a client/server implementation written in pure C that starts a server and spawns multiple processes, each with its own R workspace. These are processes rather than threads because of R's limitations around multithreading. It uses the QAP packing protocol as its primary transport between the client and the server. You execute commands via a client (PHP, Java, C++) against the server, and it returns REXP objects that are essentially mappings to R's underlying SEXP data objects. Rserve also offers a WebSockets build that can transmit data over WebSockets, but that API is not well documented. It also supports basic authentication through a configuration file.
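For context, starting Rserve is only a couple of commands once R itself is installed; a quick sketch (the CRAN mirror URL is just an example):

R -e 'install.packages("Rserve", repos="https://cloud.r-project.org")'
R CMD Rserve        # starts the daemon, listening on port 6311 by default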
RStudio is a C++ and GWT application that provides a web-based front end to R. AFAIK it uses JSON as its primary transport and supports authentication through PAM. Each user has a workspace configured in their home directory. It runs a server, similar to but not the same as Rserve, that communicates with R using Rcpp. It also has its own plotting driver that wraps the plot device so it can pick up plots to serve to the UI. It has much more functionality, such as stepping through your code from the UI and viewing workspace variables.
Functionally they are similar in that they both provide a client/server connection to R, but IMHO the comparison stops there.
I believe they are separate projects (though I could be wrong). I've never heard of Rserve, and there does not appear to be any mention of it in the documentation for RStudio. I have used and would recommend RStudio Server. It is relatively easy to set up and super easy to use once it is running. This is a helpful guide to setting up a server on Amazon EC2:
#Create a user, home directory and set password
sudo useradd rstudio
sudo mkdir /home/rstudio
sudo passwd rstudio
#Enter Password
sudo chmod -R 0777 /home/rstudio
#Update all files from the default state
sudo apt-get update
sudo apt-get upgrade
#Be Able to get R 3.0
sudo add-apt-repository 'deb http://cran.rstudio.com/bin/linux/ubuntu precise/'
#Update files to use CRAN mirror
#Don't worry about error message
sudo apt-get update
#Install latest version of R
#Install without verification
sudo apt-get install r-base
#Install a few background files
sudo apt-get install gdebi-core
sudo apt-get install libapparmor1
#Change to a writeable directory
#Download & Install RStudio Server
cd /tmp
wget http://download2.rstudio.org/rstudio-server-0.97.551-amd64.deb
sudo gdebi rstudio-server-0.97.551-amd64.deb
#Once you've run the above commands, you can access RStudio through your local browser. Navigate to the Public DNS of your instance on port 8787, similar to:
#http://ec2-50-19-18-120.compute-1.amazonaws.com:8787
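If the service doesn't come up after the install, RStudio Server ships a self-check you can run (a quick sketch; the exact subcommands may vary slightly between versions):

sudo rstudio-server verify-installation
sudo rstudio-server restart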
The earlier answer, which is about 3 years old, provides outdated information, such as here.
Updated correction
RStudio is a firm that provides the open source RStudio IDE for R. They also sell commercial offerings such as RStudio Server Pro, which markets itself with load balancing and related features. Apparently, the successful open source project has led the way to a commercial market.
You may also mean Microsoft R Server, which is now called Microsoft Machine Learning Server?
There is also the open source RStudio Server by RStudio.
Anyway, how to install both can be found here.
I am trying to start working with OpenCL. I have two NVIDIA graphics cards. I installed the "developer driver" as well as the SDK from the NVIDIA website. I compiled the demos, but when I run
./oclDeviceQuery
I see:
OpenCL SW Info:
Error -1001 in clGetPlatformIDs Call
!!!
How can I fix it? Does it mean my NVIDIA cards cannot be detected? I am running Ubuntu 10.10, and the X server works properly with the NVIDIA driver.
I am pretty sure the problem is not related to file permissions as it doesn't work with sudo either.
In my case I solved it by installing the nvidia-modprobe package, available in Ubuntu (utopic/multiverse). The driver itself (v346) was installed from https://launchpad.net/~mamarley/+archive/ubuntu/nvidia
Concretely, I installed nvidia-opencl-icd-346, nvidia-libopencl1-346, nvidia-346-uvm, nvidia-346 and libcuda1-346. I'm not sure whether they are all needed for OpenCL.
This is a result of not installing the ICD portion of NVIDIA's OpenCL runtime. The ICD profile tells your application about the different OpenCL implementations installed on the system, since multiple implementations from different vendors can coexist. When your application does not find the ICD information, it returns the -1001 error.
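A quick way to check whether the ICD registration is present at all (these are the conventional Linux locations; the exact file name can differ by driver version):

ls /etc/OpenCL/vendors/
cat /etc/OpenCL/vendors/nvidia.icd    # normally contains just: libnvidia-opencl.so.1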
Run your program as root. If that succeeds, the problem is with the cl_khr_icd extension loading the vendor driver.
If you are not running X11, you have to create the device files manually or with a (boot) script (see the sketch after the link below):
ERROR: clGetPlatformIDs -1001 when running OpenCL code (Linux)
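For reference, the device-file creation usually follows the script given in NVIDIA's driver/CUDA documentation; a trimmed sketch (run as root; major number 195 is NVIDIA's, and the GPU count here assumes the two cards from the question):

#!/bin/sh
/sbin/modprobe nvidia
# one node per GPU, plus the control node
for i in 0 1; do
  mknod -m 666 /dev/nvidia$i c 195 $i
done
mknod -m 666 /dev/nvidiactl c 195 255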
Same problem for me on a Linux system. The solution is to add the user to the video group:
# sudo usermod -aG video your-user-name
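The new group membership only takes effect for new logins, so log out and back in (or reboot), then confirm it took (your-user-name is the placeholder from above):

groups your-user-name     # should now include "video"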
Since I just spent a couple of hours on this, I thought I would share:
I got the error because I was connected to the machine via Remote Desktop (mstsc). On the machine itself everything worked fine.
I have been told that it should work with TeamViewer by the way.
Don't know if you ever solved this problem, but I had the same issue and solved it with this post: ERROR: clGetPlatformIDs -1001 when running OpenCL code (Linux)
Hope it helps!
I solved it on Ubuntu 13.10 (saucy), for the Intel OpenCL runtime, by creating a link:
sudo ln -s /opt/intel/opencl-1.2-3.2.1.16712/etc/intel64.icd /etc/OpenCL/vendors/nvidia.icd
I just ran into this problem on Ubuntu 14.04, and I could not find ANY working answers anywhere online, including this thread (though it was the first to show up on Google). What ended up working for me was to remove ALL previous NVIDIA software and then reinstall it using the .run file provided on the NVIDIA website. Installing the components through apt-get seems to fail for some reason.
1) Download CUDA .run file: https://developer.nvidia.com/cuda-downloads
2) Purge all previous nvidia packages
sudo apt-get purge nvidia-*
3) Install all run file components (you will likely have to stop X or restart in recovery mode to run this)
sudo sh cuda_X.X.XX_linux.run
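After reinstalling from the .run file, an easy sanity check is clinfo, which lists all visible OpenCL platforms and devices (it is a small separate package in the Ubuntu repositories; package name assumed):

sudo apt-get install clinfo
clinfo | grep -i "platform name"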
This is because OpenCL has the same brain-damaged one-library-per-vendor setup that OpenGL has. A likely reason for the -1001 error is that you compiled against a different library than the one the linker is trying to load dynamically.
To see if this is the problem, run:
$ ldd oclDeviceQuery
...
libOpenCL.so.1 => important path here (0x00007fe2c17fb000)
...
Does the path point towards the NVidia-provided libOpenCL.so.1 file? If it doesn't, you should recompile the program with an -L parameter pointing towards the directory containing NVidia's libOpenCL.so.1. If you can't do that, you can override the linker's path like this:
$ LD_LIBRARY_PATH=/path/to/nvidias/lib ./oclDeviceQuery
For me, the problem was a missing CUDA OpenCL library. Running sudo apt install cuda-opencl-dev-12-0 solved it.
You should first get the number of platforms, allocate memory for that many platform IDs, call clGetPlatformIDs again to fill them in, and then create a context from one of the platforms. There is a good example here:
http://developer.amd.com/support/KnowledgeBase/Lists/KnowledgeBase/DispForm.aspx?ID=71
This might be due to querying clGetPlatformIDs from multiple threads at the same time.
I want to install rsync on Windows XP.
I have searched the web, and most of the solutions suggest using Cygwin. Is there any other way to do this?
I don't want to install Cygwin because it takes a lot of space. Moreover, I need it to communicate with an rsync daemon on Linux, so alternatives to rsync on Windows won't help.
Thanks
First hit on Google for "rsync win32".