Is it safe to erase Kite (Atom plugin) logs? - atom-editor

Recently I found a large log file from Kite's Atom plugin. The file C:\Users\cmcuervol\AppData\Local\Kite\logs\client.log.2019-10-10_11-55-44-AM.bak has a size of 2.7 GB; the OS is Windows 10. Moreover, two other files, client.log and client.log.2019-10-25_08-23-42-AM.bak, are located in the folder C:\Users\ccuervo\AppData\Local\Kite\logs\, with sizes of 48.3 MB and 25.7 MB respectively.
I want to free up the storage taken by the large file while keeping Kite's functionality, but I don't know whether this file is important for Kite's performance.

Related

How to solve issue loading large Rdata file which is actually a bit smaller than RAM?

I have a large *.Rdata file of size 15 Gb (15'284'533'248 Bytes) created in RStudio on a MacBook Pro with 8 Gb RAM, containing several lists of dataframes.
Now I want to load() the file into RStudio on my PC with 32 Gb RAM, but the RAM usage just swells beyond all measure, and at the end I get this:
Error: cannot allocate vector of size 78 Kb
The funny thing is, when I reload it on the Mac it works totally fine.
What's going wrong?
[Edit1] RStudio 1.0.136 on Mac, RStudio 1.1.383 on PC. Both R 3.4.2.
[Edit2] Screenshot of Mac which has 8GB RAM

Extract files from Chrome OS / Chromebook recovery image

My Problem: I am trying to get hold of the official Chrome WideVine CDM plugin for an ARM architecture.
My Understanding So Far: Given that ARM-based Chromebooks can stream Netflix (and Netflix uses the WideVine CDM plugin), I am led to believe a Chrome OS installation should contain the files I'm after. As I don't have access to an ARM-based Chromebook, my next best option is a Chromebook recovery image.
Where I'm up to: I have downloaded an HP Chromebook 11 recovery image, chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin, from here (the HP Chromebook 11 is ARM-based).
What I'd like to do next: Extract two files from the recovery image.
Note: I don't have access to an ARM-based Chromebook to just copy the files from :/
Does anyone know how I could do such a thing?
The .bin file is just a disk image that contains many partitions. You can "load" the image by running sudo kpartx -av chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin (the -v is for verbose mode). This will load 12 partitions (from /dev/mapper/loop0p1 to /dev/mapper/loop0p12) and make them available for mounting, and you should see some additional drives in your file manager.
In this case, the partition you're looking for is labelled ROOT-A, and corresponds to the third partition (/dev/mapper/loop0p3). For some reason, opening it in my file manager directly didn't work, so I had to mount it manually by running sudo mount -t ext2 /dev/mapper/loop0p3 -o ro /media/saikrishna/chromeos/. This will mount the ext2 partition in read-only mode in the /media/saikrishna/chromeos directory (change the last part to an existing empty directory on your system).
To remove the mappings, run sudo kpartx -dv chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin. If that doesn't print out anything (which was the case for me), run sudo kpartx -dv /dev/loop0.
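Putting those steps together, the whole sequence looks roughly like this (the mount point is just an example path, and the loop device numbering may differ on your system):
sudo kpartx -av chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin   # map the image's partitions to /dev/mapper/loop0p1 ... loop0p12
sudo mkdir -p /media/saikrishna/chromeos                                               # create an empty mount point (example path)
sudo mount -t ext2 /dev/mapper/loop0p3 -o ro /media/saikrishna/chromeos                # ROOT-A is the third partition; mount it read-only
# ... copy the WideVine CDM files you need out of the mounted partition ...
sudo umount /media/saikrishna/chromeos                                                 # unmount before removing the mappings
sudo kpartx -dv chromeos_6812.88.0_daisy-skate_recovery_stable-channel_skate-mp.bin    # remove the partition mappings (or: sudo kpartx -dv /dev/loop0)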

Memory limit in R Studio

I tried to decrease the memory usage and set a limit for RStudio, but it didn't work.
I've read this article and this question.
My computer is running 64-bit Windows 8 with 8 GB of RAM. I just want to limit RStudio's memory to 4 GB.
The easiest would be to just use:
memory.limit()
[1] 8157
memory.limit(size=4000)
If you are running a server version of RStudio, it will be a bit different. You will have to change the file /etc/rstudio/rserver.conf and add rsession-memory-limit-mb=4000 to it.
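For reference, after that edit the relevant part of the config is just this single line (a sketch; the RStudio Server service typically needs a restart for the change to take effect):
rsession-memory-limit-mb=4000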
If you do find that RStudio is resetting the memory limit with every new instance, you could try adding memory.limit(size=4000) to your .Rprofile file so that it is set with every start.
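A minimal .Rprofile sketch along those lines (note that memory.limit() only works on Windows, so it's worth guarding on the platform):
# ~/.Rprofile -- sourced at the start of every R session
if (.Platform$OS.type == "windows") {
  memory.limit(size = 4000)  # cap this session's memory limit at 4000 MB
}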

Does Lasso 8.6 have a means of extracting an uploaded zip file to a specified path?

I am trying to provide a means of allowing people to upload zips and have them extracted to a particular file path. It seems like zip functionality has been added in Lasso 9 but I'm curious if there is in fact a method for doing this in 8.6 or if anyone has any suggestions.
There are a couple of options (besides upgrading to 9):
First, you could use [os_process] to call the unzip command-line utility and have it do the work for you.
Second, in 8.5 there was an example in the LJAPI documentation that created a [zip] custom type which you should be able to use. (I'm not sure if the 8.6 installer has it, but for OS X, after installing 8.5 you could find it here: /Applications/Lasso Professional 8/Documentation/3 - Language Guide/Examples/LJAPI/Tags/ZipType/) Chapter 67 of the Language Guide has documentation on how to get it installed and working.
Further expounding on option 1 in bfad's answer: you might like the Lasso 8 shell tag from TagSwap to make this even easier. Here's an example where I extract a tar'd and gzip'd archive:
// authenticate with user that has file permissions in this directory
inline(-username='username', -password='password');
// load shell tag from TagSwap
library_once('shell.inc');
// call tar from bash shell
shell('tar -zxf myfile.tgz');
/inline;
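Since the original question is about zip archives rather than tarballs, the same pattern should work with the unzip command-line utility in place of tar; the archive name and destination path below are only placeholders:
// authenticate with user that has file permissions in this directory
inline(-username='username', -password='password');
// load shell tag from TagSwap
library_once('shell.inc');
// -o overwrites existing files, -d sets the extraction directory
shell('unzip -o myfile.zip -d /path/to/extract');
/inline;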

Is that possible to link a huge library within small memory?

I've tried several times to cross-compile Qt 4.8.2 with mingw32 4.7 on my poor little Linux box, which only has 2 GB of memory, and all attempts failed because of the memory limitation. The mingw ld just keeps swallowing memory until its belly explodes. I just want to ask if it is even possible to link such a big-ass lib with so little memory. If it's definitely a no-go, I'll have to resort to some other approach. Thanks in advance.
~~~~~
Haha! Finally, I found the answer. I just need to temporarily increase my swap space by creating a swap file on the hard drive. The specific steps are shown below:
sudo dd if=/dev/zero of=/mnt/swapfile bs=1M count=2048 # create a 2 GB empty file
sudo mkswap /mnt/swapfile # format it as a swap file
sudo swapon /mnt/swapfile # turn on the newly created swap
# ... build your big-ass package ...
sudo swapoff /mnt/swapfile # turn off the swap; your swap space falls back to the originally configured swap partition on your hard drive
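As an optional sanity check right after the swapon step (standard commands, not part of the original recipe), you can confirm the extra swap is active before starting the build:
cat /proc/swaps  # the new /mnt/swapfile should be listed here
free -m          # total swap should have grown by roughly 2048 MB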
Happy hacking! ;)
