I'm using MariaDB 10.6.8 and I can't seem to get the innodb_ft_max_token_size to go any higher than 84. The documentation states the range is between 10 and 252 (innodb_ft_max_token_size). Do I need a newer version of MariaDB?
The documentation was incorrect (I just fixed it). @NovaDenizen was right about the factor of 3: the current code base defines HA_FT_MAXCHARLEN, which determines the maximum value of innodb_ft_max_token_size.
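For reference, as far as I know the variable is not dynamic, so it has to be set in the server configuration (e.g. under [mysqld] in my.cnf) and the server restarted, and FULLTEXT indexes rebuilt, before a change takes effect. You can then confirm the value the server actually accepted:

SHOW GLOBAL VARIABLES LIKE 'innodb_ft_max_token_size';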
I used lp_solve under R via lpSolveAPI and used its semi-continuous variable type, i.e. a variable that either takes the value 0 or lies within given bounds (e.g. 100 to 300).
I'm looking at the ROI package, in particular the symphony plugin. Under ROI I don't see anything about semi-continuous variables, nor do I see anything in the symphony plugin docs. I perused the actual SYMPHONY documentation, but I didn't see anything there suggesting it has this feature either.
I'm thinking the best workaround would be to introduce a binary variable with the constraints
binary * max - variable >= 0
and
variable - binary * min >= 0
so that binary = 0 forces variable = 0, and binary = 1 forces min <= variable <= max.
Is this the best approach or are there better ones?
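For what it's worth, here is a minimal sketch of that binary-indicator reformulation under ROI with the symphony plugin; the single semi-continuous variable x in {0} U [100, 300] and the toy objective (maximize x) are placeholders:

library(ROI)
library(ROI.plugin.symphony)  # assumed installed

# Variables: x (continuous), b (binary).
#   x - 300*b <= 0  =>  x <= max when b = 1, x forced to 0 when b = 0
#  -x + 100*b <= 0  =>  x >= min when b = 1
lp <- OP(
  objective   = L_objective(c(1, 0)),
  constraints = L_constraint(
    L   = rbind(c(1, -300),
                c(-1, 100)),
    dir = c("<=", "<="),
    rhs = c(0, 0)
  ),
  types   = c("C", "B"),
  maximum = TRUE
)
sol <- ROI_solve(lp, solver = "symphony")
solution(sol)  # expect x = 300, b = 1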
I'm trying to join two tables that are ~100 MB larger than in a previous successful attempt.
This is what I tried:
left_join(A, B, by = c("col_1","col_2","col_3"))
And I get
Error in left_join_impl(x, y, by$x, by$y, suffix$x, suffix$y, check_na_matches(na_matches)) :
std::bad_alloc
Meaning that I'm out of RAM.
Have you worked around a similar issue, for example by using swap to supplement RAM?
Yes.
I increased the swap partition size in Linux, and that prevented the R crashes with the "bad_alloc" error message, caused by RAM shortage, that I had commonly seen before.
I increased the partition from 8 GB to 16 GB in Ubuntu using the gparted application (I had earlier created the linux-swap partition with gparted as well). Increasing the partition size alone was not enough; the following steps were also required to make the change take effect (a command-line sketch follows the list):
I had to right-click the resized partition and click "swapon".
I had to check the UUID identity code by clicking Information in the window from the previous step (running blkid on the command line also gives this information).
I had to edit the line containing "swap" in the file /etc/fstab to use the UUID found in the previous step.
I rebooted the system.
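Roughly the command-line equivalent of those GUI steps (the device name is a placeholder; adjust for your system):

sudo blkid                    # find the UUID of the resized swap partition
sudo swapon /dev/sdXN         # what gparted's "swapon" menu item does
# then point the swap line in /etc/fstab at that UUID, e.g.:
# UUID=<uuid-from-blkid>  none  swap  sw  0  0
sudo reboot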
WARNING: This solution may come at the cost of a somewhat shorter SSD lifespan (source). If it is possible to increase the physical RAM in the system, that may be the preferable solution, though it too has a cost, a monetary one.
I just discovered wid <- options()$width in RStudio, and it seems to be the source (or rather, much closer to the source) of much irritation in my everyday console usage. I should say up front that I'm currently on R 3.2.2, RStudio 0.99.491, on Linux Mint 17.3 (built over Ubuntu 14.04.3 LTS)
As I understand it, wid should be measured in characters -- if wid is equal to 52, say, then one should be able to fit the alphabet on the screen twice (given the fixed-width default font), but this doesn't appear to be the case:
As you can see, despite having wid equal to 52, I am unable to fit the alphabet twice -- I come up 6 characters short. I also note that this means it is not simply due to the presence of the command prompt arrow and space (> ).
The problem seems somewhat proportional: with wid at 78, I can only fit 70 characters; at 104, only 93. So the usable width is consistently about 88-90% of wid (side note: this also suggests my assumption that wid is measured in characters is probably right).
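A quick sanity check of that ratio, using the numbers above:

claimed <- c(52, 78, 104)
actual  <- c(46, 70, 93)
round(actual / claimed, 2)  # 0.88 0.90 0.89 -- usable width is ~88-90% of wid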
The problem that this engenders is that oftentimes console output overflows beyond its intended line, making the output ugly and hard to digest; take, for example, the simple snippet setDT(lapply(1:30, function(x) 1:3))[], which produces for me:
It seems clear to me that the output was attempted on a screen width which was not available in practice -- that internally, a larger screen width than actually exists was used for printing.
This leaves me with three questions:
How is options()$width determined?
Why is it so consistently wrong?
What can we do to override this error?
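In the meantime, pinning the width manually works for me, e.g. in ~/.Rprofile (the value here is just an example; pick what actually fits your console):

options(width = 70)  # override RStudio's auto-detected width
getOption("width")   # verify the override took effect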
Found a post about this on RStudio Support, and it seems the issue has to do with high-DPI displays; there is a claimed bug fix in RStudio version 0.99.878 (released just today, as luck would have it), according to the release notes:
Bug Fixes
...
Correct computation of getOption("width") on high DPI displays
Hope this helps anyone else experiencing this! I'm tempted to post about this on /r/oddlysatisfying B-)
Would love to see the relevant commit on the RStudio GitHub page if anyone can track it down (I had no luck).
Hi all,
I was trying to load a number of Affymetrix CEL files with the standard Bioconductor command (R 2.8.1 on 64-bit Linux, 72 GB of RAM):
abatch<-ReadAffy()
But I keep getting this message:
Error in read.affybatch(filenames = l$filenames, phenoData = l$phenoData, :
allocMatrix: too many elements specified
What's the general meaning of this allocMatrix error? Is there some way to increase its maximum size?
Thank you
The problem is that all the core functions use INTs instead of LONGs for generating R objects. For example, your error message comes from src/main/array.c:
if ((double)nr * (double)nc > INT_MAX)
error(_("too many elements specified"));
where nr and nc are integers generated before, standing for the number of rows and columns of your matrix:
nr = asInteger(snr);
nc = asInteger(snc);
So, to cut it short, everything in the source code would have to be changed to LONG, possibly not only in array.c but in most core functions, and that would require some rewriting. Sorry for not being more helpful, but I guess this is the only solution. Alternatively, you may wait for R 3.x next year, and hopefully they will implement this...
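To put a number on the limit that the quoted check enforces, a quick illustration in R (the 50000 x 50000 size is just an example):

.Machine$integer.max                  # 2147483647, i.e. INT_MAX
50000 * 50000                         # 2.5e9 cells in a 50000 x 50000 matrix
50000 * 50000 > .Machine$integer.max  # TRUE -> "too many elements specified"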
If you're trying to work on huge affymetrix datasets, you might have better luck using packages from aroma.affymetrix.
Also, Bioconductor is a (particularly) fast-moving project, and you'll typically be asked to upgrade to the latest version of R in order to get any continued "support" (help on the BioC mailing list). I see that Thrawn also mentions having a similar problem with R 2.10, but you might still think about upgrading anyway.
I bumped into this thread by chance. No, the aroma.* framework is not limited by the allocMatrix() limitation on ints, because it does not address the data through the regular address space alone; instead it also subsets via the file system. It never holds, and never loads, the complete data set in memory at any time. Basically the file system sets the limit, not the RAM nor the address space of your OS.
/Henrik
(author of aroma.*)
I'm working with large numbers that I can't have rounded off. Using Lua's standard math library, there seems to be no convenient way to preserve precision past some internal limit. I also see there are several libraries that can be loaded to work with big numbers:
1. http://oss.digirati.com.br/luabignum/
2. http://www.tc.umn.edu/~ringx004/mapm-main.html
3. http://lua-users.org/lists/lua-l/2002-02/msg00312.html (might be identical to #2)
4. http://www.gammon.com.au/scripts/doc.php?general=lua_bc (but I can't find any source)
Further, there are many libraries in C that could be called from Lua, if the bindings were established.
Have you had any experience with one or more of these libraries?
Using lbc instead of lmapm would be easier because lbc is self-contained.
local bc = require"bc"
local s = bc.pow(2, 1000):tostring() -- 2^1000 as a decimal string
local z = 0
for i = 1, #s do
  z = z + s:byte(i) - ("0"):byte(1)  -- numeric value of digit i
end
print(z)                             -- sum of the digits
I used Norman Ramsey's suggestion to solve Project Euler problem #16. I don't think it's a spoiler to say that the crux of the problem is calculating a 302-digit integer accurately.
Here are the steps I needed to install and use the library:
Lua needs to be built with dynamic loading enabled. I use Cygwin, but I changed PLAT in src/Makefile to be linux. The default, none, doesn't enable dynamic loading.
The MAPM library needs to be built and installed somewhere your C compiler can find it. I put libmapm.a in /usr/local/lib/. Next, m_apm.h and m_apm_lc.h went to /usr/local/include/.
The makefile for lmapm needs to be altered to point at the correct locations of the Lua and MAPM libraries. For me, that meant uncommenting the second declaration of LUA, LUAINC, LUALIB, and LUABIN and editing the declaration of MAPM.
Finally, mapm.so needs to be placed somewhere that Lua will find it. I put it at /usr/local/lib/lua/5.1/.
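A condensed sketch of the copy steps above (the paths are the ones I used; adjust for your system):

cp libmapm.a /usr/local/lib/
cp m_apm.h m_apm_lc.h /usr/local/include/
# after building lmapm against those locations:
cp mapm.so /usr/local/lib/lua/5.1/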
Thank you all for the suggestions!
The lmapm library by Luiz Henrique de Figueiredo, one of the authors of the Lua language.
I can't really answer, but I will add LGMP, a GMP binding. I haven't used it myself.
Not my field of expertise, but I would expect the GNU multiple precision arithmetic library to be quite a standard here, no?
Though not arbitrary precision, Lua decNumber, a Lua 5.1 wrapper for IBM decNumber, implements the proposed General Decimal Arithmetic standard IEEE 754r. It has the Lua 5.1 arithmetic operators and more, full control over rounding modes, and working precision up to 69 decimal digits.
There are several libraries for the problem, each with its advantages and disadvantages; the best choice depends on your requirements. I would say lbc is a good first pick if it fulfills your requirements, or any other library by Luiz Henrique de Figueiredo. The most efficient would probably be one of the GMP bindings, as GMP is a standard C library for dealing with large integers and is very well optimized.
Nevertheless, if you are looking for a pure Lua option, the lua-bint library could be a choice for dealing with big integers. I wouldn't say it's the best, because there are more efficient and complete ones, such as those mentioned above, but they usually require compiling C code or can be troublesome to set up. When comparing pure Lua big integer libraries, though, and depending on your use case, it could be an efficient choice. The library is documented, the code is fully covered by tests, and there are many examples. But take this recommendation with a grain of salt, because I am the library author.
To install, you can use luarocks if you already have it on your computer, or simply download the bint.lua file into your project; it has no dependencies other than requiring Lua 5.3+.
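If you go the luarocks route, the rock is published as bint (assuming a Lua 5.3+ luarocks tree):

luarocks install bint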
Here is a small example using it to solve problem #16 from Project Euler (mentioned in previous answers):
local bint = require 'bint'(1024) -- big integers with 1024 bits of precision

local n = bint(1) << 1000         -- 2^1000 via left shift
local digits = tostring(n)        -- its decimal string representation
local sum = 0
for i = 1, #digits do
  sum = sum + tonumber(digits:sub(i, i)) -- add the value of digit i
end
print(sum) -- should output 1366