Is there a maximum variable size in GLPK?

I'm looking into using GLPK to solve a ILP. Some of my constraints have the following form
I * W <= A
where I is the variable and W and A are constants. W, though, could be very, very large. An example value might be 2251799813685248, and it may be even larger. Therefore, if GLPK uses standard machine primitives under the hood, there could be an issue.
So my question is: is GLPK limited to machine precision (e.g. 32-bit or 64-bit integers and doubles), or does it use arbitrary precision (i.e. mathematical integers without memory-bound limits)? If it is limited to machine precision, are there any other open source packages that support arbitrary precision?

If you configure GLPK with --with-gmp when you build it, it will use the GNU Multiple Precision arithmetic library (GMP); download and build that first. From configure -h:
--with-gmp    use GNU MP bignum library [default=no]
I don't know of a problem where GMP makes a difference; if you find one, please let us know.
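For reference, here is a minimal sketch (not part of the original answer; the values of W and A are illustrative only) of how a constraint of the form I * W <= A can be set up through GLPK's C API. Note that glp_set_mat_row takes the constraint coefficients as double, so W is subject to double precision regardless of GMP; integers up to 2^53, including the example value 2251799813685248 = 2^51, are still represented exactly.
#include <glpk.h>
#include <stdio.h>

int main(void)
{
    /* Illustrative values only. */
    double W = 2251799813685248.0;   /* 2^51, exactly representable as a double */
    double A = 1e18;

    glp_prob *lp = glp_create_prob();
    glp_set_obj_dir(lp, GLP_MAX);

    /* One integer variable I >= 0; objective: maximize I. */
    glp_add_cols(lp, 1);
    glp_set_col_kind(lp, 1, GLP_IV);
    glp_set_col_bnds(lp, 1, GLP_LO, 0.0, 0.0);
    glp_set_obj_coef(lp, 1, 1.0);

    /* One row encoding I * W <= A. */
    glp_add_rows(lp, 1);
    glp_set_row_bnds(lp, 1, GLP_UP, 0.0, A);
    int ind[] = {0, 1};            /* GLPK arrays are 1-based; index 0 is unused */
    double val[] = {0.0, W};       /* coefficient of I in this row */
    glp_set_mat_row(lp, 1, 1, ind, val);

    glp_simplex(lp, NULL);         /* solve the LP relaxation */
    glp_intopt(lp, NULL);          /* then the ILP */
    printf("I = %g\n", glp_mip_col_val(lp, 1));

    glp_delete_prob(lp);
    return 0;
}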

Related

Julia power operator ^ returns different value than Python pow()

I made an RSA-encryption demo to learn Julia but ran into a problem.
This should not be an overflow issue, and all the values fit the RSA criteria when I check them with Python code.
Any pointers are welcome. Julia is an awesome language and I would like to figure this out.
These images show my problem:
You need BigInt(message)^used_e, and similar. The problem you are seeing is integer overflow before you convert to BigInt. Note that powermod(BigInt(message), used_e, used_N) will be much faster, since it keeps all the intermediate numbers smaller.
Note that in Julia x % y is a synonym for the rem(x, y) function “from Euclidean division, returning a value of the same sign as x”, whereas for an RSA implementation, you need the mod function instead, where the result has the same sign as y. (But you really actually want powermod over BigInt here for performance.)

Optimal NEON vector structure for processing vectors of uint8_t type with Arm Cortex-A8 (32-bit)

I am doing some image processing on an embedded system (BeagleBone Black) using OpenCV and need to write some code to take advantage of NEON optimization. Specifically, I would like to write a NEON optimized thresholding function and then a NEON optimized erosion/dilation function.
This is my first time writing NEON code and I don't have experience writing assembly code, so I have been looking at examples and resources for the C-style NEON intrinsics. I believe that I can put some working code together, but I am not sure how I should structure the vectors. According to page 2 of the "ARM NEON support in the ARM compiler" white paper:
"These registers can hold "vectors" of items which are 8, 16, 32 or 64
bits. The traditional advice when optimizing or porting algorithms
written in C/C++ is to use the natural type of the machine for data
handling (in the case of ARM 32 bits). The unwanted bits can then be
discarded by casting and/or shifting before storing to memory."
What exactly does this mean? Do I need to restrict my NEON code to using uint32x4_t vectors rather than uint8x16_t? How would I go about loading the registers? Or does this mean that I need to take some special steps when using vst1q_u8 to store the data to memory?
I did find this example, which is untested but uses the uint8x16_t type. Does it adhere to the "32-bit" advice given above?
I would really appreciate it if someone could please elaborate on the above quotation and maybe provide a very simple working example.
The next sentence from the document you linked gives your answer.
The ability of NEON to specify the data width in the instruction and
hence use the whole register width for useful information means
keeping the natural type for the algorithm is both possible and
preferable.
Note, the document is distinguishing between the natural type of the machine (32-bit) and the natural type of the algorithm (in your case uint8_t).
The document is saying that in the past you would have written your code in such a way that it used 32-bit integers so that it could use the efficient machine instructions suited for 32-bit operations.
With Neon, this is not necessary. It is more useful to use the data type you actually want to use, as Neon can efficiently operate on those data types.
The optimal choice of register width (uint8x8_t or uint8x16_t) will depend on your algorithm.
To give a simple example of using the Neon intrinsics to add two sets of uint8_t:
#include <arm_neon.h>

/* Add 16 bytes from a and b element-wise and store the 16 results in c. */
void
foo (uint8_t *a, uint8_t *b, uint8_t *c)
{
  uint8x16_t t1 = vld1q_u8 (a);        /* load 16 uint8_t from a */
  uint8x16_t t2 = vld1q_u8 (b);        /* load 16 uint8_t from b */
  uint8x16_t t3 = vaddq_u8 (t1, t2);   /* per-lane 8-bit addition */
  vst1q_u8 (c, t3);                    /* store 16 uint8_t to c */
}
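Since the question mentions wanting a NEON-optimized thresholding function, here is a rough sketch of how the same uint8x16_t approach might extend to a binary threshold. It is only an illustration under simplifying assumptions: the function name is made up, and len is assumed to be a multiple of 16.
#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

/* dst[i] = 255 if src[i] >= thresh, else 0, processed 16 pixels at a time. */
void
threshold_u8_neon (const uint8_t *src, uint8_t *dst, size_t len, uint8_t thresh)
{
  uint8x16_t vthresh = vdupq_n_u8 (thresh);       /* broadcast the threshold */
  for (size_t i = 0; i < len; i += 16)
    {
      uint8x16_t v = vld1q_u8 (src + i);          /* load 16 pixels */
      uint8x16_t mask = vcgeq_u8 (v, vthresh);    /* 0xFF where v >= thresh, else 0x00 */
      vst1q_u8 (dst + i, mask);                   /* 0xFF is already 255 */
    }
}
Erosion and dilation could be built in a similar style, taking per-lane minima or maxima of neighboring pixels with vminq_u8 / vmaxq_u8.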

math.h functions in fixed-point (32,32) format (64-bit) library for C

I'm looking for a 64-bit fixed-point (32,32) library for one of my C implementations.
Similar to this one http://code.google.com/p/libfixmath/
I need support for the standard math.h operations.
Has anyone seen such an implementation?
fixedptc seems to be what you are looking for. It is a header-only, integer-only C library for fixed-point operations, located at http://www.sourceforge.net/projects/fixedptc
The bit width is settable through defines. In your case, you want to compile with -DFIXEDPT_BITS=64 -DFIXEDPT_WBITS=32 to get a (32,32) fixed-point number format.
Implemented functions are conversion to string, multiplication, division, square root, sine, cosine, tangent, exponential, power, natural logarithm and arbitrary-base logarithm.
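For intuition about what a (32,32) format involves, here is a small self-contained sketch (not fixedptc's API; the names are made up for illustration) of representing Q32.32 values in an int64_t and multiplying them through a 128-bit intermediate, using the __int128 extension available in GCC and Clang:
#include <stdint.h>
#include <stdio.h>

/* Illustrative Q32.32 type: 32 integer bits, 32 fractional bits. */
typedef int64_t fix32_32;
#define FIX_ONE ((fix32_32)1 << 32)

static fix32_32 fix_from_int (int32_t x)   { return (fix32_32)x << 32; }
static double   fix_to_double (fix32_32 x) { return (double)x / (double)FIX_ONE; }

/* The 128-bit product of two Q32.32 numbers has 64 fractional bits,
   so shift right by 32 to return to Q32.32. */
static fix32_32 fix_mul (fix32_32 a, fix32_32 b)
{
  return (fix32_32)(((__int128)a * (__int128)b) >> 32);
}

int main (void)
{
  fix32_32 a = fix_from_int (3) + FIX_ONE / 2;        /* 3.5 */
  fix32_32 b = fix_from_int (2);                      /* 2.0 */
  printf ("%f\n", fix_to_double (fix_mul (a, b)));    /* prints 7.000000 */
  return 0;
}
The transcendental functions the question needs (sine, exponential, logarithm, and so on) are built on top of primitives like these, which is what fixedptc provides.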

CUDA Thrust library: How can I create a host_vector of host_vectors of integers?

In C++ in order to create a vector that has 10 vectors of integers I would do the following:
std::vector< std::vector<int> > test(10);
Since I thought Thrust used the same logic as the STL, I tried doing the same:
thrust::host_vector< thrust::host_vector<int> > test(10);
However I got too many confusing errors. I tried doing:
thrust::host_vector< thrust::host_vector<int> > test;
and it worked; however, I can't add anything to this vector. Doing
thrust::host_vector<int> temp(3);
test.push_back(temp);
would give me the same errors (too many to paste here).
Also, generally speaking, when using Thrust does it make a difference whether I use host_vector or the STL's vector?
Thank you in advance
Thrust's containers are only designed for POD (plain old data) types. It isn't possible to create multidimensional vectors by instantiating "vectors of vectors" in Thrust, mostly because of limitations on the GPU side that make it impossible to pass and use them in the device code path.
There is some level of compatibility between the C++ standard library containers and algorithms and Thrust's host implementations of those STL-derived models, but you should really stick with host_vector when you want to work with both the host and device back ends.
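A common workaround, sketched below under the assumption of a fixed row length, is to flatten the two-dimensional structure into a single vector and index it manually; the flat layout also copies cleanly to a device_vector (this sketch needs to be compiled with nvcc):
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <iostream>

int main()
{
    const int rows = 10, cols = 3;

    // One flat vector standing in for "10 vectors of 3 ints".
    thrust::host_vector<int> test(rows * cols);

    // Element (i, j) lives at index i * cols + j.
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
            test[i * cols + j] = i * 100 + j;

    // The flat layout can be copied to the device in one assignment.
    thrust::device_vector<int> d_test = test;

    std::cout << test[2 * cols + 1] << std::endl;   // row 2, column 1 -> 201
    return 0;
}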

What is the standard (or best supported) big number (arbitrary precision) library for Lua?

I'm working with large numbers that I can't have rounded off. Using Lua's standard math library, there seems to be no convenient way to preserve precision past some internal limit. I also see there are several libraries that can be loaded to work with big numbers:
http://oss.digirati.com.br/luabignum/
http://www.tc.umn.edu/~ringx004/mapm-main.html
http://lua-users.org/lists/lua-l/2002-02/msg00312.html (might be identical to #2)
http://www.gammon.com.au/scripts/doc.php?general=lua_bc (but I can't find any source)
Further, there are many libraries in C that could be called from Lua, if the bindings were established.
Have you had any experience with one or more of these libraries?
Using lbc instead of lmapm would be easier because lbc is self-contained.
local bc = require"bc"
s=bc.pow(2,1000):tostring()
z=0
for i=1,#s do
z=z+s:byte(i)-("0"):byte(1)
end
print(z)
I used Norman Ramsey's suggestion to solve Project Euler problem #16. I don't think it's a spoiler to say that the crux of the problem is calculating a 302-digit integer accurately.
Here are the steps I needed to install and use the library:
Lua needs to be built with dynamic loading enabled. I use Cygwin, but I changed PLAT in src/Makefile to be linux. The default, none, doesn't enable dynamic loading.
The MAPM library needs to be built and installed somewhere that your C compiler can find it. I put libmapm.a in /usr/local/lib/. Next, m_apm.h and m_apm_lc.h went to /usr/local/include/.
The makefile for lmapm needs to be altered to point to the correct locations of the Lua and MAPM libraries. For me, that meant uncommenting the second declarations of LUA, LUAINC, LUALIB, and LUABIN and editing the declaration of MAPM.
Finally, mapm.so needs to be placed somewhere that Lua will find it. I put it at /usr/local/lib/lua/5.1/.
Thank you all for the suggestions!
The lmapm library by Luiz Figueiredo, one of the authors of the Lua language.
I can't really answer, but I will add LGMP, a GMP binding. I have not used it myself.
Not my field of expertise, but I would expect the GNU multiple precision arithmetic library to be quite a standard here, no?
Though not arbitrary precision, Lua decNumber, a Lua 5.1 wrapper for IBM decNumber, implements the proposed General Decimal Arithmetic standard IEEE 754r. It has the Lua 5.1 arithmetic operators and more, full control over rounding modes, and working precision up to 69 decimal digits.
There are several libraries for this problem, each with its own advantages and disadvantages, so the best choice depends on your requirements. I would say lbc is a good first pick if it fulfills your requirements, as is any other library by Luiz Figueiredo. The most efficient is probably one of the GMP bindings, since GMP is a standard C library for dealing with large integers and is very well optimized.
Nevertheless, in case you are looking for a pure Lua option, the lua-bint library could be a choice for dealing with big integers. I wouldn't say it's the best, because there are more efficient and complete libraries such as the ones mentioned above, but they usually require compiling C code or can be troublesome to set up. When comparing pure Lua big-integer libraries, and depending on your use case, it could be an efficient choice. The library is documented, the code is fully covered by tests, and there are many examples. But take this recommendation with a grain of salt, because I am the library author.
To install, you can use LuaRocks if you already have it on your computer, or simply download the bint.lua file into your project; it has no dependencies other than Lua 5.3+.
Here is a small example using it to solve problem #16 from Project Euler (mentioned in previous answers):
local bint = require 'bint'(1024)
local n = bint(1) << 1000
local digits = tostring(n)
local sum = 0
for i=1,#digits do
sum = sum + tonumber(digits:sub(i,i))
end
print(sum) -- should output 1366
