Does there exist a UTF-8 code for x⁴ (superscript four) that works cross-browser and cross-OS?
According to http://en.wikipedia.org/wiki/Unicode_subscripts_and_superscripts
the superscript four should be at U+2074, but I can't get it to work (on XP).
Works for me, but your mileage may vary by font. http://www.fileformat.info/info/unicode/char/2074/browsertest.htm
If you want reliable cross-browser rendering of more complex maths, you'll need to use MathJax.
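For a quick test, here is a minimal HTML sketch of both approaches: the numeric character reference spells out the same U+2074 code point, and the <sup> markup is a fallback that doesn't depend on font support:
<!-- U+2074 written as a numeric character reference; rendering depends on the font -->
<p>x&#x2074;</p>
<!-- markup fallback that renders everywhere, at the cost of plain-text copying -->
<p>x<sup>4</sup></p>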
Related
What do the following snippets of code do in MathML files? I removed those lines and everything still worked fine for me.
<mo></mo>
<mo></mo>
<mo></mo>
An answer to any of them, or just letting me know what they are, would be very much appreciated.
The first two are function application (U+2061) and invisible times (U+2062). They help indicate semantic information; see this Wikipedia entry.
The last one could be anything, since it lies in the Unicode Private Use Area, which exists so that font developers can store glyphs that do not correspond to regular Unicode positions. (Unless it's a typo and really U+6349, in which case it's a Han character.)
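For illustration, here is a small sketch of how those two invisible operators typically appear in MathML source, written as character references so they are visible (the names and code points, U+2061 FUNCTION APPLICATION and U+2062 INVISIBLE TIMES, are from the Unicode standard):
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <!-- f(x): U+2061 marks "f applied to x" rather than "f times x" -->
  <mi>f</mi><mo>&#x2061;</mo><mo>(</mo><mi>x</mi><mo>)</mo>
  <mo>=</mo>
  <!-- 2x: U+2062 marks the implied multiplication -->
  <mn>2</mn><mo>&#x2062;</mo><mi>x</mi>
</math>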
I plan to change the color of a few hundred thousand divs per second, and I was wondering what the fastest way to do that is.
What are the best formats in terms of performance? rgb() triples? Hex codes? Color keywords (black, chartreuse)?
I've run this jsPerf, and these are the general results:
The basic color keywords are quite fast, and they're the fastest in Chrome. The extended keyword list is a lot slower in some browsers, though.
hsl() is just the worst, except in IE, where it was actually the fastest (but then again, IE). (Apparently this was just a single case; I couldn't reproduce it afterwards.)
#RGB and #RRGGBB are both relatively fast in every browser (#RGB is slightly faster in general).
rgb() is generally slow in every browser.
In general, I think #RGB is the fastest format for every browser (on average).
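To make that concrete, here is a rough sketch of the kind of loop such a benchmark stresses; the element list and the frame scheduling are my own framing, not part of the jsPerf:
// Recolor a batch of divs using #RGB strings, the format that averaged
// fastest above. Assumes the divs already exist in the page.
var divs = document.querySelectorAll('div');
function recolor() {
  for (var i = 0; i < divs.length; i++) {
    // random 12-bit color, zero-padded and formatted as #RGB
    var c = Math.floor(Math.random() * 0x1000).toString(16);
    while (c.length < 3) c = '0' + c;
    divs[i].style.backgroundColor = '#' + c;
  }
  requestAnimationFrame(recolor);
}
requestAnimationFrame(recolor);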
Hex codes should be the fastest. When you write a keyword such as "black", the browser has to parse it and resolve it to its hex equivalent, #000000.
I'm trying to use tput to set foreground and background colors in my terminal in a device independent way.
If the whole purpose of termcap/terminfo/tput is to be device independent, why are there both capabilities that explicitly use ANSI controls (setaf/setab) and capabilities that do not, or should not (setf/setb)?
This discussion quotes terminfo(5), which in turn quotes standards that explicitly say those are to be implemented with ANSI and non-ANSI sequences, respectively.
Why isn't there just setf/setb and they always set the foreground and background colors. I don't care how it's done, that's why I use tput!
Why isn't there just setf/setb and they always set the foreground and background colors
is actually two questions!
The first part, why there are ANSI and non-ANSI terminal commands, would take too long to explain here, and it's unnecessary, as the history is explained quite well on Wikipedia.
The second part could perhaps be freely rephrased to "what's the difference?" or "what can I do about it?".
Difference:
ANSI-type terminals use a different mapping between colour numbers and colours than non-ANSI terminals do. For example, the number that selects yellow on one selects cyan on the other. There are simply two different mapping tables. These things are described quite well on Wikipedia.
What you can do about it:
Discover which type of terminal you have, and use the corresponding command.
Or modify your termcap.
None of these solutions are fully generic though, unfortunately.
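As a small illustration of both the difference and one possible (not fully generic) workaround, using standard tput invocations:
# The same colour number selects different colours in the two tables:
tput setaf 3   # ANSI table: colour 3 is yellow
tput setf 3    # legacy table: colour 3 is cyan

# One possible fallback: prefer setaf, use setf if the capability is missing.
# Note the colour numbers still differ between the tables, so this is not
# fully generic either (colour 2 happens to be green in both).
fg() { tput setaf "$1" 2>/dev/null || tput setf "$1"; }
fg 2; echo "green on either terminal type"; tput sgr0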
Hello, this is my question:
I am currently working on an introductory course on R programming for people with zero programming background (students of biology, veterinary science, medicine, economics, ...), so they tend not to be very tech savvy and to use Windows. After they download and open the R scripts I prepared, every now and then they will find badly encoded characters (the course is in Spanish and uses many accented characters). This happens because my scripts are saved with UTF-8 encoding, which is not the Windows default.
The options to avoid this nuisance are:
change all my scripts to the encoding WINDOWS-1252
instruct everyone to change their encoding to UTF-8
The first option is more annoying for me but keeps the students from being distracted by a fairly minor detail.
The second option has no clear advantage from a pedagogical point of view, so I'd like to ask what virtues you think it has...
Thanks in advance!
I would highly recommend instructing them to change their encoding to UTF-8. I've had the same issue on numerous occasions with web-app scripting, and generally speaking it's a lot more hassle to go through the code than to instruct the customer (or in your case, the student) to use UTF-8 encoding.
After all, the course you're holding is an introductory course; you might want to consider briefly covering the topic and explaining the difference between the two encodings, and more specifically: what happens when it doesn't work?
You have a golden opportunity to save yourself some time later down the road, and possibly avoid the "Why are there question marks all over my screen?" question altogether!
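If you do go that route, one concrete instruction that sidesteps editor settings entirely is to have them load the scripts with an explicit encoding, which source() supports directly (the file name below is just a placeholder):
# Read and run a UTF-8 script correctly even when the Windows locale
# defaults to WINDOWS-1252; "intro.R" is a hypothetical file name.
source("intro.R", encoding = "UTF-8")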
Maybe you can avoid non-ASCII characters in your scripts altogether. For example, to represent the Greek "mu" character, you could use
> mu <- "\u03BC"
> Encoding(mu) <- "UTF-8"
> mu
[1] "μ"
Now if you print mu on the console, it is displayed correctly. In the script, you did not use any non-ASCII character at all.
I'm working with large numbers that I can't have rounded off. Using Lua's standard math library, there seems to be no convenient way to preserve precision past some internal limit. I also see there are several libraries that can be loaded to work with big numbers:
http://oss.digirati.com.br/luabignum/
http://www.tc.umn.edu/~ringx004/mapm-main.html
http://lua-users.org/lists/lua-l/2002-02/msg00312.html (might be identical to #2)
http://www.gammon.com.au/scripts/doc.php?general=lua_bc (but I can't find any source)
Further, there are many C libraries that could be called from Lua if the bindings were established.
Have you had any experience with one or more of these libraries?
Using lbc instead of lmapm would be easier because lbc is self-contained.
local bc = require"bc"
-- 2^1000 as an arbitrary-precision integer, converted to a decimal string
s = bc.pow(2,1000):tostring()
-- sum the decimal digits
z = 0
for i = 1, #s do
  z = z + s:byte(i) - ("0"):byte(1)
end
print(z)
I used Norman Ramsey's suggestion to solve Project Euler problem #16. I don't think it's a spoiler to say that the crux of the problem is calculating a 302-digit integer accurately.
Here are the steps I needed to install and use the library:
Lua needs to be built with dynamic loading enabled. I use Cygwin, but I changed PLAT in src/Makefile to be linux. The default, none, doesn't enable dynamic loading.
MAPM needs to be built and installed somewhere that your C compiler can find it. I put libmapm.a in /usr/local/lib/. Next, m_apm.h and m_apm_lc.h went to /usr/local/include/.
The makefile for lmapm needs to be altered to point at the correct locations of the Lua and MAPM libraries. For me, that meant uncommenting the second declaration of LUA, LUAINC, LUALIB, and LUABIN and editing the declaration of MAPM.
Finally, mapm.so needs to be placed somewhere that Lua will find it. I put it at /usr/local/lib/lua/5.1/.
Thank you all for the suggestions!
The lmapm library by Luiz Figueiredo, one of the authors of the Lua language.
I can't really answer from experience, but I'll add LGMP, a GMP binding, to the list. I haven't used it myself.
Not my field of expertise, but I would expect the GNU multiple-precision arithmetic library (GMP) to be quite standard here, no?
Though not arbitrary precision, Lua decNumber, a Lua 5.1 wrapper for IBM decNumber, implements the proposed General Decimal Arithmetic standard IEEE 754r. It has the Lua 5.1 arithmetic operators and more, full control over rounding modes, and working precision up to 69 decimal digits.
There are several libraries for this problem, each with its own advantages and disadvantages; the best choice depends on your requirements. I would say lbc is a good first pick if it fulfills your requirements, as is any other library by Luiz Figueiredo. The most efficient is probably one that uses GMP bindings, as GMP is a standard C library for dealing with large integers and is very well optimized.
Nevertheless, in case you are looking for a pure Lua library, lua-bint could be an option for dealing with big integers. I wouldn't say it's the best, because there are more efficient and complete ones, such as those mentioned above, but they usually require compiling C code or can be troublesome to set up. When comparing pure Lua big-integer libraries, though, and depending on your use case, it could be an efficient choice. The library is documented, the code is fully covered by tests, and there are many examples. But take this recommendation with a grain of salt, because I am the library's author.
To install, you can use LuaRocks if you already have it on your computer, or simply download the bint.lua file into your project, as it has no dependencies other than Lua 5.3+.
Here is a small example using it to solve problem #16 from Project Euler (mentioned in previous answers):
-- create a bint type with 1024-bit integers (plenty for 2^1000)
local bint = require 'bint'(1024)
-- compute 2^1000 with a left shift
local n = bint(1) << 1000
-- sum the decimal digits
local digits = tostring(n)
local sum = 0
for i = 1, #digits do
  sum = sum + tonumber(digits:sub(i,i))
end
print(sum) -- should output 1366