How to store negative numbers in EEPROM (Arduino IDE)?

I am trying to find a straightforward way to store negative values in EEPROM, integer values ranging from -20 to 20. I have been using EEPROM.write and EEPROM.read functions to store strings one character at a time, but I am having trouble with negative numbers. I figure I only need one byte for this value.

It's just a matter of number representation. You just have to use the correct data type when printing or using the value:
Version 1: int8_t data = EEPROM.read(addr);
Version 2:
byte data = EEPROM.read(addr);
Serial.print((int8_t)data);
EEPROM.write can be used directly with an int8_t value: EEPROM.write(addr, int8_value);
Or, if you want a full int, the put/get methods can be used (they even work for structs containing only POD types).
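For instance, here is a minimal sketch of the round trip (the address and test value are illustrative):
#include <EEPROM.h>

const int addr = 0;  // example EEPROM address

void setup() {
  Serial.begin(9600);
  int8_t value = -17;                   // -20..20 fits in one signed byte
  EEPROM.write(addr, value);            // stored as its two's-complement byte
  int8_t readBack = EEPROM.read(addr);  // reading into int8_t restores the sign
  Serial.println(readBack);             // prints -17
}

void loop() {}
For a full int, EEPROM.put(addr, value) and EEPROM.get(addr, value) handle the multi-byte layout for you.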

Related

Converting a Gray-Scale Array to a Floating-Point Array

I am trying to read a .tif file in Julia as a floating-point array. With the FileIO & ImageMagick packages I am able to do this, but the array that I get is of the type Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}.
I can convert this fixed-point array to a Float32 array by multiplying it by 255 (because UInt8), but I am looking for a function that does this for any type of FixedPointNumber (e.g. reinterpret() or convert()).
using FileIO
# Load the tif
obj = load("test.tif");
typeof(obj)
# Convert to Float32-Array
objNew = real.(obj) .* 255
typeof(objNew)
The output is
julia> using FileIO
julia> obj = load("test.tif");
julia> typeof(obj)
Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}
julia> objNew = real.(obj) .* 255;
julia> typeof(objNew)
Array{Float32,2}
I have been looking in the docs for quite a while and have not found a function that converts a given fixed-point array to a floating-point array without multiplying it by the maximum value of the integer type.
Thanks for any help.
edit:
I made a small gist to see if the solution by Michael works, and it does. Thanks!
Note: I don't know why, but the real.(obj) .* 255 code does not work (see the gist).
Why not just Float32.()?
using ColorTypes, FixedPointNumbers  # Normed lives in FixedPointNumbers
a = Gray.(convert.(Normed{UInt8,8}, rand(5,6)));
typeof(a)
#Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}
Float32.(a)
The short answer is indeed the one given by Michael: just use Float32.(a) (for grayscale). Another alternative is channelview(a), which generally performs channel separation, thus also stripping the color information from the array. In the latter case you won't get a Float32 array, because your image is stored with 8 bits per pixel; instead you'll get an N0f8 array (N0f8 = FixedPointNumbers.Normed{UInt8,8}). You can read about those numbers here.
Your instinct to multiply by 255 is natural, given how other image-processing frameworks work, but Julia has made some effort to be consistent about "meaning" in ways that are worth taking a moment to think about. For example, in another programming language just changing the numerical precision of an array:
img = uint8(255*rand(10, 10, 3)); % an 8-bit per color channel image
figure; image(img)
imgd = double(img); % convert to double-precision, but don't change the values
figure; image(imgd)
produces a surprising result: the first image displays normally, but the second comes out all white.
That second "all white" image represents saturation. In this other language, "5" means two completely different things depending on whether it's stored in memory as a UInt8 vs a Float64. I think it's fair to say that under any normal circumstances, a user of a numerical library would call this a bug, and a very serious one at that, yet somehow many of us have grown to accept this in the context of image processing.
These new types arise because in Julia we've gone to the effort to implement new numerical types (FixedPointNumbers) that act like fractional values (e.g., between 0 and 1) but are stored internally with the same bit pattern as the "corresponding" UInt8 (the one you get by multiplying by 255). This allows us to work with 8-bit data and yet allow values to always be interpreted on a consistent scale (0.0=black, 1.0=white).
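To make the N0f8/UInt8 relationship concrete, here is a small sketch (it assumes the ImageCore package, which re-exports ColorTypes and FixedPointNumbers; the names are illustrative):
using ImageCore

a = Gray.(rand(N0f8, 5, 6))   # 8 bits per pixel, interpreted on the 0.0-1.0 scale

f = Float32.(a)               # elementwise conversion; for grayscale this yields a plain Float32 array
c = channelview(a)            # strips the Gray wrapper but keeps the N0f8 storage
raw = reinterpret.(c)         # the underlying UInt8 bit patterns (0-255)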

How to Compare Pointers in LLVM-IR?

I want to analyze the pointer values in LLVM IR.
As illustrated in LLVM Value Class,
Value is a very important LLVM class. It is the base class of all
values computed by a program that may be used as operands to other
values. Value is the super class of other important classes such as
Instruction and Function. All Values have a Type. Type is not a
subclass of Value. Some values can have a name and they belong to some
Module. Setting the name on the Value automatically updates the
module's symbol table.
To test whether a Value is a pointer, there is the function a->getType()->isPointerTy(). LLVM also provides a PointerType class; however, there are no direct APIs to compare the values of pointers.
So I wonder how to compare these pointer values, to test whether they are equal or not. I know there is AliasAnalysis, but I have doubts about the AliasAnalysis results, so I want to validate them myself.
The quick solution is to use IRBuilder::CreatePtrDiff. This will compute the difference between the two pointers, and return an i64 result. If the pointers are equal, this will be zero, and otherwise, it will be nonzero.
It might seem excessive, seeing as CreatePtrDiff will make an extra effort to compute the result in terms of number of elements rather than number of bytes, but in all likelihood that extra division will get optimized out.
The other option is to use a ptrtoint instruction, with a reasonably large result type such as i64, and then do an integer comparison.
From the online reference:
Value * CreatePtrDiff (Value *LHS, Value *RHS, const Twine &Name="")
Return the i64 difference between two pointer values, dividing out the size of the pointed-to objects.
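Stitched together, a minimal sketch of both options might look like this (Builder is assumed to be an IRBuilder<> and LHS/RHS two pointer Values of the same type; recent LLVM releases also take the pointee Type as a first argument to CreatePtrDiff):
// Option 1: pointer difference; the result is zero iff the pointers are equal.
Value *Diff = Builder.CreatePtrDiff(LHS, RHS, "ptrdiff");
Value *EqViaDiff = Builder.CreateICmpEQ(
    Diff, ConstantInt::get(Diff->getType(), 0), "ptreq");

// Option 2: cast both pointers to i64 and compare the integers.
Value *L = Builder.CreatePtrToInt(LHS, Builder.getInt64Ty(), "lhs.int");
Value *R = Builder.CreatePtrToInt(RHS, Builder.getInt64Ty(), "rhs.int");
Value *EqViaInt = Builder.CreateICmpEQ(L, R, "ptreq");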

Convert HEX to characters using bitwise operations

Say I've got this value xxx in hex 007800780078
How can I convert back the hex value to characters using bitwise operations?
Can I?
I suppose you could do it using "bitwise" operations, but it'd probably be a horrendous mess of code as well as being totally unnecessary since ILE RPG can do it easily using appropriate built-in functions.
First, you don't exactly have what's usually thought of as a "hex" value. That is, you're showing a hexadecimal representation of a value, but basic "hex" conversion will not give a useful result. What you're showing seems to be the UCS-2 value for "xxx".
Here's a trivial example that shows a conversion of that hexadecimal string into a standard character value:
d ds
d charField 6 inz( x'007800780078' )
d UCSField1 3c overlay( charField )
d TargetField s 6
d Length s 10i 0
/free
Length = %len( %trim( UCSField1 ));
TargetField = %trim( %char( UCSField1 ));
*inlr = *on;
return;
/end-free
The code has a DS that includes two sub-fields. The first is a simple character field that declares six bytes of memory initialized to x'007800780078'. The second sub-field is declared as data type 'C' to indicate UCS-2, and it overlays the first sub-field. Because it's UCS-2, its size is given as "3" to allow for three characters. (Each character is 16 bits wide.)
The executable statements don't do much, just enough to let you test the converted values. Using debug, you should see that Length comes out to be (3) and TargetField becomes 'xxx'.
The %CHAR() built-in function can be used to convert from UCS-2 to the character encoding used by the program. To go in the opposite direction, use the %UCS2() built-in function.
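For completeness, a minimal free-form sketch of that reverse direction (the field names are illustrative):
dcl-s charVal  char(3)  inz('xxx');
dcl-s ucs2Val  ucs2(3);

ucs2Val = %ucs2(charVal);  // ucs2Val now holds x'007800780078'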

Data Masking in SAS: Scrambling Sensitive observations at character level

I'm working with client data in SAS with sensitive customer identification information. The challenge is to mask the field in such a way that it remains numeric/alphabetic/alphanumeric. I found a way of using the bitwise functions in SAS (BXOR, BOR, BAND), but the output is full of special characters which SAS can't handle/sort/merge, etc.
I also thought of scrambling the field itself, based on a key, but haven't been able to see it through. Following are the challenges:
1) It HAS to be key based
2) HAS to be reversible.
3) Masked/scrambled field has to be numeric/alphabetic/alphanumeric only so it can be used in SAS.
4) The field to be masked has both letters and numbers, has varying lengths, and there are millions of observations.
Any tips on how to achieve this masking/scrambling would be greatly appreciated :(
Here is a simple key-based solution. I present the data step solution here, and will present an FCMP version in a bit. I keep everything in the range of 48 to 127 (numbers, letters, and common characters such as # > <, etc.); that's not quite alphanumeric, but I can't imagine why that would matter in this case. You could reduce it further to truly alphanumeric characters using this same method, but it would make the key much weaker (only 62 values) and clunky to work with (as you would have 3 noncontiguous ranges).
data construct_key;
length keystr $1500;
do _t = 1 to 1500;
_rannum = ceil(ranuni(7)*80);
*if _rannum=12 then _rannum=-15;
substr(keystr,_t,1)=byte(47+_rannum);
end;
call symput('keystr',keystr);
run;
%put %bquote(&keystr);
data encrypted;
set sashelp.class;
retain key "&keystr";
length name_encrypt $30;
do _t = 1 to length(name);
substr(name_encrypt,_t,1) = byte(mod(rank(substr(name,_t,1)) + rank(substr(key,1,1))-94,80)+47);
key = substr(key,2);
end;
keep name:;
run;
data unencrypted;
set encrypted;
retain key "&keystr";
length name_unenc $30;
do _t = 1 to length(name_encrypt);
substr(name_unenc,_t,1) = byte(
mod(80+rank(substr(name_encrypt,_t,1)) - rank(substr(key,1,1)),80)
+47);
key = substr(key,2);
end;
run;
In this solution there is a medium level of encryption: a key with 80 possible values is not strong enough to deter a truly sophisticated attacker, but is strong enough for most purposes. You need to pass either the key itself or the seed of the key algorithm in order to unencrypt; if you use this multiple times, make sure to pick a new seed each time (and not something related to the data). If you seed with zero (or a nonpositive integer) you will effectively guarantee a new key each time, but you will have to pass the key itself rather than the seed, which may present some data security issues (obviously, the key itself can be obtained by a malicious user, and would have to be stored in a different location than the data). Passing the key by way of the seed is probably better, as you could pass that verbally over the telephone or through some sort of prearranged list of seeds.
I'm not sure I recommend this sort of approach in general; a better approach may well be to simply encrypt the entire SAS dataset using a stronger method (PGP, for example). Your exact solution may vary, but if you have, for example, some customer information that isn't actually necessary for most steps of your process, you may be better off separating that information from the rest of the (non-sensitive) data and only incorporating it when it's needed.
For example, I have a process whereby I pull sample for a client for a healthcare survey. I select valid records from a dataset that has no information for the customer except a numeric unique identifier; once I have narrowed the sample down to the valid records, then I attach the customer information from a separate dataset and create the mailing files (which are stored in an encrypted directory). That keeps the data nonsensitive for as long as possible. It's not perfect - the unique numeric identifier still means there is a tie back, even if it's not to anything someone would know outside of the project - but it keeps things safe as long as possible on our end.
Here is the FCMP version:
%let keylength=5;
%let seed=15;
proc fcmp outlib=work.funcs.test;
subroutine encrypt(value $,key $);
length key $&keylength.;
outargs value,key;
do _t = 1 to lengthc(value);
substr(value,_t,1) = byte(mod(rank(substr(value,_t,1)) + rank(substr(key,1,1))-62,96)+31);
key = substr(key,2)||substr(key,1,1);
end;
endsub;
subroutine unencrypt(value $,key $);
length key $&keylength.;
outargs value,key;
do _t = 1 to lengthc(value);
substr(value,_t,1) = byte(mod(96+rank(substr(value,_t,1)) - rank(substr(key,1,1)),96)+31);
key = substr(key,2)||substr(key,1,1);
end;
endsub;
subroutine gen_key(seed,keystr $);
outargs keystr;
length keystr $&keylength.;
do _t = 1 to &keylength.;
_rannum = ceil(ranuni(seed)*80);
substr(keystr,_t,1)=byte(47+_rannum);
end;
endsub;
quit;
options cmplib=work.funcs;
data encrypted;
set sashelp.class;
length key $&keylength.;
retain key ' '; *the missing is to avoid the uninitialized variable warning;
if _n_ = 1 then call gen_key(&seed,key);
call encrypt(name,key);
drop key;
run;
data unencrypted;
set encrypted;
length key $&keylength.;
retain key ' ';
if _n_ = 1 then call gen_key(&seed,key);
call unencrypt(name,key);
run;
This is somewhat more robust; it allows characters from 32 to 127 rather than from 48, meaning it deals with spaces successfully. (Tab will still not decode properly; it would become a 'k'.) You pass the seed to call gen_key and it then uses that key for the remainder of the process.
It goes without saying that this is not guaranteed to function for your purposes and/or to be a secure solution and you should consult with a security professional if you have substantial security needs. This post is not warranted for any purpose and any and all liability arising from its use is disclaimed by the poster.
SAS has an article on their website on how to encrypt specific variables. Hopefully this will help you.
link

Difference between number and integer datatype in oracle dictionary views

I used Oracle dictionary views to find column differences, if any, between two schemas. While syncing data type discrepancies I found that both the NUMBER and INTEGER data types are stored in all_tab_columns/user_tab_columns/dba_tab_columns as NUMBER only, so it is difficult to sync discrepancies where one schema/column has the NUMBER data type and the other has INTEGER.
When comparing the schemas, this shows up as a data type mismatch. Please suggest whether there is any alternative apart from using the dictionary views, or whether any specific properties in the dictionary views can be used to identify whether a data type is INTEGER.
The best explanation I've found is this:
What is the difference between INTEGER and NUMBER? When should we use NUMBER and when should we use INTEGER? I just wanted to update my comments here...
NUMBER stores a value exactly as entered; its scale ranges from -84 to 127. INTEGER, however, rounds to a whole number: its scale is 0, and it is equivalent to NUMBER(38,0). In other words, INTEGER is a constrained NUMBER whose decimal places get rounded, while NUMBER is unconstrained.
INTEGER(12.2) => 12
INTEGER(12.5) => 13
INTEGER(12.9) => 13
INTEGER(12.4) => 12
NUMBER(12.2) => 12.2
NUMBER(12.5) => 12.5
NUMBER(12.9) => 12.9
NUMBER(12.4) => 12.4
INTEGER is always slower than NUMBER, since INTEGER is a NUMBER with an added constraint and it takes additional CPU cycles to enforce that constraint. I have never observed a difference, but there might be one when loading several million records into an INTEGER column. If we need to ensure that the input consists of whole numbers, then INTEGER is the best option; otherwise, we can stick with the NUMBER data type.
Here is the link
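As a quick runnable illustration of that rounding behaviour (the table and column names are made up):
create table num_vs_int (n number, i integer);
insert into num_vs_int values (12.5, 12.5);
select n, i from num_vs_int;
-- N     I
-- ----  --
-- 12.5  13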
INTEGER is only there for the SQL standard, i.e. it is deprecated by Oracle.
You should use NUMBER instead.
Integers get stored as NUMBER anyway by Oracle behind the scenes.
Most commonly, when ints are stored for IDs and such, they are defined with no parameters, so in theory you could look at the scale and precision columns of the metadata views to see whether decimal values can be stored; however, 99% of the time this will not help.
As was commented above, you could look for NUMBER(38,0) columns or similar (i.e. columns with no decimal places allowed), but this will only tell you which columns cannot take decimals, not which columns were defined so that INTs can be stored.
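For reference, this is roughly what that metadata check looks like against ALL_TAB_COLUMNS (an INTEGER column shows up with a null DATA_PRECISION and DATA_SCALE = 0, exactly like NUMBER(*,0), which is why the two cannot be told apart):
select owner, table_name, column_name, data_precision, data_scale
from all_tab_columns
where data_type = 'NUMBER'
and data_scale = 0;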
Suggestion:
do a data profile on the number columns. Something like this:
select max( case when trunc(column_name,0)=column_name then 0 else 1 end ) as has_dec_vals
from table_name
This is what I got from the Oracle documentation, but it is for Oracle 10g Release 2:
When you define a NUMBER variable, you can specify its precision (p) and scale (s) so that it is sufficiently, but not unnecessarily, large. Precision is the number of significant digits. Scale can be positive or negative. Positive scale identifies the number of digits to the right of the decimal point; negative scale identifies the number of digits to the left of the decimal point that can be rounded up or down.
The NUMBER data type is supported by Oracle Database standard libraries and operates the same way as it does in SQL. It is used for dimensions and surrogates when a text or INTEGER data type is not appropriate. It is typically assigned to variables that are not used for calculations (like forecasts and aggregations), and it is used for variables that must match the rounding behavior of the database or require a high degree of precision. When deciding whether to assign the NUMBER data type to a variable, keep the following facts in mind in order to maximize performance:
Analytic workspace calculations on NUMBER variables are slower than on other numerical data types because NUMBER values are calculated in software (for accuracy) rather than in hardware (for speed).
When data is fetched from an analytic workspace to a relational column that has the NUMBER data type, performance is best when the data already has the NUMBER data type in the analytic workspace because a conversion step is not required.
