Hardware conversion: written data is different from my read data - data-conversion

I am testing a program executed partially on an MPC603 and partially on an MPC555.
I have to verify that some data is correctly "moved" from one processor to the other via a DPRAM.
I am guessing that at some point "someone" performs a conversion, but I don't know how to figure out what kind of conversion is done.
Here are some examples:
Pt_Dpram->acq1 at 0x8D00008 = 0x3EB2
acq1 = (0xA010538) = 1182451712 = 0x467AC800
Pt_Dpram->acq2 at 0x8D0000A = 0x5528
acq2 = (0xA010540) = 1185566720 = 0x46AA5000
Pt_Dpram->acq3 at 0x8D0000C = 0x416E
acq3 = (0xA010548) = 1107552036 = 0x4203E724
Pt_Dpram->acq4 at 0x8D0000E = 0x413C
acq4 = (0xA010550) = 1107526232 = 0x42038258

I got my answer from a colleague: the values in acqX are in Motorola binary format: http://en.wikipedia.org/wiki/SREC_(file_format)
Here is a small tool that does the conversion: http://www.hexworkshop.com/onlinehelp/500/html/idhelp_baseconv.htm
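A quick way to sanity-check such values is to reinterpret the 32-bit word as a big-endian (Motorola byte order) IEEE 754 float and compare it with the raw 16-bit reading; for acq1, 0x467AC800 decodes to 16050.0, which is exactly 0x3EB2 = 16050. A minimal Python sketch of that check (my own illustration, not part of the original answer):

import struct

raw16 = 0x3EB2        # 16-bit value read through the DPRAM window (= 16050)
word32 = 0x467AC800   # 32-bit value stored in acq1

# Reinterpret the 32-bit word as a big-endian IEEE 754 single-precision float
as_float = struct.unpack('>f', word32.to_bytes(4, 'big'))[0]

print(raw16, as_float)  # prints: 16050 16050.0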

Related

Circuitpython UART

Hello, I am new to CircuitPython.
I just wanted to ask: since strings are not allowed, how can I send an ASCII string over UART plus the \n\r (basically ENTER over UART)?
My current code looks like this:
def get_psuState():  # read the psuState data
    uart.write("psuState")
    bytes_psuState = uart.read(173)  # read psuState over UART
    string_psuState = ''.join([chr(b) for b in bytes_psuState])
    string_psuState_split = string_psuState.split()  # split the string on spaces
    array_psuState = []
    for line in string_psuState_split:
        if ':' in line:
            i = int(line.split(':')[-1])  # save the values after ":" in the array
            array_psuState.append(i)
    BatFault = array_psuState[0]  # array values to global variables
    Bat12Fault = array_psuState[1]
    Bat24Fault = array_psuState[2]
    MainsFault = array_psuState[3]
    SensorFault = array_psuState[4]
    SelftestFault = array_psuState[5]
    InitialCharge = array_psuState[6]
    LongtimeTestActive = array_psuState[7]
    batteryOvercurrent = array_psuState[8]
    TotalOvercurrent = array_psuState[9]
    LowBattSwitchOFF = array_psuState[10]
Basically I want to send the command psuState and ENTER to get my values from the other board.
Can anyone help me?
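For what it is worth, here is a minimal sketch (my illustration, assuming uart is the busio.UART object already used above) of sending the command as bytes together with the line ending, since UART write expects a bytes-like object rather than a str:

# Send the command followed by the line ending (ENTER); swap to b"\n\r"
# if that is the order the other board expects.
uart.write(b"psuState\r\n")

# An existing str can also be encoded to bytes first:
command = "psuState"
uart.write(command.encode("ascii") + b"\r\n")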

How do I average 20 saved ANN models?

I have created 20 neural network models with the exact same architecture (but different seed numbers) and saved them as SM1, SM2, ..., SM20. I am basically trying to import these models using joblib and use the average model (called NN here) for various analyses. When I run this in Jupyter Notebook, I get the following error. I appreciate any help.
NN1 = joblib.load('SM1.sav')
NN2 = joblib.load('SM2.sav')
NN3 = joblib.load('SM3.sav')
NN4 = joblib.load('SM4.sav')
NN5 = joblib.load('SM5.sav')
NN6 = joblib.load('SM6.sav')
NN7 = joblib.load('SM7.sav')
NN8 = joblib.load('SM8.sav')
NN9 = joblib.load('SM9.sav')
NN10 = joblib.load('SM10.sav')
NN11 = joblib.load('SM11.sav')
NN12 = joblib.load('SM12.sav')
NN13 = joblib.load('SM13.sav')
NN14 = joblib.load('SM14.sav')
NN15 = joblib.load('SM15.sav')
NN16 = joblib.load('SM16.sav')
NN17 = joblib.load('SM17.sav')
NN18 = joblib.load('SM18.sav')
NN19 = joblib.load('SM19.sav')
NN20 = joblib.load('SM20.sav')
NN = (NN1+NN2+NN3+NN4+NN5+NN6+NN7+NN8+NN9+NN10+NN11+NN12+NN13+NN14+NN15+NN16+NN17+NN18+NN19+NN20)/(20)
TypeError: unsupported operand type(s) for +: 'MLPRegressor' and 'MLPRegressor'
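For context, the TypeError occurs because scikit-learn estimator objects such as MLPRegressor cannot be added together. One common workaround (a sketch, assuming the saved models are scikit-learn MLPRegressors and X is the input you want predictions for) is to load them in a loop and average their predictions instead:

import numpy as np
import joblib

# Load the 20 saved models into a list instead of 20 separate variables.
models = [joblib.load('SM{}.sav'.format(i)) for i in range(1, 21)]

def nn_predict(X):
    # Average the per-model predictions; the estimator objects themselves
    # cannot be summed, but their outputs can.
    return np.mean([m.predict(X) for m in models], axis=0)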

How to prepare image data for use in Torch

I want to prepare my own image data for training in Torch.
I tried to find a good source for this but could not find one.
The references I found point to data that has already been prepared in .lua or .t7 formats.
Can you please explain the procedure for preparing raw image data for Torch (training, validation and test sets)?
Thanks
You may try to write your own data loader class: store your image paths in a table and read each image using
require 'image'
YOUR_RGB_FILE_PATH = "/home/username/image.png"
img = image.load(YOUR_RGB_FILE_PATH, 3)
Write your Lua code in an iTorch notebook; it helps you debug quickly.
If you do not know how to start, you can refer to the project here, written with Lua Torch.
require 'io'
require 'torch'
require 'image'
------------------------------ Parameters ---------------------------------
file_name = '.../train.txt'
save_name = '.../train.t7'
num_images = 10000*3
num_channels = 3
width = 51
height = 51
---------------------------------------------------------------------------
file = io.open(file_name, 'rb')
data = torch.Tensor(num_images, num_channels, width, height):byte()
label = torch.Tensor(num_images):byte()
counter = 1
for line in file:lines() do
    print(counter)
    image_name, image_label = line:split(' ')[1], line:split(' ')[2]
    data[counter] = image.load(image_name, num_channels, 'byte')
    label[counter] = image_label
    counter = counter + 1
end
torch.save(save_name, {data = data, label = label})

Image compare autoit [duplicate]

I am looking for a way to find duplicate images using AutoIt. I've looked into PixelSearch and SearchImage, but neither does exactly what I need it to do.
I am trying to compare 2 images by filename and see if they are the same image (a duplicate). The best way I've thought to do it would be to:
1) Get both image sizes in pixels
2) Use a while loop to get the color of each pixel and store it in an array
3) Check to see if both arrays are equal to each other.
Does anybody have any ideas on how to achieve this?
I just did some more research on this subject and built a small UDF based on a few answers I read (mainly based on monoceres's answer on AutoItScript.com). I figured I would post my solution here to help any future developers!
CompareImagesUDF.au3
Func _CompareImages($ciImageOne, $ciImageTwo)
    _GDIPlus_Startup()
    $fname1 = $ciImageOne
    If $fname1 = "" Then Exit
    $fname2 = $ciImageTwo
    If $fname2 = "" Then Exit
    $bm1 = _GDIPlus_ImageLoadFromFile($fname1)
    $bm2 = _GDIPlus_ImageLoadFromFile($fname2)
    ; MsgBox(0, "bm1==bm2", CompareBitmaps($bm1, $bm2))
    ; Compare first, then release the GDI+ resources before returning
    $result = CompareBitmaps($bm1, $bm2)
    _GDIPlus_ImageDispose($bm1)
    _GDIPlus_ImageDispose($bm2)
    _GDIPlus_Shutdown()
    Return $result
EndFunc   ;==>_CompareImages
Func CompareBitmaps($bm1, $bm2)
    $Bm1W = _GDIPlus_ImageGetWidth($bm1)
    $Bm1H = _GDIPlus_ImageGetHeight($bm1)
    $BitmapData1 = _GDIPlus_BitmapLockBits($bm1, 0, 0, $Bm1W, $Bm1H, $GDIP_ILMREAD, $GDIP_PXF32RGB)
    $Stride = DllStructGetData($BitmapData1, "Stride")
    $Scan0 = DllStructGetData($BitmapData1, "Scan0")
    $ptr1 = $Scan0
    $size1 = ($Bm1H - 1) * $Stride + ($Bm1W - 1) * 4
    $Bm2W = _GDIPlus_ImageGetWidth($bm2)
    $Bm2H = _GDIPlus_ImageGetHeight($bm2)
    $BitmapData2 = _GDIPlus_BitmapLockBits($bm2, 0, 0, $Bm2W, $Bm2H, $GDIP_ILMREAD, $GDIP_PXF32RGB)
    $Stride = DllStructGetData($BitmapData2, "Stride")
    $Scan0 = DllStructGetData($BitmapData2, "Scan0")
    $ptr2 = $Scan0
    $size2 = ($Bm2H - 1) * $Stride + ($Bm2W - 1) * 4
    $smallest = $size1
    If $size2 < $smallest Then $smallest = $size2
    $call = DllCall("msvcrt.dll", "int:cdecl", "memcmp", "ptr", $ptr1, "ptr", $ptr2, "int", $smallest)
    _GDIPlus_BitmapUnlockBits($bm1, $BitmapData1)
    _GDIPlus_BitmapUnlockBits($bm2, $BitmapData2)
    Return ($call[0] = 0)
EndFunc   ;==>CompareBitmaps
Now, to compare images, all you have to do is include the CompareImagesUDF.au3 file and call the function.
CompareImagesExample.au3
#Include "CompareImagesUDF.au3"
; Define the two images (They can be different file formats)
$img1 = "Image1.jpg"
$img2 = "Image2.jpg"
; Compare the two images
$duplicateCheck = _CompareImages($img1, $img2)
MsgBox(0,"Is Duplicate?", $duplicateCheck)
If you want to find out whether both images are an exact match, regardless of whether the names are the same or different, use the built-in Crypt function _Crypt_HashFile with MD2 or MD5 to make a hash of both files and compare those.
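To illustrate the idea only (this is a Python sketch using hashlib rather than AutoIt; in AutoIt, _Crypt_HashFile would play the role of the md5 call): two files are byte-for-byte duplicates exactly when their hashes match.

import hashlib

def file_md5(path):
    # Hash the raw file bytes; identical files give identical digests.
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

# Hypothetical file names, mirroring the example above.
print(file_md5("Image1.jpg") == file_md5("Image2.jpg"))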

Torch nn: current error is always nan

I've written the following code:
require 'nn'
require 'cunn'
file = torch.DiskFile('train200.data', 'r')
size = file:readInt()
inputSize = file:readInt()
outputSize = file:readInt()
dataset = {}
function dataset:size() return size end;
for i=1,dataset:size() do
    local input = torch.Tensor(inputSize)
    for j=1,inputSize do
        input[j] = file:readFloat()
    end
    local output = torch.Tensor(outputSize)
    for j=1,outputSize do
        output[j] = file:readFloat()
    end
    dataset[i] = {input:cuda(), output:cuda()}
end
net = nn.Sequential()
hiddenSize = inputSize * 2
net:add(nn.Linear(inputSize, hiddenSize))
net:add(nn.Tanh())
net:add(nn.Linear(hiddenSize, hiddenSize))
net:add(nn.Tanh())
net:add(nn.Linear(hiddenSize, outputSize))
criterion = nn.MSECriterion()
net = net:cuda()
criterion = criterion:cuda()
trainer = nn.StochasticGradient(net, criterion)
trainer.learningRate = 0.02
trainer.maxIteration = 100
trainer:train(dataset)
It should work fine (at least I think so), and it does work correctly when inputSize = 20. But when inputSize = 200, the current error is always nan. At first I thought the file-reading part was incorrect, but I have rechecked it several times and it works fine. I also found that a learning rate that is too small or too large can sometimes cause this, so I tried learning rates from 0.00001 up to 0.8, but I still get the same result. What am I doing wrong?
Thanks,
Igor
