Is it possible to obtain STEM detector signals? - hardware-interface

I'm writing a DigitalMicrograph script to acquire a map of scattered-electron intensities from a detector such as an ADF STEM detector, under various incident-beam conditions controlled by a home-made script. Unfortunately, I don't know of a command that obtains STEM detector signals outside of STEM mode (i.e. when not controlled by DigiScan). What command should I use in this case?
Any wisdom you can share would be appreciated. Thank you very much in advance.

As the STEM detector signal is processed by the DigiScan unit, there is no way to read the detector signal independently of it.
Also, you do not get the signal as a continuous stream in time; it is clocked by the DigiScan. That is, you have to start an acquisition with the DigiScan; you cannot just "listen" to the detector without one.
However, a DigiScan acquisition is not tied to being in STEM mode: you can start one while in TEM mode. You can choose the parameters such that acquiring an 'image' scans the beam over only a very small area, so that the beam becomes quasi-stationary. Maybe this can help you out?
Here is an example of what I mean; note, however, that I have not tested it on hardware:
// Create "Scan" parameters for an overview
// This image will stay as survey. Its content is not important
// as you're in TEM mode, but we need it as reference
number paramID
number width = 1024 // pixel
number height = 1024 // pixel
number rotation = 0 // degree
number pixelTime= 2 // microseconds
number lSynch = 0 // no-linesync
paramID = DSCreateParameters( width, height, rotation, pixelTime, lSynch )
number signalIndex, dataDepth, selected, imageID
signalIndex = 0 // HAADF (most likely) ?
dataDepth = 4 // 4 byte data
selected = 1 // acquire this signal
imageID = 0 // create new image
DSSetParametersSignal( paramID, signalIndex, dataDepth, selected, imageID )
number continuous = 0 // 0 = single frame, 1 = continuous
number synchronous = 1 // 0 = return immediately, 1 = return when finished
// Capture the "survey" image
DSStartAcquisition( paramID, continuous, synchronous )
image survey := DSGetLastAcquiredImage( signalIndex )
survey.SetName("Survey")
if ( !DSIsValidDSImage( survey ) ) Throw( "Something wrong..")
DSDeleteParameters( paramID ) // remove parameters from memory
// Now we create a "subscan" image for a quasi-stationary beam...
// The subscan has a minimum size (16x16?), but as we keep the beam
// "stationary" this mainly defines the "time resolution" of the
// data. The scan 'speed' is taken from our reference...
number sizeX = 1024
number sizeY = 1024
image Static := IntegerImage( "Static", dataDepth, 0, sizeX, sizeY )
Static.ShowImage()
// define the "ROI" on the survey: just the center pixel!
number t,l,b,r
t = height/2
l = width/2
b = t + 1
r = l + 1
DSScanSubRegion( survey, Static, t, l, b, r )

Related

detectron2 diffusioninst: oom-kill during training

I tried to run the code for DiffusionInst, which is based on Detectron2 (source code: https://github.com/chenhaoxing/DiffusionInst). During training, my Python process is always killed (at 10,000-20,000 iterations, which is insufficient for DiffusionInst training).
I only rewrote the dataloader code, in order to adapt it to my own dataset.
My new dataloader code:
import copy
import gc
import logging

import numpy as np
import SimpleITK as sitk
import torch

from detectron2.data import detection_utils as utils
from detectron2.data import transforms as T

# build_transform_gen is defined alongside the original mapper in the
# DiffusionInst code base (diffusioninst/dataset_mapper.py)
from diffusioninst.dataset_mapper import build_transform_gen


class DiffusionInstDatasetMapper:
    """
    A callable which takes a dataset dict in Detectron2 Dataset format,
    and maps it into a format used by DiffusionInst.

    The callable currently does the following:
    1. Reads the image from "file_name"
    2. Applies geometric transforms to the image and annotations
    3. Finds and applies suitable cropping to the image and annotations
    4. Prepares the image and annotations as Tensors
    """

    def __init__(self, cfg, is_train=True):
        if cfg.INPUT.CROP.ENABLED and is_train:
            self.crop_gen = [
                # T.ResizeShortestEdge([400, 500, 600], sample_style="choice"),
                T.RandomCrop(cfg.INPUT.CROP.TYPE, cfg.INPUT.CROP.SIZE),
            ]
        else:
            self.crop_gen = None

        self.tfm_gens = build_transform_gen(cfg, is_train)
        logging.getLogger(__name__).info(
            "Full TransformGens used in training: {}, crop: {}".format(
                str(self.tfm_gens), str(self.crop_gen)
            )
        )

        self.img_format = cfg.INPUT.FORMAT
        self.is_train = is_train

    def __call__(self, dataset_dict):
        """
        Args:
            dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format.

        Returns:
            dict: a format that builtin models in detectron2 accept
        """
        dataset_dict = copy.deepcopy(dataset_dict)  # it will be modified by code below
        # image = utils.read_image(dataset_dict["file_name"], format=self.img_format)

        ## crop roi
        '''lst = dataset_dict['file_name'].split('-')
        image = sitk.ReadImage('-'.join(lst[:-2]))
        image = sitk.GetArrayFromImage(image)
        above, below = int(lst[-2]), int(lst[-1])
        image = image[:, above:below, :]'''

        ## no crop roi
        image = sitk.ReadImage(dataset_dict["file_name"], sitk.sitkFloat32)
        image = sitk.GetArrayFromImage(image)
        image = (image - image.min()) / (image.max() - image.min()) * 255
        image = image.transpose(1, 2, 0).astype(np.uint8)
        image = np.repeat(image, 3, axis=2)  # grayscale -> 3-channel

        utils.check_image_size(dataset_dict, image)

        if self.crop_gen is None:
            image, transforms = T.apply_transform_gens(self.tfm_gens, image)
        else:
            image, transforms = T.apply_transform_gens(
                self.tfm_gens + self.crop_gen, image
            )

        image_shape = image.shape[:2]  # h, w

        # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory,
        # but not efficient on large generic data structures due to the use of
        # pickle & mp.Queue. Therefore it's important to use torch.Tensor.
        dataset_dict["image"] = torch.as_tensor(
            np.ascontiguousarray(image.transpose(2, 0, 1))
        )
        del image
        gc.collect()

        if not self.is_train:
            # USER: Modify this if you want to keep them for some reason.
            dataset_dict.pop("annotations", None)
            return dataset_dict

        if "annotations" in dataset_dict:
            # USER: Modify this if you want to keep them for some reason.
            for anno in dataset_dict["annotations"]:
                # anno.pop("segmentation", None)
                anno.pop("keypoints", None)

            # USER: Implement additional transformations if you have other
            # types of data
            annos = [
                utils.transform_instance_annotations(obj, transforms, image_shape)
                for obj in dataset_dict.pop("annotations")
                if obj.get("iscrowd", 0) == 0
            ]
            instances = utils.annotations_to_instances(
                annos, image_shape, mask_format="bitmask"
            )
            dataset_dict["instances"] = utils.filter_empty_instances(instances)
            del instances
            gc.collect()

        return dataset_dict
And here is the oom-killer output:
[2599547.303018] python invoked oom-killer: gfp_mask=0x24000c0, order=0, oom_score_adj=995
[2599547.303084] [<ffffffff8119bfae>] oom_kill_process+0x1fe/0x3c0
[2599547.303133] Task in /kubepods/burstable/podd09a5032-8b07-11ed-bb60-ac1f6b9ec91e/8b4a8d5c2c1a082f93b1610173beb70bbc19fb1a1c2e28150d2d912ed9b95b10 killed as a result of limit of /kubepods/burstable/podd09a5032-8b07-11ed-bb60-ac1f6b9ec91e
[2599547.305957] Memory cgroup out of memory: Kill process 1041771 (python) score 1198 or sacrifice child
[2599547.307810] Killed process 1041771 (python) total-vm:36436532kB, anon-rss:10288264kB, file-rss:104888kB
[2599718.702250] python invoked oom-killer: gfp_mask=0x24000c0, order=0, oom_score_adj=995
[2599718.702299] [<ffffffff8119bfae>] oom_kill_process+0x1fe/0x3c0
[2599718.702333] Task in /kubepods/burstable/podd09a5032-8b07-11ed-bb60-ac1f6b9ec91e/8b4a8d5c2c1a082f93b1610173beb70bbc19fb1a1c2e28150d2d912ed9b95b10 killed as a result of limit of /kubepods/burstable/podd09a5032-8b07-11ed-bb60-ac1f6b9ec91e
I set IMS_PER_BATCH to 1 and used a dataset containing only one image, but the OOM problem still occurred.
What should I do to prevent the OOM problem?
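One way to narrow this down is to drive the mapper on its own and watch the process RSS per call: if memory grows here, the leak is in the dataloader; if not, it is in the training loop. A minimal sketch, assuming psutil is installed (mapper and dataset_dicts stand in for your own objects):

import os

import psutil

def measure_mapper_rss(mapper, dataset_dicts, iterations=2000):
    """Repeatedly invoke a Detectron2-style dataset mapper and report RSS growth."""
    proc = psutil.Process(os.getpid())
    start_rss = proc.memory_info().rss
    for i in range(iterations):
        out = mapper(dataset_dicts[i % len(dataset_dicts)])
        del out  # drop the mapped dict immediately
        if i % 100 == 0:
            grown_mb = (proc.memory_info().rss - start_rss) / 1e6
            print(f"iter {i:5d}: RSS grown by {grown_mb:.1f} MB")

# hypothetical usage:
# mapper = DiffusionInstDatasetMapper(cfg, is_train=True)
# measure_mapper_rss(mapper, dataset_dicts)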

How would you normalize and calculate speed for a 2D vector if it was clamped by different speeds in all four directions? (-x, +x, -y, +y)

My goal here is to improve the user experience so that the cursor goes where the user would intuitively expect it to when moving the joystick diagonally, whatever that means.
Consider a joystick that has a different configured speed for each direction.
e.g. Maybe the joystick has a defect where some directions are too sensitive and some aren't sensitive enough, so you're trying to correct for that. Or maybe you're playing an FPS where you rarely need to look up or down, so you lower the Y-sensitivity.
Here are our max speeds for each direction:
var map = {
    x: 100,
    y: 200,
}
The joystick input gives us a vector whose magnitude ranges from 0 to 1.
Right now the joystick is tilted to the right 25% of the way and tilted up 50% of the way.
joystick = (dx: 0.25, dy: -0.50)
Sheepishly, I'm not sure where to go from here.
Edit: I will try @Caderyn's solution:
var speeds = {
    x: 100, // max speed of -100 to 100 on x-axis
    y: 300, // max speed of -300 to 300 on y-axis
}
var joystick = { dx: 2, dy: -3 }
console.log('joystick normalized:', normalize(joystick))

// note the parentheses below: without them, dx*dx / speeds.x*speeds.x
// parses as ((dx*dx) / speeds.x) * speeds.x, which cancels the max
// speeds out entirely and reduces this to the regular length formula
var scalar = Math.sqrt((joystick.dx*joystick.dx) / (speeds.x*speeds.x) + (joystick.dy*joystick.dy) / (speeds.y*speeds.y))
var scalar2 = Math.sqrt(joystick.dx*joystick.dx + joystick.dy*joystick.dy)
console.log('scalar1', scalar)  // length formula that uses max speeds
console.log('scalar2', scalar2) // regular length formula

// normalize using max speeds
var normalize1 = { dx: joystick.dx/scalar, dy: joystick.dy/scalar }
console.log('normalize1', normalize1, length(normalize1))

// regular normalize (no max-speed lookup)
var normalize2 = { dx: joystick.dx/scalar2, dy: joystick.dy/scalar2 }
console.log('normalize2', normalize2, length(normalize2))

function length({dx, dy}) {
    return Math.sqrt(dx*dx + dy*dy)
}

function normalize(vector) {
    var {dx, dy} = vector
    var len = length(vector)
    return {dx: dx/len, dy: dy/len}
}
Am I missing something massive or does this give the same results as regular vector.len() and vector.normalize() that don't try to integrate the maxspeed data at all?
Three solutions:
1. You can simply multiply each component of the input vector by its respective max speed.
2. You can divide the vector itself by sqrt(dx^2/hSpeed^2 + dy^2/vSpeed^2).
3. You can multiply the vector itself by sqrt((dx^2 + dy^2)/(dx^2/hSpeed^2 + dy^2/vSpeed^2)), or return 0 if the input is (0, 0).
The second solution preserves the vector's direction, while the first tends to pull it toward the direction with the greater max speed. But if the domain of these functions is the unit disc, their image is an ellipse whose radii are the two max speeds.
EDIT: the third method does what the second intended to do: if the input is A, it will return B such that a/b = c/d (the second method was returning C).
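For concreteness, here is a minimal Python sketch of the three options; the names h_speed/v_speed and the three function names are mine, not from the answer:

import math

def scale_componentwise(dx, dy, h_speed, v_speed):
    """Solution 1: scale each axis independently (can bend the direction)."""
    return dx * h_speed, dy * v_speed

def scale_preserve_direction(dx, dy, h_speed, v_speed):
    """Solution 2: divide by the 'elliptic norm' (keeps direction)."""
    k = math.sqrt(dx**2 / h_speed**2 + dy**2 / v_speed**2)
    if k == 0:
        return 0.0, 0.0
    return dx / k, dy / k

def scale_preserve_direction_scaled(dx, dy, h_speed, v_speed):
    """Solution 3: like 2, but scaled by the input magnitude, so a
    half-tilted stick gives half the speed."""
    denom = dx**2 / h_speed**2 + dy**2 / v_speed**2
    if denom == 0:
        return 0.0, 0.0
    k = math.sqrt((dx**2 + dy**2) / denom)
    return dx * k, dy * k

# e.g. stick tilted right 25% and up 50%, max speeds 100 (x) and 200 (y):
print(scale_componentwise(0.25, -0.50, 100, 200))             # (25.0, -100.0)
print(scale_preserve_direction(0.25, -0.50, 100, 200))        # direction kept
print(scale_preserve_direction_scaled(0.25, -0.50, 100, 200))

As the EDIT notes, the second option lands on the speed ellipse however slightly the stick is tilted; the third scales that point back by the stick's magnitude.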

ADC transfer function

I took over the project from someone who left a long time ago.
I am now looking at the ADC modules, but I don't get what the code means.
MCU: LM3S9B96
ADC: AD7609 (18-bit, 8-channel)
Instrumentation amp: INA114
Process: reading volts (0 to +10 V) --> amplifier (INA114) --> AD7609.
Here is the code for that. After a complete conversion, the 8 channels are stored in data[9].
Converting the data to microvolts(?):
// convert to microvolts and store the readings
// unsigned long temp[], data[]
temp[0] = ((data[0] << 2)  & 0x3FFFC) + ((data[1] >> 14) & 0x0003);
temp[1] = ((data[1] << 4)  & 0x3FFF0) + ((data[2] >> 12) & 0x000F);
temp[2] = ((data[2] << 6)  & 0x3FFC0) + ((data[3] >> 10) & 0x003F);
temp[3] = ((data[3] << 8)  & 0x3FF00) + ((data[4] >> 8)  & 0x00FF);
temp[4] = ((data[4] << 10) & 0x3FC00) + ((data[5] >> 6)  & 0x03FF);
temp[5] = ((data[5] << 12) & 0x3F000) + ((data[6] >> 4)  & 0x0FFF);
temp[6] = ((data[6] << 14) & 0x3FFF0) + ((data[7] >> 2)  & 0x3FFF);
temp[7] = ((data[7] << 16) & 0x3FFFC) + ( data[8]        & 0xFFFF);
I don't get what this code is doing. I know it shifts, but how does the data end up in a microvolt format?
The transfer function:
//store the final value in the raw data array adstor[]
adstor[i] = (signed long)(((temp[i]*2000)/131072)*10000);
131072 = 2^(18-1), but I don't know where the other values come from.
The AD7609 datasheet says the FSR is 40 V for the ±10 V range and 20 V for the ±5 V range, so I guessed he chose the 20 V described above, and it somehow turned into 2000???
Does anyone have any clues?
Thanks
-------------------Updated question from here ---------------------
I don't get how the 18-bit value concatenated from data[0] and the 16-bit remainder of data[1] turns into microvolts after the ADC transfer function.
data[9]
              +------------+------------+------------+------------+
analog volts  |   1.902 V  |   1.921 V  |   1.887 V  |   1.934 V  |
              +------------+------------+------------+------------+
digital value |   12,464   |   12,589   |   12,366   |   12,674   |
              +------------+------------+------------+------------+
I'll just work an example from data[3:0]:
1 LSB = 20 V / (2^17 - 1) = 152.59 µV/bit, and 1.902 V / 152.59 µV = 12,464
Now, going through the concatenation:
temp[0] = ((data[0]<<2)& 0x3FFFC) + ((data[1]>>14)& 0x0003) = C2C0
temp[1] = ((data[1]<<4)& 0x3FFF0) + ((data[2]>>12)& 0x000F) = 312D3
temp[2] = ((data[2]<<6)& 0x3FFC0) + ((data[3]>>10)& 0x003F) = 138C
Then put those into the transfer function to get microvolts:
adstor[i] = (signed long)(((temp[i]*2000)/131072)*10000);
adstor[0] = 7,607,421 with temp[0], != 1.902e6
adstor[1] = 30,735,321 with temp[1], != 1.921e6
adstor[2] = 763,549 with temp[2]
As you can see, these are quite different from the analog values in the table.
I don't understand why the data needs the bit shifting (<<, >>) and why two data[] entries are added together.
Thanks,
Please note that the maximum 18-bit value is 2^18 - 1 = $3FFFF = 262143.
For [2], it appears that s/he splits the 18-bit values concatenated across the 16-bit words into longs for easier manipulation in step [3].
[3]: Regarding adstor[i] = (signed long)(((temp[i]*2000)/131072)*10000);
To convert from a raw A/D reading to volts, s/he multiplies by the expected full-scale voltage and divides by the maximum possible A/D value, so there seems to be an error in the code: s/he divides by 131072 = 2^17 rather than by 2^18 - 1 = $3FFFF. Another possibility is that s/he uses half the range of the A/D and compensates for it this way.
If you want 20 V to become microvolts, you need to multiply by 1e6. But to avoid overflowing the long, s/he splits the multiplication into two parts (*2000 and *10000). Because of the intermediate division, the number gets small enough to be multiplied by 10000 at the end without overflowing, at the expense of possibly losing some least significant bit(s) of the result.
P.S. I use $ as equivalent to 0x, due to many years of habit in certain assembly languages.
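To make the unpacking and the scaling concrete, here is a small Python sketch of the same two steps; the generalized shift loop and the one-step floating-point scaling are my reading of the C code, with the 131072 = 2^17 divisor kept as-is:

def unpack_18bit(data):
    """Rebuild eight 18-bit ADC words from nine 16-bit bus reads.

    data: list of nine 16-bit values as read from the AD7609.
    Returns eight raw 18-bit readings (0 .. 0x3FFFF); each line of the
    original C code is one iteration of this loop.
    """
    temp = []
    shift_out = 2  # bits of word i already shifted into data[i]
    for i in range(8):
        hi = (data[i] << shift_out) & 0x3FFFF
        lo = (data[i + 1] >> (16 - shift_out)) & ((1 << shift_out) - 1)
        temp.append(hi + lo)
        shift_out += 2
    return temp

def to_microvolts(raw, fsr_volts=20.0):
    """Convert a raw 18-bit reading to microvolts.

    Same scaling as (((temp[i]*2000)/131072)*10000), but in floating
    point and in one step, so nothing is truncated:
    raw * 2000 / 131072 * 10000  ==  raw * 20 V / 2^17 * 1e6.
    """
    return raw * fsr_volts / 131072 * 1_000_000

print(round(to_microvolts(12464)))  # -> 1901855, i.e. ~1.902 V as in the table

Applied to the correctly aligned reading 12,464 from the table, the scaling gives about 1.902e6 µV, i.e. 1.902 V; this suggests the mismatch in the updated question comes from treating the 16-bit bus words data[i] as if they already held the aligned 18-bit readings.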

Graph branch decomposition

Hello,
I would like to know of an algorithm that decomposes a graph into ranked branches in the following way:
Rank | path (or tree branch)
  0  | 1-2
  1  | 2-3-4-5-6
  1  | 2-7
  2  | 7-8
  2  | 7-9
Node 1 would be the root node, and nodes 6, 8 and 9 would be the end nodes.
The rank of a branch should be given by the number of bifurcation nodes between it and the root node. Let's assume that the graph has no loops (but I'd like to not have that constraint).
I am an electrical engineer, and perhaps this is a very standard problem, but so far I have only found the BFS algorithm for getting paths, and all the cut-set material; I don't know whether it applies.
I hope that my question is clear enough.
PS: should this question be on Stack Overflow?
From your example, I'm making some assumptions:
You want to bifurcate whenever a node's degree is > 2
Your input graph is acyclic
With an augmented BFS, this is possible starting from the root r. The following will generate comp_groups, a list of components (each of which is a list of its member vertices). The rank of each component is stored under the same index in the list rank.
comp[1..n] = -1        // init all vertices to belong to no component
comp[r] = 0            // r is part of component 0
comp_groups = [[r]]    // a list of lists, starting with component 0
rank[0] = 0            // component 0 (contains root) has rank 0
next_comp_id = 1
queue = {r}            // queues for BFS
next_queue = {}

while !queue.empty()
    for v in queue
        for u in neighbors(v)
            if comp[u] == -1                 // test if u is unvisited
                if degree(v) > 2             // v is a bifurcation node
                    comp[u] = next_comp_id   // start a new component
                    next_comp_id += 1
                    rank[comp[u]] = rank[comp[v]] + 1  // new comp's rank is +1
                    comp_groups += [[v]]     // new component starts at v
                else
                    comp[u] = comp[v]        // use same component
                comp_groups[comp[u]] += [u]  // add u to the component
                next_queue += {u}            // add u to next frontier
    queue = next_queue                       // move on to next frontier
    next_queue = {}
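Here is a runnable Python version of the same idea, sketched under the same assumptions (acyclic input, adjacency given as a dict of neighbor lists; all names are mine):

from collections import deque

def branch_decomposition(adj, root):
    """Decompose an acyclic graph into ranked branches via augmented BFS.

    adj:  dict mapping each vertex to a list of its neighbors.
    root: the root vertex.
    Returns (comp_groups, rank): the component vertex lists and, at the
    same index, each component's rank (bifurcations between it and root).
    """
    comp = {v: -1 for v in adj}  # -1 = not yet in any component
    comp[root] = 0
    comp_groups = [[root]]       # component 0 starts at the root
    rank = [0]                   # component 0 has rank 0
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if comp[u] != -1:
                continue                       # already visited
            if len(adj[v]) > 2:                # v is a bifurcation node
                comp[u] = len(comp_groups)
                comp_groups.append([v])        # new branch starts at v
                rank.append(rank[comp[v]] + 1) # one more bifurcation deep
            else:
                comp[u] = comp[v]              # continue the current branch
            comp_groups[comp[u]].append(u)
            queue.append(u)
    return comp_groups, rank

# the example from the question: root 1, bifurcations at nodes 2 and 7
adj = {1: [2], 2: [1, 3, 7], 3: [2, 4], 4: [3, 5], 5: [4, 6],
       6: [5], 7: [2, 8, 9], 8: [7], 9: [7]}
groups, ranks = branch_decomposition(adj, 1)
for r, grp in zip(ranks, groups):
    print(r, "-".join(map(str, grp)))

Run on the question's example, this prints exactly the table above: 0 1-2, 1 2-3-4-5-6, 1 2-7, 2 7-8, 2 7-9.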

GameMaker: How to freeze enemies temporarily?

I tried setting Enemy.path_speed = 0 and then setting alarm[0] = 5; when alarm[0] fires, it simply sets Enemy.path_speed = 100 (the default value) again. But it does not work: the enemies are frozen forever. How else can I freeze the enemies temporarily when I hit Space?
Hard-coding path_speed = 0 and path_speed = 100 is not a good idea; for example, different objects can have different speeds. I use a speed factor instead, like speed = normal_speed * k, where k is 1 for normal speed and 0 for a full stop.
Enemy Create event:
spd = irandom_range(5, 10) // different speeds, just as an example
path_start(path0, spd, 1, true)
path_position = random(1)
k = 1
Enemy Step event:
path_speed = spd * k
Controller Space key pressed event:
with (o_enemy)
    k = 0
alarm[0] = 3 * room_speed // with covers only the next statement, so this alarm is set on the controller itself
Controller Alarm0 event:
with (o_enemy)
    k = 1
You just set the alarm; you never decrease it. So your statement is never true, and hence the objects do not move.
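For what it's worth, the speed-factor-plus-countdown pattern translates directly; here is a minimal Python sketch (a stand-in step loop, not GameMaker code), which also shows the countdown that the question's alarm never performs:

class Enemy:
    def __init__(self, normal_speed):
        self.normal_speed = normal_speed
        self.k = 1.0           # speed factor: 1 = normal, 0 = frozen
        self.freeze_timer = 0  # steps remaining, like a GameMaker alarm

    def freeze(self, steps):
        self.k = 0.0
        self.freeze_timer = steps

    def step(self):
        # the alarm must count down every step -- the missing piece when
        # enemies stay frozen forever
        if self.freeze_timer > 0:
            self.freeze_timer -= 1
            if self.freeze_timer == 0:
                self.k = 1.0   # timer expired: restore normal speed
        return self.normal_speed * self.k  # effective path speed this step

# freeze every enemy for 3 seconds at 30 steps per second:
enemies = [Enemy(5), Enemy(8)]
for e in enemies:
    e.freeze(3 * 30)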
