I have successfully connected three load cells to a Particle Photon using HX711 amplifiers. The fourth load cell shows large negative values (e.g. -69798) when a certain amount of weight is added. The load cell should be able to measure up to 10 kg, as the other three do.
Is there a possible explanation for this outcome? All four load cells are connected to a single plate. When lowering the weight onto the plate, the values look good; but when adding about 3-4 kg, three of the load cells show good values while the fourth shows large negative values.
Simple solution: just turn the load cell 180 degrees around its vertical axis. Sometimes the manufacturer puts the sticker on the wrong way around.
This should solve it.
I have an experiment where we are going to plant 4 different bush species in two square plantings of different sizes (8x8 bushes and 12x12 bushes), and we want their placement in the plantings to be randomized. I thought of trying to create a matrix in R for this purpose, since it's the software I'm used to. The issue is that the randomization has to follow a few limitations, and I'm not that experienced in creating matrices. The limitations are:
There has to be one of each species in each corner of the planting.
The two outer rows in the square have to contain an equal amount of each species.
The inner 4x4 quadrant in the square has to contain an equal amount of each species.
Plants of the same species can't be planted next to each other, but if they are placed on the diagonal of each other it's fine.
I hope the issue is clear, otherwise ask!
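For concreteness, a minimal restart-on-dead-end sketch in R of one possible approach (the function name is mine; it treats everything outside the inner 4x4 as a single quota region, which matches the 8x8 case exactly but is stricter than necessary for the 12x12 case):

randomize_planting <- function(n = 8, n_species = 4, max_tries = 10000) {
  inner <- (n / 2 - 1):(n / 2 + 2)               # rows/cols of the central 4x4
  quota_inner <- rep(16 / n_species, n_species)  # 4 of each species inside
  quota_outer <- rep((n^2 - 16) / n_species, n_species)
  for (try in seq_len(max_tries)) {
    m <- matrix(NA_integer_, n, n)
    qi <- quota_inner
    qo <- quota_outer
    ok <- TRUE
    for (idx in seq_len(n * n)) {                # fill the grid row by row
      r <- (idx - 1) %/% n + 1
      c <- (idx - 1) %% n + 1
      is_inner <- (r %in% inner) && (c %in% inner)
      q <- if (is_inner) qi else qo
      cand <- which(q > 0)                       # species with quota left
      if (r > 1) cand <- setdiff(cand, m[r - 1, c])  # differ from cell above
      if (c > 1) cand <- setdiff(cand, m[r, c - 1])  # differ from cell left
      if (length(cand) == 0) { ok <- FALSE; break }  # dead end: restart
      s <- if (length(cand) == 1) cand else sample(cand, 1)
      m[r, c] <- s
      if (is_inner) qi[s] <- qi[s] - 1 else qo[s] <- qo[s] - 1
    }
    # corner rule: one of each species in the four corners
    if (ok && length(unique(c(m[1, 1], m[1, n], m[n, 1], m[n, n]))) == n_species)
      return(m)
  }
  stop("no valid layout found; increase max_tries")
}

layout8 <- randomize_planting(8)   # one admissible 8x8 layout

Diagonal neighbours are deliberately not checked, per the last limitation.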
I have a large database composed of observations of icebergs. Successive observations of the same iceberg, or observations of pieces that have broken away from the main iceberg, are connected using unique identifiers of each iceberg ("inst") and the preceding observation of that iceberg ("motherinst").
I need to isolate the branches of the iceberg "tree": the branch that follows the iceberg with the largest surface area, and, whenever a smaller iceberg breaks away, a new branch following that iceberg too.
Using igraph, I currently have the surface area ("area") assigned as a vertex attribute, but I suspect that this will need to be re-assigned as an edge attribute to weight the edges.
Here is a sample from the database that, together with the code below, I have used to produce the plot.
library(igraph)

# build a directed graph from the edge list (motherinst -> inst)
g10 <- graph_from_data_frame(edgelist_df)
# label each vertex with that iceberg's surface area, rounded to 1 decimal
V(g10)$area <- round(edgelist_df$area, digits = 1)[match(V(g10)$name, edgelist_df$inst)]
par(mar = c(0, 0, 0, 0))
plot(g10, vertex.label = V(g10)$area)
In this example plot, the vertices are labelled with the size of each iceberg. Each branch that I am trying to isolate is circled. The red indicates the original icebergs. The blue indicates icebergs that have broken away. The green indicates icebergs that have broken away again. I need to isolate all of these branches.
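For reference, one way the branches might be pulled out with igraph (a sketch, assuming edges run motherinst -> inst and the area attribute is assigned as above; the helper names are mine):

# from a starting iceberg, keep following the child with the largest area
follow_main <- function(g, v) {
  path <- v
  repeat {
    kids <- as.integer(neighbors(g, path[length(path)], mode = "out"))
    if (length(kids) == 0) return(V(g)$name[path])   # reached a leaf
    path <- c(path, kids[which.max(V(g)$area[kids])])
  }
}

# a branch starts at each original iceberg (no mother) and at every
# smaller child that broke away (not the largest among its siblings)
roots <- as.integer(V(g10)[degree(g10, mode = "in") == 0])
side_starts <- unlist(lapply(as.integer(V(g10)), function(v) {
  kids <- as.integer(neighbors(g10, v, mode = "out"))
  if (length(kids) > 1) setdiff(kids, kids[which.max(V(g10)$area[kids])])
}))
branches <- lapply(c(roots, side_starts), function(s) follow_main(g10, s))

If this is right, keeping area as a vertex attribute is enough; no edge weights are needed.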
Thank you for your help!
Can the A* algorithm be efficiently applied to an NxM rectangular grid where moving into any cell has a varying travel cost, and the starting location is not a single cell but is composed of multiple cells, say a cluster of neighbouring cells, where a neighbour of a cell is any of the eight cells surrounding it? (The ending location is similar.)
If so, can anyone please show the way, and if not, what would be a good procedure to tackle the problem?
Can the A* algorithm be efficiently applied to an NxM rectangular grid with varying travel cost [...]
Yes, A* works on graphs of any type as long as edge costs are non-negative. Instead of phrasing the graph as a grid with regular neighbours, construct it so that grid cells which are connected have edges between them.
Edge costs can be arbitrary as long as they are non-negative (positive or zero).
Make sure that your heuristic remains admissible.
and starting location is not a single cell but is composed of multiple closed cells;
Yes, this is possible.
Decide on an initial cost for each of the start cells; zero for all of them is easiest.
Then, instead of adding just a single start vertex to the priority queue, add all of the start cells.
say a cluster of neighbouring cells where a neighbour of a cell is any of the eight cells surrounding it? (The ending cell is similar to that.)
The goal can also be composed of multiple cells.
Just make it so that reaching any of the goal cells terminates the search.
Make sure to compute the heuristic as the minimum of the individual heuristics to each goal cell (i.e. compute the heuristic for each goal cell, then take the minimal value), as in the sketch below.
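A compact sketch of the whole scheme in R (all names are mine; it uses a naive list-based priority queue, and the heuristic is the Chebyshev distance to the nearest goal cell scaled by the smallest cell cost, which stays admissible for 8-connected moves with non-negative entry costs):

astar_multi <- function(cost, starts, goals) {
  n <- nrow(cost); m <- ncol(cost)
  cmin <- min(cost)
  # heuristic: minimum over all goal cells, as described above
  h <- function(r, c) cmin * min(pmax(abs(goals[, 1] - r), abs(goals[, 2] - c)))
  g <- matrix(Inf, n, m)              # best known cost-so-far per cell
  open <- list()                      # naive priority queue
  for (i in seq_len(nrow(starts))) {  # seed the queue with ALL start cells
    r <- starts[i, 1]; c <- starts[i, 2]
    g[r, c] <- 0                      # initial cost zero for every start cell
    open[[length(open) + 1]] <- c(r, c, h(r, c))
  }
  goal_key <- paste(goals[, 1], goals[, 2])
  while (length(open) > 0) {
    f <- vapply(open, function(x) x[3], numeric(1))
    i <- which.min(f)                 # pop the node with the smallest f
    node <- open[[i]]; open[[i]] <- NULL
    r <- node[1]; c <- node[2]
    if (paste(r, c) %in% goal_key) return(g[r, c])  # any goal ends the search
    for (dr in -1:1) for (dc in -1:1) {             # 8-connected neighbours
      if (dr == 0 && dc == 0) next
      nr <- r + dr; nc <- c + dc
      if (nr < 1 || nr > n || nc < 1 || nc > m) next
      ng <- g[r, c] + cost[nr, nc]    # pay the cost of entering the cell
      if (ng < g[nr, nc]) {
        g[nr, nc] <- ng
        open[[length(open) + 1]] <- c(nr, nc, ng + h(nr, nc))
      }
    }
  }
  Inf                                 # no goal cell is reachable
}

# uniform costs, a two-cell start cluster, a single goal cell: returns 4
astar_multi(matrix(1, 5, 5), rbind(c(1, 1), c(1, 2)), rbind(c(5, 5)))

A binary heap would replace the linear-scan queue for large grids; the scan just keeps the sketch short.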
I need a way to measure the colour correlation of pixels. For example, it is obvious that the colour correlation between the first two pixel chains (in the first image) is higher than between the two chains below (in the second image).
OK, I can correlate the R, G, B values separately, but what do I do next? I need to obtain only one figure, not three.
I can also transform RGB to an HSV (hue-based) representation, but it looks like for all "grey" colours (from black to white) the H component is 0, so correlating grey pixels of different luminance will give the same value...
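A quick check (here with R's grDevices::rgb2hsv) shows the problem:

rgb2hsv(0, 0, 0)        # black:    h = 0, s = 0, v = 0
rgb2hsv(128, 128, 128)  # mid grey: h = 0, s = 0, v = 0.5
rgb2hsv(255, 0, 0)      # pure red: h = 0 as well, with s = 1, v = 1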
So, I need your suggestions and help :)
thanks
In order to compare the R, G and B components of two pixels, cosine similarity (http://en.wikipedia.org/wiki/Cosine_similarity) is certainly the best solution.
After that, if you want to compare the arrays by comparing the pairs of pixels at the same index, you can simply sum the per-pixel values. You can instead sum a function that falls off rapidly as the similarity drops (for example the square) if you want to penalise cases where nearly all the pixels are the same but one or two differ strongly. You can take the square root if you want the opposite (favour the number of nearly identical pixels rather than the fewest differences).
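A minimal sketch of this in R (the function names are mine; each chain is assumed to be a 3-column matrix with one R, G, B row per pixel):

# cosine similarity between two RGB pixels; note that an all-zero
# (black) pixel yields NaN and needs special-casing
cosine_sim <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

# per-pixel similarities of same-index pixels, aggregated with an
# optional emphasis function f (e.g. square or sqrt, as discussed above)
chain_similarity <- function(p, q, f = identity) {
  sims <- sapply(seq_len(nrow(p)), function(i) cosine_sim(p[i, ], q[i, ]))
  mean(f(sims))
}

p <- rbind(c(250, 10, 10), c(200, 40, 40))
q <- rbind(c(255,  0,  0), c(210, 30, 30))
chain_similarity(p, q)                    # close to 1: similar colours
chain_similarity(p, q, function(x) x^2)   # weights weak matches more heavily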
You are given an m*n grid where each cell is marked either "b" or "w". You are also given black and white paint. You are allowed to use k strokes, each of either colour (black OR white); a stroke is defined as colouring a contiguous run of uncoloured cells within a single row (so a stroke cannot go beyond the length of the row, and if you lift your brush before the end of the row, that is the end of that stroke). The aim is to minimize the number of errors, where an error occurs if a cell is painted the wrong colour OR remains unpainted. What is the optimal strategy?
Knowing the solution of the one-row problem (the minimal number of errors achievable with k strokes on a given black-and-white row) can be used to solve the full problem.
For each row, make a list of the number of errors for each stroke count k_i = [0, 1, ..., the minimal k needed to cover the i-th row]. Now we have one such list per row (of different sizes). To decide in which rows to use the k strokes, it is enough to repeatedly give the next stroke to the row where it removes the most errors, as in the sketch below.
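A sketch of that greedy step in R, assuming errors[[i]] already holds the error counts for row i with 0, 1, 2, ... strokes (this marginal-gain greedy is exact when each row's error curve is convex in the number of strokes):

allocate_strokes <- function(errors, k) {
  strokes <- rep(0L, length(errors))   # strokes given to each row so far
  for (s in seq_len(k)) {
    # marginal error reduction of one more stroke in each row
    gain <- sapply(seq_along(errors), function(i) {
      e <- errors[[i]]
      if (strokes[i] + 2 <= length(e)) e[strokes[i] + 1] - e[strokes[i] + 2] else 0
    })
    if (max(gain) <= 0) break          # an extra stroke helps nowhere
    i <- which.max(gain)
    strokes[i] <- strokes[i] + 1L
  }
  strokes
}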
So, the main task is to solve the one-row problem, and I'm not sure how :-)
Let C be the number of colour changes in a row. Then the minimal number of strokes to cover the row is ceil(C/2) + 1, assuming a later stroke may paint over an earlier one. That can be achieved by alternating the stroke colour: the first stroke covers the whole row, and each next stroke is painted between the most distant remaining colour changes inside the last stroke. The first stroke has the colour of one (or both) end(s).
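A sketch of this count in R (with C colour changes, ceil(C/2) + 1 overpainting strokes cover the row exactly):

min_strokes <- function(row) {                # row: e.g. c("b", "w", "b")
  C <- sum(row[-1] != row[-length(row)])      # number of colour changes
  ceiling(C / 2) + 1
}

min_strokes(c("b", "b", "b"))         # C = 0 -> 1 stroke
min_strokes(c("b", "w", "b"))         # C = 2 -> 2 strokes
min_strokes(c("b", "w", "b", "w"))    # C = 3 -> 3 strokes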
I think that, with a similar approach, it is possible to find the number of errors when there aren't enough strokes to cover the whole row. Some ranges of one colour then have to be omitted. That can be done by:
starting with a colour that is not on a boundary (omitting the first stroke),
placing some strokes not between the most distant changes in the last stroke, but between closer changes.
I'm not sure, but it seems that it is enough to find the few smallest same-colour runs; those are what will remain as errors. It is probably also important how far these runs are from the ends.
That's it for now ...