I'm working on a MapBasic program and I have a problem. I wrote code that calculates the length of the line the user clicked on, and it also writes out the start and end points of the line separately.
The first question I want to ask is: how can I show the coordinates of the line's start and end points with three decimal digits? The returned values show either two decimal digits or none at all.
The second question I want to ask is: the line length calculated by my program differs from the line length I calculate with a calculator. What is the reason for this?
I converted the layer containing the line object I drew in MapInfo to shapefile format using the Universal Translator tool.
I opened the table in ArcMap. The length of the same line in that table is very close to the length I calculated with my calculator. The result I found in MapInfo is not the same as the result I found in ArcMap.
I wrote another program using MapBasic that creates a dialog. The user creates points in the layer by entering X and Y coordinate values in EditText controls, and after creating a point the program writes its X and Y coordinates on the screen. I created two points with this program, entering the coordinates with three decimal digits, but the X and Y coordinates shown on the screen appear with only two. I then measured the distance between the two points using the ruler in MapInfo, and I also calculated the distance with a calculator from the X and Y coordinates shown on the screen. The length the MapInfo ruler finds is not the same as the length I calculated.
When I tried the program in MapInfo, I set the projection of the layers I created to Turkish Coordinate Systems (3-degree k = 1 ITRF), Central Meridian 33.
Where am I going wrong? Could you help me with this?
Thanks, everyone.
A picture of the MapBasic program I wrote is attached to this question.
In order to calculate the length of a line you don't need the Pythagorean formula; simply use ObjectLen(obje, "m").
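A minimal sketch of how that might look on a selected line (variable names here are illustrative, not from the asker's program):

' Fetch the currently selected object and report its length in meters.
Dim obje As Object
Fetch First From Selection
obje = Selection.obj
Print ObjectLen(obje, "m")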
In order to get the desired number of digits, have a look at the Format$ function. Examples from the documentation:
Format$( 12345, ",#") ' returns "12,345"
Format$(-12345, ",#") ' returns "-12,345"
Format$( 12345, "$#") ' returns "$12345"
Format$(-12345, "$#") ' returns "-$12345"
Format$( 12345.678, "$,#.##") ' returns "$12,345.68"
Format$(-12345.678, "$,#.##") ' returns "-$12,345.68"
Format$( 12345.678, "$,#.##;($,#.##)") 'returns "$12,345.68"
Format$(-12345.678, "$,#.##;($,#.##)") 'returns "($12,345.68)"
Format$(12345.6789, ",#.###") ' returns "12,345.679"
Format$(12345.6789, ",#.#") ' returns "12,345.7"
Format$(-12345.6789, "#.###E+00") ' returns "-1.235e+04"
Format$( 0.054321, "#.###E+00") ' returns "5.432e-02"
Format$(-12345.6789, "#.###E-00") ' returns "-1.235e04"
Format$( 0.054321, "#.###E-00") ' returns "5.432e-02"
Format$(0.054321, "#.##%") ' returns "5.43%"
Format$(0.054321, "#.##\%") ' returns ".05%"
Format$(0.054321, "0.##\%") ' returns "0.05%"
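For the coordinate question specifically, zeros in the pattern force digits to be shown, so a pattern like "#.000" should always give exactly three decimal digits (a sketch with an assumed coordinate value):

Dim x As Float
x = 512345.6789
Print Format$(x, "#.000")  ' should return "512345.679"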
Before you create any objects or perform calculations you should set the coordinate system, for example:
Set CoordSys Earth Projection 25, 1003, "m", 7.4395833333, 46.9524055555, 600000, 200000 Bounds (-99400000, -99800000) (100600000, 100200000)
Note that CoordSys appears in several places; don't mix them up:
Set Map ... CoordSys ..., which applies only to the map window, not to the objects in your code.
Commit Table ... CoordSys ..., which applies to the saved table only.
SessionInfo(SESSION_INFO_COORDSYS_CLAUSE), which returns a string indicating the session's CoordSys clause.
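So before measuring anything, it is worth checking which coordinate system the session is actually using:

' Print the session's current CoordSys clause to the Message window.
Print SessionInfo(SESSION_INFO_COORDSYS_CLAUSE)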
I have a load cell connected to an HX711, which all works fine. I am attempting to create a calibration table with 10 points holding the raw sensor output and the calibrated value, using a set of weights. The load cell is bi-directional, so it works in either direction; the output is positive and negative counts, but zero load does not necessarily give an output of 0.
This all works fine when the numbers in each lookup table are all positive or all negative, but it fails when the captured points contain both negative and positive numbers. E.g., the output from the HX711 is +28,000 with no load. Add a load of 1 kg and get a reading of -56,000. The reading for the next 1 kg (2 kg total) is, say, -83,000. These are stored as {28000, -56000, -83000} in an array, with the calibrated values {0, 1, 2} in another array.
Normally I interpolate the result by finding which two numbers the raw count falls between. Everything works when the readings are less than -56,000 and I get results of 1 to 2 kg. When the reading is greater than -56,000, it fails to calculate and I end up with NaN.
It can also be the other way around, with negative and then positive: {-56000, 28000, 55000}, for example.
How do I handle this situation?
I worked this out not long after I posted the question and thought the answer would help anyone else with the same issue. By comparing the signs of the two bracketing values as I stepped through the table, and swapping them around in the calculation when the signs differ, it works. The difference between 28,000 and -56,000 comes out as 84,000, and using that in the maths works. I confirmed operation by applying 1 kg and 2 kg test loads; it reads in both directions, positive or negative.
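A minimal C++ sketch of that idea (table values taken from the question; names like rawTable and rawToKg are my own, not the poster's code). The key point is that using the signed difference hi - lo for the segment span works whether or not the table crosses zero:

#include <cstdio>

// Calibration points captured with test weights. Raw counts may be
// positive or negative; they only need to change monotonically.
const long rawTable[] = { 28000, -56000, -83000 }; // raw HX711 counts
const float calTable[] = { 0.0f, 1.0f, 2.0f };     // calibrated kg
const int nPoints = 3;

// Linearly interpolate a raw reading to a calibrated value. Signed
// arithmetic on the segment span handles mixed-sign tables.
float rawToKg(long raw) {
    for (int i = 0; i < nPoints - 1; ++i) {
        long lo = rawTable[i], hi = rawTable[i + 1];
        // "Between" test that works in either direction.
        if ((raw <= lo && raw >= hi) || (raw >= lo && raw <= hi)) {
            float span = (float)(hi - lo);         // e.g. -56000 - 28000 = -84000
            float frac = (float)(raw - lo) / span; // 0..1 within the segment
            return calTable[i] + frac * (calTable[i + 1] - calTable[i]);
        }
    }
    return calTable[nPoints - 1]; // out of range: clamp to the last point
}

int main() {
    printf("%.2f kg\n", rawToKg(-14000)); // halfway along segment 0 -> 0.50 kg
    return 0;
}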
I am trying to read a .tif file in Julia as a floating-point array. With the FileIO and ImageMagick packages I am able to do this, but the array I get is of type Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}.
I can convert this fixed-point array to a Float32 array by multiplying it by 255 (because of the UInt8), but I am looking for a function that does this for any type of FixedPointNumber (e.g. reinterpret() or convert()).
using FileIO
# Load the tif
obj = load("test.tif");
typeof(obj)
# Convert to Float32-Array
objNew = real.(obj) .* 255
typeof(objNew)
The output is
julia> using FileIO
julia> obj = load("test.tif");
julia> typeof(obj)
Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}
julia> objNew = real.(obj) .* 255;
julia> typeof(objNew)
Array{Float32,2}
I have been looking in the docs for quite a while and have not found a function to convert a given fixed-point array to a floating-point array without multiplying by the maximum value of the underlying integer type.
Thanks for any help.
Edit:
I made a small gist to check whether the solution by Michael works, and it does. Thanks!
Note: I don't know why, but the real.(obj) .* 255 code does not work (see the gist).
Why not just Float32.()?
using ColorTypes, FixedPointNumbers  # Normed is defined in FixedPointNumbers
a = Gray.(convert.(Normed{UInt8,8}, rand(5,6)));
typeof(a)
#Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}
Float32.(a)
The short answer is indeed the one given by Michael, just use Float32.(a) (for grayscale). Another alternative is channelview(a), which generally performs channel separation thus also stripping the color information from the array. In the latter case you won't get a Float32 array, because your image is stored with 8 bits per pixel, instead you'll get an N0f8 (= FixedPointNumbers.Normed{UInt8,8}). You can read about those numbers here.
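A small sketch of both routes (ImageCore is assumed here to provide channelview; it ships with the Images stack):

using ColorTypes, FixedPointNumbers, ImageCore

a = Gray.(rand(N0f8, 5, 6))     # 8-bit grayscale test image
f = Float32.(a)                 # Array{Float32,2} with values in [0, 1]
c = channelview(a)              # N0f8 view: strips the Gray wrapper, keeps fixed-point
c32 = Float32.(channelview(a))  # both steps: raw channel data as Float32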
Your instinct to multiply by 255 is natural, given how other image-processing frameworks work, but Julia has made some effort to be consistent about "meaning" in ways that are worth taking a moment to think about. For example, in another programming language just changing the numerical precision of an array:
img = uint8(255*rand(10, 10, 3)); % an 8-bit per color channel image
figure; image(img)
imgd = double(img); % convert to double-precision, but don't change the values
figure; image(imgd)
produces the following surprising result:
That second "all white" image represents saturation. In this other language, "5" means two completely different things depending on whether it's stored in memory as a UInt8 vs a Float64. I think it's fair to say that under any normal circumstances, a user of a numerical library would call this a bug, and a very serious one at that, yet somehow many of us have grown to accept this in the context of image processing.
These new types arise because in Julia we've gone to the effort to implement new numerical types (FixedPointNumbers) that act like fractional values (e.g., between 0 and 1) but are stored internally with the same bit pattern as the "corresponding" UInt8 (the one you get by multiplying by 255). This allows us to work with 8-bit data and yet allow values to always be interpreted on a consistent scale (0.0=black, 1.0=white).
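A quick sketch of that equivalence:

using FixedPointNumbers

x = 0.2N0f8             # fixed-point value that acts like 0.2
Float64(x)              # 0.2 (stored internally as 51/255)
reinterpret(UInt8, x)   # 0x33 == 51: the same bits as the UInt8 you'd get by multiplying by 255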
When I assign a .NET array of doubles to an ILNumerics double array, the values appear rounded off to the nearest integer. In particular, this happens only for large arrays.
Is there any way in ILNumerics to specify to how many decimals the rounding should occur?
The following screenshot shows the problem. Sample_pulsedata is a double array of length 1860 which I am assigning to sample_ydata.
The elements are not really rounded. The effect comes from the way the elements are displayed in Visual Studio's data tips. ILNumerics tries to find a common scale factor that allows all elements in an array to be displayed aligned.
In your example there presumably exist large values at higher indices which are not currently shown (scroll down to find them). These elements cause the scale factor to be 1/10^4, which is indicated in the first line, index [0]: '(:;:) 1e+004'. The 32.57 therefore must be rounded to 33 in order to fit into the 4 digits after the decimal point. The '4' is a fixed value in ILNumerics and cannot easily be changed.
The actual values of the array elements are, of course, not affected. You can use the Watch window to show only the interesting part of the array, without the rounding effect:
sample_ydata["0:13"]
Or, even better, use the ILNumerics Array Visualizer to inspect your data graphically. This not only gives a nice overview of the whole array but also avoids artefacts like the one you encountered.
I'm new to Julia and trying to write a simple script to simulate population growth. At each time step the population grows as N(t+1) = N(t)(1 + beta), so at each step I sample from a Poisson distribution with mean N(t+1). I would like to stop when N either reaches a maximum value or reaches 0. I've implemented this in Julia, but the population often goes beyond the maximum value I define. Additionally, any time N reaches 0 I get an error message: ErrorException("lambda must be positive").
using Distributions

function new_pop(N)
    beta = 0.1
    w_fit = 1
    rand(Poisson(N * (1 + w_fit * beta)))
end

pop_S = 10
pop_Max = 100
while (pop_S < pop_Max | pop_S > 0)
    pop_S = new_pop(pop_S)
    println(pop_S)
end
I think you might want || rather than |. A single bar does bitwise OR, whereas two bars is logical OR. (A single | also binds more tightly than the comparison operators, so pop_S < pop_Max | pop_S > 0 does not parse the way you might expect.)
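Beyond the single-versus-double bar issue, note that the loop should continue only while both bounds hold, so a chained comparison (or &&) is what's wanted; that also keeps the Poisson mean strictly positive. A sketch of a corrected version (the last sample can still land above pop_Max, but the loop then stops):

using Distributions

function new_pop(N)
    beta = 0.1
    w_fit = 1
    rand(Poisson(N * (1 + w_fit * beta)))
end

pop_S = 10
pop_Max = 100
while 0 < pop_S < pop_Max         # stops at 0 too, so Poisson never sees lambda <= 0
    global pop_S = new_pop(pop_S) # `global` is needed at top level in Julia 1.x scripts
    println(pop_S)
end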
I have been stumped on this for a long time and I was wondering if anyone can help me out. How could I write a C++ program that calculates the shift value of a Caesar-cipher-encrypted file? My assignment is to count how many times each lowercase letter is used and find the most frequent character. I know I can do that with a map of char to int. But then I would have to map that most frequent letter back to the letter 'e', and with maps there's no way to search back through the values. I think the only way is a vector of vectors, but I wouldn't know how to find the letter again with that either. Does anyone know a better way, or how I could use vectors to accomplish this?
You can go like this:
First, read the whole file into a buffer.
Create a map with char keys and int values, with all letters of the alphabet as keys and 0 as the values.
Loop over the whole buffer to the end, incrementing the value in the map by 1 for each character. Also maintain a max variable storing the key of the character with the maximum count.
At the end of the loop the max variable will point to the letter that 'e' was encrypted to.
Subtracting 4 (the index of 'e') from max gives you the shift value for this cipher. If it comes out negative, add 26 (as this calculation is mod 26).
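A minimal sketch of that approach (the file name is hypothetical; it assumes the plaintext's most frequent letter really is 'e'):

#include <fstream>
#include <iostream>
#include <iterator>
#include <map>
#include <string>

int main() {
    // Read the whole file into a buffer.
    std::ifstream in("cipher.txt");
    std::string buffer((std::istreambuf_iterator<char>(in)),
                       std::istreambuf_iterator<char>());

    // Map each lowercase letter to its count, starting at 0.
    std::map<char, int> counts;
    for (char c = 'a'; c <= 'z'; ++c) counts[c] = 0;
    for (char c : buffer)
        if (c >= 'a' && c <= 'z') ++counts[c];

    // Find the most frequent character.
    char maxChar = 'a';
    for (const auto& kv : counts)
        if (kv.second > counts[maxChar]) maxChar = kv.first;

    // The most frequent ciphertext letter presumably encrypts 'e' (index 4).
    int shift = (maxChar - 'a') - 4;
    if (shift < 0) shift += 26; // keep the result in mod 26
    std::cout << "shift = " << shift << '\n';
    return 0;
}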
All you need is a vector of size 26 (one entry per letter), where 'A' has index 0 and 'Z' has index 25.
Go through the ciphertext and, for each character, increase the value at that character's index in the vector.
When you've gone through all the ciphertext, go through the vector and look for the highest value. That index probably corresponds to the character 'E'.
Now take the index and subtract 4 (the index of 'E').
This yields the shift value.
Let's say index 20 has the highest count; then your shift value is 16.
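The same idea as a sketch with a vector (lowercase here to match the assignment; the sample ciphertext is "meet me near the tree here at three" shifted by 3):

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::string ciphertext = "phhw ph qhdu wkh wuhh khuh dw wkuhh";

    // One counter per letter: index 0 = 'a', index 25 = 'z'.
    std::vector<int> counts(26, 0);
    for (char c : ciphertext)
        if (c >= 'a' && c <= 'z') ++counts[c - 'a'];

    // The index with the highest count presumably encrypts 'e' (index 4).
    int maxIdx = std::max_element(counts.begin(), counts.end()) - counts.begin();
    int shift = (maxIdx - 4 + 26) % 26;
    std::cout << "shift = " << shift << '\n'; // prints 3 for this sample
    return 0;
}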