Accelerometer using ADXL345 for Earthquake Detection - arduino

Well, I want to ask if the ADXL345 can be used to detect the occurrence of an earthquake based on its magnitude/intensity level. To be more specific, I want to use an accelerometer to build a device that can detect the intensity/magnitude level of an earthquake.
I have absolutely no experience in this field, but it looks useful and fascinating.
My questions are:
Is this device able to detect medium-scale earthquakes?
If yes, has anybody done it and is willing to share their experience?
If not, is there any guide that explains the algorithms, calculations and mechanical plans?

That sensor is not suitable. It has 13-bit resolution over a ±16 g full range, which works out to roughly 0.004 g (about 4 mg) per LSB. To detect an earthquake directly below you, you need a sensitivity of approximately a few milli-g (e.g. see here), and even less for earthquakes with an epicentre elsewhere.
You want a sensor that is more sensitive by a factor of about 100, and probably with more resolution (a better ADC), too.
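For reference, the arithmetic behind that per-LSB figure, as a small C++ snippet (the only inputs are the datasheet numbers quoted above):

```cpp
#include <cstdio>

int main() {
    // ADXL345 in full-resolution mode: 13-bit signed output over +/-16 g,
    // i.e. a 32 g span divided into 2^13 = 8192 steps.
    const double full_span_g = 32.0;
    const double steps       = 8192.0;               // 2^13
    const double lsb_g       = full_span_g / steps;  // ~0.0039 g per LSB

    printf("ADXL345 resolution: %.4f g per LSB (~%.1f mg)\n",
           lsb_g, lsb_g * 1000.0);
    // Compare with the rough requirement quoted above: a few milli-g for a
    // quake directly below you, and well under that for distant events.
    return 0;
}
```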
(And you should have been able to do this quick google-search analysis yourself ;) )

An accelerometer reading tells you nothing about the actual magnitude of the quake itself; it tells you the size of the quake at your location. Combining location and amplitude will give you a 'weighted' measurement, but that is still useless without a calibration curve. Without knowing what acceleration, at a certain distance, corresponds to what magnitude, you will be unable to tell what the magnitude is. You can certainly conclude that your measured earthquake has, say, a median amplitude 2000% of a non-earthquake reading, but you won't be able to turn that into a Richter measurement. To do that you would need to take data during earthquakes of known magnitude and then work out how acceleration, distance and magnitude are related for your device. Alternatively, you could use a scale based on local shaking, such as the Shindo scale (just Google it).
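To make that concrete, here is a minimal Arduino-style sketch of the kind of measurement you would actually be taking: record the peak ground acceleration (PGA) over a window and compare it against a table of PGA-to-intensity thresholds. The driver helper and the threshold values are placeholders; you would fill them in from your ADXL345 library and from a published intensity scale or your own calibration runs:

```cpp
// Placeholder: replace with real ADXL345 driver calls (e.g. the Adafruit
// library) and convert the result to g.
void readAccelerationG(float &ax, float &ay, float &az) {
  ax = 0; ay = 0; az = 0;
}

// Placeholder thresholds (in g) separating "intensity levels" -- replace with
// values from a published scale or from your own calibration.
const float INTENSITY_THRESHOLDS_G[] = {0.002, 0.014, 0.039, 0.092, 0.18};

float peakG = 0.0;
unsigned long windowStart = 0;
const unsigned long WINDOW_MS = 10000;  // measure PGA over 10 s windows

void setup() {
  Serial.begin(9600);
  windowStart = millis();
}

void loop() {
  float ax, ay, az;
  readAccelerationG(ax, ay, az);

  // Remove the ~1 g gravity component by taking the deviation of the vector
  // magnitude from 1 g (crude, but enough for a sketch).
  float mag = sqrt(ax * ax + ay * ay + az * az);
  float shake = fabs(mag - 1.0);
  if (shake > peakG) peakG = shake;

  if (millis() - windowStart >= WINDOW_MS) {
    int level = 0;
    for (unsigned int i = 0;
         i < sizeof(INTENSITY_THRESHOLDS_G) / sizeof(INTENSITY_THRESHOLDS_G[0]);
         i++) {
      if (peakG >= INTENSITY_THRESHOLDS_G[i]) level = i + 1;
    }
    Serial.print("PGA (g): ");
    Serial.print(peakG, 4);
    Serial.print("  rough level: ");
    Serial.println(level);
    peakG = 0.0;
    windowStart = millis();
  }
}
```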

Related

Measure distance of a pulley in Arduino/Raspberry PI

I'm trying to measure the distance an object is pulled along a pulley track using arduino (or Rasp Pi) sensors. I have an object that is manually pulled, from a resting position, on a pulley system and I need to be able to track the distance it travels over one pull.
Example: The object traveled 90% (example) of the total pulley distance for 5/10 pulls.
Example: The object traveled 11.53 ft along the pulley.
See the image below for a visual diagram. I have two ideas, but I'm not an Arduino sensor expert. I'd love input on an elegant solution.
1. Use an Arduino sensor to sense how many times the pulley rotates, then use an equation to determine the distance.
2. Use a sensor that senses the length of cord pulled, perhaps in each direction, and record that distance. I got this idea from pump sensors; no idea if something comparable exists.
Really just looking for advice on what sensors to use, how to implement them (general), and what type of metrics I could record.
There are at least two ways of doing this:
Use a rotary encoder, which will tell you how many rotations the pulley made. There are two disadvantages: the line can slip on the pulley, giving an inaccurate measurement, and there is no way to know the object's actual position. A rotary encoder only gives you a position relative to the starting position (unless you need no more than one rotation of the pulley, in which case you can use an absolute encoder).
Use a distance sensor (ultrasonic or IR) to measure the height of the object. That way you know the object's exact position, but you may have a problem if the object is too small, has a shape or surface that affects the measurements, or if the space is limited and the sensor picks up walls or other surrounding objects. If the object can swing on the rope, that will probably also be a problem.
The software for either of these solutions should be pretty simple; just decide on the type of sensor - there are plenty of tutorials for all of them.
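For example, a minimal Arduino sketch for the distance-sensor option, assuming an HC-SR04-style ultrasonic sensor and placeholder pin numbers:

```cpp
const int TRIG_PIN = 9;   // placeholder pin assignments
const int ECHO_PIN = 10;

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  // Send a 10 us trigger pulse, then time the echo.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // Echo time in microseconds; sound travels ~0.0343 cm/us, and the pulse
  // covers the distance twice (out and back).
  unsigned long echoUs = pulseIn(ECHO_PIN, HIGH, 30000UL);  // 30 ms timeout
  if (echoUs > 0) {
    float distanceCm = echoUs * 0.0343f / 2.0f;
    Serial.println(distanceCm);
  }
  delay(100);
}
```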
Both tasks 1 and 2 can be done easily with a digital rotary encoder at very nominal cost. It can sense the direction and distance of travel quite accurately.
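And a minimal sketch of the rotary-encoder approach, assuming a quadrature encoder with channel A on interrupt pin 2 and channel B on pin 3; the counts-per-revolution and pulley circumference below are placeholders you would measure for your own hardware:

```cpp
const int ENC_A = 2;                 // interrupt-capable pin on most Arduinos
const int ENC_B = 3;
const float COUNTS_PER_REV   = 400.0;  // placeholder: your encoder's spec
const float CIRCUMFERENCE_FT = 0.5;    // placeholder: measured pulley circumference

volatile long counts = 0;

void onEncoderA() {
  // On each rising edge of A, the level of B gives the direction of rotation.
  if (digitalRead(ENC_B) == HIGH) counts++;
  else                            counts--;
}

void setup() {
  Serial.begin(9600);
  pinMode(ENC_A, INPUT_PULLUP);
  pinMode(ENC_B, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(ENC_A), onEncoderA, RISING);
}

void loop() {
  noInterrupts();
  long c = counts;           // copy the volatile counter atomically
  interrupts();

  float revolutions = c / COUNTS_PER_REV;
  float distanceFt  = revolutions * CIRCUMFERENCE_FT;
  Serial.print("Distance pulled (ft): ");
  Serial.println(distanceFt);
  delay(250);
}
```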

estimating distance to ibeacon AVR

I want to ask about iBeacon advertising, especially the Tx Power byte.
I used two BLE modules, an HM-10 and an HM-11. I set up one as an iBeacon (the HM-10) and used the other to connect and listen to the HM-10's broadcasts.
The HM-11 is tied to an ATmega32 AVR MCU, and I use scanf to read the broadcast. I want to extract the last byte (Tx Power) and measure the distance in the AVR code.
Could you tell me the algorithm?
The formula Apple uses to calculate a distance estimate to an iBeacon is not published. There are a number of alternative formulas, including this one based on a best-fit power curve, which we wrote for the Android Beacon Library.
Further research we have done shows that the formula above basically works, but it has two main imperfections:
It does not work well for weaker beacon transmitters. With weaker broadcasts, the distance is underestimated.
It does not account for varying signal gains in receivers. Different receivers have different antennas and receivers which measure the same signals differently.
There is an ongoing discussion of the best formula here.
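For reference, that curve-fit formula is usually quoted in roughly the following form; the three coefficients were fitted against one particular Android handset, so treat them as starting values to re-fit for an HM-11 + ATmega32 receiver rather than constants. Ported here to plain C-style code so it can run on the AVR:

```cpp
#include <math.h>

// RSSI-to-distance estimate based on the best-fit power curve approach used
// by the Android Beacon Library. txPower is the calibrated RSSI at 1 m (the
// last byte of the iBeacon advertisement, interpreted as a signed value);
// rssi is the measured signal strength in dBm.
double estimateDistanceMeters(int txPower, double rssi) {
  if (rssi == 0) return -1.0;             // no valid reading

  double ratio = rssi / (double)txPower;  // both values are negative dBm
  if (ratio < 1.0) {
    return pow(ratio, 10.0);              // closer than the 1 m reference
  }
  // Coefficients fitted for one specific receiver -- re-calibrate for yours.
  return 0.89976 * pow(ratio, 7.7095) + 0.111;
}
```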
A bit late, but hopefully useful to others. I have given up on Apple's "Accuracy" number; as @davidyoung points out, different devices will have different signal gains. I am not an engineer but more of a math and statistics person, so I have gone down the route of "fingerprinting" an indoor space instead.
Essentially, I read the RSSI of every beacon installed in a given "venue". Some may not be within reach, in which case I just assume an RSSI of -95 dBm (which seems to be the floor below which a signal is no longer read). The resulting array has the same beacons in the same positions at all times (even across app launches). I compute a 5-second moving average for each beacon (so I keep 5 arrays to do that). The resulting average array is then shifted up by 95 units and normalised so that the sum of all of its values is one.
If you want to tag an indoor "point", you collect many of these normalised average arrays on that specific spot and build up a database of "spots". To estimate your proximity to any spot in the database, you simply compute the quadratic distance between your current reading and all of the fingerprints in the database.
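A rough sketch of the normalisation and matching steps described above (the names are mine; the -95 dBm floor and the quadratic distance are just the values and terms from the description):

```cpp
#include <vector>
#include <limits>

// One reading: RSSI for every installed beacon, in a fixed order, with
// -95 dBm substituted for beacons that were not heard.
using Reading = std::vector<double>;

// Shift a (possibly moving-averaged) RSSI vector up by 95 and normalise it
// so that its values sum to one.
Reading normalise(const Reading& avg) {
  Reading out(avg.size());
  double sum = 0.0;
  for (size_t i = 0; i < avg.size(); ++i) {
    out[i] = avg[i] + 95.0;
    sum += out[i];
  }
  if (sum > 0.0)
    for (double& v : out) v /= sum;
  return out;
}

// Quadratic (squared Euclidean) distance between the current normalised
// reading and a stored fingerprint.
double quadraticDistance(const Reading& a, const Reading& b) {
  double d = 0.0;
  for (size_t i = 0; i < a.size(); ++i) {
    double diff = a[i] - b[i];
    d += diff * diff;
  }
  return d;
}

// Index of the closest fingerprint ("spot") in the database.
size_t closestSpot(const Reading& current, const std::vector<Reading>& db) {
  size_t best = 0;
  double bestDist = std::numeric_limits<double>::max();
  for (size_t i = 0; i < db.size(); ++i) {
    double d = quadraticDistance(current, db[i]);
    if (d < bestDist) { bestDist = d; best = i; }
  }
  return best;
}
```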
Which beacons to use? At least class 2 in power. How many? At least a couple per room (put them in two adjacent corners, on the ceiling or high up).
The last step is to match the fingerprints to x,y coordinates on your map. I never did this step, because I am mainly interested in proximity applications rather than fully fingerprinting an indoor space.
Perhaps the discussion above will serve as guidance on a technique used by many indoor-location companies.
Disclosure: I have recently open-sourced my code for the above calculations.

ATmega128 microprocessor, issue regarding error when measuring distance in timer ticks?

OK, so a laser on Earth hits a mirror on the Moon and bounces back. On the ATmega128, we use TIMER1 to capture the clock tick count when the laser is fired and when the reflection returns, subtract the two, and get a "distance" in clock ticks (16 MHz clock on the ATmega128).
We are supposed to determine how different this measured distance can be from the actual distance and what causes the difference, and also to compute the maximum error for each legal prescaler of TIMER1.
Looking at TIMER1's registers and the input-capture section of the ATmega128 datasheet, I cannot find any kind of percentage error for input capture. This seems like a conceptual question, yet we are supposed to pull values out of the air and calculate something?
My question, for anyone who knows the ATmega128: what values are being referred to when determining the error of a distance measured in timer ticks? My only guess is that the error grows as you use higher prescalers, because you lose precision as the prescaler gets larger. But again, that is a conceptual answer, and I don't understand how I would calculate anything.
The counters/prescalers can be assumed to be perfect and will not cause any loss of resolution.
Your original clock source will be a predominant source of errors. If you are using an external clock with a crystal, these are usually good to 50 ppm (part per million) or better. If you are using an internal clock, the error is much higher (1% is not unreasonable for some microcontrollers).
The whole thing gets tricky if you remember your general relativity (you do have a PhD in Physics right?). The earth's rotation and gravity come into play wrt the speed of light and distance...
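To put numbers on the two error sources, here is a small worked example; the 384,400 km figure is just the average Earth-Moon distance used for illustration, and 50 ppm is a typical crystal spec rather than something from the ATmega128 datasheet:

```cpp
#include <cstdio>

int main() {
  const double c      = 299792458.0;   // speed of light, m/s
  const double f_clk  = 16e6;          // ATmega128 clock, Hz
  const double moon_m = 384400e3;      // average Earth-Moon distance, m
  const double ppm    = 50e-6;         // typical crystal tolerance

  const int prescalers[] = {1, 8, 64, 256, 1024};  // legal TIMER1 prescalers

  // Quantisation error: one timer tick is N / f_clk seconds of *round-trip*
  // time, i.e. c * N / (2 * f_clk) metres of one-way distance.
  for (int N : prescalers) {
    double metres_per_tick = c * N / (2.0 * f_clk);
    printf("prescaler %4d: +/- %.1f m per tick\n", N, metres_per_tick);
  }

  // Clock-accuracy error: a 50 ppm crystal scales the whole measurement,
  // so this error grows with the distance itself.
  printf("50 ppm of the Earth-Moon distance: ~%.1f km\n",
         moon_m * ppm / 1000.0);
  return 0;
}
```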

Which codec is best and what should be their parameters value?

I'm a beginner in the field of audio codecs and find it hard to understand how the sampling rate, bit rate and other parameters affect the encoding/decoding (audio format), the audio quality and the file size.
I read that constant bit rate is better than variable bit rate, but how do I know what bit rate is enough to encode the file as small as possible without compromising quality? I'm specifically focusing on audio codecs for the present.
I have heard about Opus, SILK, G.722 and Speex, but I don't know which one I should use to get better quality and a smaller file size. Also, what parameters should I set for these codecs so that they work effectively for me?
Can anyone guide me on this?
Thanks in advance
If you think of the original analogue music as a sound wave, then converting it to digital means approximating that wave with digital samples. The sampling rate is how many points on that wave you take per unit time, so the higher the sampling rate, the closer you are to the original sound. A lower sampling rate means higher compression but lower audio quality.
Similarly, the bit rate is effectively "how much" information you encode at each point, so again, a lower bit rate means higher compression but lower audio quality.
Compression algorithms generally use psychoacoustics to determine what information can be lost with the least audible difference. Some sections of a track can tolerate more of this than others, so using a variable bit rate lets you achieve higher compression without a "big" audible drop in quality.
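The file-size side of the trade-off is plain arithmetic: compressed size is roughly bit rate times duration. A small worked example with purely illustrative numbers:

```cpp
#include <cstdio>

int main() {
  const double duration_s = 240.0;     // a 4-minute track (illustrative)

  // Uncompressed PCM: sample rate x bit depth x channels.
  const double sample_rate = 44100.0;  // samples per second
  const double bit_depth   = 16.0;     // bits per sample
  const double channels    = 2.0;
  double pcm_bytes = sample_rate * bit_depth * channels * duration_s / 8.0;

  // Compressed: roughly the target bit rate times the duration
  // (for VBR the bit rate is an average, so this is only an estimate).
  const double bitrate_bps = 96000.0;  // 96 kbit/s (illustrative)
  double compressed_bytes  = bitrate_bps * duration_s / 8.0;

  printf("PCM:        ~%.1f MB\n", pcm_bytes / 1e6);
  printf("Compressed: ~%.1f MB\n", compressed_bytes / 1e6);
  return 0;
}
```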
It's well explained here: Link
I don't know the details of those codecs, but generally what you should use and what parameters you should pass depend on what you're trying to achieve and for what purpose. For portable use, where audio quality might not be paramount, you might pass lower values to get smaller file sizes; for audiophile speakers you probably want to pass the maximum.

Wifi Triangulation

What would be the best way to triangulate a wireless network passively? Are there tools available? Algorithms? Libraries?
My goal is to create a relative map of various objects that send or receive signals, using signal strength (dB), signal-to-noise ratio, signal phase, etc., from a few location points. With enough sampling, I'm guessing it would be possible to create a good 2D/3D map.
I'm searching for stuff in any language / platform.
Some keywords: wi-fi site survey, visualization, coverage, location, positioning
I'm thinking about using Kismet to gather the data and then processing it: maybe use free-space path loss for RF in the 2.4 GHz range to calculate a relative distance, optionally refined with RF obstacle attenuation estimates (based on some user input), and then use trilateration to generate possible relative coordinates.
You can't use the GPS technique because the timing is nothing like accurate enough.
The best you can do is trilateration based on the signal strength from each base station, assuming that range is proportional to signal.
You will probably need to force a connection to each base station in turn in order to measure the signal strength.
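A sketch of that approach, using a free-space path-loss model to turn RSSI into a rough range and a simple three-circle trilateration; the transmit power, frequency and access-point positions below are placeholders, and real indoor propagation will be far from free space:

```cpp
#include <cmath>
#include <cstdio>

struct Point { double x, y; };

// Free-space path-loss estimate of range from RSSI:
//   FSPL(dB) = 20*log10(d_m) + 20*log10(f_MHz) - 27.55
//   => d_m = 10^((txPower_dBm - rssi_dBm - 20*log10(f_MHz) + 27.55) / 20)
// Treat the result as a very rough relative range only.
double rangeFromRssi(double rssi_dbm, double tx_power_dbm,
                     double freq_mhz = 2437.0) {
  double e = (tx_power_dbm - rssi_dbm - 20.0 * log10(freq_mhz) + 27.55) / 20.0;
  return pow(10.0, e);
}

// Trilateration from three access points: subtracting the circle equations
// pairwise gives a 2x2 linear system in (x, y).
bool trilaterate(Point p1, double r1, Point p2, double r2,
                 Point p3, double r3, Point& out) {
  double A = 2.0 * (p2.x - p1.x), B = 2.0 * (p2.y - p1.y);
  double C = r1*r1 - r2*r2 - p1.x*p1.x + p2.x*p2.x - p1.y*p1.y + p2.y*p2.y;
  double D = 2.0 * (p3.x - p2.x), E = 2.0 * (p3.y - p2.y);
  double F = r2*r2 - r3*r3 - p2.x*p2.x + p3.x*p3.x - p2.y*p2.y + p3.y*p3.y;

  double det = A * E - B * D;
  if (fabs(det) < 1e-9) return false;   // access points are collinear
  out.x = (C * E - B * F) / det;
  out.y = (A * F - C * D) / det;
  return true;
}

int main() {
  // Placeholder AP positions (metres), transmit power and measured RSSI.
  Point ap1{0, 0}, ap2{10, 0}, ap3{0, 10};
  double tx_dbm = 20.0;
  double r1 = rangeFromRssi(-37.0, tx_dbm);
  double r2 = rangeFromRssi(-40.0, tx_dbm);
  double r3 = rangeFromRssi(-42.0, tx_dbm);

  Point est;
  if (trilaterate(ap1, r1, ap2, r2, ap3, r3, est))
    printf("estimated position: (%.2f, %.2f)\n", est.x, est.y);
  return 0;
}
```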
Interesting question. My initial thought was to use the output from something like the Wi-Spy spectrum analyzer. I like the idea of using a directional antenna. It looks like some research may be underway.
Instead of trilateration you could use bilinear interpolation. This is said to be better for non-linear distance-vs-signal-strength data, such as Wi-Fi in an urban environment. http://courses.cit.cornell.edu/ee476/FinalProjects/s2007/ayl26_ym82/ayl26_ym82/index.htm has the background math and what I assume is AVR C for doing it with magnetic field sensors.
Using signal strength to judge distance could easily be thrown off by differences in the materials blocking line-of-sight to each of the sampling points. It would probably be better to do the sampling with a directional antenna and, from each sampling point, find the bearing that maximises signal strength to each device you want to locate. With this technique you can use only two or three sampling locations, depending on the accuracy with which you can estimate the bearings.
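A sketch of that bearing-based variant: with two sampling points and the bearing of peak signal strength at each, the transmitter is estimated at the intersection of the two rays (all coordinates and bearings below are placeholders):

```cpp
#include <cmath>
#include <cstdio>

struct Point { double x, y; };

// Intersect two rays p1 + t*d1 and p2 + s*d2, where each direction comes from
// a compass-style bearing in degrees (0 = +y/north, 90 = +x/east).
bool intersectBearings(Point p1, double bearing1_deg,
                       Point p2, double bearing2_deg, Point& out) {
  const double deg = 3.14159265358979323846 / 180.0;
  double d1x = sin(bearing1_deg * deg), d1y = cos(bearing1_deg * deg);
  double d2x = sin(bearing2_deg * deg), d2y = cos(bearing2_deg * deg);

  // Solve p1 + t*d1 = p2 + s*d2 for t using the 2D cross product.
  double denom = d1x * d2y - d1y * d2x;
  if (fabs(denom) < 1e-9) return false;          // bearings are (nearly) parallel
  double t = ((p2.x - p1.x) * d2y - (p2.y - p1.y) * d2x) / denom;
  if (t < 0) return false;                       // intersection lies behind sample 1

  out.x = p1.x + t * d1x;
  out.y = p1.y + t * d1y;
  return true;
}

int main() {
  // Placeholder sampling points (metres) and peak-signal bearings (degrees).
  Point s1{0, 0}, s2{20, 0};
  Point target;
  if (intersectBearings(s1, 45.0, s2, 315.0, target))
    printf("estimated transmitter position: (%.1f, %.1f)\n", target.x, target.y);
  return 0;
}
```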
Ars Technica has an article about this, citing the Fraunhofer Institute and Skyhook Wireless. This technology is built into every iPhone and iPad.
Actually, I think you should try using an algorithm like the one GPS uses (see Wikipedia). Of course you can simplify it according to your needs, for example:
you need to install, on every item that should broadcast its position (the navigation signal), an application that actually does so
you should use a different channel for every single item to be sure not to generate collisions (this also depends on how often you broadcast the signal)
so if you place at least four broadcasters, every client can triangulate to calculate its own position. Naturally, the broadcasters should respond as similarly as possible.
Anyway, these are just ideas.
