Physics-based animations on the Web use properties like stiffness, damping, and mass to define spring animations. The values used by react-spring and framer-motion are interchangeable (the same numbers mean the same thing), but it is much harder to map them precisely onto the ranges that Qt SpringAnimation uses.
Relevant Properties (with typical values):

    react-spring     framer-motion      Qt SpringAnimation
    tension   150    stiffness   150    spring    0 - 5.0   🚫
    friction   20    damping      20    damping   0 - 1.0   🚫
    mass        1    mass          1    mass      1.0       ✅
Does anybody have experience with translating these values from design to development?
Is the conversion a simple factor, or something more complex (like Celsius to Fahrenheit)?
I could just try different combinations, but while spring animations might look similar at a glance, they can differ quite a bit in fine detail. A reliable mapping would allow me to design animations in something like Figma and use those values directly in Qt, which usually takes much longer to compile and "play around" with...
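For reference, the underlying physics is well defined: a spring with stiffness k, damping c, and mass m is fully characterized by its natural frequency omega0 = sqrt(k/m) and damping ratio zeta = c / (2*sqrt(k*m)), and two parameterizations produce the same motion exactly when they agree on those two derived values. A minimal Python sketch of the react-spring/framer-motion side (how Qt's spring/damping relate to these quantities is undocumented, so the Qt side still has to be calibrated by eye):

import math

# Sketch: the two derived quantities that fully determine a spring's motion.
def spring_character(stiffness, damping, mass=1.0):
    omega0 = math.sqrt(stiffness / mass)                 # natural frequency (rad/s)
    zeta = damping / (2 * math.sqrt(stiffness * mass))   # damping ratio
    return omega0, zeta

omega0, zeta = spring_character(stiffness=150, damping=20, mass=1.0)
print(f"omega0 = {omega0:.2f} rad/s, zeta = {zeta:.2f}")
# -> omega0 = 12.25 rad/s, zeta = 0.82 (slightly underdamped: a small bounce)

Matching a Qt animation against these two targets, rather than against the three raw inputs, at least reduces the guesswork to two dimensions.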
I tried to generate a Gaussian noise signal with a standard deviation of 1 by setting the amplitude value in the noise source configuration to 1.
When I display it using the QT GUI Frequency Sink, I initially expected the resulting PSD to fluctuate around 0 dB across all frequency points. However, the PSD displayed in the QT GUI Frequency Sink fluctuates around -40 dB (instead of 0 dB) over all frequency points.
From the point of view of signal processing theory, this result is clearly incorrect.
Is there a bug in my flowgraph that I don't know about?
I am using GNU Radio v3.8.2.0-57-gd71cd177 (Python 3.9.0) on Windows 10.
Two things. First, there's a bug in your expectation: the sum of the powers across all bins needs to be around 1, i.e. 0 dB, because that is the total power and thus the variance, so the individual bins must come out lower - in fact, the longer your FFT, the lower each bin.
But there's also undocumented scaling in the frequency sink, so we can call that a GNU Radio bug if we want. GNU Radio is aware of the problem. It's not easy to address, because unexpected behaviour is bad, but breaking existing applications is also bad.
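The bin-splitting half of this is easy to check outside GNU Radio. Here is a minimal numpy sketch (the FFT size of 1024 is an assumption matching the sink's default): for unit-variance noise the bins sum to 0 dB, so each individual bin sits near 10*log10(1/N) ≈ -30 dB, and whatever offset remains in the sink's display comes from its extra internal scaling:

import numpy as np

n_fft = 1024                                    # assumed FFT size
x = np.random.normal(0.0, 1.0, 1000 * n_fft)    # unit-variance Gaussian noise

# Average periodogram, normalized so that the bins sum to the total power:
frames = x.reshape(-1, n_fft)
psd = np.mean(np.abs(np.fft.fft(frames, axis=1)) ** 2, axis=0) / n_fft**2

print(10 * np.log10(psd.sum()))    # ~0 dB: total power = variance = 1
print(10 * np.log10(psd.mean()))   # ~ -30.1 dB = 10*log10(1/1024) per bin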
I would like to generate frequencies with a resolution of 0.1 Hz over the range 0.0 up to 1000.0 Hz (for example, 23.1 Hz, 100.5 Hz, and 999.7 Hz). I have found that the AD9833 can generate the signals I require, but the notes are a bit confusing to me.
The specification can be obtained HERE .
I need your kind assistance: can we write Arduino code to, say, generate a 123.4 Hz signal entered via the Serial Monitor and have it displayed as such on an oscilloscope?
Thank you.
Looking at the notes, it appears that programming this chip will be non-trivial. If you don't require frequencies all the way down to 0 Hz, this job can be done much more easily with a standard Windows sound card. (Sound cards are AC-coupled, so they won't go below a few Hz.) As one example, my Daqarta software can generate frequencies (with any waveform you want) at a resolution better than 0.001 Hz. The maximum frequency will be a bit less than half the sound card's sample rate... typically 20 kHz at the default 48000 Hz sample rate.
You don't have to buy Daqarta to get this capability; the Generator function will continue to work after the trial period... free, forever.
UPDATE: You don't mention what sort of waveforms you need, but note that if you can use square waves, you may be able to do the whole job with the Arduino alone. The idea is to set up a timer to produce interrupts at some desired sample rate. On each interrupt you add a step value to an accumulator and send the MSB of the accumulator to an output pin. You control the output frequency by changing the step value. This is essentially a 1-bit version of the phase-accumulator approach used by the AD9833 (and by the Daqarta Generator). The frequency resolution is set by the sample rate and the size of the accumulator; you can easily get much better than 0.1 Hz resolution.
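To make the resolution claim concrete, here is a quick Python simulation of the 1-bit phase accumulator (the 32-bit accumulator width and 10 kHz interrupt rate are illustrative choices, not requirements; on the Arduino the loop body would live in a timer ISR):

ACC_BITS = 32
SAMPLE_RATE = 10_000                            # timer interrupts per second

def step_for(f_hz):
    # step / 2^ACC_BITS is the fraction of a full cycle advanced per interrupt
    return round(f_hz * (1 << ACC_BITS) / SAMPLE_RATE)

print(SAMPLE_RATE / (1 << ACC_BITS))            # resolution: ~2.3e-6 Hz

acc, step = 0, step_for(123.4)
for _ in range(8):                              # simulate a few interrupts
    acc = (acc + step) & ((1 << ACC_BITS) - 1)
    pin = acc >> (ACC_BITS - 1)                 # MSB drives the output pin
    print(pin, end=" ")

The resolution is simply SAMPLE_RATE / 2^ACC_BITS, so even a modest interrupt rate beats the 0.1 Hz requirement by several orders of magnitude; the trade-off is up to one sample period of jitter on each output edge.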
Best regards,
Okay, so I am trying to drive a 7-segment-based display in order to show temperature in degrees Celsius. I have two digit displays, plus one extra LED to indicate positive and negative numbers.
My problem lies in the software. I have to find some way of driving these displays, which means converting a given integer into the relevant voltages on the pins; for each of the two displays I need to know the number of tens and the number of ones in the integer.
So far, what I have come up with will not be very nice for an Arduino, as it relies on division.
tens = numberToDisplay / 10;  // integer division yields the tens digit
ones = numberToDisplay % 10;  // the remainder yields the ones digit
I have admittedly not tested this yet, but I think it is fair to assume that for a microcontroller with limited division capabilities this is not an optimal solution.
I have racked my brain and looked around for a solution using addition/subtraction/bitwise operations, but I cannot think of one at all. This division is the only approach I can see.
For this application it's fine; you don't need to worry about performance in a simple thermometer.
If, however, you do need something quicker than division and modulo, bitwise operations can help. Basically, you would use the bitwise & operator to test the value you want to display against bit patterns describing the digits to be shown on the display (see the sketch below the link).
See the project here for example: http://fritzing.org/projects/2-digit-7-segment-0-99-counting-with-arduino/
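As a rough sketch of the pattern idea (in Python for brevity; the bit order below is the common gfedcba convention, which is an assumption - the linked project may wire the segments differently):

# One byte per digit, one bit per segment (bit 0 = a ... bit 6 = g).
DIGIT_PATTERNS = [0b0111111, 0b0000110, 0b1011011, 0b1001111, 0b1100110,
                  0b1101101, 0b1111101, 0b0000111, 0b1111111, 0b1101111]

def segments_on(digit):
    pattern = DIGIT_PATTERNS[digit]
    # Bitwise & tests each segment's bit in the pattern
    return [name for bit, name in enumerate("abcdefg") if pattern & (1 << bit)]

print(segments_on(4))   # -> ['b', 'c', 'f', 'g']

On the Arduino you would index the same table with the tens and ones digits and write each bit to the corresponding segment pin.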
You might also try using a 7-segment display driver chip to simplify your output and save pins. The MC14511BCP (a "4511") is a good one: it translates binary-coded decimal (BCD) into the appropriate 7-segment configuration. Spec sheets are available here, and the chips are commonly stocked by online electronics parts stores.
We have a particle detector hard-wired to use 16-bit and 8-bit buffers. Every now and then, there are certain [predicted] peaks of particle flux passing through it; that's okay. What is not okay is that these fluxes usually reach magnitudes above the capacity of the buffers to store them; thus, overflows occur. On a chart, they look like the flux suddenly drops and begins growing again. Can you propose a [mostly] accurate method of detecting the data points suffering from an overflow?
P.S. The detector is physically inaccessible, so fixing it the 'right way' by replacing the buffers doesn't seem to be an option.
Update: Some clarifications as requested. We use Python at the data-processing facility; the technology used in the detector itself is pretty obscure (treat it as if it were developed by a completely unrelated third party), but it is definitely unsophisticated, i.e. not running a 'real' OS, just some low-level code to record the detector readings and to respond to remote commands like power cycle. Memory corruption and other problems are not an issue right now. The overflows occur simply because the designer of the detector used 16-bit buffers for counting the particle flux, and sometimes the flux exceeds 65535 particles per second.
Update 2: As several readers have pointed out, the intended solution would have something to do with analyzing the flux profile to detect sharp declines (e.g. by an order of magnitude) in an attempt to separate them from normal fluctuations. Another question arises: can restorations (points where the original flux drops back below the overflow level) be detected by simply running the correction program against the reversed (along the x axis) flux profile?
// Reconstructs 32-bit values from wrapped 16-bit readings (Java).
int[] unwrap(short[] x)
{
    int[] y = new int[x.length];
    y[0] = x[0];
    for (int i = 1; i < x.length; i++)
    {
        // (short)(x[i] - x[i-1]) wraps the difference back into the 16-bit
        // range, and assigning it to an int sign-extends it (the widening
        // cast also works in most C compilers via (int16_t)).
        // This works fine as long as the "real" values of x[i] and x[i-1]
        // differ by less than 1/2 of the span of allowable values of x's
        // storage type (= 32768 in the case of 16-bit).
        // Otherwise there is ambiguity.
        y[i] = y[i - 1] + (short)(x[i] - x[i - 1]);
    }
    return y;
}
// Exercise for the reader: write similar code to unwrap 8-bit arrays
// to a 16-bit or 32-bit array.
Of course, ideally you'd fix the detector software to max out at 65535 to prevent wraparound of the sort that is causing your grief. I understand that this isn't always possible, or at least isn't always possible to do quickly.
When the particle flux exceeds 65535, does it do so quickly, or does the flux gradually increase and then gradually decrease? This makes a difference in what algorithm you might use to detect this. For example, if the flux goes up slowly enough:
true flux    measurement
 5000         5000
10000        10000
30000        30000
50000        50000
70000         4464   (wrapped: 70000 - 65536)
90000        24464   (wrapped: 90000 - 65536)
60000        60000
30000        30000
10000        10000
then you'll tend to have a large negative drop at the times when you have overflowed - a much larger negative drop than you'll see at any other time. This can serve as a signal that you've overflowed. To find the end of the overflow period, you could look for a large upward jump to a value not too far from 65535.
All of this depends on the maximum true flux that is possible and on how rapidly the flux rises and falls. For example, is it possible to get more than 128k counts in one measurement period? Is it possible for one measurement to be 5000 and the next measurement to be 50000? If the data is not well-behaved enough, you may be able to make only statistical judgment about when you have overflowed.
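Here is a hedged Python sketch of that drop-detection idea (Python because that's what your processing facility runs; the threshold of half the counter span is the same ambiguity limit mentioned in the unwrapping answer above, and the code assumes at most one wrap at a time):

import numpy as np

SPAN = 65536                        # a 16-bit counter wraps modulo this

def correct_overflows(measured, threshold=SPAN // 2):
    measured = np.asarray(measured, dtype=np.int64)
    corrected = measured.copy()
    offset = 0
    for i in range(1, len(measured)):
        jump = measured[i] - measured[i - 1]
        if jump < -threshold:       # huge sudden drop: counter wrapped upward
            offset += SPAN
        elif jump > threshold:      # huge sudden rise: flux fell back below SPAN
            offset -= SPAN
        corrected[i] = measured[i] + offset
    return corrected

print(correct_overflows([5000, 30000, 50000, 4464, 24464, 60000, 30000]))
# -> [ 5000 30000 50000 70000 90000 60000 30000]

Note the symmetric branch: a restoration is just an overflow seen from the other direction, which is also why running the same detection against the reversed profile (as asked in Update 2) works.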
Your question needs to provide more information about your implementation: what language/framework are you using?
Data overflows in software (which is what I think you're talking about) are bad practice and should be avoided. What you are seeing (strange data output) is only one possible side effect of data overflows, and it is merely the tip of the iceberg of the sorts of issues you can run into.
You could quite easily experience more serious issues like memory corruption, which can cause programs to crash loudly, or worse, obscurely.
Is there any validation you can do to prevent the overflows from occurring in the first place?
I really don't think you can fix it without fixing the underlying buffers. How are you supposed to tell the difference between the sequences of values (0, 1, 2, 1, 0) and (0, 1, 65538, 1, 0)? You can't.
How about using an HMM, where the hidden state is whether you are in an overflow and the emissions are the observed particle flux?
The tricky part would be coming up with the probability models for the transitions (which will basically encode the time scale of the peaks) and for the emissions (which you can build if you know how the flux behaves and how overflow affects the measurement). These are domain-specific questions, so there probably aren't ready-made solutions out there.
But once you have the model, everything else - fitting your data, quantifying uncertainty, simulation, etc. - is routine.
You can only do this if the actual jumps between successive values are much smaller than 65536; otherwise, an overflow-induced valley artifact is indistinguishable from a real valley, and you can only guess. You can try to match overflows to their corresponding restorations by analysing the signal simultaneously from the right and from the left (assuming there is a recognizable baseline).
Other than that, all you can do is adjust your experiment: repeat it with different original particle flux levels, so that the real valleys stay where they are while the artifact valleys move along with the point of overflow.