I am working on a project where I am using a Keithley 2400 Sourcemeter to measure the surface conductivity of a specific material. I have a fixed delay (say 1000 ms) between measurements (I am using LabTracer 2.0). For the purposes of my project I have to measure the conductivity at three different points on the surface of the specimen. To achieve that I am using an Arduino microcontroller which controls the circuit's endpoints: three resistors at different places on the surface of the material. The Arduino board starts the process when it is triggered by the Keithley (when the current flow starts, detected via analogRead()).
My problem is that the delay time of the Keithley is not exactly 1000 ms, and this causes the Arduino and the Keithley to drift out of sync.
An ideal solution would be to get an output in real time each time a new measurement is taken by the Keithley. That way I could use this output as a reference for when to open or close each branch of the circuit. The analogRead() call is useful for the start of the sequence, since it gives an output when the current flow starts; ideally I would like to receive another output for each new measurement.
The way I am thinking the whole process is this (a rough Arduino sketch follows the list):
Initialization of the circuit by setting resistor_1 to OPEN
Start Keithley
Take measurement
CLOSE resistor_1
OPEN resistor_2
If a new measurement is taken then:
CLOSE resistor_2
OPEN resistor_3
If a new measurement is taken then:
CLOSE resistor_3
OPEN resistor_1
etc…
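For reference, here is a rough sketch of that cycle as an Arduino program. The pin numbers, the sense threshold, and the assumption that the current drops back near zero between measurements are all made up for illustration; "OPEN"/"CLOSE" follow the naming in the list above and are mapped to HIGH/LOW on relay pins.

const int SENSE_PIN = A0;             // analog input watching the measurement current
const int RELAY_PINS[3] = {2, 3, 4};  // outputs switching resistor_1..resistor_3 (assumed)
const int THRESHOLD = 100;            // ADC counts treated as "current is flowing" (assumed)
int active = 0;                       // index of the branch currently OPEN

void selectBranch(int idx) {
  for (int i = 0; i < 3; i++)
    digitalWrite(RELAY_PINS[i], i == idx ? HIGH : LOW);  // OPEN one branch, CLOSE the rest
  active = idx;
}

void setup() {
  for (int i = 0; i < 3; i++) pinMode(RELAY_PINS[i], OUTPUT);
  selectBranch(0);                    // initialization: resistor_1 OPEN
}

void loop() {
  while (analogRead(SENSE_PIN) < THRESHOLD) {}   // wait for a measurement to start
  while (analogRead(SENSE_PIN) >= THRESHOLD) {}  // wait for it to finish
  selectBranch((active + 1) % 3);     // CLOSE this branch, OPEN the next
}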
Any suggestions on that would be much appreciated. Thanks in advance!
I'm new to BLE and hope you will be able to point me towards the right implementation approach.
I'm working on an application in which the peripheral (battery operated) device continuously aggregates sensor readings.
On the mobile application side there will be a "sync" button; upon button press, I would like to transfer all the sensor readings that have accumulated in the peripheral to the mobile application.
The maximum duration between syncs can be several days, so the accumulated data can reach a size of 20 KB.
Now, I'm wondering what the best approach would be to perform the data transfer from the peripheral to the central application.
I thought about creating an array of characteristics where each characteristic will contain a fixed number of samples (e.g. representing one hour of readings).
Then, upon sync, I will:
Read the characteristic count (how many one-hour cells there are).
Then read the characteristics (one-hour cells) one by one.
However, I have no idea if this is a valid approach.
I'm not sure if this is the most power-efficient way I can use.
I'm not sure if a characteristic READ is the way to go, or whether I should use indications instead.
Any help here will be highly appreciated :)
Thanks in advance, Moti.
I would simply use notifications.
Use one characteristic which you write something to in order to trigger the transfer start.
Then have another characteristic which you simply stream data over by sending 20 bytes at a time. Most BLE system-on-a-chip SDKs have some way to control the flow of data so you don't send too fast, normally a callback that is triggered when the stack is ready to take the next notification.
In order to know the size of the data being sent, you can for example let the first notification contain the size and the rest of them the data.
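As a sketch of that scheme (send_notification() here stands in for whatever notify call your SDK exposes; it is not a real API, and in practice you would only send the next chunk from the SDK's "ready for next notification" callback):

#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

void send_notification(const uint8_t* data, size_t len);  // stand-in for the SDK call

void stream_readings(const std::vector<uint8_t>& readings) {
    uint32_t total = static_cast<uint32_t>(readings.size());
    uint8_t header[4];
    std::memcpy(header, &total, sizeof header);            // first notification: total size
    send_notification(header, sizeof header);

    for (size_t off = 0; off < readings.size(); off += 20) {
        size_t n = std::min<size_t>(20, readings.size() - off);
        send_notification(readings.data() + off, n);       // then the data, 20 bytes at a time
    }
}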
This is the most time- and power-efficient way, since many notifications can be sent per connection interval, whereas reads normally require two round trips each. Don't use indications, since they also require basically two round trips per indication; they're also quite useless here anyway.
You could also increase the speed by some percentage by exchanging a larger MTU (which lowers the L2CAP/ATT header overhead).
I'm using the open-source Torque 3D game engine for an aviation simulator project.
I need to generate a single image from several IG (image generator) PCs. Each IG display has its own view camera with a certain angle offset and gets the current position from the server via LAN.
I've already set up the multi-IG system.
The network connection is robust (latency under 1 ms).
The frame rate is good as well: about 70 FPS on each IG.
However, while moving, the overall picture looks broken because some IGs update their views faster than others.
I'm looking for a solution that will make the IGs update simultaneously, perhaps some kind of precise time-synchronization algorithm that makes different PCs connected via LAN act as one.
I had a much simpler problem, but my approach might help you.
You've got to run clocks on all your machines with, say, a 15 millisecond tick. Each image needs to be generated correctly for a specific tick and marked with its tick ID time. The display machine can check its own clock, determine the specific tick number (time) for which it should display, grab the images for that specific time, and display them.
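A minimal sketch of that idea on the display side (Image, NUM_IGS, and composeAndShow() are placeholders for your own types, not Torque API):

#include <cstdint>
#include <map>
#include <vector>

struct Image { /* pixel data plus source IG id */ };
void composeAndShow(const std::vector<Image>& views);    // stitch and present

const size_t NUM_IGS = 3;                                // assumed IG count
std::map<int64_t, std::vector<Image>> imagesByTick;      // images buffered by tick ID

void onImageReceived(int64_t tickId, const Image& img) {
    imagesByTick[tickId].push_back(img);                 // file it under its tick
}

void onDisplayTick(int64_t tick) {                       // fires every 15 ms
    auto it = imagesByTick.find(tick);
    if (it != imagesByTick.end() && it->second.size() == NUM_IGS)
        composeAndShow(it->second);                      // full set for this tick
    imagesByTick.erase(imagesByTick.begin(), imagesByTick.upper_bound(tick));  // drop this tick and older
}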
(To have the right mindset to think about this, imagine your network is really bad and think about one IG delivering 1000 images ahead of the current display tick while another is 5 ticks behind. Write for this sort of system and the results will look really good on the one you have.)
Ideally you want your display running a bit behind the IGs so you always have a full set of images for the current tick. I had a client-server setup and slowed the display (client) timer down if it came close to missing updates and sped it up if it was getting too far behind. You have to synchronize all your IG machines, so it might be better to have the master clock on the display and have it send messages to speed up any IG machine that's getting behind. (You may not have the variable network delays I had, but it's best to plan for them.)
The key is that each image must be made at a particular time, that the display include only images for the time being displayed, and that the composite images appear right when they should (every 15 milliseconds, on the millisecond). Also, do not depend on your network or even your machines to do anything in a timely manner. Use feedback to keep everything synched.
Addition On Feedback:
Say the last image for the frame at time T arrives 5ms after time T by the display computer's time (real time). If you display the frame for time T at T plus 10 ms, no one will notice the lag and you'll have plenty of time to assemble the images. Using a constant (10 ms) delay might work for you, especially if you make it big enough. It may be the way to go if you always run with the exact same network.
But you are depending on all your IG machines being precisely synchronized for real time, taking no more than a certain amount of time to produce their image, and delivering their image to the display machine all in predictable lengths of time.
What I'd suggest is to have your display machine determine the delay based on the time stamps on the images it receives. It would want to increase the delay if it isn't getting the images it needs in time, and decrease it if all the IGs are running several images ahead of what the display needs. (You might want to ignore the occasional really late image. You have to decide which is more annoying: images that are out of date, a display that is running noticeably behind time, or a display that noticeably speeds up and slows down.)
In my original answer I was suggesting some kind of feedback from the display to keep the IG machines running on time, but that may be overkill: your computers' clocks are probably good enough for that.
Very generally, when any two processes have to coordinate over time, it's best if they talk to each other to stay in step (feedback) rather than each stick to a carefully timed schedule.
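To make the adaptive delay concrete, a minimal sketch (the thresholds and step sizes here are arbitrary and would need tuning for your setup):

#include <cstdint>

int64_t displayDelayMs = 30;   // how far behind the IG clock the display runs (assumed start)

void onFrameAssembled(int64_t neededAtMs, int64_t lastImageArrivedMs) {
    int64_t slack = neededAtMs - lastImageArrivedMs;  // positive: image arrived early
    if (slack < 0)
        displayDelayMs += 5;   // we missed (or nearly missed) an image: back off
    else if (slack > 20)
        displayDelayMs -= 1;   // everyone is comfortably ahead: creep forward
}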
I have been involved in building a custom QGIS application in which live data is shown in the application's viewer.
The IPC being used is Unix message queues.
The data is to be refreshed at a specified interval, say 3 seconds.
Now the problem I am facing is that processing the data to be shown takes more than 3 seconds. So what I have done is stop the refresh QTimer before the app starts to process data for the next update, and restart it after the data has been processed. The app should work in such a way that after an update/refresh (during which the app goes unresponsive) the user gets ample time to keep working with the app, apart from seeing the data being updated. I am able to get acceptable pauses for the user to work with, in one scenario.
But on a different OS (RHEL 5.0 versus RHEL 5.2) the behaviour changes. The timer goes wild and keeps firing without any pause between successive updates, effectively going into an infinite loop. Handling the update data definitely takes longer than 3 seconds, and for that very reason I stop and restart the timer around the processing; the same logic works in one scenario but not in the other. The other thing I have observed is that when this rapid firing of the timer happens, the refresh function appears to exit very quickly, in about 300 ms, so the stop and start of the timer that I have placed at the beginning and end of this function both happen within that small window. As a result, before the actual processing of the data finishes, there are 3-4 starts of the timer queued up waiting to be executed, and from that point the infinite-loop problem gets worse with every successive update.
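For illustration, the stop/restart pattern described above, written with a single-shot QTimer (the class and function names are made up, not from the actual app). Because the timer is re-armed only after the processing returns, restarts cannot queue up and there is always a full 3-second gap:

// In the constructor of the viewer class:
m_timer = new QTimer(this);
m_timer->setSingleShot(true);   // fire once; we re-arm it ourselves
connect(m_timer, SIGNAL(timeout()), this, SLOT(refreshData()));
m_timer->start(3000);

// The refresh slot:
void MyViewer::refreshData() {
    processAndDrawData();       // the expensive part, may take longer than 3 s
    m_timer->start(3000);       // re-arm only when we are done: guaranteed 3 s gap
}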
The important thing to note here is that for the same code, on one OS the refresh time is around 4000 ms (the actual processing time for that amount of data), while on the other OS it is 300 ms.
Maybe this has something to do with newer libraries on the updated OS, but I don't know how to debug it because I can't find any clues as to why it happens. Maybe something related to pthreads changed between the OS versions?
So my question is: is there any way to ensure that some processing in my app runs on a timer (independent of the OS) without using QTimer, since I think QTimer may not be a good option for what I want?
What options are there? pthreads or Boost threads? Which would be better if I were to use threads as an alternative? And how can I guarantee at least a 3-second gap between successive updates, no matter how long the update processing takes?
Kindly help.
Thanks.
If I was trying to get an acceptable, longer-term solution, I would investigate updating your display in a separate thread. In that thread, you could paint the display to an image, updating as often as you desire... although you might want to throttle the thread so it doesn't take all of the processing time available. Then in the UI thread, you could read that image and draw it to screen. That could improve your responsiveness to panning, since you could be displaying different parts of the image. You could update the image every 3 seconds based on a timer (just redraw from the source), or you could have the other thread emit a signal whenever the new data is completely refreshed.
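A rough sketch of that split (class, slot, and size values are invented; painting into a QImage off the GUI thread is allowed in Qt):

#include <QImage>
#include <QObject>
#include <QPainter>

class RenderWorker : public QObject {
    Q_OBJECT
public slots:
    void refresh() {                        // runs in the worker thread
        QImage img(1024, 768, QImage::Format_RGB32);
        QPainter p(&img);
        // ... paint the expensive live-data layers into img ...
        p.end();
        emit imageReady(img);               // QImage is implicitly shared, cheap to emit
    }
signals:
    void imageReady(const QImage& img);     // delivered queued to the UI thread
};

// UI-thread wiring (sketch):
//   worker->moveToThread(&workerThread); workerThread.start();
//   connect(&refreshTimer, SIGNAL(timeout()), worker, SLOT(refresh()));
//   connect(worker, SIGNAL(imageReady(QImage)), viewer, SLOT(showImage(QImage)));
// The viewer's paintEvent() then just blits the most recent image.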
In our game project we have a timer loop set to fire about 20 times a second (the same as the application frame rate). We use this to move some sprites around.
I'm wondering if this could cause problems and we should instead do our updates using an EnterFrame event handler?
I get the impression that having a timer loop run faster than the application framerate is likely to cause problems... is this the case?
As an update, trying to do it on EnterFrame caused very weird problems. Instead of a frame every 75 ms, suddenly it jumped to 25 ms. Note, it wasn't just that our calculation claimed the frame rate was different; the animations actually sped up to a crazy rate.
I'd go for the enterFrame event. In some special cases it can be useful to have two "loops", one for logic and one for the visuals, but for most games I make I stick with the enterFrame event listener. Having a separate timer for moving your stuff around is a bit unnecessary, since setting it to anything other than the frame rate would make the motion either jerky or simply invisible (because the frame is not redrawn).
One thing to consider, however, is to decouple your logic from the frame rate. This is most easily accomplished by using getTimer() (available in both AS2 and AS3) to calculate the time that has elapsed since the last frame and adjusting the motion accordingly.
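The idea is language-agnostic; this C++ sketch shows the same pattern you would build in AS3 with getTimer() (the names and speed constant are made up):

#include <cstdint>

int64_t lastMs = 0;        // timestamp of the previous frame
double speed = 120.0;      // pixels per second, not pixels per frame
double x = 0.0;

void onEnterFrame(int64_t nowMs) {          // called once per frame, whenever that is
    double dt = (nowMs - lastMs) / 1000.0;  // seconds elapsed since the last frame
    lastMs = nowMs;
    x += speed * dt;       // the same visible speed at 20 fps or 60 fps
}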
A timer is no more reliable than the enterFrame event; Flash will try to keep up with whatever rate you've set, but if you're doing heavy processing or complex graphics it will slow down, timers and frame rate alike.
Here's a rundown of how Flash handles framerates and why you saw your content play faster.
At the deepest level, whatever host application Flash is running in (usually the browser) polls Flash at some interval. That interval might be every 10 ms in one browser, or 50 ms in another. Every time that poll occurs, Flash does something like this (sketched in code after the list):
Have (1000/framerate) milliseconds passed since the last frame update?
If no: do nothing and return
If yes: Execute a frame update:
Advance all (playing) timelines one frame
Dispatch all events (including the ENTER_FRAME event)
Execute all frame scripts and event handlers with pending events
Draw screen updates
return
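In code, that gate amounts to something like this C++ sketch (the stubs stand in for the player's internal steps listed above; this illustrates the concept, not actual player source):

#include <cstdint>

const int framerate = 24;        // the published frame rate (assumed)
int64_t lastUpdateMs = 0;

void advanceTimelines();         // stubs for the steps listed above
void dispatchFrameEvents();      //   (including ENTER_FRAME)
void runFrameScripts();
void drawScreenUpdates();

void onHostPoll(int64_t nowMs) {             // invoked at the browser's polling interval
    if (nowMs - lastUpdateMs < 1000 / framerate)
        return;                              // not time for a frame update yet
    lastUpdateMs = nowMs;
    advanceTimelines();
    dispatchFrameEvents();
    runFrameScripts();
    drawScreenUpdates();
}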
However, certain kinds of external events (such as keypresses, mouse events, and timer events) are handled asynchronously to the above process. So if you have an event handler that fires when a key is pressed, the code in that handler might be executed several times between frame updates. The screen will still only be redrawn once per frame update, unless you use the updateAfterEvent() method (global in AS2, attached to events in AS3).
Note that the asynchronous behavior of these events does not affect the timing of frame updates. Even if you use timer events to, for example, redraw the screen 50 times per second, frame animations will still occur at the published framerate, and scripted animations will not execute any faster if they're driven by the enterFrame event (rather than the timer).
The nice thing about using enterFrame events is that your processing will degrade at the same pace as the rendering, and you'll get a screen update right after the code block finishes.
Neither method is guaranteed to occur at a specific time interval. So your event handler should determine how long it's been since it last executed, and make decisions based on that rather than purely on how many times it's run.
I think TimerEvent and enterFrame are both good options; I have used both of them in my games. (Did you mean TimerEvent by "timer loop"?)
PS: note that on slow machines the timer may not fire quickly enough, so you may need to adjust your code to make the game run "faster" on slow machines.
I would suggest using a class such as TweenLite (http://blog.greensock.com/tweenliteas3/), which is lightweight at about 3 KB, or if you need more power you can use TweenMax, which I believe is 11 KB. There are many advantages here. First off, this "engine" has been thoroughly tested and benchmarked and is well known as one of the most resource-friendly ways to animate few or even many things. I have seen a benchmark where, in AS3, 1,500 sprites are animated with TweenLite and it holds a strong 20 fps, whereas competitors like Tweener would bog down to 9 fps (http://blog.greensock.com/tweening-speed-test/). The next advantage is the ease of use, as I will demonstrate below.
//Make sure you have a class path pointed at a folder that contains the following.
import gs.TweenLite;
import gs.easing.*;
var ball_mc:MovieClip = new MovieClip();
var g:Graphics = ball_mc.graphics;
g.beginFill(0xFF0000,1);
g.drawCircle(0,0,10);
g.endFill();
addChild(ball_mc); //Add it to the display list so the animation is actually visible
//Now we animate ball_mc
//Example: TweenLite.to(displayObjectName, totalTweeningTime, {someProperty:someValue,anotherProperty:anotherValue,onComplete:aFunctionCalledWhenComplete});
TweenLite.to(ball_mc, 1,{x:400,alpha:0.5});
So this takes ball_mc and moves it to 400 from its current position on the x axis, and during that same tween it reduces or increases the alpha from its current value to 0.5.
After importing the needed class, it is really only one line of code to animate each object, which is really nice. We can also affect the ease, which I believe by default is Expo.easeOut (a strong ease-out). If you want it to bounce or be elastic, such effects are available just by adding a property to the object, as follows.
TweenLite.to(ball_mc, 1,{x:400,alpha:0.5,ease:Bounce.easeOut});
TweenLite.to(ball_mc, 1,{x:400,alpha:0.5,ease:Elastic.easeOut});
The easing all comes from the gs.easing.* import, which I believe is Penner's easing equations utilized through TweenLite.
In the end we have no polling (open loops) to manage, such as a Timer, and we have very readable code that can be amended or removed with ease.
It is also important to note that TweenLite and TweenMax offer far more than I have shown here, and it is safe to say that I use one of the two classes in every single project. The animations are custom, they can have functionality attached to them (onComplete: functionCall), and again, they are optimal and resource-friendly.