I wonder if you wonderful people can point me in the right direction?
I'm quite new to web programming. I'm OK at conventional C/C++ etc. but am just really getting into the whole web side of things. Anyway, I was wondering how I might solve the following.
I want to have a sensor (it doesn't matter what type; let's say a temperature sensor). This sensor will read its environment, reporting on (in this case) temperature. (Let's say the circuitry has been built and it's giving me the data I need, via a desktop app.)
My questions are:

- I want to convert the sensor output to a 3D graph (any ideas, guys?).
- I want to be able to show the graph on PCs/smartphones (so I think a web interface is the best approach - unless someone has better ideas and, most importantly, links).
- I want to show the temperature change/graph in 'real time' (absolutely crucial) through the web interface.
I think the conversion of the data would have to be done server-side. I'm thinking (and I may be wrong, so please do correct me):
- Client side - Get sensor data (via the desktop app)
- Client side - Transmit data (via the desktop app - not sure how this would be done yet; see the sketch after this list)
- Server side - Convert data to graph
- Server side - Transmit data/graph
- Client side - Receive data
- Client side - Show graph
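For the transmit step, something like this is what I imagine (a rough sketch assuming libcurl; the endpoint URL and parameter name are just placeholders):

    #include <curl/curl.h>
    #include <string>

    // Send one temperature reading to a (hypothetical) server endpoint.
    // Call curl_global_init(CURL_GLOBAL_DEFAULT) once at startup first.
    bool sendReading(double tempC) {
        CURL* curl = curl_easy_init();
        if (!curl) return false;

        std::string body = "temp=" + std::to_string(tempC);
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/sensor");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

        CURLcode res = curl_easy_perform(curl);  // blocking HTTP POST
        curl_easy_cleanup(curl);
        return res == CURLE_OK;
    }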
If this were just a question of rendering a sensor's data to a PC screen (a self-contained application), I would feel confident tackling it. However, I think where I'm getting stuck is the rendering/transmitting and displaying of the 3D images in real time using web technologies, platforms and languages. If it helps, I am just picking up PHP, MySQL and Python. I already know C/C++/VB and assembly.
I hope this makes sense; any starting points you can give will be greatly appreciated.
Jason :)
You could use MRTG: http://en.wikipedia.org/wiki/Multi_Router_Traffic_Grapher
It provides real-time PNG graphs of sensor data.
It is very easy to configure and typically shows/records a year of data in an RRD: http://en.wikipedia.org/wiki/Round-Robin_Database
First off: thanks for taking the time to help me with my problem. It is much appreciated :)
I am building a natural user interface. I'd like the interface to detect several (up to 40) objects lying on it, and to detect when the objects are moved on its canvas. It is not important what the actual object on the surface is (e.g. a "bottle") or what color it has - only the shape and the placement of the object are of interest (e.g. a "circle").
So far I’m using a webcam connected to my computer and Processing’s blob functionality to detect the objects on the surface of the interface (see picture 1). This has some major disadvantages to what I am trying to accomplish:
1. I do not want the user to see the camera or any alternative device, because it distracts the user's attention. Actually, the surface should be completely dark.
2. Whenever I reach with my hand to rearrange the objects on the interface, the blob detection gets very busy and recognizes objects (my hand) which are not touching the canvas directly. This problem can hardly be tackled using a Kinect, because the depth functionality does not work through glass/acrylic glass - correct me if I am wrong.
3. It would be nice to install a few LEDs on the canvas, controlled by an Arduino. Unfortunately, the light of the LEDs would disturb the blob detection.
4. Because of the camera's focal length, the table needs to be unnecessarily high (60 cm / 23 inch).
Do you have any idea on an alternative device/technology to detect the objects? Would be nice if the device would work well with Processing and Arduino.
Thanks in advance! :)
Possibilities:
Use reflective tinted glass so that the surface would be dark or reflective.
Illuminate the area where you place the webcam with an array of IR LEDs.
I would suggest colour-based detection and contouring of the objects.
If you are using colour-based detection, convert the frames to the HSV and YCrCb colour spaces. These are much better than RGB for segmenting the required area when doing colour-based detection.
I recommend checking out https://github.com/atduskgreg/opencv-processing. This interfaces OpenCV with Processing, so you get a lot of OpenCV's functionality in Processing.
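For illustration, the colour-based segmentation and contouring could look roughly like this (a sketch using OpenCV's C++ API rather than Processing; the HSV threshold values are made-up placeholders you would tune for your objects):

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::VideoCapture cap(0);                 // default webcam
        cv::Mat frame, hsv, mask;
        while (cap.read(frame)) {
            // HSV separates colour from brightness, so it segments better than BGR
            cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
            cv::inRange(hsv, cv::Scalar(0, 80, 80),
                             cv::Scalar(20, 255, 255), mask);  // placeholder range
            std::vector<std::vector<cv::Point>> contours;
            cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                             cv::CHAIN_APPROX_SIMPLE);
            cv::drawContours(frame, contours, -1, cv::Scalar(0, 255, 0), 2);
            cv::imshow("objects", frame);
            if (cv::waitKey(1) == 27) break;     // Esc to quit
        }
        return 0;
    }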
One possibility:
Use a webcam with infrared capability (such as a security camera with built-in IR illumination). Apparently some normal webcams can be converted to IR use by removing a filter; I have no idea how common that is.
Make the tabletop out of some material that is IR-transparent, but opaque or nearly so to visible light. (Look at the lens on most any IR remote control for an example.)
This doesn't help much with #2, unfortunately. Perhaps you can be a bit pickier about the size/shape of the blobs you recognize as being your objects?
If you only need a few distinct points of illumination for #3, you could put laser diodes under the table, out of the path of the camera - that should make a visible spot on top, if the tabletop material isn't completely opaque. If you need arbitrary positioning of the lights - perhaps a projector on the ceiling, pointing down?
Look into OpenCV. It's an open source computer vision project.
In addition to existing ideas (which are great), I'd like to suggest trying TUIO Processing.
Once you have the camera set up (with the right field of view/lens/etc. based on your physical constraints) you could probably get away with sticking TUIO markers to the bottom of your objects.
The software will detect the markers and you'll differentiate the objects by ID, but you'll also be able to get position/rotation/etc., and your hands will not be part of that.
Normally, seek commands are executed on a filter graph: they are called on the renderers in the graph and passed upstream by the filters until a filter that can handle the seek does the actual seek operation.
Could an individual filter seek the upstream filters connected to one or more of its input pins in the same way, without affecting the downstream portion of the graph in unexpected ways? I wouldn't expect any graph state changes to be caused by calling IMediaSeeking.SetPositions upstream.
I'm assuming that all upstream filters are connected to the rest of the graph via this filter only.
Obviously the filter would need to be prepared to handle the resulting BeginFlush, EndFlush and NewSegment calls coming from upstream appropriately and distinguish samples that arrived before and after the seek operation. It would also need to set new sample times on its output samples so that the output samples had consistent sample presentation times. Any other issues?
It is perfectly feasible to do what you require. I used this approach to build video and audio mixer filters for a video editor. A full description of the code is available from the BBC White Papers 129 and 138 available from http://www.bbc.co.uk/rd
A rather ancient version of the code can be found on www.SourceForge.net if you search for AAFEditPack. The code is written in Delphi using DSPack to get access to the DirectShow headers. I did this because it makes it easier to handle COM object lifetimes, by implementing smart pointers by default. It should be fairly straightforward to transfer the ideas to a C++ implementation if that is what you use.
The filters keep lists of the sub-graphs (a sub-graph is a section of a graph, but running in the same FilterGraph as the mixers). The filters implement a custom version of TBCPosPassThru which knows about the output pins of the sub-graph for each media clip. It handles passing on the seek commands to get each clip ready for replay when its point in the timeline is reached. The mixers handle the BeginFlush, EndFlush, NewSegment and EndOfStream calls for each sub-graph so they are kept happy. The editor uses only one FilterGraph that houses both video and audio graphs. Seeking commands are made by the graph on both the video and audio renderers, and these commands are passed upstream to the mixers, which implement them.
Sub-graphs that are not currently active are blocked by the mixer holding references to the samples they have delivered. This does not cause any problems for the FilterGraph because, as Roman R says, downstream filters only care about getting a consecutive stream of samples and do not know about what happens upstream.
Some key points you need to make sure of to avoid wasted debugging time are:
Your decoder filters need to be able to cue to the exact media frame or audio time. This is not as easy as you might expect, especially with compressed formats such as MPEG-2, which was designed for transmission and has no frame index in the files. If you do not do this, the filter may wait indefinitely to get a NewSegment call with the correct media times.
Your sub-graphs need to present a NewSegment time equal to the value you asked for in your seek command before delivering samples. Some decoders may seek to the nearest key frame, which is a bit unhelpful, and some are a bit arbitrary about the timings of their NewSegment and the following samples.
The start and stop times of each clip need to be within the duration of the file. It's probably not a good idea to police this in the DirectShow filter, because you would probably want to construct a timeline without needing to run the filter first. I did this in the component that manages the FilterGraph.
If you want to add sections from the same source file consecutively in the timeline, and have effects that span the transition, you need to have two instances of the sub-graph for that file; and if you have more than one transition for the same source file, your list needs to alternate the graphs for successive clips. This is because each sub-graph should only play monotonically: making lots of SetPositions calls would waste CPU cycles and would not work well with compressed files.
The filter's output pins define the entire seeking behaviour of the graph. The output sample time stamps (IMediaSample.SetTime) are implemented by the filter, so you need to get them correct without any missing time stamps. You can also set the MediaTime (IMediaSample.SetMediaTime) values if you like, although you have to be careful to get them correct or the graph may drop samples or stall.
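For illustration, rebasing the output sample times could look roughly like this (a sketch in C++, not the actual AAFEditPack code; rtSegmentStart is a hypothetical value holding the clip's offset in the overall timeline):

    // Shift an upstream sample's time stamps by the clip's position in the
    // timeline, so the downstream graph sees one monotonically increasing
    // stream even though each sub-graph was seeked back to its own start.
    HRESULT RebaseSampleTimes(IMediaSample* pSample, REFERENCE_TIME rtSegmentStart)
    {
        REFERENCE_TIME rtStart = 0, rtStop = 0;
        HRESULT hr = pSample->GetTime(&rtStart, &rtStop);
        if (FAILED(hr))
            return hr;                 // sample carries no time stamps

        rtStart += rtSegmentStart;     // offset into the overall timeline
        rtStop  += rtSegmentStart;
        return pSample->SetTime(&rtStart, &rtStop);
    }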
Good luck with your development. If you need any more information please contact me through StackOverflow or DTSMedia.co.uk
I'm using the open-source Torque 3D game engine for an aviation simulator project.
I need to generate a single image from several IG (image generator) PCs. Each IG display has its own view camera with a certain angle offset and gets info about the current position from the server via LAN.
I've already set up a multi-IG system.
The network connection is robust (latency under 1 ms).
The frame rate is good as well - about 70 FPS on each IG.
However, while moving, the whole picture looks broken because some IGs update their views faster than others.
I'm looking for a solution that will make the IGs update simultaneously - maybe some kind of precise time synchronization algorithm that makes different PCs connected via LAN act as one.
I had a much simpler problem, but my approach might help you.
You've got to run clocks on all your machines with, say, a 15-millisecond tick. Each image needs to be generated correctly for a specific tick and marked with its tick ID (time). The display machine can check its own clock, determine the specific tick number (time) for which it should display, grab the images for that specific time, and display them.
(To have the right mindset to think about this, imagine your network is really bad and think about one IG delivering 1000 images ahead of the current display tick while another is 5 ticks behind. Write for this sort of system and the results will look really good on the one you have.)
Ideally you want your display running a bit behind the IGs so you always have a full set of images for the current tick. I had a client-server setup and slowed the display (client) timer down if it came close to missing updates and sped it up if it was getting too far behind. You have to synchronize all your IG machines, so it might be better to have the master clock on the display and have it send messages to speed up any IG machine that's getting behind. (You may not have the variable network delays I had, but it's best to plan for them.)
The key is that each image must be made at a particular time, that the display include only images for the time being displayed, and that the composite images appear right when they should (every 15 milliseconds, on the millisecond). Also, do not depend on your network or even your machines to do anything in a timely manner. Use feedback to keep everything synched.
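A minimal sketch of the display side's tick buffering (C++ with made-up types and constants; the real system needs the network plumbing and compositing around it):

    #include <cstdint>
    #include <map>

    struct Image { /* pixel data omitted */ };

    const int NUM_IGS = 4;            // assumption: four image generators
    const uint64_t DISPLAY_LAG = 2;   // display runs two ticks behind the IGs

    // Images keyed by the tick they were generated for, then by IG id.
    std::map<uint64_t, std::map<int, Image>> buffer;

    void onImageReceived(uint64_t tick, int igId, const Image& img) {
        buffer[tick][igId] = img;     // file the image under its tick, not arrival time
    }

    // Called every 15 ms on the display machine's own clock.
    void onDisplayTick(uint64_t currentTick) {
        uint64_t displayTick = currentTick - DISPLAY_LAG;
        auto it = buffer.find(displayTick);
        if (it != buffer.end() && (int)it->second.size() == NUM_IGS) {
            // ...composite and present the full set for this tick...
        }
        // Drop everything at or before the tick just shown.
        buffer.erase(buffer.begin(), buffer.upper_bound(displayTick));
    }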
Addition On Feedback:
Say the last image for the frame at time T arrives 5ms after time T by the display computer's time (real time). If you display the frame for time T at T plus 10 ms, no one will notice the lag and you'll have plenty of time to assemble the images. Using a constant (10 ms) delay might work for you, especially if you make it big enough. It may be the way to go if you always run with the exact same network.
But you are depending on all your IG machines being precisely synchronized for real time, taking no more than a certain amount of time to produce their image, and delivering their image to the display machine all in predictable lengths of time.
What I'd suggest is to have your display machine determine the delay based on the time stamps on the images it receives. It would want to increase the delay if it isn't getting the images it needs in time, and decrease it if all the IGs are running several images ahead of what the display needs. (You might want to ignore the occasional really late image. You have to decide which is more annoying: images that are out of date, a display that is running noticeably behind time, or a display that noticeably speeds up and slows down.)
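That adjustment can be very simple; for instance (a sketch, with placeholder constants):

    // Nudge the display delay based on how late the latest image set was.
    // latenessMs > 0 means images arrived after the display wanted them.
    double adjustDelay(double delayMs, double latenessMs) {
        if (latenessMs > 0.0)
            delayMs += 1.0;      // falling behind: give the IGs more slack
        else if (latenessMs < -30.0)
            delayMs -= 0.5;      // images consistently early: tighten up slowly
        return delayMs;
    }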
In my original answer I was suggesting some kind of feedback from the display to keep the IG machines running on time, but that may be overkill: your computer's clocks are probably good enough for that.
Very generally, when any two processes have to coordinate over time, it's best if they talk to each other to stay in step (feedback) rather than each stick to a carefully timed schedule.
I have to configure my video camera's display resolution before capturing and processing the data. Initially I did it as follows:
1. Created all necessary interfaces.
2. Added the camera and renderer filters.
3. Did RenderStream with the Capture and Preview pin categories.
4. Looped through the AM_MEDIA_TYPE structures and set the params.
This worked for a lot of cameras, but a few cameras failed. Then I swapped the order of steps 3 and 4 above; that is, I set the params before doing the RenderStream. This time the previous error cases went through, but a few on-board cameras (in SONY VAIO laptops etc.) seem to fail.
Now, my questions are:
1. Which is the optimal and correct method of getting and setting the AM_MEDIA_TYPE parameters and running the graph?
2. If different cameras need different orders, can I get an indication of which order is best for a particular camera by going through the camera's DirectShow interfaces? That would also serve my purpose.
Please help me with this at the earliest.
Thanks and regards,
Shiju
IAMStreamConfig::SetFormat needs to be used to set the capture format before the pin is connected and rendered. This way the downstream subchain of filters is built with the proper media types.
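In outline, that ordering looks something like this (a sketch with the standard DirectShow calls; pBuilder, pCamFilter and pRenderer are assumed to already exist, IsDesiredFormat is a hypothetical predicate, and error handling is trimmed):

    // 1. Set the capture format on the still-unconnected pin.
    IAMStreamConfig* pConfig = NULL;
    pBuilder->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                            pCamFilter, IID_IAMStreamConfig, (void**)&pConfig);

    int count = 0, size = 0;
    pConfig->GetNumberOfCapabilities(&count, &size);
    for (int i = 0; i < count; ++i) {
        VIDEO_STREAM_CONFIG_CAPS caps;
        AM_MEDIA_TYPE* pmt = NULL;
        if (SUCCEEDED(pConfig->GetStreamCaps(i, &pmt, (BYTE*)&caps))) {
            if (IsDesiredFormat(pmt))       // e.g. check VIDEOINFOHEADER dimensions
                pConfig->SetFormat(pmt);
            DeleteMediaType(pmt);           // helper from the DirectShow base classes
        }
    }
    pConfig->Release();

    // 2. Only now build the rest of the graph, so it uses that format.
    pBuilder->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video,
                           pCamFilter, NULL, pRenderer);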
I am developing a multiplayer role-playing game. (No, it's not an MMORPG. ;))
My current setup is like this:
The client tells the server "I want to move forward"/"I want to move backwards"; the server then updates your entity and informs all clients in the area about the change. The server also updates each entity every 20 ms and sends updates to the clients every 100 ms; these updates contain position, velocity, rotation etc.
So far so good. However, I have nothing in place for smoothing the movement between packets on the client side, and I must say I cannot get it working. I have been reading up on prediction, interpolation and dead reckoning, but it's all a big mess to me.
So right now I am just doing something like "Position = Packet.Position", which causes very stuttery movement.
So, what I want help with is: how do I get smoother movement? I have been looking at the XNA Prediction Sample, but I could not get it right.
Thanks //F
Read Valve's description of their multiplayer protocol. It should be instructive, and it gives a very clear example of how to do the prediction/interpolation.
I'd suggest the idea from another question (see the accepted answer):
Here the client calculates its position itself, as if it were not a network game, and regularly sends its current position to the server. If the client cheats or can't continue moving in the chosen direction, the server just sends the client its correct position.
The same algorithm was used in Ultima Online (at least when I was playing it 10 years ago).
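The server-side check in such a scheme can be as simple as a speed-limit test (a sketch with made-up types and a placeholder limit):

    #include <cmath>

    struct Vec2 { float x, y; };

    float dist(const Vec2& a, const Vec2& b) {
        float dx = a.x - b.x, dy = a.y - b.y;
        return std::sqrt(dx * dx + dy * dy);
    }

    // Accept a client-reported position only if it is reachable from the
    // last accepted position at legal speed; otherwise the server replies
    // with the client's correct (last known good) position.
    bool validateMove(const Vec2& last, const Vec2& reported, float dtSeconds) {
        const float MAX_SPEED = 5.0f;    // placeholder: world units per second
        return dist(last, reported) <= MAX_SPEED * dtSeconds;
    }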
I solved it by running a ghost entity alongside my main one.
The ghost also gets updated every frame, but whenever a packet comes in, its values are set to the values from the packet.
I then gradually tweak the real entity toward where the ghost is.
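In code, that per-frame correction can be a simple lerp toward the ghost (a sketch; the smoothing constant is a tunable placeholder):

    #include <algorithm>

    struct Vec2 { float x, y; };

    Vec2 lerp(const Vec2& a, const Vec2& b, float t) {
        return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
    }

    // Every frame: the ghost snaps to each incoming packet, while the
    // visible entity eases toward the ghost instead of jumping.
    void updateVisible(Vec2& visiblePos, const Vec2& ghostPos, float dt) {
        const float SMOOTHING = 10.0f;              // higher = snappier
        float t = std::min(1.0f, SMOOTHING * dt);
        visiblePos = lerp(visiblePos, ghostPos, t);
    }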