How do I create a system that takes in voice data (RTP) and then introduces loss into this data (like delay or packet drop/loss)? The output of the system (the data) should be readable, which made me think I might not be able to use ns-2. Also, ns-2 does not support VBR (needed for voice), though I might be wrong about that. How can I achieve these loss conditions in a Linux environment? Please give suggestions.
NIST Net looks as if it will do what you want.
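NIST Net itself is quite old; on modern Linux kernels the same capability is built in as the netem queueing discipline, configured through the tc tool from iproute2. A minimal sketch, assuming the RTP stream leaves through eth0 on UDP port 5004 (both values are assumptions; adjust them to your setup):

    # Add 100 ms delay with 20 ms jitter and 5% packet loss to outgoing traffic
    sudo tc qdisc add dev eth0 root netem delay 100ms 20ms loss 5%

    # Capture the impaired stream so the result stays inspectable
    sudo tcpdump -i eth0 -w impaired_rtp.pcap udp port 5004

    # Remove the impairment when finished
    sudo tc qdisc del dev eth0 root netem

The capture file can then be opened in Wireshark, which can decode RTP, so the output remains readable.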
So basically I've been online trying to research this the whole day, and I seem to only come across specific setups that people have built for their own needs rather than a generic list of the hardware required.
What I want to do, first using my Raspberry Pi 2 running Raspbian, and second a laptop running Kali, is penetration testing along with some extras.
What I am looking for is a list of the hardware I need (other than the RPi 2 for the first case and the laptop for the second) that will enable me to sniff out WiFi signals and attempt to get onto the network. I believe the general name for this is wardriving.
I know that I need a portable power supply for the RPi 2, and a screen of some sort (I want a small screen from which I can see the RPi GUI desktop, not just a terminal), so any suggestions or examples of those would be appreciated.
Where I get confused is about the WiFi adapter that I need. From what I understand, it needs to be one that can monitor as well as connect to a WiFi network, but I don't really know of any examples, or what the actual difference is between it and a normal USB WiFi stick.
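For what it's worth, you can verify the difference yourself once you have a candidate adapter: one suitable for this kind of work must list "monitor" among its supported interface modes, while an ordinary stick typically only does "managed". A quick check on Kali (wlan0 is an assumed interface name; yours may differ):

    # List the modes the driver supports; look for "monitor" in the output
    iw list | grep -A 8 "Supported interface modes"

    # Put the adapter into monitor mode using the aircrack-ng suite
    sudo airmon-ng start wlan0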
I'm also not sure what else I need to have beyond that to successfully accomplish my stated goal.
Any further help would be greatly appreciated, and I think beneficial to anyone else who's looking to get started doing the same thing.
Any extra information would be good too. What I mean is that while doing my research I saw some people mentioning radio attachments, GPS attachments, etc., but I'm not really sure whether they're necessary to start with, or things that can be added further down the road with experience.
Thanks.
Ok so I seem to have found a good article that answers at least the general part of my question. It can be found here.
http://lewiscomputerhowto.blogspot.ca/2014/06/how-to-hack-wpawpa2-wi-fi-with-kali.html?m=1
It also gives tips on the process of pen testing.
My project is to create an interactive program using the micro:bit microcontroller. I'm building a game that uses a drill motor as a controller of sorts, reading the rotation direction and speed as inputs for control.
But my mentor also said it would be cool to power the board at the same time as the game is running. So now I've hit the situation where, once I stop turning to change direction, or my speed drops below what's needed to supply 3.3 volts, the board loses power, the game restarts, and I lose all progress.
I had the idea of using a second micro:bit, powered by my computer, as a sort of storage place, with the two continuously communicating, sending back the player position and the other objects shown on the LEDs.
But I can't figure out how to get the two micro:bits to talk to each other.
If someone could just point me in the right direction, or even set up some sort of communication to nudge me along as I start moving forward, that would be great.
I'm a high school student who doesn't know as much as I pretend to, so I'll probably need a lot of help (I am more advanced than most in my class at this sort of thing, so think of me as a tech-gifted teenager thrown in with college students, losing my undeserved ego day by day, LOL). Please help me somehow; I'm currently completely lost.
You won't be able to use Bluetooth for the reasons indicated in the documentation (not enough memory): http://microbit-micropython.readthedocs.io/en/latest/ble.html
However, there is an upcoming implementation of the lighter radio module, which would allow you to send simple data: https://github.com/bbcmicrobit/micropython/pull/283
The proposed documentation can be found in: https://github.com/bbcmicrobit/micropython/pull/305
As you can see on GitHub, at the time of writing it has not yet been merged into MicroPython. So if you'd like to try it, you would have to clone the repository, apply the patch, and build it from source. Keep in mind there is a risk of the API changing, as there are still discussions about it.
Alternatively, as Sean has mentioned, you can use the C++ DAL implementation of the radio module to get something running in the meantime. Or, if you prefer, the Blocks and Touch Develop languages also offer radio functionality.
I don't think there is a way to do this in MicroPython (or at least not simply), but the micro:bit runtime docs describe how, as well as supporting Bluetooth, the 2.4 GHz radio can be used more directly:
However, it can also be placed into a much simpler mode of operation that allows simple, direct micro:bit to micro:bit communication
In order to use this, you might need to write in C++ using the mbed environment (or offline) - but I hope this at least gives you a pointer to start from.
Here's a blog post describing how to do data logging using two micro:bits in exactly the configuration you describe.
http://www.suppertime.co.uk/blogmywiki/2016/06/microbit-logger
How to get the two micro:bits to talk to each other
As of 2016, you can! First, check that your MicroPython build has the radio module:
import radio
If you get the error "No module named 'radio'", use the Mu editor: https://codewith.mu/
Then follow the radio tutorial https://microbit-micropython.readthedocs.io/en/latest/tutorials/radio.html
The API is documented at:
https://microbit-micropython.readthedocs.io/en/latest/radio.html
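For reference, a minimal sketch of two micro:bits talking over the radio module (flash something like this to both boards; the message text and timing are arbitrary choices):

    import radio
    from microbit import display, button_a, sleep

    radio.on()  # the radio is off by default to save power

    while True:
        if button_a.was_pressed():
            radio.send("hello")         # broadcast a short string
        incoming = radio.receive()      # returns None if nothing has arrived
        if incoming is not None:
            display.scroll(incoming)
        sleep(100)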
BACKSTORY
The other day I found a motorized wheel chair that someone was throwing away. Being a maker who spends a lot of time looking at what other people have made online I decided to snatch it and try to make a robot out of it. I also bought an Arduino mega, a Kinect sensor, and a motor controller to try to control the motors and give it some form of vision.
MY VISION
Honestly, I don't intend for this robot to be much more than a fun, and challenging, project. My current goals are to have it run SLAM algorithms to figure out where it is on a map, and to navigate to predetermined points on that map. However, at this point I would be happy with just being able to do simple teleop control with the keyboard.
MY PROBLEM
I have spent the past week researching ROS and how to get it talking to my Arduino. I have installed diff_drive_controller, TurtleBot, ros_control, rosserial, ros_arduino_bridge, and several others, trying to find something that will tell the motors what to do. By now I feel like I have a good, barely-below-the-surface understanding of how ROS works: basically, there is a series of nodes, each publishing info for the other nodes to see and subscribing to the info they want to read. All I want right now is a node that publishes the velocity of the motors, based on it trying to navigate, or teleop, or something like that. I think TurtleBot is my best bet, considering it is an all-in-one stack that does everything I want it to do. The only problem is I don't have an iRobot Create. It seems like it should be simple enough to intercept those commands and have them drive my own robot base, but I'm not sure which topic to listen on, or how to run TurtleBot in a way that doesn't try to connect to an iRobot Create. I could just listen to the /cmd_vel_mux/input/teleop topic, but I think that would limit me to teleop only, and it might make it hard to move on to autonomy in the future.
What topic should I listen on? Am I going about this the right way? Are there any packages that would be better suited for my needs? Keep in mind that I am new to ros so tutorials would be appreciated.
I look forward to your responses.
Thanks, Logan
Nice-sounding project! I second the recommendation to take a look at the tutorials, but I think you should spend some more time with the basic ROS tutorials before diving into the world of Arduino + ROS.
For instance, I noticed one misconception I believe you may have. It doesn't really matter which topic your node listens to, as it's just a name that can easily be remapped through a parameter given when launching the node. The important thing is to ensure you are listening for the right type of messages - message types specify the interface by which all the different nodes communicate. There are a bunch of options, and if none of them fit your use case, you can define your own.
I suspect that for low-level things, such as the drivers for your motors, you will need to write your own ROS nodes (see the sketch below). For advanced functionality, such as SLAM, there is a variety of options; you can find one that is suited to the input data you have available from your sensors.
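As an illustration, here is a minimal sketch of such a driver node in Python, assuming velocity commands arrive as geometry_msgs/Twist messages on a cmd_vel topic (the node name and the motor-output placeholder are illustrative, not from any particular package):

    #!/usr/bin/env python
    # Minimal driver-node sketch: listen for velocity commands and forward
    # them to the base. The log line stands in for real motor output.
    import rospy
    from geometry_msgs.msg import Twist

    def on_cmd_vel(msg):
        # msg.linear.x is forward speed (m/s); msg.angular.z is turn rate (rad/s)
        rospy.loginfo("drive: linear=%.2f angular=%.2f",
                      msg.linear.x, msg.angular.z)
        # ...convert to left/right wheel speeds and send them to the Arduino...

    rospy.init_node("base_driver")
    rospy.Subscriber("cmd_vel", Twist, on_cmd_vel)
    rospy.spin()

Because the topic name is only a default, you could later point the same node at the TurtleBot stack's output by remapping at launch, e.g. rosrun my_pkg base_driver.py cmd_vel:=/cmd_vel_mux/input/teleop (the package and script names are placeholders).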
One last recommendation is to take advantage of the features of ROS that allow you to break a big problem into manageable subtasks. Do one thing at a time - implement a motor controller, write a teleoperation method - taking care to specify suitable interfaces at each point. The advantage of this approach is that if you make smart choices in defining individual components with good interfaces, it is very easy to replace one with another should you so wish.
You can find a list of tutorials on how to interface an Arduino with ROS here.
I am struggling while trying to port recent SQLite sources to VxWorks 6.8. The architecture is PPC.
I made a separate topic (Crash around pthreads while integrating SQLite into RTP application on VxWorks) to provide all the details about the particular problem I am experiencing at the moment. But it looks like the problem is too specific and requires a certain amount of experience (porting C code to different platforms, pthreads, SQLite, knowledge of VxWorks).
So, I decided to just get confirmation that it is doable at all. I mean, it is surely doable, but I need to know that someone has actually succeeded with it within a reasonable time frame.
Please respond only if you yourself accomplished this. No general suggestions, like: "VxWorks is POSIX, SQLite is C - should not be a problem".
To moderators: I don't mean to duplicate my question. I am just narrowing it down, and I intend to close it if no constructive answer(s) appear.
Thanks in advance
OK, I have figured it out - the problem was in dosFS. I formatted the flash to HRFS and was able to run SQLite.
I had a few porting issues along the way, things that are really platform-dependent. Anyone familiar with POSIX functions should be able to figure them out.
I guess there is a problem with dosFS, so for now I will stick with HRFS.
In case of any questions about the porting itself, just contact me; maybe I hit the same issue and fixed it.
Just for information, I am using PPC and VxWorks 6.8.
Regards
We have a device that has an analog camera, and a card that samples and digitizes the feed. This is all done in DirectX. At this point in time, replacing hardware is not an option, but we need to code such that we can see this video feed in real time regardless of any hardware or underlying operating system changes that occur in the future.
Along this line, we've chosen Qt to implement a GUI to view this camera feed. However, if we move to Linux or another embedded platform in the future and change other hardware (including the physical device where the camera/video sampler lives), we will need to change the camera display software as well, and that is going to be a pain because it is integrated into our GUI.
What I proposed was migrating to a more abstract model where the data is sent over a socket to the GUI, and the video is displayed live after being parsed from the socket stream.
First, is this a good idea or a bad idea?
Secondly, how would you implement such a thing? How do the video samplers usually give usable output? How can I push this output over a socket? Once I am on the receiving end parsing it, how do I know what to do with it (as in, how to get it to render)? The only thing I can think of would be to write each sample to a file and then display the contents of the file every time a new sample arrives. This seems like an inefficient solution to me, if it would work at all.
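For concreteness, here is a rough sketch (in Python with PyQt5, purely for illustration) of the kind of pipeline I have in mind, assuming fixed-size raw RGB888 frames arrive over a TCP socket; the frame geometry and the address are made-up values:

    import socket
    import sys

    from PyQt5.QtCore import QTimer
    from PyQt5.QtGui import QImage, QPixmap
    from PyQt5.QtWidgets import QApplication, QLabel

    WIDTH, HEIGHT = 640, 480            # assumed frame geometry
    FRAME_BYTES = WIDTH * HEIGHT * 3    # RGB888: 3 bytes per pixel

    def recv_exact(sock, n):
        """Read exactly n bytes from the socket."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("stream closed")
            buf += chunk
        return buf

    app = QApplication(sys.argv)
    label = QLabel()
    label.resize(WIDTH, HEIGHT)
    label.show()

    sock = socket.create_connection(("127.0.0.1", 5000))  # assumed sampler address

    def poll_frame():
        # Blocking read is a simplification; a real GUI would use a worker thread
        data = recv_exact(sock, FRAME_BYTES)
        img = QImage(data, WIDTH, HEIGHT, WIDTH * 3, QImage.Format_RGB888)
        label.setPixmap(QPixmap.fromImage(img))  # fromImage copies the pixels

    timer = QTimer()
    timer.timeout.connect(poll_frame)
    timer.start(33)  # poll at roughly 30 fps

    sys.exit(app.exec_())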
How do you recommend I handle this? Are there any cross-platform libraries available for such a thing?
Thank you.
Edit: I am willing to accept suggestions of something different from what is listed above.
Have you looked at QVision? It is a Qt-based framework for managing video and video processing. You don't need the processing part, but I think it will do what you want.
Anything that duplicates the video stream is going to cost you in performance, especially in an embedded space. In most situations for video, I think you're better off trying to use local hardware acceleration to blast the video directly to the screen. With some proper encapsulation, you should be able to use Qt for the GUI surrounding the video, and have a platform-specific class that you use to control the actual video drawing to the screen (where to draw, how big, etc.).
Edit:
You may also want to look at the Phonon library. I haven't looked at it much, but it appears to support showing video that may be acquired from a range of different sources.