I am working on a project that involves GNU Radio/GRC and am not very familiar with the software. I am trying to output data to a serial port in GNU Radio using a block, but have not found a way to do so.
I was wondering if there is a pre-defined block that I can use to write this information to a serial port (USB on a Raspberry Pi 3), or if I have to create my own block. And if I have to create my own block, what would that code look like?
I have been able to write the data to a file using the File Sink to make sure I was getting data, and was wondering if the fix is something as simple as changing the File Sink to a serial port sink. See the picture below:
http://imgur.com/a/BdaMZ
I also did some research and found a GitHub repo that looks like what I need; unfortunately, the repository that it links to is no longer there. It did mention using pyserial, which I believe is meant for creating my own block in Python. The link to this repo is below:
https://github.com/jmalsbury/gr-pyserial
… was wondering if the fix is something as simple as changing the File Sink to a serial port sink.
Yes! Or No, it's even easier:
In fact, you can simply use your File Sink to write to e.g. /dev/ttyS0 (or /dev/ttyUSB0, or whatever the device name of your serial port is), but you'd have to set up the serial port to behave the way you want it to separately, beforehand. One way of doing that is stty, e.g.
stty -F /dev/ttyS0 115200
prior to running your flow graph.
Note that practically everything in your flow graph points to you not yet being proficient enough with GNU Radio to successfully exchange data. I can't cover everything here (please read the official Guided Tutorials), but:
In a flow graph like yours, where the I/O is the inherently rate-limiting element, you must not use "Throttle". Throttle is really just a tool to keep a flow graph from consuming all your CPU (and to slow down simulations).
Giving your files a .grc ending is bad practice, as that is the ending reserved for GNU Radio flow graphs.
Giving it a .txt ending is plainly misleading, since there's no text involved whatsoever. The "file format" (I wouldn't even call it a format) is really just the plain binary numbers as your computer handles them, not decimal ASCII representations of those floating-point values.
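If you ever want to inspect such a capture offline, here is a minimal sketch (the file name is a placeholder; the dtype has to match your stream type, which I'm assuming is float here):

import numpy as np

# A File Sink capture is just the raw samples, with no header or text formatting.
samples = np.fromfile("capture.bin", dtype=np.float32)  # use np.complex64 for a complex stream
print(samples[:10])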
I also did some research and found a GitHub repo that looks like what I need; unfortunately, the repository that it links to is no longer there. It did mention using pyserial, which I believe is meant for creating my own block in Python. The link to this repo is below:
I don't know what you're referring to; https://github.com/jmalsbury/gr-pyserial still exists!
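If you do end up writing your own block, here is a minimal sketch of what a pyserial-based Python sink could look like (the port name, baud rate and float input type are my assumptions, not anything gr-pyserial prescribes):

import numpy as np
import serial  # pyserial
from gnuradio import gr

class serial_sink(gr.sync_block):
    """Write incoming float samples to a serial port as raw bytes."""
    def __init__(self, port="/dev/ttyUSB0", baudrate=115200):
        gr.sync_block.__init__(self,
                               name="serial_sink",
                               in_sig=[np.float32],
                               out_sig=None)
        self.ser = serial.Serial(port, baudrate)

    def work(self, input_items, output_items):
        # Send the raw float32 bytes, exactly what a File Sink would have written.
        self.ser.write(input_items[0].tobytes())
        return len(input_items[0])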
I am new to Intel Pintools, and am trying to write a pintool that stops at a given instruction type and then looks for specific instructions following it in the section. I've got the xed decoding working, but I am stuck at the part where I get the actual hex opcode. How can I do that?
I would love to use INS_Opcode() -- but these are instructions that haven't been executed yet (and may never be), so they aren't INS objects. There's xed_operand_values_get_iclass(), but that returns an iclass enum, not the actual primary opcode. I see from the xed header files that there are some raw buffers associated with the various xed structures, but it is not at all clear to me how I can use that to get the information I need. Can anyone enlighten me?
Apparently I missed it the first time I looked at the header files, but there's xed3_operand_get_nominal_opcode(), which does exactly what I need it to. Related: grep is a wonderful thing.
Assume I have sensitive information (passwords, private keys,...) that I saved to a file which I encrypted.
Is there an easy-to-use tool to convert back and forth between a small file (say 0.5 kB) and an image (QR code?) that I can print out to have a safe backup?
You can use LaTeX with the PSTricks and pst-barcode packages; it produces nice QR codes, and yes, we used it for exactly this purpose: a paper backup of SSH private keys.
Denso Wave, the developer of the QR code, distributes software on their site.
You need to register on the site to obtain it.
Even if that doesn't suit your printing needs, there are various other tools, both free and commercial, so please search around.
The maximum amount of data that can be stored in a QR code is 2,953 bytes in binary mode.
However, it also depends on the capabilities of the scanner you use.
QR code FAQ #6 Can an image or sound be stored in a QR Code?
I wrote a Linux program to do this, called qr-backup.
While researching similar programs for it, I discovered a number of alternative projects as well. All of these are also Linux-only:
asc2qr.sh
paperbackup. Focused on GPG/SSH key backup. See also the paperkey preprocessor, to reduce the size of keys.
qrdump (incomplete)
qrpdf
If your file is very small (0.5 kB is a good cutoff), you can generate a single QR code. An example command-line program to generate it is qrencode. Several web converters are also available.
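If you prefer to script it, here is a minimal sketch using the Python qrcode library (file names are placeholders; base64-encoding the encrypted blob keeps the QR payload as printable text):

import base64
import qrcode

# Wrap the encrypted file in base64 so the QR payload is plain ASCII.
with open("secrets.gpg", "rb") as f:
    payload = base64.b64encode(f.read()).decode("ascii")

# A 0.5 kB file becomes roughly 700 base64 characters, well under the ~2953-byte limit.
img = qrcode.make(payload, error_correction=qrcode.constants.ERROR_CORRECT_M)
img.save("backup.png")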
According to https://randomascii.wordpress.com/2013/11/04/exporting-arbitrary-data-from-xperf-etl-files/, wpaexporter.exe should be the right tool to do so.
I managed to prepare a profile with the right data, but, unfortunately, wpaexporter keeps trying to translate addresses even if "-symbols" is not given on the command line, generating useless
/<ModuleName.dll>!<Symbols disabled>
warnings.
This is annoying because part of our application uses some Delphi code that cannot generate symbols in a Microsoft-compatible format. With addresses, we would be able to find the Delphi symbols in the call stack using map files.
Is there a way to extract call stack addresses from a WPR trace?
Thanks, I completely missed the processing options of xperf...
In the meantime, I found that LogParser (https://www.microsoft.com/en-us/download/details.aspx?id=24659) can also export an ETL file to a CSV (with actual values as well):
LogParser.exe" "Select * from file.etl" -i:ETW -o:CSV -oTsFormat "HH:mm:ss.ln" > output_file.csv
From what I have seen so far, LogParser output might be more suitable for automatic parsing (only one line per event in the file, no header), while xperf output is more suitable for human processing (tabular representation).
Yes. You can also use xperf.exe. Have you tried the actions option?
xperf -a stack should help here, I expect.
You can see detailed info with the xperf -help processing command.
I've put a lot of time and research into finding a reliable solution for my problem. I am attempting to modify values in packets sent to and from my iPhone; of course, I can't view and tamper with packets directly on the phone. So I've connected my phone via proxy to my computer (Windows) so I could attempt to modify the packets there. Now I can successfully view and save packets, but I can't seem to find a way to modify them on the fly.
I've followed many suggestions posted here, such as Scapy and other tools like it, yet I can't seem to get them to work on Windows at all; also, I'm not sure these tools are even right for my end goal. I am familiar with modifying live packets in programs like WPE Pro, but I don't think that's the right tool for this job.
My question is this: is this the right path for accomplishing my goal? If so, do you have a suggestion for tools that may help? If not, where should I begin looking for a different solution?
Edit: Specifically, I am aiming to alter hex values in the packet. In the HTTP/1.1 packets I want to alter, I know the exact position of a certain group of hex values that translate to a plain-text number. For example, say line 80 contains some hex equal to "value:12345". I want to change that "12345" part to different numbers, keeping everything else the same, so my phone will process the different value instead.
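One possible route (a suggestion, not the only way): mitmproxy is a Python-based intercepting proxy that runs on Windows and lets a small addon script rewrite content in flight. A minimal sketch, assuming the value really appears as the literal text "value:12345" in an HTTP/1.1 response body:

# modify_value.py -- run with: mitmproxy -s modify_value.py
from mitmproxy import http

def response(flow: http.HTTPFlow) -> None:
    # Rewrite the literal bytes in the response body before it reaches the phone.
    if flow.response and flow.response.content:
        flow.response.content = flow.response.content.replace(b"value:12345", b"value:99999")

Whether this works for your traffic depends on whether the app's connections can actually be proxied; HTTPS with certificate pinning is a separate hurdle.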
I have an issue regarding mrjob.
I'm using a Hadoop cluster with 3 datanodes, one namenode, and one jobtracker.
Starting from a nifty sample application, I wrote something like the following.
first_script.py:
for i in range(1, 2000000):
    print "My Line " + str(i)
This obviously writes a bunch of lines to stdout.
The second script is the mrjob mapper and reducer.
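For context, it is shaped roughly like the minimal job below (just a sketch of the structure, not my exact code):

from mrjob.job import MRJob

class MRLineCount(MRJob):
    # Map each input line to a single counter key.
    def mapper(self, _, line):
        yield "lines", 1

    # Sum the counts for each key.
    def reducer(self, key, values):
        yield key, sum(values)

if __name__ == "__main__":
    MRLineCount.run()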
Calling it from a Unix (GNU) shell, I tried:
python first_script.py | python second_script.py -r hadoop
This gets the job done, but it first uploads the input to HDFS completely. Only when everything is uploaded does it start the second job.
So my question is:
Is it possible to force a stream? (Like sending EOF?)
Or did I get the whole thing wrong?
Obviously you have long since forgotten about this, but I'll reply anyway: no, it's not possible to force a stream. The whole Hadoop programming model is about taking files as input and outputting files (and possibly creating side effects, e.g. uploading the same stuff to a database).
It might help if you clarified what you want to achieve a little more.
However, it sounds like you might want the contents of a pipe to be processed periodically, rather than waiting until the stream is finished. The stream can't be forced.
The reader of the pipe (your second_script.py) needs to break its stdin into chunks, either using
a fixed number of lines, like this question and answer (see the sketch after this list), or
non-blocking reads and a preset idle period, or
a predetermined break sequence emitted from first_script.py, such as a 'blank' line consisting of only \0.
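A minimal sketch of the first option (the chunk size is an arbitrary assumption); each chunk could then be written to a temporary file and handed to a separate mrjob run:

import itertools
import sys

CHUNK_LINES = 10000  # arbitrary chunk size

def chunks(stream, n):
    """Yield successive lists of at most n lines from a text stream."""
    while True:
        block = list(itertools.islice(stream, n))
        if not block:
            return
        yield block

for block in chunks(sys.stdin, CHUNK_LINES):
    # Placeholder: hand the chunk to whatever does the real work,
    # e.g. dump it to a temporary file and launch an mrjob run on it.
    sys.stdout.write("".join(block))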