Does iBeacon advertising data start with
02 01 1a 1a ff 4c 00 02 15
or
02 01 06 1a ff 4c 00 02 15?
I captured some sample advertising packets; however, they start with
02 01 1a 1a ff 4c 00 0c 0e
As I understand it, the last two bytes are the data type and data length, which should always have the same value :(
Are these not iBeacon packets? Please explain. ;(
The 0x0215 value is a two byte beacon type code associated with iBeacon. Both iOS CoreLocation and a properly configured Android Beacon Library will recognize this byte sequence as an iBeacon transmission, decode it and respond with detection callbacks.
The 0x0c0e is a different byte sequence that is not associated with any beacon type code I have seen in my 5 years of work in the beacon industry. It may not be a beacon advertisement at all, but a more general manufacturer advertisement used for other purposes. It may also be a custom beacon format that is not widely known.
You will need to check with the device manufacturer to see which of the two is the case. If it is a custom beacon format, you can use the Android Beacon Library's BeaconParser class to configure it to detect this beacon type, but you will still need to get the specs of the format from the manufacturer.
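For what it's worth, the leading 02 01 1a / 02 01 06 bytes are just the BLE flags structure and can legitimately differ between devices; what identifies an iBeacon is the manufacturer-specific structure that follows. Below is a minimal sketch (not tied to any particular BLE stack) of how raw advertising bytes could be checked for the Apple company ID and the 02 15 beacon type code; adv and LooksLikeIBeacon are hypothetical names.

#include <cstddef>
#include <cstdint>

// Walk the advertising data as a series of AD structures: [length][type][data...].
bool LooksLikeIBeacon(const std::uint8_t *adv, std::size_t len)
{
    for (std::size_t i = 0; i + 1 < len; i += 1 + adv[i]) {
        std::uint8_t adLen = adv[i], adType = adv[i + 1];
        if (adLen == 0 || i + 1 + adLen > len)
            break;                          // malformed or truncated AD structure
        // Manufacturer-specific data (0xFF): Apple company ID 0x004C, then 0x02 0x15
        if (adType == 0xFF && adLen >= 0x1A &&
            adv[i + 2] == 0x4C && adv[i + 3] == 0x00 &&
            adv[i + 4] == 0x02 && adv[i + 5] == 0x15)
            return true;                    // UUID, major, minor and txPower follow
    }
    return false;
}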
I am recording stereo audio from the line-in of my desktop using Microsoft's Core Audio API. It records at 44100 Hz, 32 bit. I want to know how the stereo data is laid out in the buffer: is the first 32 bits from one microphone and the next 32 bits from the second microphone, or something else? Here is the code I used to record audio.
Typically the channels are interleaved, so your buffer should look like this: [left sample][right sample][left sample][right sample] ... [left sample][right sample], with each sample being 32 bits wide in your case.
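As a small illustration, here is a sketch of splitting such interleaved frames, assuming the capture mix format reports 32-bit float samples (check the WAVEFORMATEX the audio client returns; 32-bit integer PCM would be handled the same way with int32_t). Deinterleave is a hypothetical helper, not part of the Core Audio API.

#include <cstddef>
#include <vector>

// Split an interleaved stereo buffer (frameCount frames, 2 channels) into two
// per-channel vectors: buffer[2*i] is channel 0 (left), buffer[2*i+1] is channel 1 (right).
void Deinterleave(const float *buffer, std::size_t frameCount,
                  std::vector<float> &left, std::vector<float> &right)
{
    left.resize(frameCount);
    right.resize(frameCount);
    for (std::size_t i = 0; i < frameCount; ++i) {
        left[i]  = buffer[2 * i];
        right[i] = buffer[2 * i + 1];
    }
}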
My capture graph is dying. I have traced the problem to a media-sample buffer starvation inside the Microsoft Mpeg-2 Demultiplexer filter.
Processing stops inside CBaseAllocator::GetBuffer. The pool is exhausted and the thread sleeps waiting indefinitely for a buffer to be recycled.
0:866> ~~[3038]s
ntdll!NtWaitForSingleObject+0x14:
00007ffe`49199f74 c3 ret
0:094> k
# Child-SP RetAddr Call Site
00 00000035`807fede8 00007ffe`460b9252 ntdll!NtWaitForSingleObject+0x14
01 00000035`807fedf0 00007ffe`22a35f4e KERNELBASE!WaitForSingleObjectEx+0xa2
02 00000035`807fee90 00007ffe`35609460 QUARTZ!CBaseAllocator::GetBuffer+0x7e
03 00000035`807feec0 00007ffe`3560697a mpg2splt!CMediaSampleCopyBuffer::GetCopyBuffer+0x60
04 00000035`807fef60 00007ffe`35606cc9 mpg2splt!CBufferSourceManager::GetNewCopyBuffer+0x3a
05 00000035`807fefa0 00007ffe`356073de mpg2splt!CStreamParser::CopyStream+0x89
06 00000035`807feff0 00007ffe`35608325 mpg2splt!CMpeg2PESStreamParser::ProcessBuffer_+0x15a
07 00000035`807ff040 00007ffe`35610724 mpg2splt!CMpeg2PESStreamParser::ProcessSysBuffer+0x135
08 00000035`807ff090 00007ffe`3560fb2e mpg2splt!CStreamMapContext::Process+0xb4
09 00000035`807ff110 00007ffe`3560f621 mpg2splt!CTransportStreamMapper::ProcessTSPacket_+0x30e
0a 00000035`807ff2d0 00007ffe`355fd0c1 mpg2splt!CTransportStreamMapper::Process+0xf1
0b 00000035`807ff320 00007ffe`355f4eb8 mpg2splt!CMPEG2Controller::ProcessMediaSampleLocked+0x111
0c 00000035`807ff3a0 00007ffe`355f98a7 mpg2splt!CMPEG2Demultiplexer::ProcessMediaSampleLocked+0x7c
0d 00000035`807ff3f0 00007ffd`ba58cba3 mpg2splt!CMPEG2DemuxInputPin::Receive+0x87
0e 00000035`807ff480 00007ffd`ba58ca4d 0x00007ffd`ba58cba3
0f 00000035`807ff530 00007ffd`ba58c92e 0x00007ffd`ba58ca4d
10 00000035`807ff590 00007ffe`19b5222e 0x00007ffd`ba58c92e
11 00000035`807ff5d0 00007ffe`246e5402 clr!UMThunkStub+0x6e
12 00000035`807ff660 00007ffe`2472aa23 qedit!CSampleGrabber::Receive+0x1b2
13 00000035`807ff6d0 00007ffe`287ea6d6 qedit!CTransformInputPin::Receive+0x53
14 00000035`807ff700 00007ffe`287ea459 Obsidian_DSP_DirectShow!MulticastSourceFilter::UDP_consumerThreadProc+0x276 [s:\library\obsidian.dsp.directshow\multicastsourcefilter.cpp # 475]
15 00000035`807ff7f0 00007ffe`46f73034 Obsidian_DSP_DirectShow!MulticastSourceFilter::UDP_consumerThreadEntry+0x9 [s:\library\obsidian.dsp.directshow\multicastsourcefilter.cpp # 445]
16 00000035`807ff820 00007ffe`49171461 KERNEL32!BaseThreadInitThunk+0x14
17 00000035`807ff850 00000000`00000000 ntdll!RtlUserThreadStart+0x21
Here are a few facts about this particular graph:
* The source media is a heavily multiplexed MPEG2-TS UDP stream.
* The stream contains 14 SD TV programs, consuming 37.5 Mbps of network bandwidth.
* The problem occurs predictably during periods where the stream becomes heavily fragmented (the audio and video decoders emit a burst of samples with IsDiscontinuity() returning TRUE).
* According to WinDbg (and SOS), there are no managed or unmanaged locks under contention (so no possibility of a deadlock).
* There is no evidence of a "runaway" thread (none stuck in an infinite loop).
* The graph's final filter is a GDCL bridge, which then bridges the decoded samples to an MP4 muxer.
* The demuxer's video output is connected to an instance of the ffdshow decoder filter, and its audio output to an instance of the LAV audio decoder filter.
Am I right to suspect that the problem could be inside either the ffdshow or the LAV filter? (Who else could be holding the demuxer's buffers?)
Any pointers or suggestions on how I can trace why the buffer pool inside the demuxer is being exhausted?
It looks like the memory allocator on a certain pin connection has all of its buffers in use with external references, so the thread went to sleep waiting for a buffer to be returned for recycling.
This is expected behavior, and the problem is either too few buffers or excessive referencing.
You seem to be able to identify the pin connection from the call stack, and you could either increase the number of buffers or provide a custom memory allocator that expands on demand.
The easiest case is when your own filter is part of the connection, so you can affect the allocator during the negotiation phase by either providing allocator requirements or directly updating the allocator properties. In more complicated cases you could locate the existing connection and change the properties before the graph goes active. In even more complicated cases you could insert a no-op filter into the processing chain just for the purpose of getting in between and having direct access to the effective allocator.
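As a rough sketch of the first option, assuming the pin in question exposes IAMBufferNegotiation (not every filter does), you could suggest a deeper buffer pool before the pins connect; pDemuxOutput and a bufferCount such as 64 are just placeholders:

#include <dshow.h>

// Ask a pin to use more buffers on its allocator. Must be called before the pin is
// connected; the filter is free to ignore the suggestion.
HRESULT SuggestMoreBuffers(IPin *pDemuxOutput, long bufferCount)
{
    IAMBufferNegotiation *pNeg = NULL;
    HRESULT hr = pDemuxOutput->QueryInterface(IID_IAMBufferNegotiation, (void **)&pNeg);
    if (FAILED(hr))
        return hr;                      // pin does not support buffer negotiation

    ALLOCATOR_PROPERTIES props;
    props.cBuffers = bufferCount;       // e.g. 64 instead of the default
    props.cbBuffer = -1;                // -1 means "don't care" for the remaining fields
    props.cbAlign  = -1;
    props.cbPrefix = -1;

    hr = pNeg->SuggestAllocatorProperties(&props);
    pNeg->Release();
    return hr;
}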
I have a serial device (sick LMS200) connected to my PC using a RS422 to USB converter. The serial settings (baud, stop bits, etc...) on the LMS200 and my PC match and are communicating (verified using an application that ships with the LMS200). I need to write a custom application which communicates with the LMS.
Before I can begin building my application I need to figure out how to exchange datagrams between the PC and the LMS. To figure this out I have been trying to manually send datagrams using PuTTY. The manual for the LMS (https://drive.google.com/open?id=0Byv4owwJZnRYVUJPMXdud0Z6Uzg) defines the datagram types and how they should be built. For example, on pg. 46 of the manual there is a datagram that sends a specific instruction to the unit; it looks like this: 02 00 02 00 30 01 31 18.
However, when I use PuTTY to send the string 02 00 02 00 30 01 31 18, the LMS does not respond (which it should). I believe it does not respond either because the datagram is missing some serial header data or because I am not representing the hex values correctly (I tried to represent bytes such as 00 using 0x00 and 00h but had no success). Can you please help me formulate a valid serial message using the manual? I have been at this for a very long time and I am having a really hard time understanding how to convert the information in the manual into a valid datagram.
Please let me know if I can provide any more info. Thanks in advance.
I am not representing the hex values correctly (I tried to represent bytes such as 00 using 0x00 and 00h but had no success).
The Ctrl key on terminal/PC keyboards can be used to generate ASCII control characters (i.e. the unprintable characters with byte values of 0x00 through 0x1F).
Just like the Shift key generates the shifted or uppercase character of the key (instead of its unshifted or lower-case character), the Ctrl key (with an alphabetic or a few other keys) can generate an ASCII control character.
The typical USA PC keyboard can generate an ASCII 'NUL' character by typing ctrl-@, that is, by holding down the Ctrl and Shift keys and typing 2 (since the '@' character is the shifted character of the 2 key on USA PC keyboards).
In similar fashion for 'SOH' or 0x01 type ctrl-A (i.e. CTRL+A keys, the Shift is not necessary), for 'STX' or 0x02 type ctrl-B, et cetera.
For 'SUB' or 0x1A type ctrl-Z.
For 'ESC' or 0x1B type the Esc key.
For 'FS' or 0x1C type ctrl-\ (or CTRL+\).
For 'GS' or 0x1D type ctrl-] (or CTRL+]).
For 'RS' or 0x1E type ctrl-^ (or CTRL+Shift+6).
For 'US' or 0x1F type ctrl-_ (or CTRL+Shift+-).
Note that a few oft-used ASCII control codes have dedicated keys, e.g.
'HT' has the Tab key for 0x09,
'BS' has the Backspace key for 0x08,
'LF' has the Enter key (in Linux) for 0x0A, and
'ESC' has the Esc key for 0x1B.
When you don't know how to generate the ASCII control characters from the keyboard, you could fall back on creating the message in a file using a hex editor (not a text editor), and then send the file.
Actually the binary file could be the most reliable method of hand-generation of binary messages. Hand-typing of control codes could fail when a code is intercepted by the shell or application program as a special directive to it (e.g. ctrl-C or ctrl-Z to abort the program) rather than treating it as data.
Escaping the input data is one method that might be available to avoid this.
Phone modems have managed to avoid this issue when in transparent (aka data) mode, by requiring a time guard (i.e. specific idle times) to separate and differentiate commands from data.
The way to get this done is to:
(1) download HexEdit software, and create a file containing the raw HEX byte values (not ASCII representations of them, where for example the character 2 ends up being transmitted as 0x32), and
(2) use Tera Term software to then send the file over the serial line.
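Equivalently, a few lines of code can produce the same binary file for Tera Term to send. This is only a sketch; the byte meanings in the comment (STX, address, length, command, data, CRC) follow my reading of the telegram framing in the manual, and telegram.bin is an arbitrary file name.

#include <cstdio>

int main()
{
    // 02 00 02 00 30 01 31 18: STX, address, 16-bit length, command byte + data, CRC
    const unsigned char telegram[] = { 0x02, 0x00, 0x02, 0x00, 0x30, 0x01, 0x31, 0x18 };

    std::FILE *f = std::fopen("telegram.bin", "wb");   // binary mode matters
    if (!f)
        return 1;
    std::fwrite(telegram, 1, sizeof(telegram), f);
    std::fclose(f);
    return 0;
}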
I am trying to read a multipart SMS in PDU mode.
The message was in 3 parts.
Below are the PDUs I got by using the commands AT+CMGF=0 and AT+CMGL=4:
Part1:07914150740250F7440B917130263521F600005140723295528AA005C01B5B0301B2E53C194D46A3C96834196D169BD16833DA8C368BCD62B3D82C368BCD62B3586C169BC566B1596C169BC562B3D82C368BCD62B3D82C368BCD62B3DBEC769BDD66B7D90D328B41663768DC0699DD66B7D96D769BDD66B7D96D76BBCD6EB3DBEC36BBCD6EB3DBEC36BBCD6EF7D96D769BDD67F7D96D769FDD67B7FBEC3EBBCFEEB3DB7D769FDD
Part2:07912160130320F8440B917130263521F600005140723295528AA005C01B5B0302CE6EB3DBEC56AB41D9729E8C26A3D164349A8C368BCD68B4196D469BC566B1596C169BC566B1592C368BCD62B3D82C368BCD62B1596C169BC566B1596C169BC566B1D96D76BBCD6EB3DBEC0699C520B31B346E83CC6EB3DBEC36BBCD6EB3DBEC36BBDD66B7D96D769BDD66B7D96D769BDD66B7FBEC36BBCDEEB3FBEC36BBCFEEB3DB7D769FDD
Part3:07914140540500F9440B917130263521F600005140723295528A1805C01B5B0303CEEEB3DB7D769FDD67B7D96D76ABD5
* According to my understanding, in order to identify whether it is a multipart message I have to check whether TP-UDHI is set, which is the sixth bit of the first octet. In this case it does not appear to be set.
* The User Data Header in each PDU is the part that reads C01B5B03 followed by the part number.
* I thought that in order to indicate this is a concatenated message, the UDH had to start with 00 instead of C0?
Please correct me if I have got this wrong.
Question 1: Why is TP-UDHI not set in this case? The first octet is 07.
Question 2: Why is the first octet in the UDH C0 instead of 00?
OK, answering question 1: you missed spotting that right at the start of the PDU there is the SMSC address, so in fact your PDU header octet is 44. This indicates that there is a UDH present in the PDU.
This is the SMSC address:
07914150740250F7
Directly thereafter is the PDU header 44.
Regarding question 2, things get a little more complicated. Right now I am not spotting any indication of a concatenated SMS in the UDH. Don't forget that the UDH is not just for concatenated messages; it can contain lots of other information based on the 3GPP/ETSI spec 03.40.
After a closer look, it looks like either the SMS was strangely encoded on the sending side or the mobile operator messed around with the UDH. You were correct in isolating the UDH as:
C01B5B0302
Based upon the preceding byte, the UDH length should be 5 bytes. But the first IEI (Information Element Identifier) is misleading: C0 defines the IEI as an SC-specific IEI and not a concatenation IEI. Then the next byte, 1B, says the IEI data should be 27 bytes long, which contradicts the UDH length of 5.
So from my perspective something mangled the UDH (which can happen with mobile operators, SMS aggregators, or even bad encoders).
If you play around with what you have, removing C01B and replacing it with 0003 to make it an 8-bit concatenation reference:
00035B0301
00035B0302
00035B0303
Then you would end up with a UDH telling you that the message reference (MR) is 91 (0x5B) and the parts are correctly specified.
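For illustration, here is a small sketch of how such a corrected UDH could be decoded; the IE layout (IEI 0x00, length 3, then reference, total parts, sequence number) is the 8-bit concatenation element from 3GPP TS 03.40/23.040, and ParseConcat8 is just a hypothetical helper name.

#include <cstddef>
#include <cstdint>
#include <cstdio>

struct ConcatInfo { std::uint8_t ref, total, seq; };

// udh points just past the UDH length byte; udhLen is that length (5 here).
bool ParseConcat8(const std::uint8_t *udh, std::size_t udhLen, ConcatInfo *out)
{
    std::size_t i = 0;
    while (i + 2 <= udhLen) {
        std::uint8_t iei = udh[i], iedl = udh[i + 1];
        if (iei == 0x00 && iedl == 0x03 && i + 5 <= udhLen) {   // 8-bit concatenation IE
            out->ref = udh[i + 2]; out->total = udh[i + 3]; out->seq = udh[i + 4];
            return true;
        }
        i += 2 + iedl;   // skip to the next information element
    }
    return false;
}

int main()
{
    const std::uint8_t udh[] = { 0x00, 0x03, 0x5B, 0x03, 0x02 };   // corrected part 2 UDH
    ConcatInfo c;
    if (ParseConcat8(udh, sizeof(udh), &c))
        std::printf("reference=%u, part %u of %u\n", c.ref, c.seq, c.total);
    return 0;
}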
Starting from the videoprocessing project, I'm trying to build a DirectShow filter that connects to an RTSP server and acts as a source filter for the Windows MPEG-1 decoder (I cannot use other formats or decoders, since the target OS is WinCE).
My filter declares the media type as:
major type: MEDIATYPE_Video
subtype: MEDIASUBTYPE_MPEG1Payload
format type: FORMAT_MPEGVideo
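For reference only, this is roughly how such a media type might be declared with the DirectShow base classes; BuildMpeg1MediaType is a hypothetical helper, and seqHdr/seqHdrLen stand for sequence header bytes taken from the stream:

#include <streams.h>   // DirectShow base classes: CMediaType, MPEG1VIDEOINFO, ...

HRESULT BuildMpeg1MediaType(CMediaType *pmt, const BYTE *seqHdr, ULONG seqHdrLen)
{
    pmt->SetType(&MEDIATYPE_Video);
    pmt->SetSubtype(&MEDIASUBTYPE_MPEG1Payload);
    pmt->SetFormatType(&FORMAT_MPEGVideo);
    pmt->SetTemporalCompression(TRUE);

    // FORMAT_MPEGVideo implies an MPEG1VIDEOINFO block, which ends with a
    // variable-length copy of the MPEG-1 sequence header.
    ULONG cb = sizeof(MPEG1VIDEOINFO) + seqHdrLen;
    MPEG1VIDEOINFO *pvi = (MPEG1VIDEOINFO *)pmt->AllocFormatBuffer(cb);
    if (pvi == NULL)
        return E_OUTOFMEMORY;
    ZeroMemory(pvi, cb);
    pvi->cbSequenceHeader = seqHdrLen;
    CopyMemory(pvi->bSequenceHeader, seqHdr, seqHdrLen);
    // pvi->hdr.bmiHeader (width, height, etc.) would also need to be filled in
    // from the values parsed out of the sequence header.
    return S_OK;
}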
Currently, when I connect my rtspSource filter to the CLSID_CMpegVideoCodec decoder, the rendered video is black.
However, if I replace the Windows decoder with the CLSID_LAV_VideoDecoderFilter provided by the LAVFilters project, the video is rendered correctly.
After reading "How to process raw UDP packets so that they can be decoded by a decoder filter in a directshow source filter", which deals with the same issue for H.264 and MPEG-4, I also read RFC 2250 and depacketized the data, but the result is the same.
Currently I'm sending the decoder packets that start with the picture start code
000001 00 (Picture)
or complete packets that start with
000001 B3 (Sequence Header)
and which also contain within them the start codes
000001 B2 (User Data)
000001 B8 (Group of Pictures)
000001 00 (Picture)
000001 01 (Slice)
Still referring to the previous link, which deals with the H.264 and MPEG-4 cases and talks about "Process data for decoder", I am not clear on exactly what is expected by the CLSID_CMpegVideoCodec filter after it has agreed on the MEDIASUBTYPE_MPEG1Payload format type.
However, when I add the three bytes 000001 or the four bytes 00000100 at the beginning of each sample, the video is rendered with images that update approximately every 2 seconds, losing the intermediate images.
I performed the tests both by setting the IMediaSample times with
SetTime(NULL, NULL)
and by setting
SetTime(start, start+1)
with:
start = (rtp_timestamp - rtp_timestamp_first_packet) + 300ms
following the answer to "Writing custom DirectShow RTSP/RTP Source push filter - timestamping data coming from live sources", but the results do not change.
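For what it's worth, this is the kind of conversion I have in mind for that formula, assuming the usual 90 kHz RTP clock for MPEG video and REFERENCE_TIME in 100 ns units; rtpTimestamp, firstRtpTimestamp and StampSample are placeholder names for values the filter already tracks:

#include <dshow.h>   // IMediaSample, REFERENCE_TIME

HRESULT StampSample(IMediaSample *pSample, DWORD rtpTimestamp, DWORD firstRtpTimestamp)
{
    // 90 kHz RTP ticks -> 100 ns units, plus the ~300 ms offset mentioned above.
    REFERENCE_TIME start =
        (REFERENCE_TIME)(DWORD)(rtpTimestamp - firstRtpTimestamp) * 10000000 / 90000
        + 300 * 10000;                   // 300 ms expressed in 100 ns units
    REFERENCE_TIME stop = start + 1;     // "start + 1", as described above

    return pSample->SetTime(&start, &stop);
}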
Any suggestions would be greatly appreciated.
Thanks in advance.