Binary Character in QR code

It seems QR codes are able to store binary characters (8-bit bytes), but I can't find any way to do it.
Does anyone have any idea how?

Just use the QR code's byte mode to encode those bytes; that's all. Note that general-purpose readers will want to interpret the bytes as text, so a stock reader won't understand your binary data as binary data (an encoded image, say). But maybe you have a custom reader that does something special with it.
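If it helps to see it concretely, here is a minimal sketch using the Python qrcode library (my choice of library, not something the question specifies); passing a bytes object makes the encoder fall back to 8-bit byte mode, since the payload does not fit the numeric or alphanumeric character sets:

    import qrcode

    # Hypothetical binary payload -- any sequence of 8-bit bytes will do.
    payload = bytes([0x00, 0x10, 0x7F, 0x80, 0xFE, 0xFF])

    qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_M)
    qr.add_data(payload)   # bytes input ends up in 8-bit byte mode
    qr.make(fit=True)      # choose the smallest symbol version that fits
    qr.make_image().save("binary_payload.png")

A stock scanner will still hand those bytes back as whatever text encoding it guesses, so a custom reader remains necessary if the payload is genuinely binary.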

Related

What does this response mean?

This is a server's response for a video file. When I view the preview in Chrome (image), it shows up as strange characters (I'm not sure what kind of characters those are; if someone knows, please tell me what they are called). The same video response in Firefox (image) appears as base64. So is the video transferred to the browser as a base64 string even when the content type is set to video/mp4 (image)? I notice the same thing when I download a PDF file. Please explain. Thanks.
You're looking at binary data, not text, so it doesn't render as ASCII characters that make any sense.
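One way to see this for yourself is to fetch the URL and inspect the raw bytes rather than a rendered preview. A rough sketch with Python's requests library (the URL is a placeholder for your video endpoint):

    import requests

    # Placeholder URL -- substitute the endpoint you are inspecting.
    resp = requests.get("https://example.com/video.mp4", stream=True)
    head = resp.raw.read(16)

    print(resp.headers.get("Content-Type"))   # e.g. video/mp4
    print(head.hex())                         # raw bytes, not base64 text
    print(b"ftyp" in head)                    # MP4 files carry an 'ftyp' box near the start

Chrome's preview simply draws arbitrary glyphs for those bytes; if Firefox shows something base64-like, that is just how it chooses to present the same data. A video/mp4 response travels over the wire as plain binary unless something explicitly base64-encodes it.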

Raw opcode from xed decoded data structures?

I am new to Intel Pintools, and am trying to write a pintool that stops at a given instruction type and then looks for specific instructions following it in the section. I've got the xed decoding working, but I am stuck at the part where I get the actual hex opcode. How can I do that?
I would love to use INS_Opcode() -- but these are instructions that haven't been executed yet (and may never be), so they aren't INS objects. There's xed_operand_values_get_iclass(), but that returns an iclass enum, not the actual primary opcode. I see from the xed header files that there are some raw buffers associated with the various xed structures, but it is not at all clear to me how I can use that to get the information I need. Can anyone enlighten me?
Apparently I missed it the first time I looked at the header files, but there's xed3_operand_get_nominal_opcode(), which does exactly what I need it to. Related: grep is a wonderful thing.

Which is broken: the DICOM or the converter?

I'm trying to convert a 4D DICOM image (x,y,z,time) to a different file format. Something goes wrong, because the output image has lost the time dimension.
I'm trying to decide whether:
the DICOM series is broken -- it's possible that a 3rd party, who anonymized data, removed critical information from the header; or
the conversion code is incomplete -- it simply can't handle this flavour of DICOM
The answer to this will determine whether I have to fix the DICOM, or fix the converter.
I've tried diving into the DICOM standard to understand what specific header values mean, but I don't find this document helpful; it gives a mere word or two for each header field. I see fields in my data that look suspicious, but I don't know whether they're actually wrong or whether I just don't understand what they're supposed to be telling me.
I can think of several ways to answer my problem:
Are there any tools out there that can confidently classify a DICOM series as either valid or invalid?
Is there a document which describes precisely what each DICOM header value is supposed to contain?
Is there a better approach to figuring out which is broken -- the image, or the converter?
You are not looking at the right document; you should be looking at PS 3.3 (current edition). For example:
A.4.3 MR Image IOD Module Table
Or, as someone mentioned in the comments, use dciodvfy from the dicom3tools package.
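If you want to poke at the suspicious fields yourself before (or alongside) dciodvfy, a small pydicom sketch can show whether the anonymizer stripped the temporal information. The attribute keywords below are standard DICOM temporal attributes, but which of them your data actually uses depends on the IOD, so treat this as a starting point rather than a definitive check:

    import pydicom

    # Placeholder path -- point it at one slice of the 4D series.
    ds = pydicom.dcmread("slice_0001.dcm")

    # Attributes that commonly carry the time dimension in a classic MR series.
    for keyword in ("NumberOfTemporalPositions",
                    "TemporalPositionIdentifier",
                    "AcquisitionTime",
                    "TriggerTime"):
        print(keyword, "=", ds.get(keyword, "<missing>"))

If those come back missing across the whole series, the anonymization most likely discarded the fourth dimension and the converter has nothing to reconstruct from; if they are present and dciodvfy reports the series as conformant, the converter is the more likely culprit.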

How to detect wrong encoding declaration?

I am building an ASP.NET web service that loads other web pages and then hands them to clients.
I have been handling character encodings reasonably well: I read the meta tag from the HTML and then use that charset to read the file.
Nevertheless, some page authors just don't understand character sets. They declare a specific encoding, e.g. "gb2312", when the page is actually plain UTF-8. When I use gb2312 to decode the text, everything turns into a mess.
How can I detect whether the text has been decoded properly? I loaded the same page into IE, which correctly used UTF-8 to decode it. How does it achieve that?
Based on the BOM, you can tell which encoding is used.
BOM and encoding
If you want to detect the character set, you could use the C# port of Mozilla's character set detector.
CharDetSharp
If you want to be extra sure that you are using the correct one, you could look for special characters that are not supposed to be there. Legitimate text is very unlikely to include sequences like "óké", so if you find such characters you can try a different encoding/character set to process the file.
That said, it is really hard to make your application completely "fool-proof".
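Combining those suggestions, here is a rough Python sketch of the idea (the helper name is mine; in C# the detector step would map to CharDetSharp): check for a BOM first, then attempt a strict UTF-8 decode, which almost never succeeds by accident on text that is really gb2312 or another legacy encoding, and only then fall back to the declared charset or a detector:

    import codecs
    import chardet  # optional detector; the C# equivalent is CharDetSharp

    def decode_page(raw: bytes, declared: str) -> str:
        if raw.startswith(codecs.BOM_UTF8):
            return raw.decode("utf-8-sig")
        try:
            # Strict UTF-8: multi-byte sequences rarely validate by chance.
            return raw.decode("utf-8")
        except UnicodeDecodeError:
            pass
        try:
            return raw.decode(declared)
        except (UnicodeDecodeError, LookupError):
            guess = chardet.detect(raw)["encoding"] or "latin-1"
            return raw.decode(guess, errors="replace")

It still isn't fool-proof, but it catches the common case of UTF-8 pages mislabelled as gb2312.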

How to repair unicode letters?

Someone sent me letters like this in an email:
IVIÃ˜Râ€ â‚¬â„¢
when the correct text is supposed to be:
IVIØR†€™
How do I restore them to their original Portuguese? The text got altered after being passed through an HTTP GET request.
I probably won't be able to fix the site, but maybe I can create a repair tool for these badly encoded letters? Does anyone know of such a repair tool, or how to do it manually by hand? It seems like nothing is lost, just badly interpreted.
What happened here is that UTF-8 got misinterpreted as ISO-8859-1; and then other kinds of mangling (the bad ISO-8859-1 string being re-UTF-8-encoded; the non-breaking space character '\xA0' being converted to regular space '\x20') seem to have happened afterward, though those may just be a result of pasting it into Stack Overflow.
Due to the subsequent mangling, there's no really good way to completely undo it, but you can largely undo it by passing it through a not-very-strict UTF-8 interpreter. For example, if I save "IVIÃ˜Râ€ â‚¬â„¢" as a text file on my computer, using Notepad, with the "ANSI" (single-byte) encoding, and then I open it in Firefox and tell it to interpret it as UTF-8 (Firefox > Web Developer > Character Encoding > Unicode (UTF-8)), then it displays "IVIØR� €™". (The "�" is there because the '\xA0' was changed to '\x20', which broke the UTF-8 encoding.)
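The same round trip can be done programmatically: re-encode the mis-decoded string with the single-byte codec it was wrongly read as (cp1252, what Notepad calls "ANSI") and decode the resulting bytes as UTF-8, tolerating the spots ruined by the later mangling. A rough Python sketch of that idea; the ftfy library automates this kind of repair if you'd rather not hand-roll it:

    def unmangle(text: str) -> str:
        """Undo a UTF-8 string that was wrongly decoded as cp1252 / ISO-8859-1."""
        raw = text.encode("cp1252", errors="replace")
        # errors="replace" keeps going past characters damaged by later
        # mangling (e.g. the non-breaking space that became a plain space).
        return raw.decode("utf-8", errors="replace")

    print(unmangle("IVIÃ˜Râ€ â‚¬â„¢"))   # roughly "IVIØR� €™"

The exact output depends on how the decoder substitutes the broken sequences, which is why the result above still contains one replacement character.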
They're probably not broken. It's just a mismatch between the encoding they were sent in and the encoding you're viewing them in.
Figure out which encoding was originally used, use the same one to decode the text, and it should look like the original. As for writing a "fix-it" tool, you'd always need to know which encoding the text was originally created in, which can be complicated depending on the source and on whether you have access to that information.

Resources