tcp message end best practices

What's the best practice for ending a TCP message? I currently have my own custom string of characters, but I am paranoid that, by sheer blind luck, something transmitted on the wire could itself contain my end-of-message character(s)/string.
SMTP servers use a line break (the Enter key) as a terminator, but won't that break if a piece of text containing line breaks is transmitted on the wire?
I would like to get some ideas on this.
Thanks

If you are using a custom string as the end indicator, you must ensure that this indicator cannot appear in the message body, by applying some string-escaping technique.
For example, if you are using "\r\n" as the end indicator, you must convert any "\r\n" inside the message body into another form.
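A minimal C# sketch of that escaping idea (the particular scheme, using a backslash as the escape character, is my own illustration, not part of the answer): every CR, LF and backslash inside the body is escaped before sending, so the bare "\r\n" terminator can never occur inside the payload. The demo prints True.

using System;
using System.Text;

// Demo: after escaping, the body contains no CR/LF, so "\r\n" can safely terminate it.
string body = "line one\r\nline two, ending with a literal backslash \\";
string onWire = Escape(body) + "\r\n";
Console.WriteLine(Unescape(onWire.Substring(0, onWire.Length - 2)) == body);  // True

static string Escape(string s) =>
    s.Replace("\\", "\\\\")   // escape the escape character first
     .Replace("\r", "\\r")    // CR -> backslash + 'r'
     .Replace("\n", "\\n");   // LF -> backslash + 'n'

static string Unescape(string s)
{
    var sb = new StringBuilder(s.Length);
    for (int i = 0; i < s.Length; i++)
    {
        if (s[i] == '\\' && i + 1 < s.Length)
        {
            char next = s[++i];
            sb.Append(next == 'r' ? '\r' : next == 'n' ? '\n' : next);
        }
        else
        {
            sb.Append(s[i]);
        }
    }
    return sb.ToString();
}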

Better to use the length of the message and send it at the beginning (see the sketch after the encoding rules below).
For an example of a text protocol that does this, see here; it solves exactly that part of your problem.
An encoded string is a string with the following encoding rules.
- Characters in the range [0x10 - 0xff] are encoded as themselves.
- A character in the range [0x00 - 0x0f] is prefixed by 0x01 and
shifted by 0x40. For example, 0x03 is encoded as 0x01 0x43.
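A rough C# sketch of the length-prefix idea (the names WriteFrame/ReadFrame are just illustrative, and BinaryPrimitives assumes .NET Core 2.1 or later): each message is sent as a 4-byte big-endian length followed by exactly that many payload bytes, so no terminator is needed and the payload may contain any byte sequence at all.

using System;
using System.Buffers.Binary;
using System.IO;
using System.Text;

// Quick demo against a MemoryStream; a NetworkStream obtained from a TcpClient
// works exactly the same way.
var demo = new MemoryStream();
WriteFrame(demo, Encoding.UTF8.GetBytes("any payload, \r\n included"));
demo.Position = 0;
Console.WriteLine(Encoding.UTF8.GetString(ReadFrame(demo)));

static void WriteFrame(Stream stream, byte[] payload)
{
    byte[] header = new byte[4];
    BinaryPrimitives.WriteInt32BigEndian(header, payload.Length);  // length in network byte order
    stream.Write(header, 0, header.Length);
    stream.Write(payload, 0, payload.Length);
}

static byte[] ReadFrame(Stream stream)
{
    int length = BinaryPrimitives.ReadInt32BigEndian(ReadExactly(stream, 4));
    return ReadExactly(stream, length);
}

static byte[] ReadExactly(Stream stream, int count)
{
    // Stream.Read may return fewer bytes than requested, so loop until the
    // whole header or frame has arrived.
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0) throw new EndOfStreamException("peer closed the connection mid-frame");
        offset += read;
    }
    return buffer;
}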

Related

How to use HMAC-SHA256 Authorization header with Unicode bytes instead of UTF-8?

I'm creating HMAC-SHA256 Authorization header for my rest request.
My hunch is that internally Paw is using UTF-8 (or some other non-Unicode) encoding to calculate the checksum. My server side API uses Unicode to calculate the same thing for comparison but with the same inputs I receive different outputs on each end :(
Is there a way to configure Paw to use Unicode?
For Unicode inputs to HMAC-SHA256 you can use the Escape Sequence dynamic value. Choose the `Custom` escape sequence and type your sequence in the input field (\u + code for Unicode characters and \x + code for hex bytes).
If this doesn't work for you, don't hesitate to send us a support e-mail at support#luckymarmot.com
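To illustrate why the two sides disagree, here is a standalone C# sketch (not Paw-specific; the key and message are made up for the example): HMAC-SHA256 operates on bytes, so hashing the same text as UTF-8 versus UTF-16 (what .NET calls Encoding.Unicode) yields different signatures. Both ends have to agree on the byte encoding before hashing.

using System;
using System.Security.Cryptography;
using System.Text;

byte[] key = Encoding.ASCII.GetBytes("example-secret-key");   // made-up key
string message = "GET /orders?city=Zürich";                   // any text, non-ASCII or not

using var hmac = new HMACSHA256(key);
byte[] macUtf8  = hmac.ComputeHash(Encoding.UTF8.GetBytes(message));     // bytes as UTF-8
byte[] macUtf16 = hmac.ComputeHash(Encoding.Unicode.GetBytes(message));  // bytes as UTF-16

// The two hex strings differ, which is exactly the mismatch described in the question.
Console.WriteLine(BitConverter.ToString(macUtf8).Replace("-", ""));
Console.WriteLine(BitConverter.ToString(macUtf16).Replace("-", ""));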

asp.Net + encrypted QueryString requested not reading '+' sign

I have an encrypted query string passed from another page. It reads something like "/se73j+sef", but after it is received the '+' sign gets dropped and it becomes "/se73j sef". Is this normal? Please kindly advise. Thanks.
Is this normal?
Yes, perfectly normal. + is a special character in a URL: it means space (the 0x20 ASCII character). If you want to represent a literal + sign you have to URL-encode it:
/se73j%2Bsef
To URL-encode a string in .NET you could use the UrlEncode method. Or, depending on how you are building the URL, there are certainly better ways.
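A small C# sketch of that round trip (assumes a reference to System.Web; the value is the one from the question): encode before the value goes into the URL, and note that the receiving page's query string collection has already been decoded for you.

using System;
using System.Web;   // HttpUtility lives here; add a reference to System.Web

string raw = "/se73j+sef";
string encoded = HttpUtility.UrlEncode(raw);
Console.WriteLine(encoded);   // "%2fse73j%2bsef" - the '+' is now safe inside a query string

// On the receiving page, Request.QueryString["yourKey"] ("yourKey" being whatever
// parameter name you use) has already been decoded back to "/se73j+sef",
// so no extra UrlDecode call is needed there.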

Question marks in email for characters like non-breaking space. It only occurs on Unix and not on Windows

I am facing a weird problem related to content type/encoding.
Here is my Java code snippet below. This code works perfectly fine on a Windows machine, where the application server runs on Windows and the SMTP server for sending emails is also Windows localhost. When I deploy the same code on a Unix server, the email sent for exactly the same content contains question marks (???) in place of special characters like the non-breaking space.
I did a lot of googling but did not find a solution. How can I fix this problem? The charsets I tried were ISO-8859-1, UTF-8 and Windows-1252. Nothing helps.
MimeMessage message = new MimeMessage(session);
.............
Multipart mp = new MimeMultipart();
MimeBodyPart messageBody = new MimeBodyPart();
messageBody.setContent(mailMessage, "text/html;charset=Windows-1252");
messageBody.setHeader("Content-Type", "text/html;charset=Windows-1252");
// Add the body part to the multipart
mp.addBodyPart(messageBody);
message.setContent(mp);
// Send message
Transport.send(message);
Are you using the same mail server in both cases? And the same client program to view the message?
For debugging, just before the Transport.send call, add:
message.writeTo(new FileOutputStream("msg.txt"));
and then examine the msg.txt file to see if the characters are correctly encoded.
How do you create the text in the mailMessage String? If you don't create the string with the correct Unicode characters, no charset is going to make it right.
Also, you don't ever need to set the Content-Type header explicitly, remove that line.
And, instead of setContent, use:
messageBody.setText(mailMessage, "utf-8", "html");
That makes sure the Content-Type header is set correctly and the parameters (e.g., charset) are quoted correctly.
Ultimately, I had to go with a crude way of doing it. I replaced such characters with space.
mailMessage = mailMessage.replaceAll("[^\\x20-\\x7e]", " ");
Now, all the special characters such as the non-breaking space, or any other character outside the normal printable ASCII range, are replaced with a space. The email in this case was only meant to carry plain text anyway.

dealing with an encrypted HttpUtility.UrlEncode parameter

I have a problem dealing with encrypted URL parameters when applying HttpUtility.UrlEncode or UrlDecode.
for a given url string: ?fid=7kqguwhYMNw=&uid=YCRSGG71+58=
the PLUS sign which is part of the encrypted data of uid is stripped out and replaced with a space so my attempts to decrypt it fail.
OK, so I know that + is reserved shorthand for a space in query strings (RFC 1630), but since I don't have much control over the value that comes back from encryption, how can I get around this?
EDIT:
OK, so a good point was brought up. Ignore the UrlEncode/UrlDecode part of the question. Request.QueryString["uid"] will still have the plus sign stripped out of it when I pass it to my decryption method.
I would suggest adding code to remove the = characters, replace + with -, and replace / with .
s = s.Replace("=", "").Replace("+", "-").Replace("/", ".")
If you need to process the resulting string, you can do the reverse:
s = s.Replace(".", "/").Replace("-", "+")
(there is no reason to put back the = characters... they are merely padding).
That way you don't need to worry about URL encoding and decoding, and it avoids unnecessary expansion of your string. It also looks more professional to users if they end up seeing the URL... percent signs in a URL are ugly and almost always unnecessary... they scream "amateur" whenever I see them.
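A C# sketch of that replacement scheme (the helper names ToUrlSafe/FromUrlSafe are mine, and I re-add the '=' padding before decoding because Convert.FromBase64String insists on it):

using System;

string token = ToUrlSafe("YCRSGG71+58=");                           // "YCRSGG71-58" - safe to put in a URL as-is
byte[] cipherBytes = Convert.FromBase64String(FromUrlSafe(token));  // back to the original bytes for decryption

static string ToUrlSafe(string base64) =>
    base64.TrimEnd('=')            // drop the padding; it is implied by the length
          .Replace('+', '-')
          .Replace('/', '.');

static string FromUrlSafe(string urlSafe)
{
    string s = urlSafe.Replace('.', '/').Replace('-', '+');
    return s.PadRight(s.Length + (4 - s.Length % 4) % 4, '=');  // restore '=' padding for the decoder
}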
The Base-64 encoded value needs to be URL-encoded before it is put in the URL. If I do HttpUtility.UrlEncode("YCRSGG71+58=") then I get YCRSGG71%2b58%3d - which has no plus signs, and can be correctly decoded.
In other words, the code that is putting a base-64 value on the URL without encoding it first is wrong. If you control that code, you should change it. If you don't control that code, then don't try to decode something that wasn't url-encoded in the first place.
As a side remark, you should normally use HttpUtility.UrlEncode and HttpUtility.UrlDecode for this kind of work. However, even these won't help you here, since the URL is malformed anyway.
So, don't use anything at all! Since it's not encoded, why decode it?

when assigning location.href, please explain url encoding (in asp.net and firefox)

In some javascript, I have:
var url = "find.aspx?" + "location=" + encodeURIComponent( address );
alert( url );
location.href = url;
where the value of address is the string "Seattle, WA".
In the alert I see
find.aspx?location=Seattle%2C%20WA
as I expect.
But on the server side, when I look at Request.Url, the relevant substring I see is
find.aspx?location=Seattle, WA
And in the Firefox url window I see
find.aspx?location=Seattle%2C WA
So I'm getting three different representations whereas I would expect that in all three places I should see what I see in the alert. My expectation is that the url I assign to location.href should show up as-is in the browser url window, and should be passed as-is to the server in Request.Url (and I would need to decode the values on the server before using them). What's happening?
Firefox converts certain encoded characters into their literal forms as a way to be friendly to users. It will also convert spaces typed into the address bar into %20 for the server.
Update: The reason Firefox doesn't display the comma unencoded is because commas are allowed in URLs, but spaces are not, so it knows that a space is going to be unambiguously interpreted, whereas the pre-encoded comma is different from a non-encoded comma to some servers. see: Can I use commas in a URL?
ASP.NET is probably trying to help you out by automatically decoding the string for you.
Update: It looks like ASP.NET decodes Request.Url for you by default, as mentioned here: QueryString malformed after URLDecode. They also mention that you can use HttpRequest.Url.Query to access the un-decoded version.
The alert is the only thing not doing any "magic" for you.
For the alert, you are doing the encoding yourself. It would probably look the same as on the server side if you removed the encodeURIComponent call.
On the server side, ASP.NET will always show you the decoded form. This makes it easier to map the request directly to files whose names also contain characters that had to be encoded.
Note that in URL encoding you can replace any letter with its percent-encoded UTF-8 representation and it will still be the same URL. I.e., type the following in the browser window and it will still work: %66%69%6E%64.aspx?location=Seattle%2C%20WA. To encode only the necessary characters, use UrlEncode on the server side if you create a link yourself.
URL encoding can become fairly tricky. You asked for an explanation: to know the correct escape for a certain character, you need to know how that character is represented in UTF-8. The hexadecimal values of the UTF-8 bytes then become the %XX%YY values of your character. Sometimes it is a single %XX, but a character can take up to four bytes (most Chinese characters take three, for instance).
URL encoding works one way only: never double-encode or double-decode; this is prohibited by the specification. Also, because you can encode any character, it is not always possible (as you found out) to round-trip encoding and decoding exactly. If you decode and re-encode, it is quite possible that the resulting string is different, but syntactically equivalent.
In HTML, URL encoding sometimes gets mixed up with HTML encoding. I.e., the ampersand is valid in a URL, but not in HTML: find.aspx?city=A&name=B must be written as find.aspx?city=A&amp;name=B inside an HTML document. However, browsers are lenient and will accept incorrectly HTML-encoded strings.
Finally, a note on the browser: if you type a space in a link, even inside an <a> tag, the browser will escape the space (or other character) for you. Likewise, it will nowadays show the odd characters (é, ï etc.) in the address bar, but when it sends the request over HTTP, the browser will correctly encode them for you.
Update: about answering your question of needing a "definitive" reference or proof.
While I couldn't find any on the internet, I decided to look for it myself using Reflector. Going through the methods that set, for instance, the HttpRequest.QueryString, you quickly encounter the private method HttpRequest.FillInQueryStringCollection which then calls HttpValueCollection.FillfromEncodedBytes. Somewhat near the end of that method, HttpUtility.UrlDecode is called for the values. Conclusion: do not call it yourself, to prevent double decoding.
You can see this for yourself when you download Reflector and disassemble the .NET libs of System.Web.
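A small sketch of what that means in practice, as a hypothetical Find.aspx code-behind written here purely for illustration: read the already-decoded value from Request.QueryString, and use Request.Url.Query only when you want the raw form.

using System;
using System.Web.UI;

public partial class Find : Page   // hypothetical code-behind for find.aspx
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // For a request to find.aspx?location=Seattle%2C%20WA :
        string decoded = Request.QueryString["location"];  // "Seattle, WA" - already decoded by ASP.NET
        string raw     = Request.Url.Query;                // "?location=Seattle%2C%20WA" - the raw form

        // Calling HttpUtility.UrlDecode on the QueryString value again would be double
        // decoding and can mangle values that legitimately contain '%' or '+'.
    }
}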
For your example you can change this line
var url = "find.aspx?" + "location=" + encodeURIComponent( address );
to
var url = "find.aspx?" + "location=" + address;
and see the address as it is. But if the address variable contains any '&' character, your query string will be corrupted. That is why you use encodeURIComponent: to encode those characters for the URL.
On the server side, all these encoded strings are decoded back. This means encodeURIComponent simply ensures that the address variable (whether it contains an '&' character or not) reaches the server side correctly.
