When using Paw for Elasticsearch, the JSON editor is very hard to use to see the structure of requests. For some reason the text input uses four spaces for tabs, while most (all?) JSON documents use 2 (even Paw defaults to 2 spaces for JSON responses).
Any way to fix this? I know you can click "JSON" and then go back to Text again, but it's quite a tedious workflow.
Daily, we get a 15+ MB XML dump that contains a bunch of superfluous content that masks the needed details. It is no problem to extract the content from the XML tags; however, the blob has proven to be a problem.
I can extract the headers of the info that I am after using str_extract; however, I also need the character vector that follows. An example:
\n\nSubject:\n\tSecurity ID:\t\tS-1-5-21-1390067357-1580818891-1801674531-43388\n
Unfortunately, I cannot post a full copy of the blob, as it contains proprietary content. As you can see, the fields I need are all separated by embedded newline and tab characters, which I am trying to trigger on, but I cannot find a way to configure str_extract to capture the additional content.
Any insight you might have would be greatly appreciated.
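(For illustration, a minimal sketch of one way to capture the value after the run of tabs, shown here in C# regex syntax; the field name "Security ID" comes from the example above, everything else is an assumption about the layout. The same pattern drops into R via stringr::str_match, whose capture groups isolate the value that str_extract alone returns embedded in the full match.)

using System;
using System.Text.RegularExpressions;

class BlobExtract
{
    static void Main()
    {
        // Sample fragment with the layout shown in the question.
        string blob = "\n\nSubject:\n\tSecurity ID:\t\tS-1-5-21-1390067357-1580818891-1801674531-43388\n";

        // Match the header, skip the run of tabs, capture everything up to
        // the next tab or newline.
        // In R: str_match(blob, "Security ID:\\t+([^\\n\\t]+)")[, 2]
        Match m = Regex.Match(blob, @"Security ID:\t+([^\n\t]+)");
        if (m.Success)
            Console.WriteLine(m.Groups[1].Value); // S-1-5-21-1390067357-...
    }
}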
After trying Paw3 for a while, I found it's really amazing, but I have a few small questions about its operation:
How can I bulk edit HTTP headers instead of editing in a table one by one?
How can I fold some of the JSON text when the response is too long?
When I search in the response, is there any way to show the number of matches?
Many thanks.
Thanks for the kind words about Paw!
Unfortunately, none of the three things you've asked about is implemented yet.
How can I bulk edit HTTP headers instead of editing in a table one by one?
There's no ability to bulk-edit headers yet. Instead, we recommend using environment variables as reusable presets. We'd like to add a batch-edit feature later.
How can I fold some of the JSON text when the response is too long?
There's no way to fold the JSON text yet. You could use the regular JSON tree view if you need to fold items. Same here: it's something we'd like to add to the text view too.
When I search in the response, is there any way to show the number of matches?
It's not displayed. It would be easy to add, though. I've taken note :)
As a security measure we're using the Microsoft.Security.Application.Encoder.HtmlEncode method to encode and render values that have been stored in our database by various users.
We would like to allow the user to use single quotes, but they are being encoded as &#39;.
Does anyone know of a safe way to allow single quotes to render but ensure the rest of the input is encoded? Is it just a case of replacing after the encoding has taken place? This approach seems a bit hacky.
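(A minimal sketch of the replace-after-encoding approach mentioned above, assuming the AntiXSS Encoder class named in the question; the caveat is that the restored quote must never end up inside a single-quoted HTML attribute.)

using Microsoft.Security.Application;

static class SafeHtml
{
    // Encode everything with AntiXSS first, then restore only the
    // apostrophes. Safe for element content and double-quoted
    // attributes; NOT safe inside single-quoted attributes.
    public static string HtmlEncodeKeepApostrophes(string input)
    {
        return Encoder.HtmlEncode(input).Replace("&#39;", "'");
    }
}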
I got to the bottom of this: the web control was also encoding the input data, so HTML encoding was taking place twice.
I am building an ASP.NET web service that loads other web pages and then hands them to clients.
I have been doing quite well with character-encoding handling: reading the meta tag from the HTML, then using that charset to read the file.
But nevertheless, some less-educated users just don't understand character sets. They declare a specific encoding, e.g. "gb2312", when in fact the page is plain UTF-8. When I use gb2312 to decode the text, everything turns into a holy mess.
How can I detect whether the text is properly decoded? I loaded that page into IE, which correctly used UTF-8 to decode it. How does it achieve that?
If a BOM is present, it tells you what encoding is used.
BOM and encoding
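(A minimal sketch of BOM sniffing; note a BOM is optional, so its absence proves nothing. The byte signatures below are the standard ones, and the longer UTF-32 patterns must be checked before the UTF-16 prefixes they start with.)

static class BomSniffer
{
    // Returns the encoding name when the data starts with a known BOM,
    // or null when no BOM is present.
    public static string DetectByBom(byte[] b)
    {
        if (b.Length >= 4 && b[0] == 0xFF && b[1] == 0xFE && b[2] == 0x00 && b[3] == 0x00) return "utf-32le";
        if (b.Length >= 4 && b[0] == 0x00 && b[1] == 0x00 && b[2] == 0xFE && b[3] == 0xFF) return "utf-32be";
        if (b.Length >= 3 && b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF) return "utf-8";
        if (b.Length >= 2 && b[0] == 0xFF && b[1] == 0xFE) return "utf-16le";
        if (b.Length >= 2 && b[0] == 0xFE && b[1] == 0xFF) return "utf-16be";
        return null;
    }
}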
If you want to detect the character set, you could use the C# port of Mozilla's character set detector.
CharDetSharp
If you want to be extra sure that you are using the correct one, you could look for special characters that are not supposed to be there. Genuine text is not very likely to include a sequence like "óké", so you could scan for such characters and, when they appear, try a different encoding/character set to process your file.
Actually, it is really hard to make your application completely "fool-proof".
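(One concrete variant of this check, as a sketch: UTF-8 is strict enough that bytes which decode without error under a throwing decoder are very unlikely to be anything else, so you can ignore a dubious declared charset whenever this test passes.)

using System.Text;

static class Utf8Check
{
    // A strict decoder throws on any byte sequence that is not valid
    // UTF-8. If the bytes survive, treating the page as UTF-8 is a
    // much better bet than trusting a declared "gb2312".
    public static bool LooksLikeUtf8(byte[] bytes)
    {
        var strict = new UTF8Encoding(encoderShouldEmitUTF8Identifier: false,
                                      throwOnInvalidBytes: true);
        try { strict.GetString(bytes); return true; }
        catch (DecoderFallbackException) { return false; }
    }
}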
Inside an ASP.NET page, should I use
<html><title>My page’s title from México</title></html>
Or
<html><title>My page&#8217;s title from M&eacute;xico</title></html>
Both examples have the same output. Since ASP.NET encodes all my pages as UTF-8, there is no need to use HTML entities, is that right?
The ASCII table is a set of characters, arguably the first standardized character set from the days when you could only spare one byte per character: http://asciitable.com/. But I did some looking around at the extended ASCII character set, and it appears that the character you are referencing falls in that extended range. So there really isn't a problem whichever way you choose to display your title.
My revised answer is to go for the less expensive one in terms of space (i.e. the first one).
The second example ensures compatibility with ASCII-only transmission of HTML. So my vote is for the second example: you don't have to make sure the HTML is output and encoded as UTF-8 all the way through every proxy server and any other kind of caching and translation that might occur.
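(If you do go with literal characters, a sketch of how to pin the output encoding per page; the global equivalent is the <globalization responseEncoding="utf-8" /> element in web.config. MyPage here is a hypothetical page class.)

using System;
using System.Text;

public partial class MyPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Encode the body as UTF-8 and advertise the charset in the
        // Content-Type header, so literal é and ’ survive the trip.
        Response.ContentEncoding = Encoding.UTF8;
        Response.Charset = "utf-8";
    }
}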
You're correct; as long as there's Unicode at both ends of the pipe, it really doesn't matter. Personally, I would use the first, simply because it's more readable.
And, honestly, Unicode has been widespread for some time now. I personally believe it's time to leave anyone who can't handle UTF-8 behind.