I am extending the ByteArrayLengthHeaderSerializer to return the length from a TCP message header. The problem is that the very first message on the socket contains an 8-byte session id without a header. After that first message, all messages will have a header with a length (as well as some other fields). The first 4 bytes of the header will always be a constant value.
I'd like to read the first 4 bytes to determine if I have a message with a header or a raw sessionId.
If it is not a header, I would push back the 4 bytes and return a length of 8.
If it is a header (the first 4 bytes match the constant value), I would read the rest of the header, find the length field in the header, and return that value.
Also, this application is probably using NIO.
Not directly; I did something similar a few years ago by writing a custom deserializer that used a PushbackInputStream.
You should be able to write a custom deserializer: wrap the InputStream in a PushbackInputStream, peek at the start and push back if needed, then delegate to the ByteArrayLengthHeaderSerializer to decode the messages that have proper headers.
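A minimal sketch of that peek-and-push-back idea (the magic constant, the 4-byte big-endian length field, and the class name are assumptions for illustration; a real implementation would extend ByteArrayLengthHeaderSerializer and plug into Spring Integration's connection factory):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.PushbackInputStream;
import java.util.Arrays;

public class PeekingDeserializer {
    // Assumed constant marking the start of a proper header.
    static final byte[] HEADER_MAGIC = {0x01, 0x02, 0x03, 0x04};

    // Returns the payload length to read next: 8 for the raw session id,
    // or the length field parsed from the header otherwise.
    static int peekLength(PushbackInputStream in) throws IOException {
        byte[] first = new byte[4];
        int n = in.read(first);
        if (n < 4 || !Arrays.equals(first, HEADER_MAGIC)) {
            // Not a header: push the bytes back and treat them as a session id.
            in.unread(first, 0, Math.max(n, 0));
            return 8;
        }
        // It is a header: read the (assumed) 4-byte big-endian length field.
        byte[] len = new byte[4];
        if (in.read(len) < 4) throw new IOException("truncated header");
        return ((len[0] & 0xFF) << 24) | ((len[1] & 0xFF) << 16)
             | ((len[2] & 0xFF) << 8) | (len[3] & 0xFF);
    }

    public static void main(String[] args) throws IOException {
        // Raw session id (no header): first 4 bytes don't match the magic.
        PushbackInputStream raw = new PushbackInputStream(
            new ByteArrayInputStream(new byte[]{9, 9, 9, 9, 9, 9, 9, 9}), 4);
        System.out.println(peekLength(raw)); // 8

        // Header: magic bytes followed by a length of 16.
        PushbackInputStream framed = new PushbackInputStream(
            new ByteArrayInputStream(new byte[]{1, 2, 3, 4, 0, 0, 0, 16}), 4);
        System.out.println(peekLength(framed)); // 16
    }
}
```

Note the PushbackInputStream is constructed with a pushback buffer of at least 4 bytes so the peeked prefix can always be unread.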
When inbounding a trading partner's X12 file using BizTalk, I am required to store the ISA segment of the file in a DB table.
I am using the promoted property EDI.ISA_Segment to get the ISA string.
Recently, I noticed that for one trading partner there are extra characters in the ISA segment:
The ISA segment should look like this:
ISA*00* *00* *ZZ*######## *ZZ*##### *150105*0606*^*00501*000000936*1*P*>~
But the ISA_segment using the promoted property was:
ISA*00* *00* *ZZ*######## *ZZ*##### *150105*0606*^*00501*000000936*1*P*>~
G
There are extra <LF> + G characters in the ISA segment.
The trading partner does send the X12 file with a segment suffix, and that is also configured correctly in the BizTalk Agreement.
It looks like BizTalk appends 2 extra characters to the ISA_Segment after reaching the "~". I am wondering whether this is a bug or whether it is caused by some misconfiguration.
You need to look at the original EDI using a good text editor such as Notepad++ so you can see exactly what those characters are.
The X12 spec allows the use of CR and/or LF as part of the Segment Terminator so it might be a side effect of changing encodings from EDI to the database.
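Once you have the raw text, dumping each character's code point makes the suspect bytes unambiguous; a small sketch (the sample string is illustrative, showing a "~" segment terminator followed by a CR/LF suffix):

```java
public class CharDump {
    // Renders every character of the input as a U+XXXX code point,
    // so invisible control characters like CR and LF become visible.
    static String dump(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            sb.append(String.format("U+%04X ", (int) c));
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        // Example tail of an ISA segment with a CR/LF suffix after "~".
        System.out.println(dump("~\r\n")); // U+007E U+000D U+000A
    }
}
```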
Let's say I have a quoting site, that accepts new quotes and presents a random one every time you visit it.
I would have resources like:
| URL                   | Method | What it does         |
|-----------------------|--------|----------------------|
| / | GET | Shows a random quote |
| /quote | POST | Create a new quote |
| /quote/slug-of-quote  | GET    | Presents a quote     |
Resources could be presented on HTML, JSON, XML, image/png, etc.
Proper headers for cache control would be sent on relevant resources (probably just on /quote/another-quote URLs).
What is the best approach (for the index page to give a user a random quote) other than issuing a 307 to a real quote resource? Is that even friendly to HTTP/REST?
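One alternative to the redirect is to serve a random representation directly at / and mark that response as uncacheable, while each per-quote URL keeps normal cache headers. A sketch using the JDK's built-in HttpServer (the quotes, class name, and port are invented for illustration):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Random;

public class QuoteServer {
    static final List<String> QUOTES = List.of("Carpe diem.", "Cogito, ergo sum.");
    static final Random RND = new Random();

    static String pickQuote() {
        return QUOTES.get(RND.nextInt(QUOTES.size()));
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", ex -> {
            byte[] body = pickQuote().getBytes(StandardCharsets.UTF_8);
            ex.getResponseHeaders().add("Content-Type", "text/plain; charset=utf-8");
            // The random pick itself must never be cached.
            ex.getResponseHeaders().add("Cache-Control", "no-store");
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

The trade-off versus a 307/303 redirect: no extra round trip, but the random representation has no stable URL of its own to bookmark or cache.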
Suppose we have a string "http://www.example.com/feed". I am breaking this string up into three pieces for use with Apache's URI class:
1. "http"
2. "www.example.com"
3. "/feed"
Is there a proper term for this process of breaking down a URI into its component pieces?
A URI can be parsed into its component parts. The following are two examples from RFC 3986:
   foo://example.com:8042/over/there?name=ferret#nose
   \_/   \______________/\_________/ \_________/ \__/
    |           |            |            |        |
 scheme     authority       path        query   fragment
    |   _____________________|__
   / \ /                        \
   urn:example:animal:ferret:nose
A URI can be either a URL or a URN.
Split or parse? I think it's really semantics, and there's not an agreed-upon term.
I would always use the term parsing.
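For what it's worth, Java's standard library also calls this parsing: java.net.URI breaks a URI string into exactly the RFC 3986 components shown above.

```java
import java.net.URI;

public class UriParts {
    public static void main(String[] args) {
        // The first RFC 3986 example, parsed into its components.
        URI uri = URI.create("foo://example.com:8042/over/there?name=ferret#nose");
        System.out.println(uri.getScheme());    // foo
        System.out.println(uri.getAuthority()); // example.com:8042
        System.out.println(uri.getPath());      // /over/there
        System.out.println(uri.getQuery());     // name=ferret
        System.out.println(uri.getFragment());  // nose
    }
}
```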
Is it possible to force the browser to refresh the cached CSS?
This is not as simple as forcing a refresh on every request. We have a site that has had stable CSS for a while.
Now we need to make some major updates to the CSS; however, browsers that have cached the CSS will not receive the new CSS for a couple of days causing rendering issues.
Is there a way to force refresh of the CSS or are we better just opting for version specific CSS URLs?
TL;DR
- Change the file name or query string
- Use a change that only occurs once per release
- File renaming is preferable to a query string change
- Always set HTTP headers to maximize the benefits of caching
There are several things to consider and a variety of ways to approach this (the relevant spec excerpt appears under Query String below).
What are we trying to accomplish?
Ideally, a modified resource will be unconditionally fetched the first time it is requested, and then retrieved from a local cache until it expires with no subsequent server interaction.
Observed Caching Behavior
Keeping track of the different permutations can be a bit confusing, so I created the following table. These observations were generated by making requests from Chrome against IIS and observing the response/behavior in the developer console.
In all cases, a new URL will result in HTTP 200. The important thing is what happens with subsequent requests.
+---------------------+--------------------+-------------------------+
| Type | Cache Headers | Observed Result |
+---------------------+--------------------+-------------------------+
| Static filename | Expiration +1 Year | Taken from cache |
| Static filename | Expire immediately | Never caches |
| Static filename | None | HTTP 304 (not modified) |
| | | |
| Static query string | Expiration +1 Year | HTTP 304 (not modified) |
| Static query string | Expire immediately | HTTP 304 (not modified) |
| Static query string | None | HTTP 304 (not modified) |
| | | |
| Random query string | Expiration +1 Year | Never caches |
| Random query string | Expire immediately | Never caches |
| Random query string | None | Never caches |
+---------------------+--------------------+-------------------------+
However, remember that browsers and web servers don't always behave the way we expect. A famous example: in 2012 mobile Safari began caching POST requests. Developers weren't pleased.
Query String
Examples are in ASP.NET MVC Razor syntax, but the approach is applicable in nearly any server-side language.
From the HTTP/1.1 spec (RFC 2616, §13.9):

...since some applications have traditionally used GETs and HEADs with
query URLs (those containing a "?" in the rel_path part) to perform
operations with significant side effects, caches MUST NOT treat
responses to such URIs as fresh unless the server provides an explicit
expiration time. This specifically means that responses from HTTP/1.0
servers for such URIs SHOULD NOT be taken from a cache.
Appending a random parameter to the end of the CSS URL included in your HTML will force a new request, and the server should respond with HTTP 200 (not 304, even if the file hasn't been modified).
<link rel="stylesheet" href="normalfile.css?random=@Environment.TickCount" />
Of course, if we randomize the query string with every request, this will defeat caching entirely. This is rarely/never desirable for a production application.
If you are only maintaining a few URLs, you might manually modify them to contain a build number or a date:
@{
    var assembly = Assembly.GetEntryAssembly();
    var name = assembly.GetName();
    var version = name.Version;
}
<link rel="stylesheet" href="normalfile.css?build=@version.MinorRevision" />
This will cause a new request the first time the user agent encounters the URL, but subsequent requests will mostly return 304s. This still causes a request to be made, but at least the whole file isn't served.
Path Modification
A better solution is to create a new path. With a little effort, this process can be automated to rewrite the path with a version number (or some other consistent identifier).
This answer shows a few simple and elegant options for non-Microsoft platforms.
Microsoft developers can use a HTTP module which intercepts all requests for a given file type(s), or possibly leverage an MVC route/controller combo to serve up the correct file (I haven't seen this done, but I believe it is feasible).
Of course, the simplest (not necessarily the quickest or the best) method is to just rename the files in question with each release and reference the updated paths in the link tags.
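That per-release rename can be scripted so it never gets forgotten; a small illustrative sketch (class, file names, and version string are invented) that could run as part of a release build:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ReleaseRename {
    // Copies e.g. site.css -> site-1.4.2.css so the link tags can reference
    // a brand-new, fully cacheable URL for each release.
    static Path versionedCopy(Path css, String version) throws IOException {
        String name = css.getFileName().toString();
        int dot = name.lastIndexOf('.');
        String target = name.substring(0, dot) + "-" + version + name.substring(dot);
        return Files.copy(css, css.resolveSibling(target),
                          StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the real stylesheet on disk.
        Path css = Files.createTempFile("site", ".css");
        Path out = versionedCopy(css, "1.4.2");
        System.out.println(out.getFileName());
    }
}
```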
I think renaming the CSS file is a far better idea. It might not suit all applications but it'll ensure the user only has to load the CSS file once. Adding a random string to the end will ensure they have to download it every time.
The same goes for the JavaScript method and the Apache methods above.
Sometimes the simple answer can be the most effective.
Another solution is:
<FilesMatch "\.(js|css)$">
Header set Cache-Control "max-age=86400, public"
</FilesMatch>
This limits the maximum cache age to 1 day or 86400 seconds.
Please read Tim Medora's answer first, as it is a knowledgeable and thorough post.
Now I'll tell you how I do it in PHP. I don't want to bother with traditional versioning or try to maintain 1000+ pages, but I do want to ensure that the user always gets the latest version of my CSS and caches it.
So I use the query string technique and PHP filemtime() which is going to return the last modified timestamp.
This function returns the time when the data blocks of a file were being written to, that is, the time when the content of the file was changed.
In my webapps I use a config.php file to store my settings, so in here I'll make a variable like this:
$siteCSS = "/css/standard.css?" .filemtime($_SERVER['DOCUMENT_ROOT']. "/css/standard.css");
and then in all of my pages I will reference my CSS like this:
<link rel="stylesheet" type="text/css" media="all" href="<?php echo $siteCSS?>" />
This has been working great for me so far on PHP/IIS.
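The same last-modified trick translates to other stacks as well; a hedged Java sketch (the helper and paths are invented, and a real app would point at the deployed stylesheet rather than a temp file):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CssVersion {
    // Appends the file's last-modified time as a query string, so the URL
    // changes only when the file's content actually changes.
    static String versionedHref(Path cssFile, String href) throws IOException {
        long mtime = Files.getLastModifiedTime(cssFile).toMillis();
        return href + "?v=" + mtime;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for /css/standard.css on the server's docroot.
        Path css = Files.createTempFile("standard", ".css");
        System.out.println(versionedHref(css, "/css/standard.css"));
    }
}
```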
You might be able to do it in Apache...
<FilesMatch "\.(html|htm|js|css)$">
FileETag None
<IfModule mod_headers.c>
Header unset ETag
Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
Header set Pragma "no-cache"
Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
</IfModule>
</FilesMatch>