Will WebSocket messages always arrive entirely, at once?

This websockets tutorial https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_client_applications has this example:
exampleSocket.onmessage = function(event) {
  var f = document.getElementById("chatbox").contentDocument;
  var text = "";
  var msg = JSON.parse(event.data);
  var time = new Date(msg.date);
  var timeStr = time.toLocaleTimeString();
  switch(msg.type) {
    case "id":
      clientID = msg.id;
      setUsername();
      break;
    case "username":
      text = "<b>User <em>" + msg.name + "</em> signed in at " + timeStr + "</b><br>";
      break;
    case "message":
      text = "(" + timeStr + ") <b>" + msg.name + "</b>: " + msg.text + "<br>";
      break;
    case "rejectusername":
      text = "<b>Your username has been set to <em>" + msg.name + "</em> because the name you chose is in use.</b><br>";
      break;
    case "userlist":
      var ul = "";
      for (i = 0; i < msg.users.length; i++) {
        ul += msg.users[i] + "<br>";
      }
      document.getElementById("userlistbox").innerHTML = ul;
      break;
  }
  if (text.length) {
    f.write(text);
    document.getElementById("chatbox").contentWindow.scrollByPages(1);
  }
};
It parses the message received from the WebSocket server. However, can I trust that a JSON message sent by the WebSocket server will always arrive at the client in one piece? What if it arrives partially? Parsing an incomplete message would fail.
Can I make this assumption even for very large JSON messages?
I'm asking because with a plain TCP stream this guarantee doesn't exist: a message can arrive in several chunks.
If there is no way to receive the message in one piece, how would I know when to parse the JSON?

Yes, each message is delivered as a whole. From RFC 6455:
1.2. Protocol Overview
After a successful handshake, clients and servers transfer data back and forth in conceptual units referred to in this specification as "messages". On the wire, a message is composed of one or more frames. The WebSocket message does not necessarily correspond to a particular network layer framing, as a fragmented message may be coalesced or split by an intermediary.
6.2. Receiving Data
If the frame comprises an unfragmented message (Section 5.4), it is said that A WebSocket Message Has Been Received with type /type/ and data /data/. If the frame is part of a fragmented message, the "Application data" of the subsequent data frames is concatenated to form the /data/. When the last fragment is received as indicated by the FIN bit (frame-fin), it is said that A WebSocket Message Has Been Received with data /data/ (comprised of the concatenation of the "Application data" of the fragments) and type /type/ (noted from the first frame of the fragmented message). Subsequent data frames MUST be interpreted as belonging to a new WebSocket message.
https://www.rfc-editor.org/rfc/rfc6455
The confusion might come from the term 'socket', which usually refers to a low-level OS channel that yields data in chunks. WebSocket, however, is a higher-level, message-oriented protocol.
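In practice this means your onmessage handler can call JSON.parse(event.data) directly; the browser only fires the event once a complete message has been reassembled. A minimal sketch (field names are just illustrative):
exampleSocket.onmessage = function(event) {
  // event.data is always a complete message, never a fragment, so partial
  // delivery is not a concern here. The try/catch only guards against the
  // server sending malformed JSON.
  var msg;
  try {
    msg = JSON.parse(event.data);
  } catch (e) {
    console.error("Server sent invalid JSON:", event.data);
    return;
  }
  console.log("Got a complete message of type", msg.type);
};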

In C#, how do I create an invalid X509Chain?

The X509ChainStatusFlags enum contains a lot of possible values: https://learn.microsoft.com/en-us/dotnet/api/system.security.cryptography.x509certificates.x509chainstatusflags?view=netframework-4.8
Are there easy ways to construct a certificate and chain that produce some of these flags? I want to construct them in order to integration-test my certificate validation logic.
Each different kind of failure requires a different amount of work to test for. Some are easy, some require heroic effort.
The easiest: error code 1: X509ChainStatusFlags.NotTimeValid.
X509Certificate2 cert = ...;
X509Chain chain = new X509Chain();
chain.ChainPolicy.VerificationTime = cert.NotBefore.AddSeconds(-1);
bool valid = chain.Build(cert);
// valid is false, and the 0 element will have NotTimeValid as one of the reasons.
Next up: X509ChainStatusFlags.NotValidSignature.
X509Certificate2 cert = ...;
byte[] certBytes = cert.RawData;
// flip all the bits in the last byte
certBytes[certBytes.Length - 1] ^= 0xFF;
X509Certificate2 badCert = new X509Certificate2(certBytes);
X509Chain chain = new X509Chain();
bool valid = chain.Build(badCert);
// valid is false. On macOS this results in PartialChain,
// on Windows and Linux it reports NotValidSignature in element 0
Next up: X509ChainStatusFlags.NotValidForUsage.
X509Certificate2 cert = ...;
X509Chain chain = new X509Chain();
chain.ChainPolicy.ApplicationPolicy.Add(new Oid("0.0", null));
bool valid = chain.Build(cert);
// valid is false if the certificate has an EKU extension,
// since it shouldn't contain the 0.0 OID.
// and the 0 element will report NotValidForUsage.
Some of the more complicated ones require building certificate chains incorrectly, such as making a child certificate have a NotBefore/NotAfter that isn't nested within the CA's NotBefore/NotAfter. Some of these heroic efforts are tested in https://github.com/dotnet/runtime/blob/4f9ae42d861fcb4be2fcd5d3d55d5f227d30e723/src/libraries/System.Security.Cryptography.X509Certificates/tests/DynamicChainTests.cs and/or https://github.com/dotnet/runtime/blob/4f9ae42d861fcb4be2fcd5d3d55d5f227d30e723/src/libraries/System.Security.Cryptography.X509Certificates/tests/RevocationTests/DynamicRevocationTests.cs.
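For illustration, here is an untested sketch of that kind of dynamic chain building using the CertificateRequest API (.NET Core 2.0 or later). It mints a throwaway CA and a leaf certificate whose validity period extends past the CA's; exactly which status flags a non-nested validity produces varies by platform, so treat the outcome as something to verify on your target rather than a guarantee:
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

RSA caKey = RSA.Create(2048);
CertificateRequest caReq = new CertificateRequest(
    "CN=Test CA", caKey, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
caReq.CertificateExtensions.Add(new X509BasicConstraintsExtension(true, false, 0, true));
X509Certificate2 caCert = caReq.CreateSelfSigned(
    DateTimeOffset.UtcNow.AddDays(-10), DateTimeOffset.UtcNow.AddDays(10));

RSA leafKey = RSA.Create(2048);
CertificateRequest leafReq = new CertificateRequest(
    "CN=Leaf", leafKey, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
byte[] serial = new byte[8];
RandomNumberGenerator.Fill(serial);

// Sign the leaf with the CA key, but give it a NotAfter well past the CA's NotAfter,
// so the child's validity period is not nested within the CA's.
X509SignatureGenerator caSigner =
    X509SignatureGenerator.CreateForRSA(caKey, RSASignaturePadding.Pkcs1);
X509Certificate2 leafCert = leafReq.Create(
    caCert.SubjectName, caSigner,
    DateTimeOffset.UtcNow.AddDays(-1), DateTimeOffset.UtcNow.AddDays(100), serial);

X509Chain chain = new X509Chain();
chain.ChainPolicy.ExtraStore.Add(caCert);
chain.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck;
chain.ChainPolicy.VerificationFlags = X509VerificationFlags.AllowUnknownCertificateAuthority;
bool valid = chain.Build(leafCert);
// Inspect chain.ChainStatus and chain.ChainElements[i].ChainElementStatus to see
// which flags your platform reports for the non-nested validity period.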

DM SerialControl Communication

Running this script in DM results in an error during the first execution. Subsequent executions fail on SPOpen(1,9600,1,0,8), which I think implies the serial port is already open at that point, even though the first execution says it is not.
What is the unexpected error that is preventing communication with the serial port?
SPOpen(1,9600,1,0,8)
SPOpen( "COM1" )
SPSendString(1, "*IDN?" )
string message
number test
message = SPReceiveString(1,8,test)
Result("Acquisition "+message+" "+test+"\n")
SPClose(1)
I can't test the serial commands myself at the moment, and the exact script code of course depends on what is on the other end of the serial connection, i.e. what is expected and what is sent back, as well as what timeouts/delays need to be expected and handled.
However, I can see two immediate issues with your script:
The 'SPOpen()' command returns an ID value. You need this ID in the subsequent commands, not the port number.
Whenever the script fails (i.e. throws an error), the command to close the port is never executed and it remains open (and hence blocked). To safeguard against this, you can use a 'Try{}Catch{}' construct.
I would expect your script to look something more akin to the following:
number port = 666
number baud = 9600
number stop = 10
number parity = 0
number data = 8
number portID

try
{
    portID = SPOpen( port, baud, stop, parity, data )
    Result( "\n Port ("+port+") opened, Handle ID: " + portID )

    string message = "*IDN?"
    Result( "\n Sending message:" + message )
    SPSendString( portID, message )
    Result( "\n message sent." )

    // Wait for response
    Result( "\n Waiting for response." )
    sleep( 0.3 )

    number pendingBytes = SPGetPendingBytes(portID)
    Result( "\n Pending bytes:" + pendingBytes )

    number maxLength = 50
    number bytes_back
    string reply
    while( pendingBytes > 0 )
    {
        reply += SPReceiveString( portID, maxLength, bytes_back )
        pendingBytes = SPGetPendingBytes(portID)
    }
    Result( "\n Reply:" + reply )
}
catch
{
    // Any thrown error ends up here. The 'break' stops the exception,
    // so the SPClose() below still runs and the port does not remain open/blocked.
    Result( "ERROR OCCURRED.\n" )
    break
}
SPClose( portID )
Result( "\n Port ("+port+") closed, using Handle ID: " + portID )
The above is untested code and will surely require some adaptation, but it should get you started. You might need some "delays" when waiting for a result and you might want to wait for specific results in a while-loop.

Failing to write offset data to zookeeper in kafka-storm

I was setting up a Storm cluster to calculate real-time trending and other statistics. However, I have problems introducing the "recovery" feature into this project, i.e. remembering the offset that was last read by the kafka-spout (the source code for the kafka-spout comes from https://github.com/apache/incubator-storm/tree/master/external/storm-kafka). I start my kafka-spout this way:
BrokerHosts zkHost = new ZkHosts("localhost:2181");
SpoutConfig kafkaConfig = new SpoutConfig(zkHost, "test", "", "test");
kafkaConfig.forceFromStart = false;
KafkaSpout kafkaSpout = new KafkaSpout(kafkaConfig);
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("test" + "spout", kafkaSpout, ESConfig.spoutParallelism);
The default settings should do this, but it is not happening in my case: every time I start the project, the PartitionManager looks for the ZooKeeper node with the offsets and finds nothing:
2014-06-25 11:57:08 INFO PartitionManager:73 - Read partition information from: /storm/partition_1 --> null
2014-06-25 11:57:08 INFO PartitionManager:86 - No partition information found, using configuration to determine offset
Then it starts reading from the latest possible offset, which is okay if my project never fails, but not exactly what I wanted.
I also looked a bit more into the PartitionManager class, which uses the ZkState class to write the offsets. From this code snippet:
PartitionManager
public void commit() {
    long lastCompletedOffset = lastCompletedOffset();
    if (_committedTo != lastCompletedOffset) {
        LOG.debug("Writing last completed offset (" + lastCompletedOffset + ") to ZK for " + _partition + " for topology: " + _topologyInstanceId);
        Map<Object, Object> data = (Map<Object, Object>) ImmutableMap.builder()
                .put("topology", ImmutableMap.of("id", _topologyInstanceId,
                        "name", _stormConf.get(Config.TOPOLOGY_NAME)))
                .put("offset", lastCompletedOffset)
                .put("partition", _partition.partition)
                .put("broker", ImmutableMap.of("host", _partition.host.host,
                        "port", _partition.host.port))
                .put("topic", _spoutConfig.topic).build();
        _state.writeJSON(committedPath(), data);
        _committedTo = lastCompletedOffset;
        LOG.debug("Wrote last completed offset (" + lastCompletedOffset + ") to ZK for " + _partition + " for topology: " + _topologyInstanceId);
    } else {
        LOG.debug("No new offset for " + _partition + " for topology: " + _topologyInstanceId);
    }
}
ZkState
public void writeBytes(String path, byte[] bytes) {
    try {
        if (_curator.checkExists().forPath(path) == null) {
            _curator.create()
                    .creatingParentsIfNeeded()
                    .withMode(CreateMode.PERSISTENT)
                    .forPath(path, bytes);
        } else {
            _curator.setData().forPath(path, bytes);
        }
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
I could see that for the first message the writeBytes method goes into the if block and creates the path, and for the second message it goes into the else block, which seems to be OK. But when I start the project again, the same log message as above shows up: no partition information can be found.
I had the same problem. It turned out I was running in local mode, which uses an in-memory ZooKeeper and not the ZooKeeper that Kafka is using.
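For illustration, here is a rough sketch of that difference using the Storm 0.9.x-era API (the class and topology names are placeholders, and runLocally is a hypothetical flag):
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

public class TopologyRunner {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // ... set up the KafkaSpout and bolts as in the question ...
        Config conf = new Config();

        boolean runLocally = args.length > 0 && "local".equals(args[0]);
        if (runLocally) {
            // Local mode spins up an in-memory ZooKeeper, so the offsets written
            // by the spout's ZkState vanish when the process exits.
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("trending-topology", conf, builder.createTopology());
        } else {
            // Cluster mode writes offsets to the cluster's ZooKeeper (or to the one
            // named in SpoutConfig.zkServers/zkPort), so they survive restarts.
            StormSubmitter.submitTopology("trending-topology", conf, builder.createTopology());
        }
    }
}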
To make sure that KafkaSpout doesn't use Storm's ZooKeeper for the ZkState that stores the offset, you need to set the SpoutConfig.zkServers, SpoutConfig.zkPort, and SpoutConfig.zkRoot in addition to the ZkHosts. For example
import org.apache.zookeeper.client.ConnectStringParser;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;
import storm.kafka.KeyValueSchemeAsMultiScheme;
...
final ConnectStringParser connectStringParser = new ConnectStringParser(zkConnectStr);
final List<InetSocketAddress> serverInetAddresses = connectStringParser.getServerAddresses();
final List<String> serverAddresses = new ArrayList<>(serverInetAddresses.size());
final Integer zkPort = serverInetAddresses.get(0).getPort();
for (InetSocketAddress serverInetAddress : serverInetAddresses) {
serverAddresses.add(serverInetAddress.getHostName());
}
final ZkHosts zkHosts = new ZkHosts(zkConnectStr);
zkHosts.brokerZkPath = kafkaZnode + zkHosts.brokerZkPath;
final SpoutConfig spoutConfig = new SpoutConfig(zkHosts, inputTopic, kafkaZnode, kafkaConsumerGroup);
spoutConfig.scheme = new KeyValueSchemeAsMultiScheme(inputKafkaKeyValueScheme);
spoutConfig.zkServers = serverAddresses;
spoutConfig.zkPort = zkPort;
spoutConfig.zkRoot = kafkaZnode;
I think you are hitting this bug:
https://community.hortonworks.com/questions/66524/closedchannelexception-kafka-spout-cannot-read-kaf.html
And the comment from the colleague above fixed my issue. I added some newer libraries too.

google api .net client v3 getting free busy information

I am trying to query free/busy data from Google Calendar. I simply provide a start date/time and an end date/time; all I want to know is whether this time frame is available or not. When I run the query below, I get the "responseOBJ" response object, which doesn't seem to include what I need. The response object only contains start and end times; it doesn't contain a flag such as "IsBusy" or "IsAvailable".
https://developers.google.com/google-apps/calendar/v3/reference/freebusy/query
#region Free_busy_request_NOT_WORKING
FreeBusyRequest requestobj = new FreeBusyRequest();
FreeBusyRequestItem c = new FreeBusyRequestItem();
c.Id = "calendarresource#domain.com";
requestobj.Items = new List<FreeBusyRequestItem>();
requestobj.Items.Add(c);
requestobj.TimeMin = DateTime.Now.AddDays(1);
requestobj.TimeMax = DateTime.Now.AddDays(2);
FreebusyResource.QueryRequest TestRequest = calendarService.Freebusy.Query(requestobj);
// var TestRequest = calendarService.Freebusy.
// FreeBusyResponse responseOBJ = TestRequest.Execute();
var responseOBJ = TestRequest.Execute();
#endregion
The Calendar API will only ever provide ordered busy blocks in the response, never available blocks; everything outside the busy blocks is available. Do you have at least one event on the calendar with the given ID in the time window?
Also, the account you are using needs at least free/busy access to the resource in order to retrieve availability.
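So to decide whether your time frame is available, check whether it overlaps any of the returned busy blocks. A rough, untested sketch continuing from the question's responseOBJ (this assumes the older client library used in the question, where TimePeriod.Start/End are nullable DateTime, and it needs a using System.Linq directive):
var busyBlocks = responseOBJ.Calendars["calendarresource#domain.com"].Busy;
DateTime windowStart = DateTime.Now.AddDays(1);
DateTime windowEnd = DateTime.Now.AddDays(2);
// The window is available iff it overlaps no busy block (standard interval-overlap test).
bool isAvailable = busyBlocks == null || !busyBlocks.Any(b =>
    b.Start < windowEnd && b.End > windowStart);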
I know this question is old, however I think it would be beneficial to see an example. You will need to actually grab the Busy information from your response. Below is a snippet from my own code (minus the call) showing how to handle the response. You will need to utilize your c.Id as the key to search through the response:
FreebusyResource.QueryRequest testRequest = service.Freebusy.Query(busyRequest);
var responseObject = testRequest.Execute();
bool checkBusy;
bool containsKey;
if (responseObject.Calendars.ContainsKey("**INSERT YOUR KEY HERE**"))
{
    containsKey = true;
    if (containsKey)
    {
        //Had to deconstruct API response by WriteLine(). Busy returns a count of 1, while being free returns a count of 0.
        //These are properties of a dictionary and a List of the responseObject (dictionary returned by API POST).
        if (responseObject.Calendars["**YOUR KEY HERE**"].Busy.Count == 0)
        {
            checkBusy = false;
            //WriteLine(checkBusy);
        }
        else
        {
            checkBusy = true;
            //WriteLine(checkBusy);
        }
        if (checkBusy == true)
        {
            var busyStart = responseObject.Calendars["**YOUR KEY HERE**"].Busy[0].Start;
            var busyEnd = responseObject.Calendars["**YOUR KEY HERE**"].Busy[0].End;
            //WriteLine(busyStart);
            //WriteLine(busyEnd);
            //Read();
            string isBusyString = "Between " + busyStart + " and " + busyEnd + " your trainer is busy";
            richTextBox1.Text = isBusyString;
        }
        else
        {
            string isFreeString = "Between " + startDate + " and " + endDate + " your trainer is free";
            richTextBox1.Text += isFreeString;
        }
    }
    else
    {
        richTextBox1.Clear();
        MessageBox.Show("CalendarAPIv3 has failed, please contact support\nregarding missing <key>", "ERROR!");
    }
}
My suggestion would be to break your responses down by writing them to the console. Then, you can "deconstruct" them. That is how I was able to figure out "where" to look within the response. As noted above, you will only receive the information for busyBlocks. I used the date and time that was selected by my client's search to show the "free" times.
EDIT:
You'll need to check if your key exists before attempting the TryGetValue or searching with a keyvaluepair.

How do I detect what browser is used to access my site?

How do I detect what browser (IE, Firefox, Opera) the user is accessing my site with? Examples in JavaScript, PHP, ASP, Python, JSP, and any others you can think of would be helpful. Is there a language-agnostic way to get this information?
If it's for handling the request, look at the User-Agent header on the incoming request.
UPDATE: If it's for reporting, configure your web server to log the User-Agent in the access logs, then run a log analysis tool, e.g., AWStats.
UPDATE 2: FYI, it's usually (not always, usually) a bad idea to change the way you're handling a request based on the User-Agent.
Comprehensive list of User Agent Strings from various Browsers
The list is really large :)
You would take a look at the User-Agent that they are sending. Note that you can send whatever agent you want, so that's not 100% foolproof, but most people don't change it unless there's a specific reason to.
A quick and dirty Java servlet example:
private String getBrowserName(HttpServletRequest request) {
    // get the User-Agent from the request header
    String userAgent = request.getHeader(Constants.BROWSER_USER_AGENT);
    String browserName = "";

    // check for Internet Explorer
    if (userAgent.indexOf("MSIE") > -1) {
        browserName = Constants.BROWSER_NAME_IE;
    } else if (userAgent.indexOf(Constants.BROWSER_NAME_FIREFOX) > -1) {
        browserName = Constants.BROWSER_NAME_MOZILLA_FIREFOX;
    } else if (userAgent.indexOf(Constants.BROWSER_NAME_OPERA) > -1) {
        browserName = Constants.BROWSER_NAME_OPERA;
    } else if (userAgent.indexOf(Constants.BROWSER_NAME_SAFARI) > -1) {
        browserName = Constants.BROWSER_NAME_SAFARI;
    } else if (userAgent.indexOf(Constants.BROWSER_NAME_NETSCAPE) > -1) {
        browserName = Constants.BROWSER_NAME_NETSCAPE;
    } else {
        browserName = "Undefined Browser";
    }

    // return the browser name
    return browserName;
}
You can use the HttpBrowserCapabilities Class in ASP.NET. Here is a sample from this link
private void Button1_Click(object sender, System.EventArgs e)
{
    HttpBrowserCapabilities bc;
    string s;
    bc = Request.Browser;
    s = "Browser Capabilities" + "\n";
    s += "Type = " + bc.Type + "\n";
    s += "Name = " + bc.Browser + "\n";
    s += "Version = " + bc.Version + "\n";
    s += "Major Version = " + bc.MajorVersion + "\n";
    s += "Minor Version = " + bc.MinorVersion + "\n";
    s += "Platform = " + bc.Platform + "\n";
    s += "Is Beta = " + bc.Beta + "\n";
    s += "Is Crawler = " + bc.Crawler + "\n";
    s += "Is AOL = " + bc.AOL + "\n";
    s += "Is Win16 = " + bc.Win16 + "\n";
    s += "Is Win32 = " + bc.Win32 + "\n";
    s += "Supports Frames = " + bc.Frames + "\n";
    s += "Supports Tables = " + bc.Tables + "\n";
    s += "Supports Cookies = " + bc.Cookies + "\n";
    s += "Supports VB Script = " + bc.VBScript + "\n";
    s += "Supports JavaScript = " + bc.JavaScript + "\n";
    s += "Supports Java Applets = " + bc.JavaApplets + "\n";
    s += "Supports ActiveX Controls = " + bc.ActiveXControls + "\n";
    TextBox1.Text = s;
}
PHP's predefined superglobal array $_SERVER contains a key "HTTP_USER_AGENT", which contains the value of the User-Agent header as sent in the HTTP request. Remember that this is user-provided data and is not to be trusted. Few users alter their user-agent string, but it does happen from time to time.
On the client side, you can do this in JavaScript using the navigator.userAgent property. Here's a crude example:
if (navigator.userAgent.indexOf("MSIE") > -1)
{
alert("Internet Explorer!");
}
else if (navigator.userAgent.indexOf("Firefox") > -1)
{
alert("Firefox!");
}
A more detailed and comprehensive example can be found here: http://www.quirksmode.org/js/detect.html
Note that if you're doing the browser detection for the sake of Javascript compatibility, it's usually better to simply use object detection or a try/catch block, lest some version you didn't think of slip through the cracks of your script.
For example, instead of doing this...
if(navigator.userAgent.indexOf("MSIE 6") > -1)
{
objXMLHttp = new ActiveXObject("Microsoft.XMLHTTP");
}
else
{
objXMLHttp = new XMLHttpRequest();
}
...this is better:
if(window.XMLHttpRequest) // Works in Firefox, Opera, and Safari, maybe latest IE?
{
objXMLHttp = new XMLHttpRequest();
}
else if (window.ActiveXObject) // If the above fails, try the MSIE 6 method
{
objXMLHttp = new ActiveXObject("Microsoft.XMLHTTP");
}
It may depend on your setup. With Apache on Linux, it's written in the access log /var/log/apache2/access_log.
You can do this by:
- looking at the web server log, OR
- looking at the User-Agent field in the HTTP request (which is a plain text stream) before processing it.
First of all, I'd like to note that it is best to avoid patching against specific web browsers unless it's a last resort; try to achieve cross-browser compatibility instead by using standards-compliant HTML/CSS/JS (yes, JavaScript does have a common-denominator subset that works across all major browsers).
With that said, the User-Agent header in the HTTP request contains the client's (claimed) browser. This has become a real mess because people code against specific browsers rather than the specification, so determining the real browser can be a little tricky.
Match against these substrings:
contains -> browser
Firefox -> Firefox
MSIE -> Internet Explorer
Opera -> Opera (one of the few browsers that doesn't pretend to be Mozilla :) )
Most agents containing the word "bot" or "crawler" are usually bots, so you can omit them from logs, etc.
Check out browscap.ini. The linked site has files for multiple scripting languages. browscap not only identifies the user agent but also has info about the browser's CSS support, JS support, OS, whether it's a mobile browser, etc.
Cruise over to this page to see an example of what info browscap.ini can tell you about your current browser.
