Widevine License Response with "INVALID_LICENSE_CHALLENGE", internal_status=106. What does 106 stand for? - widevine

We got a failure in our Widevine license request. The response is below:
[
  {
    "status": "INVALID_LICENSE_CHALLENGE",
    "supported_tracks": [],
    "internal_status": 106,
    "client_info": [],
    "content_owner": "xxx",
    "content_provider": "xxx",
    "service_version_info": {
      "license_sdk_version": "16.4.2 Built on Feb 16 2021 15:15:05 (1613517266)",
      "license_service_version": "widevine_license_wls_20210210_101970-RC02"
    }
  }
]
I just couldn't find the definition for 106 anywhere on the Internet.

Related

How to connect to OpenVPN from Chromebook when using tls-crypt?

​Hi there!
I'm very, very new to Chromebooks and the ONC file format, so my apologies if this has already been asked and answered.
I'm running an OpenVPN v2.4.9 server and everything works just fine from Mac/Linux/Windows using a .ovpn-formatted client configuration file. On the server side I'm using tls-crypt (as opposed to tls-auth), as per the newer recommendation, and it looks like that's where it's failing from the Chromebook using the ONC file.
This is my server configuration:
auth SHA256
auth-nocache
ca /etc/openvpn/server/ca.crt
cert /etc/openvpn/server/server.crt
cipher AES-256-GCM
client-config-dir /etc/openvpn/client
compress lz4-v2
dev tun
dh /etc/openvpn/server/dh2048.pem
explicit-exit-notify 1
ifconfig-pool-persist /etc/openvpn/server/ipp.txt
keepalive 10 120
key /etc/openvpn/server/server.key
log /var/log/openvpn/connection.log
log-append /var/log/openvpn/connection.log
max-clients 10
ncp-ciphers AES-256-GCM
persist-key
persist-tun
plugin /usr/lib64/openvpn/plugins/openvpn-plugin-auth-pam.so login
port 1194
proto udp4
push "compress lz4-v2"
push "dhcp-option DNS 8.8.8.8"
push "redirect-gateway def1 bypass-dhcp"
push "route 10.0.0.0 255.255.0.0"
remote-cert-eku "TLS Web Client Authentication"
server 192.168.10.0 255.255.255.0
sndbuf 2097152
status /var/log/openvpn/status.log
tls-crypt /etc/openvpn/server/ta.key
tls-version-min 1.2
verb 3
And this is my client ONC config:
{
  "Type": "UnencryptedConfiguration",
  "Certificates": [
    {
      "GUID": "Bootstrap-Server-CA",
      "Type": "Authority",
      "X509": "MIIGITCCBAmgAw.....MAYsw8ZLPlmJNN/wA=="
    },
    {
      "GUID": "Bootstrap-Root-CA",
      "Type": "Authority",
      "X509": "MIIGDDCCA/SgAf.....TbtcIBMrAiSlsOwHg=="
    },
    {
      "GUID": "Bootstrap-User-Cert",
      "Type": "Client",
      "PKCS12": "MIILvQIBAzCC.....srrOGmHY3h7MPauIlD3"
    }
  ],
  "NetworkConfigurations": [
    {
      "GUID": "BOOTSTRAP_CONN_1",
      "Name": "bootstrap_vpn",
      "Type": "VPN",
      "VPN": {
        "Type": "OpenVPN",
        "Host": "xx.xxx.xx.xxx",
        "OpenVPN": {
          "Auth": "SHA256",
          "Cipher": "AES-256-GCM",
          "ClientCertRef": "Bootstrap-User-Cert",
          "ClientCertType": "Ref",
          "IgnoreDefaultRoute": true,
          "KeyDirection": "1",
          "Port": 1194,
          "Proto": "udp4",
          "RemoteCertEKU": "TLS Web Client Authentication",
          "RemoteCertTLS": "server",
          "UseSystemCAs": true,
          "ServerCARefs": [
            "Bootstrap-Server-CA",
            "Bootstrap-Root-CA"
          ],
          "TLSAuthContents": "-----BEGIN OpenVPN Static key V1-----\n....\n.....\n-----END OpenVPN Static key V1-----\n",
          "UserAuthenticationType": "Password"
        }
      }
    }
  ]
}
It fails with no useful message on the client side (apart from saying "Failed to connect to the network..."), but on the server it's reported as:
Wed Sep 23 17:44:15 2020 us=591576 tls-crypt unwrap error: packet authentication failed
Wed Sep 23 17:44:15 2020 us=591631 TLS Error: tls-crypt unwrapping failed from [AF_INET]xx.xx.xx.xx:64762
Wed Sep 23 17:44:44 2020 us=359795 tls-crypt unwrap error: packet authentication failed
Wed Sep 23 17:44:44 2020 us=359858 TLS Error: tls-crypt unwrapping failed from [AF_INET]xx.xx.xx.xx:19733
Any idea what I am doing wrong or missing? I'd really appreciate it if anyone could point me in the right direction.
-S
As far as I know, the ONC format does not accept tls-crypt. If your Chromebook accepts Android apps, you can use the unofficial OpenVPN for Android app (blinkt.de), which does support it.
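If it helps, below is a minimal .ovpn sketch for that app, matching the server settings quoted above; the host and port are taken from your ONC, and every certificate/key body is a placeholder to fill in from your own PKI. (As far as I can tell, ONC's TLSAuthContents field maps to tls-auth, so there is simply no place to put a tls-crypt key in an ONC file.)
# Sketch only: host/port come from the question; certificate and key
# bodies are placeholders. Import this profile into the OpenVPN for
# Android (blinkt.de) app instead of using an ONC file.
client
dev tun
proto udp4
remote xx.xxx.xx.xxx 1194
nobind
auth SHA256
auth-nocache
cipher AES-256-GCM
compress lz4-v2
remote-cert-tls server
tls-version-min 1.2
auth-user-pass
verb 3
<ca>
-----BEGIN CERTIFICATE-----
.....
-----END CERTIFICATE-----
</ca>
<cert>
-----BEGIN CERTIFICATE-----
.....
-----END CERTIFICATE-----
</cert>
<key>
-----BEGIN PRIVATE KEY-----
.....
-----END PRIVATE KEY-----
</key>
<tls-crypt>
-----BEGIN OpenVPN Static key V1-----
.....
-----END OpenVPN Static key V1-----
</tls-crypt>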

Data ingestion task: Hadoop running locally instead of on remote Hadoop EMR cluster

I have set up a multi-node Druid cluster with:
1) 1 node running as coordinator and overlord (m4.xl)
2) 2 nodes each running historical and middle managers both. (r3.2xl)
3) 1 node running broker (r3.2xl)
Now I have an EMR cluster running that I want to use for the ingestion tasks. The problem is that whenever I submit a job via curl, it always starts as a local Hadoop job on both MiddleManagers instead of being submitted to the remote EMR cluster. My data lies in S3, and S3 is configured for deep storage as well.
I have also copied all the jars from the EMR master to hadoop-dependencies/hadoop-client/2.7.3/.
Druid version: 0.9.2
EMR version: 5.2
Please find attached the indexing job, common runtime properties and MiddleManager runtime properties.
Q1) How do I get the job to submit to the remote EMR cluster?
Q2) Logs for the indexing task are not showing up on overlord:8090; how do I enable them?
File: data_index.json
{
"type": "index_hadoop",
"spec": {
"ioConfig": {
"type": "hadoop",
"inputSpec": {
"type": "static",
"paths": "s3n://<kjcnskd>smallTest"
}
},
"dataSchema": {
"dataSource": "multi_value_test_01",
"granularitySpec": {
"type": "uniform",
"segmentGranularity": "day",
"queryGranularity": "none",
"intervals": [
"2011-09-12/2017-09-13"
]
},
"parser": {
"type": "string",
"parseSpec": {
"format": "tsv",
"delimiter": "\u0001",
"listDelimiter": "|",
"columns": [
"article_type",
"brand",
"gender",
"brand_type",
"master_category",
"supply_type",
"business_unit",
"testdim",
"date",
"week",
"month",
"year",
"style_id",
"live_styles",
"non_live_styles",
"broken_style",
"new_season_styles",
"live_styles_qty",
"non_live_styles_qty",
"broken_style_qty",
"new_season_styles_qty"
],
"dimensionsSpec": {
"dimensions": [
"article_type",
"brand",
"gender",
"brand_type",
"master_category",
"supply_type",
"business_unit",
"testdim",
"week",
"month",
"year",
"style_id"
]
},
"timestampSpec": {
"column": "date",
"format": "yyyyMMdd"
}
}
},
"metricsSpec": [
{
"name": "live_styles",
"type": "doubleSum",
"fieldName": "live_styles"
},
{
"name": "non_live_styles",
"type": "doubleSum",
"fieldName": "non_live_styles"
},
{
"name": "broken_style",
"type": "doubleSum",
"fieldName": "broken_style"
},
{
"name": "new_season_styles",
"type": "doubleSum",
"fieldName": "new_season_styles"
},
{
"name": "live_styles_qty",
"type": "doubleSum",
"fieldName": "live_styles_qty"
},
{
"name": "broken_style_qty",
"type": "doubleSum",
"fieldName": "broken_style_qty"
},
{
"name": "new_season_styles_qty",
"type": "doubleSum",
"fieldName": "new_season_styles_qty"
}
]
},
"tuningConfig": {
"type": "hadoop",
"partitionsSpec": {
"type": "hashed",
"targetPartitionSize": 5000000
},
"jobProperties": {
"fs.s3.awsAccessKeyId": "XXXXXXXXXXXXXX",
"fs.s3.awsSecretAccessKey": "XXXXXXXXXXXXXX",
"fs.s3.impl": "org.apache.hadoop.fs.s3native.NativeS3FileSystem",
"fs.s3n.awsAccessKeyId": "XXXXXXXXXXXXXX",
"fs.s3n.awsSecretAccessKey": "XXXXXXXXXXXXXX",
"fs.s3n.impl": "org.apache.hadoop.fs.s3native.NativeS3FileSystem",
"io.compression.codecs": "org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec"
}
}
}
}
File: common.runtime.properties
#
# Licensed to Metamarkets Group Inc. (Metamarkets) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. Metamarkets licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
#
# Extensions
#
# This is not the full list of Druid extensions, but common ones that people often use. You may need to change this list
# based on your particular setup.
druid.extensions.loadList=["druid-kafka-eight", "druid-s3-extensions", "druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "mysql-metadata-storage"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
druid.extensions.hadoopDependenciesDir=hadoop-dependencies/hadoop-client/2.7.3
#
# Logging
#
# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true
#
# Zookeeper
#
druid.zk.service.host=10.0.1.152
druid.zk.paths.base=/druid
#
# Metadata storage
#
# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://metadata.store.ip:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=metadata.store.ip
#druid.metadata.storage.connector.port=1527
# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://10.0.1.140:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=druid123
# For PostgreSQL (make sure to additionally include the Postgres extension):
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...
#
# Deep storage
#
# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments
# For HDFS (make sure to include the HDFS extension and that your Hadoop config files in the cp):
#druid.storage.type=hdfs
#druid.storage.storageDirectory=/druid/segments
# For S3:
druid.storage.type=s3
druid.storage.bucket=asfvdcs
druid.storage.baseKey=druid/segments
druid.s3.accessKey=XXXXXXXXXXXX
druid.s3.secretKey=XXXXXXXXXXXX
#
# Indexing service logs
#
# For local disk (only viable in a cluster if this is a network mount):
druid.indexer.logs.type=file
druid.indexer.logs.directory=var/druid/indexing-logs
# For HDFS (make sure to include the HDFS extension and that your Hadoop config files in the cp):
#druid.indexer.logs.type=hdfs
#druid.indexer.logs.directory=/druid/indexing-logs
# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=testashutosh
#druid.indexer.logs.s3Prefix=druid/indexing-logs
#
# Service discovery
#
druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator
#
# Monitoring
#
druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=info
File: middle manager runtime.properties
druid.service=druid/middleManager
druid.port=8091
# Number of tasks per middleManager
druid.worker.capacity=3
# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
# HTTP server threads
druid.server.http.numThreads=25
# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=2
# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=hdfs://ip-10-0-1-xxx.ap-southeast-1.compute.internal:8020/tmp/druid-indexing
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.7.3"]
druid.indexer.runner.type=remote
You need to tell Druid about the Hadoop cluster. To quote the manual:
Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) on the classpath of your Druid nodes. You can do this by copying them into conf/druid/_common/core-site.xml, conf/druid/_common/hdfs-site.xml, and so on.
If you have already done that, then it would indicate an issue with one of the config files (that happened to me).
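In case it helps while checking those files, here is a sketch of the few properties that usually matter for remote submission; the hostnames below are placeholders based on the hadoopWorkingPath above, and copying the real core-site.xml/yarn-site.xml/mapred-site.xml from the EMR master into conf/druid/_common/ is safer than hand-writing them. In particular, if mapreduce.framework.name is not set to yarn, Hadoop falls back to its local runner, which matches the symptom of the task running as a local job.
<!-- core-site.xml (sketch; placeholder hostname) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ip-10-0-1-xxx.ap-southeast-1.compute.internal:8020</value>
  </property>
</configuration>
<!-- yarn-site.xml (sketch; placeholder hostname) -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>ip-10-0-1-xxx.ap-southeast-1.compute.internal</value>
  </property>
</configuration>
<!-- mapred-site.xml (sketch) -->
<configuration>
  <property>
    <!-- without this, the default "local" framework runs the job in-process -->
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
Regarding Q2: common.runtime.properties currently stores indexing logs on local disk (druid.indexer.logs.type=file), so the Overlord console on another host cannot serve them. Switching to the commented-out S3 settings in that same file, for example
druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=testashutosh
druid.indexer.logs.s3Prefix=druid/indexing-logs
is the usual approach in a multi-node cluster.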

An unexpected error occurred with tcp plugin

I made a simple logstash configuration:
tcp.conf
input {
  tcp {
    port => 22
    type => syslog
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  stdout { codec => rubydebug }
}
running the configuration:
bin/logstash -f tcp.conf
executing this command:
telnet localhost 22
I get this error:
Using milestone 2 input plugin 'tcp'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
Using milestone 1 filter plugin 'syslog_pri'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin. For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}
+---------------------------------------------------------+
| An unexpected error occurred. This is probably a bug.   |
| You can find help with this problem in a few places:    |
|                                                         |
|   * chat: #logstash IRC channel on freenode irc.        |
|     IRC via the web: http://goo.gl/TI4Ro                |
|   * email: logstash-users@googlegroups.com              |
|   * bug system: https://logstash.jira.com/              |
|                                                         |
+---------------------------------------------------------+
The error reported is:
Permission denied - bind(2)
I made this configuration following the Syslog example.
"Permission denied - bind" means that logstash can't attach itself to the listed port.
Often, this is because you're running logstash as a non-privileged user who cannot access ports numbered below 1024.
In your case, you're trying to listen on port 22. As the ssh/scp/sftp port, this seems like an odd place to look for log files.
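A minimal sketch of the fix, assuming you only want to test the pipeline: move the tcp input to an unprivileged port (5514 below is an arbitrary choice) and point telnet at that port, or keep a low port and run logstash as root (or grant it the CAP_NET_BIND_SERVICE capability).
input {
  tcp {
    # any port above 1024 can be bound by a non-root user; 5514 is arbitrary
    port => 5514
    type => syslog
  }
}
and then: telnet localhost 5514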

Amazon S3 does not return 100 Continue, how?

I want to upload a file to S3. I create an HTTPConnection to s3.amazonaws.com and send the headers {'Date': 'Fri, 18 Jul 2014 03:24:08 +0000', 'Host': 'xx-bucket.s3.amazonaws.com', 'Content-Length': 32, 'Authorization': 'AWS AKIAI2V3MxxxOOKYAAA:JrjeT5rTxCl752tMWEZ0knEE3Zw=', 'Expect': '100-continue'}.
Then I wait for S3 to return a 100 Continue status. Sadly, S3 does not return it.
This was not happening until a week ago.
And if I use an HTTPSConnection to s3.amazonaws.com, S3 does return the 100 Continue status.
What happened?
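For context, a rough sketch of the Expect: 100-continue handshake described above, done over a raw socket; the host and body are placeholders, a real S3 request also needs a valid Authorization header, and per the HTTP spec a client may send the body anyway if no interim response arrives, so a missing 100 Continue is not automatically fatal.
import socket

def put_with_expect(host, path, body):
    # Sketch only: plain HTTP on port 80, no Authorization header.
    with socket.create_connection((host, 80), timeout=10) as sock:
        head = (
            "PUT {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Content-Length: {}\r\n"
            "Expect: 100-continue\r\n"
            "\r\n"
        ).format(path, host, len(body))
        sock.sendall(head.encode("ascii"))
        sock.settimeout(5)
        try:
            # The server may answer "HTTP/1.1 100 Continue" here, before the body is sent.
            interim = sock.recv(4096).decode("ascii", "replace")
        except socket.timeout:
            interim = ""  # no interim response; the client is allowed to send the body anyway
        print("interim response:", repr(interim[:40]))
        sock.sendall(body)
        return sock.recv(65536).decode("ascii", "replace")

# Hypothetical usage:
# print(put_with_expect("xx-bucket.s3.amazonaws.com", "/some-key", b"0" * 32))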

How to influence layout of graph items?

I am trying to visualize a simple Finite State Machine graph using Graphviz. The layout created by Graphviz is not completely to my liking. I was expecting a more compact result with shorter edges.
So far, I have tried using groups and changing the weights of edges, but without much luck. It is not clear to me why Graphviz draws the graph the way it does or how to adjust its algorithm to my liking. Are there any parameters I can set to achieve that? Or should I use a command other than dot? I tried neato, but the result looked completely messed up, and again, I do not really understand what I am doing...
This is my best result so far:
Trying to visualize a better layout than this: I think the graph would look nicer if the red boxes were aligned differently, more compactly, for example as indicated by the arrows in this picture:
I used dot to create the graph, and the source code is as follows:
1 digraph JobStateDiagram
2 {
3 rankdir=LR;
4 size="8,5";
5
6 node [style="rounded,filled,bold", shape=box, fixedsize=true, width=1.3, fontname="Arial"];
7 Created [fillcolor=black, shape=circle, label="", width=0.25];
8 Destroyed [fillcolor=black, shape=doublecircle, label="", width=0.3];
9 Empty [fillcolor="#a0ffa0"];
10 Announced [fillcolor="#a0ffa0"];
11 Assigned [fillcolor="#a0ffa0"];
12 Working [fillcolor="#a0ffa0"];
13 Ready [fillcolor="#a0ffa0"];
14 TimedOut [fillcolor="#ffa0a0"];
15 Failed [fillcolor="#ffa0a0"];
16
17 {
18 rank=source; Created Destroyed;
19 }
20
21 edge [style=bold, fontname="Arial" weight=2]
22 Empty -> Announced [ label="announce" ];
23 Announced -> Assigned [ label="assign" ];
24 Assigned -> Working [ label="start" ];
25 Working -> Ready [ label="finish" ];
26 Ready -> Empty [ label="revoke" ];
27
28 edge [fontname="Arial" color="#aaaaaa" weight=1]
29 Announced -> TimedOut [ label="timeout" ];
30 Assigned -> TimedOut [ label="timeout" ];
31 Working -> TimedOut [ label="timeout" ];
32 Working -> Failed [ label="error" ];
33 TimedOut -> Announced [ label="announce" ];
34 TimedOut -> Empty [ label="revoke" ];
35 Failed -> Announced [ label="announce" ];
36 Failed -> Empty [ label="revoke" ];
37
38 edge [style=bold, fontname="Arial" weight=1]
39 Created -> Empty [ label="initialize" ];
40 Empty -> Destroyed [ label="finalize" ];
41 Announced -> Empty [ label="revoke" ];
42 Assigned -> Empty [ label="revoke" ];
43 Working -> Empty [ label="revoke" ];
44 }
Also, please let me know if I am doing anything strange in the Graphviz file above -- any feedback is appreciated.
Update:
More experimenting, and trying some suggestions such as ports given by user marapet, has only increased my confusion... For example, in the picture below, why does dot choose to draw these strange detours for Working->Failed and Failed->Announced instead of straighter lines?
To me your output looks alright. TimedOut and Failed are of course all the way to the right because there are edges going from Working to them. That's what dot does best, and while you can make some tweaks to adjust Graphviz layouts, I think it's better to use another tool if you want to create a particular graph layout and control everything.
That being said, I did give it a quick try with graphviz. I changed some lines to create a straight line with all the green nodes, and to align the red nodes as indicated in your question. I also added edge concentrators - the result doesn't look better to me:
digraph JobStateDiagram
{
rankdir=LR;
size="8,5";
concentrate=true;
node [style="rounded,filled,bold", shape=box, fixedsize=true, width=1.3, fontname="Arial"];
Created [fillcolor=black, shape=circle, label="", width=0.25];
Destroyed [fillcolor=black, shape=doublecircle, label="", width=0.3];
Empty [fillcolor="#a0ffa0"];
Failed [fillcolor="#ffa0a0"];
Announced [fillcolor="#a0ffa0"];
Assigned [fillcolor="#a0ffa0"];
Working [fillcolor="#a0ffa0"];
Ready [fillcolor="#a0ffa0"];
TimedOut [fillcolor="#ffa0a0"];
{
rank=source; Created; Destroyed;
}
{
rank=same;Announced;Failed;
}
{
rank=same;Assigned;TimedOut;
}
edge [style=bold, fontname="Arial", weight=100]
Empty -> Announced [ label="announce" ];
Announced -> Assigned [ label="assign" ];
Assigned -> Working [ label="start" ];
Working -> Ready [ label="finish" ];
Ready -> Empty [ label="revoke", weight=1 ];
edge [color="#aaaaaa", weight=1]
Announced -> TimedOut [ label="timeout" ];
Assigned -> TimedOut [ label="timeout" ];
Working -> TimedOut [ label="timeout" ];
Working -> Failed [ label="error" ];
TimedOut -> Announced [ label="announce" ];
TimedOut -> Empty [ label="revoke" ];
Failed -> Announced [ label="announce" ];
Failed -> Empty [ label="revoke" ];
Created -> Empty [ label="initialize" ];
Empty -> Destroyed [ label="finalize" ];
Announced -> Empty [ label="revoke" ];
Assigned -> Empty [ label="revoke" ];
Working -> Empty [ label="revoke" ];
}
You may also improve the result by using ports to control where edges start and end.
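For example (a hypothetical fragment, not taken from the graph above), compass-point ports pin where an edge attaches to a node:
// make the "error" edge leave Working at the bottom and enter Failed at the top
Working:s -> Failed:n [ label="error" ];
// and run the "announce" edge from Failed's east side to Announced's west side
Failed:e -> Announced:w [ label="announce" ];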
As to your question about strange things in your dot file: except for the line numbers (which finally allowed me to put the column mode of my text editor to good use) and the alignment, your file looks fine to me. I structure my dot files similarly (graph properties, node list, groupings, edges) whenever possible. Just be aware that the order of first appearance of nodes may have an impact on the final layout.
Although this is a very old question, I had a similar problem and would like to share my result. Besides the "weight" and "rank=same" tricks, I found that these methods can also be used to adjust the layout:
dir=back
add more edges or nodes and set style=invis
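For example, dir=back only reverses the drawn arrowhead; dot still ranks the edge in the direction it is written, so a long backward edge can be written "forwards" and still read correctly (a hypothetical fragment, not part of the graph source below):
// laid out as Empty before Failed, but drawn with the arrowhead at Empty,
// so it reads as Failed -> Empty "revoke"
Empty -> Failed [ label="revoke", dir=back ];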
When it comes to this particular graph in the question, rank=same and weight actually do the main job, and style=invis can do some fine-tuning. So by adding these lines
{
rank=same;Announced;Failed;
}
{
rank=same;Assigned;TimedOut;
}
to the file, adding weight=1 to the 'Ready to Empty' edge, and using some invisible edges to fine-tune the spacing, I got this:
The complete graph dot source:
digraph JobStateDiagram
{
rankdir=LR;
size="8,5";
node [style="rounded,filled,bold", shape=box, fixedsize=true, width=1.3, fontname="Arial"];
Created [fillcolor=black, shape=circle, label="", width=0.25];
Destroyed [fillcolor=black, shape=doublecircle, label="", width=0.3];
Empty [fillcolor="#a0ffa0"];
Announced [fillcolor="#a0ffa0"];
Assigned [fillcolor="#a0ffa0"];
Working [fillcolor="#a0ffa0"];
Ready [fillcolor="#a0ffa0"];
TimedOut [fillcolor="#ffa0a0"];
Failed [fillcolor="#ffa0a0"];
{
rank=source; Created Destroyed;
}
{
rank=same;Announced;Failed; // change here
}
{
rank=same;Assigned;TimedOut; // change here
}
edge [style=bold, fontname="Arial" weight=20] // change here
Empty -> Announced [ label="announce" ];
Announced -> Assigned [ label="assign" ];
Assigned -> Working [ label="start" ];
Working -> Ready [ label="finish" ];
Ready -> Empty [ label="revoke" weight=1 ]; // change here
edge [fontname="Arial" color="#aaaaaa" weight=2] // change here
Announced -> TimedOut [ label="timeout" ];
Assigned -> TimedOut [ label="timeout" weight=1]; // change here
Working -> TimedOut [ label="timeout" ];
Working -> Failed [ label="error" ];
TimedOut -> Announced [ label="announce" ];
TimedOut -> Empty [ label="revoke" ];
Failed -> Announced [ label="announce" ];
Failed -> Empty [ label="revoke" ];
edge [style=bold, fontname="Arial" weight=1]
Created -> Empty [ label="initialize" ];
Empty -> Destroyed [ label="finalize" ];
Announced -> Empty [ label="revoke" ];
Assigned -> Empty [ label="revoke" ];
Working -> Empty [ label="revoke" ];
Assigned -> Working [ label="start" style=invis ]; // change here
Assigned -> Working [ label="start" style=invis ]; // change here
}
Update: instead of putting 'Failed' and 'Announced' at the same rank, putting 'Failed', 'Assigned' and 'TimedOut' at the same rank might produce a better result like the one below, which IMO better illustrates the similarity and difference between Failed and TimedOut. (You have to remove the invis edges, though, to get the graph below.)
