Hi there!
I'm very, very new to Chromebooks and the ONC file format, so my apologies if this has already been asked and answered.
I'm running an OpenVPN v2.4.9 server, and everything works just fine from Mac/Linux/Windows using a .ovpn-formatted client configuration file. On the server side, I'm using tls-crypt (as opposed to tls-auth) per the newer recommendation, and that looks like where it's failing from the Chromebook with the ONC file.
This is my server configuration:
auth SHA256
auth-nocache
ca /etc/openvpn/server/ca.crt
cert /etc/openvpn/server/server.crt
cipher AES-256-GCM
client-config-dir /etc/openvpn/client
compress lz4-v2
dev tun
dh /etc/openvpn/server/dh2048.pem
explicit-exit-notify 1
ifconfig-pool-persist /etc/openvpn/server/ipp.txt
keepalive 10 120
key /etc/openvpn/server/server.key
log /var/log/openvpn/connection.log
log-append /var/log/openvpn/connection.log
max-clients 10
ncp-ciphers AES-256-GCM
persist-key
persist-tun
plugin /usr/lib64/openvpn/plugins/openvpn-plugin-auth-pam.so login
port 1194
proto udp4
push "compress lz4-v2"
push "dhcp-option DNS 8.8.8.8"
push "redirect-gateway def1 bypass-dhcp"
push "route 10.0.0.0 255.255.0.0"
remote-cert-eku "TLS Web Client Authentication"
server 192.168.10.0 255.255.255.0
sndbuf 2097152
status /var/log/openvpn/status.log
tls-crypt /etc/openvpn/server/ta.key
tls-version-min 1.2
verb 3
And this is my client ONC config:
{
  "Type": "UnencryptedConfiguration",
  "Certificates": [
    {
      "GUID": "Bootstrap-Server-CA",
      "Type": "Authority",
      "X509": "MIIGITCCBAmgAw.....MAYsw8ZLPlmJNN/wA=="
    },
    {
      "GUID": "Bootstrap-Root-CA",
      "Type": "Authority",
      "X509": "MIIGDDCCA/SgAf.....TbtcIBMrAiSlsOwHg=="
    },
    {
      "GUID": "Bootstrap-User-Cert",
      "Type": "Client",
      "PKCS12": "MIILvQIBAzCC.....srrOGmHY3h7MPauIlD3"
    }
  ],
  "NetworkConfigurations": [
    {
      "GUID": "BOOTSTRAP_CONN_1",
      "Name": "bootstrap_vpn",
      "Type": "VPN",
      "VPN": {
        "Type": "OpenVPN",
        "Host": "xx.xxx.xx.xxx",
        "OpenVPN": {
          "Auth": "SHA256",
          "Cipher": "AES-256-GCM",
          "ClientCertRef": "Bootstrap-User-Cert",
          "ClientCertType": "Ref",
          "IgnoreDefaultRoute": true,
          "KeyDirection": "1",
          "Port": 1194,
          "Proto": "udp4",
          "RemoteCertEKU": "TLS Web Client Authentication",
          "RemoteCertTLS": "server",
          "UseSystemCAs": true,
          "ServerCARefs": [
            "Bootstrap-Server-CA",
            "Bootstrap-Root-CA"
          ],
          "TLSAuthContents": "-----BEGIN OpenVPN Static key V1-----\n....\n.....\n-----END OpenVPN Static key V1-----\n",
          "UserAuthenticationType": "Password"
        }
      }
    }
  ]
}
It fails with no useful message on the client side (apart from saying: "Failed to connect to the network..."), but on the server it's reported as:
Wed Sep 23 17:44:15 2020 us=591576 tls-crypt unwrap error: packet authentication failed
Wed Sep 23 17:44:15 2020 us=591631 TLS Error: tls-crypt unwrapping failed from [AF_INET]xx.xx.xx.xx:64762
Wed Sep 23 17:44:44 2020 us=359795 tls-crypt unwrap error: packet authentication failed
Wed Sep 23 17:44:44 2020 us=359858 TLS Error: tls-crypt unwrapping failed from [AF_INET]xx.xx.xx.xx:19733
Any idea what I'm doing wrong or missing? I'd really appreciate it if anyone could point me in the right direction.
-S
As far as I know, the ONC format does not accept tls-crypt; its TLSAuthContents/KeyDirection fields correspond to tls-auth, so the Chromebook wraps the TLS handshake with tls-auth while your server expects tls-crypt — which matches the "tls-crypt unwrap error: packet authentication failed" entries in your server log. If your Chromebook accepts Android apps, you can use the unofficial OpenVPN for Android app (blinkt.de), which does accept tls-crypt.
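If you want to keep using the built-in ONC support, the fallback (a sketch, untested against your exact setup) is to switch the server from tls-crypt back to tls-auth, which is what the ONC fields actually configure. The same ta.key can be reused; tls-auth additionally takes a key direction, with the server using 0 to match the "KeyDirection": "1" in your ONC file:

# in the server config, replace the tls-crypt line:
#tls-crypt /etc/openvpn/server/ta.key
tls-auth /etc/openvpn/server/ta.key 0

The trade-off: tls-auth only authenticates the control channel rather than encrypting it, and every other client's .ovpn must switch from tls-crypt to tls-auth with key-direction 1 as well, since a single 2.4 server instance can't accept both wrapping modes at once.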
Related
I'm trying to configure X-Pack for Elasticsearch/Kibana. I've activated the trial license for Elasticsearch, configured X-Pack for Kibana/Elasticsearch, and generated ca.crt, node1-elk.crt, node1-elk.key, as well as kibana.key and kibana.crt. If I test with curl against Elasticsearch using the kibana user and password and the ca.crt, it works like a charm. But if I try to access Kibana from the GUI, it says "Server is not ready yet" and the logs show "unable to verify the first certificate":
{"type":"log","#timestamp":"2021-11-16T04:41:09-05:00","tags":["error","savedobjects-service"],"pid":13250,"message":"Unable to retrieve version information from Elasticsearch nodes. unable to verify the first certificate"}
My configuration:
kibana.yml
server.name: "my-kibana"
server.host: "0.0.0.0"
elasticsearch.hosts: ["https://0.0.0.0:9200"]
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certs/kibana.crt
server.ssl.key: /etc/kibana/certs/kibana.key
server.ssl.certificateAuthorities: ["/etc/kibana/certs/ca.crt"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "kibana"
elasticsearch.yml
node.name: node1
network.host: 0.0.0.0
discovery.seed_hosts: [ "0.0.0.0" ]
cluster.initial_master_nodes: ["node1"]
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.key: /etc/elasticsearch/certs/node1.key
xpack.security.http.ssl.certificate: /etc/elasticsearch/certs/node1.crt
xpack.security.http.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca.crt" ]
xpack.security.transport.ssl.key: /etc/elasticsearch/certs/node1.key
xpack.security.transport.ssl.certificate: /etc/elasticsearch/certs/node1.crt
xpack.security.transport.ssl.certificate_authorities: [ "/etc/elasticsearch/certs/ca.crt" ]
curl testing:
[root@localhost kibana]# curl -XGET https://0.0.0.0:9200/_cat/nodes?v -u kibana_system:kibana --cacert /etc/elasticsearch/certs/ca.crt
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.100.102 23 97 3 0.00 0.02 0.08 cdfhilmrstw * node1
I don't know what to do more here:
[root@localhost kibana]# curl -XGET https://0.0.0.0:9200/_license -u kibana_system:kibana --cacert /etc/elasticsearch/certs/ca.crt
{
"license" : {
"status" : "active",
"uid" : "872f0ad0-723e-43c8-b346-f43e2707d3de",
"type" : "trial",
"issue_date" : "2021-11-08T18:26:15.422Z",
"issue_date_in_millis" : 1636395975422,
"expiry_date" : "2021-12-08T18:26:15.422Z",
"expiry_date_in_millis" : 1638987975422,
"max_nodes" : 1000,
"issued_to" : "elasticsearch",
"issuer" : "elasticsearch",
"start_date_in_millis" : -1
}
}
Thank you for your help
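The curl test works because --cacert hands the CA to curl explicitly, but the kibana.yml above never hands it to Kibana's Elasticsearch client: server.ssl.certificateAuthorities only concerns Kibana's own HTTPS listener. The setting that governs the Kibana-to-Elasticsearch connection is elasticsearch.ssl.certificateAuthorities — "unable to verify the first certificate" is the Node.js error for exactly that missing trust chain. A minimal addition to kibana.yml, reusing the same CA path:

elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/ca.crt"]

And since elasticsearch.hosts points at https://0.0.0.0:9200, the hostname is unlikely to match the certificate; either point it at the name/IP in the node certificate, or relax hostname checking for testing with elasticsearch.ssl.verificationMode: certificate.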
I have a Spring Boot Java app and am sending metrics to a hierarchical Graphite metrics system. I'm using the management.metrics.export.graphite.tags-as-prefix property with host and app tags to prefix my metrics. I have a metric with the namespace jvm.memory.committed, but it is coming over the wire as host.app.jvmMemoryCommitted.*. So something is replacing the dots (".") in the metric namespace and camelCasing the pieces that follow.
application.properties
management.metrics.export.graphite.tags-as-prefix=[host, app]
Customizer for tags as prefix.
@Bean
public MeterRegistryCustomizer<MeterRegistry> commonTags() {
    return r -> r.config().commonTags("host", "localhost", "app", "app");
}
When I look at the .../actuator/metrics/jvm.memory.committed endpoint I see the following:
"name": "jvm.memory.committed",
"description": "The amount of memory in bytes that is committed for the Java virtual machine to use",
"baseUnit": "bytes",
"measurements": [
{
"statistic": "VALUE",
"value": 759701504
}
],
"availableTags": [
{
"tag": "area",
"values": [
"heap",
"nonheap"
]
},
{
"tag": "app",
"values": [
"app"
]
},
{
"tag": "host",
"values": [
"localhost"
]
},
{
"tag": "id",
"values": [
"G1 Old Gen",
"CodeHeap 'non-profiled nmethods'",
"G1 Survivor Space",
"Compressed Class Space",
"Metaspace",
"G1 Eden Space",
"CodeHeap 'non-nmethods'"
]
},
]
}
However, when the metrics are sent, the metric names are changed from *.jvm.memory.committed.* to *.jvmMemoryCommitted.*. How can I preserve the metric namespace in dot notation?
See the tcpdump output below:
$ sudo tcpdump -i any -A -s0 -vv udp port 2003 | grep -i committed
tcpdump: data link type PKTAP
tcpdump: listening on any, link-type PKTAP (Apple DLT_PKTAP), capture size 262144 bytes
....E....5..#............E...p..localhost.app.jvmMemoryCommitted.area.heap.id.G1_Eden_Space 178257920.00 1628102627
....E....5..#............E...p..localhost.app.jvmMemoryCommitted.area.heap.id.G1_Eden_Space 178257920.00 1628102627
....E...o...#............E...m..localhost.app.jvmMemoryCommitted.area.heap.id.G1_Old_Gen 465567744.00 1628102627
....E...o...#............E...m..localhost.app.jvmMemoryCommitted.area.heap.id.G1_Old_Gen 465567744.00 1628102627
....E.......#............E...s..localhost.app.jvmMemoryCommitted.area.heap.id.G1_Survivor_Space 10485760.00 1628102627
....E.......#............E...s..localhost.app.jvmMemoryCommitted.area.heap.id.G1_Survivor_Space 10485760.00 1628102627
....E...m...#............E...{..localhost.app.jvmMemoryCommitted.area.nonheap.id.CodeHeap_'non-nmethods' 3604480.00 1628102627
....E...m...#............E...{..localhost.app.jvmMemoryCommitted.area.nonheap.id.CodeHeap_'non-nmethods' 3604480.00 1628102627
....E....J..#............E......localhost.app.jvmMemoryCommitted.area.nonheap.id.CodeHeap_'non-profiled_nmethods' 10420224.00 1628102627
....E....J..#............E......localhost.app.jvmMemoryCommitted.area.nonheap.id.CodeHeap_'non-profiled_nmethods' 10420224.00 1628102627
....E.......#............E...z..localhost.app.jvmMemoryCommitted.area.nonheap.id.Compressed_Class_Space 9306112.00 1628102627
....E.......#............E...z..localhost.app.jvmMemoryCommitted.area.nonheap.id.Compressed_Class_Space 9306112.00 1628102627
....E....,..#............E...n..localhost.app.jvmMemoryCommitted.area.nonheap.id.Metaspace 69607424.00 1628102627
....E....,..#............E...n..localhost.app.jvmMemoryCommitted.area.nonheap.id.Metaspace 69607424.00 1628102627
^C444 packets captured
3200 packets received by filter
0 packets dropped by kernel
I think the problem is that I'm using tags in a hierarchical metrics system, but I can't figure out how to configure it properly. I can't seem to find my folly.
Spring Boot 2.5.2
Micrometer Core and Micrometer Registry Graphite 1.7.2
Graphite uses a HierarchicalNameMapper to convert the metric names and tags into a hierarchical string.
See https://micrometer.io/docs/registry/graphite#_hierarchical_name_mapping
I'm not certain why your mapper is camelCasing your metric names, but you can set the HierarchicalNameMapper when you construct your own GraphiteMeterRegistry and get fine-tuned control over how they are generated.
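A minimal sketch of that, assuming Spring Boot's auto-configured GraphiteConfig and Clock beans are available for injection (defining your own GraphiteMeterRegistry bean makes the auto-configured one back off). This mapper prepends the host/app tags, keeps the metric name in dot notation, and appends the remaining tags, roughly reproducing the wire format from the tcpdump above but without the camelCasing:

import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.util.HierarchicalNameMapper;
import io.micrometer.graphite.GraphiteConfig;
import io.micrometer.graphite.GraphiteMeterRegistry;
import org.springframework.context.annotation.Bean;

@Bean
public GraphiteMeterRegistry graphiteMeterRegistry(GraphiteConfig config, Clock clock) {
    // Map each meter to "host.app.<dotted metric name>.<tagKey>.<tagValue>..."
    HierarchicalNameMapper dottedMapper = (id, convention) -> {
        StringBuilder name = new StringBuilder()
                .append(id.getTag("host")).append('.')
                .append(id.getTag("app")).append('.')
                .append(id.getName()); // already dot-separated, e.g. jvm.memory.committed
        for (Tag tag : id.getTags()) {
            if (!"host".equals(tag.getKey()) && !"app".equals(tag.getKey())) {
                name.append('.').append(tag.getKey())
                    .append('.').append(tag.getValue().replace(' ', '_')); // "G1 Old Gen" -> G1_Old_Gen
            }
        }
        return name.toString();
    };
    return new GraphiteMeterRegistry(config, clock, dottedMapper);
}

Note that with a custom registry bean like this, the tags-as-prefix property no longer applies on its own; the prefixing is done explicitly in the mapper instead (config.tagsAsPrefix() is still injectable if you'd rather read it from the property).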
I am creating a VirtualBox Red Hat box with Packer, with the template attached below. Everything is fine, except that when the host is created and rebooted, the eth0 network adapter does not start, as it is created with ONBOOT=no in /etc/sysconfig/network-scripts. However, if I open the UI of the box and manually trigger ifup eth0, it starts fine, ssh becomes available, and the process completes as expected. But I need to use this in a Jenkins pipeline, so there is no option for someone to go and start the network interface manually. The question: is there any way to change the ONBOOT option to yes for the network adapter with VBoxManage commands, or to trigger the ifup eth0 command somehow? Either option may solve the problem.
{
  "variables": {
    "build_base": ".",
    "isref_machine": "create-ova-caf",
    "build_name": "virtual-box-jenkins",
    "output_name": "packer-virtual-box",
    "disk_size": "40000",
    "ram": "1024",
    "disk_adapter": "ide"
  },
  "builders": [
    {
      "name": "{{user `build_name`}}",
      "type": "virtualbox-iso",
      "guest_os_type": "Other_64",
      "iso_url": "rhelis74_1710051533.iso",
      "iso_checksum": "",
      "iso_checksum_type": "none",
      "hard_drive_interface": "{{user `disk_adapter`}}",
      "ssh_username": "root",
      "ssh_password": "Secret1.0",
      "shutdown_command": "shutdown -P now",
      "guest_additions_mode": "disable",
      "boot_wait": "3s",
      "boot_command": ["auto<enter>"],
      "ssh_timeout": "40m",
      "headless": "true",
      "vm_name": "{{user `output_name`}}",
      "disk_size": "{{user `disk_size`}}",
      "output_directory": "{{user `build_base`}}/output-{{build_name}}",
      "format": "ovf",
      "vrdp_bind_address": "0.0.0.0",
      "vboxmanage": [
        ["modifyvm", "{{.Name}}", "--nictype1", "virtio"],
        ["modifyvm", "{{.Name}}", "--memory", "{{user `ram`}}"]
      ],
      "skip_export": true,
      "keep_registered": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["ls"]
    }
  ]
}
To change the network interface boot setting to ONBOOT=yes, we need to create an anaconda kickstart script (or copy one from an existing machine and adjust its configuration) and pass it via the boot command:
"boot_command": [ "<esc><wait>",
"vmlinuz initrd=initrd.img net.ifnames=0 biosdevname=0 ",
"ks=hd:fd0:/anaconda-ks.cfg",
"<enter>"
],
and in the anaconda kickstart file:
network --bootproto=dhcp --device=eth0 --onboot=on --ipv6=auto --activate
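Alternatively, since eth0 is up while Packer's provisioners are running (the build already connects over ssh), a shell provisioner can flip the flag in the image before shutdown — a sketch, assuming the interface script is named ifcfg-eth0:

"provisioners": [
  {
    "type": "shell",
    "inline": [
      "sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0"
    ]
  }
]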
I've been searching around and haven't found a solution to my situation. I'm running a cluster of 5 machines with Docker Swarm. For right now, I'm trying to get two of the containers communicating with each other. If both containers are running on the same single machine, everything works fine. If one container is on one machine/node, and another container is on a different machine/node, things stop working.
One container is nginx; the other is a front-end Node.js app that nginx acts as a proxy for. nginx "assumes" that the front-end container is running on localhost. That can be the case, but not always — the front-end container can be running on any one of the 5 hosts.
I'm not sure how to get nginx to realize that there are 4 other hosts out there that might be running the front-end service it needs to connect to. I do have a DNS server at 127.0.0.11 (Docker's standard embedded DNS server).
Here is the error I see whenever the containers are on different nodes:
2016/11/01 14:33:41 [error] 8#8: *1 connect() failed (113: No route to host) while connecting to upstream, client: 10.255.0.3, server: , request: "GET / HTTP/1.1", upstream: "http://10.10.0.2:8079/", host: "cluster3"
In this case, cluster3 is the localhost running the nginx container. However, front-end is running on cluster4, which explains why it can't find front-end.
Below is my current nginx.conf file. Is there something I can change here to get this to work right? I was playing around with upstream and resolver, but that didn't seem to make a difference (a sketch of that resolver approach follows the config).
worker_processes 5;  ## Default: 1
worker_rlimit_nofile 8192;

events {
    worker_connections 4096;  ## Default: 1024
}

http {
    server { # simple load balancing
        listen 80;
        location / {
            proxy_pass http://front-end:8079;
        }
    }
}
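For reference, the resolver approach I was experimenting with looked roughly like this — a sketch, using Docker's embedded DNS at 127.0.0.11, with a variable in proxy_pass so nginx re-resolves the service name at request time instead of caching it once at startup:

http {
    server {
        listen 80;
        resolver 127.0.0.11 valid=10s;
        location / {
            set $backend "http://front-end:8079";
            proxy_pass $backend;
        }
    }
}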
EDIT:
I notice the following gets printed to the kernel logs whenever I point my browser to the nginx instance.
[ 225.521382] BUG: using smp_processor_id() in preemptible [00000000] code: nginx/3501
[ 225.528808] caller is $x+0x5ac/0x828 [ip_vs]
[ 225.528820] CPU: 0 PID: 3501 Comm: nginx Not tainted 3.14.79-92 #1
[ 225.528825] Call trace:
[ 225.528836] [<ffffffc001088e40>] dump_backtrace+0x0/0x128
[ 225.528842] [<ffffffc001088f8c>] show_stack+0x24/0x30
[ 225.528849] [<ffffffc00188460c>] dump_stack+0x88/0xac
[ 225.528858] [<ffffffc0013dcf94>] debug_smp_processor_id+0xe4/0xe8
[ 225.528899] [<ffffffbffc1ec8fc>] $x+0x5ac/0x828 [ip_vs]
[ 225.528938] [<ffffffbffc1ecbdc>] ip_vs_local_request4+0x2c/0x38 [ip_vs]
[ 225.528945] [<ffffffc001789ecc>] nf_iterate+0xc4/0xd8
[ 225.528951] [<ffffffc001789f6c>] nf_hook_slow+0x8c/0x138
[ 225.528957] [<ffffffc0017994e8>] __ip_local_out+0x90/0xb0
[ 225.528963] [<ffffffc001799528>] ip_local_out+0x20/0x48
[ 225.528968] [<ffffffc001799848>] ip_queue_xmit+0x118/0x370
[ 225.528975] [<ffffffc0017b09d8>] tcp_transmit_skb+0x438/0x8b0
[ 225.528980] [<ffffffc0017b231c>] tcp_connect+0x54c/0x638
[ 225.528987] [<ffffffc0017b50c8>] tcp_v4_connect+0x258/0x3b8
[ 225.528993] [<ffffffc0017cca50>] __inet_stream_connect+0x120/0x358
[ 225.528999] [<ffffffc0017cccd0>] inet_stream_connect+0x48/0x68
[ 225.529005] [<ffffffc00173bf80>] SyS_connect+0xc0/0xe8
[ 269.026442] BUG: using smp_processor_id() in preemptible [00000000] code: nginx/3501
[ 269.028592] caller is ip_vs_schedule+0x31c/0x4f8 [ip_vs]
[ 269.028600] CPU: 0 PID: 3501 Comm: nginx Not tainted 3.14.79-92 #1
[ 269.028604] Call trace:
[ 269.028618] [<ffffffc001088e40>] dump_backtrace+0x0/0x128
[ 269.028624] [<ffffffc001088f8c>] show_stack+0x24/0x30
[ 269.028632] [<ffffffc00188460c>] dump_stack+0x88/0xac
[ 269.028641] [<ffffffc0013dcf94>] debug_smp_processor_id+0xe4/0xe8
[ 269.028680] [<ffffffbffc1eb2dc>] ip_vs_schedule+0x31c/0x4f8 [ip_vs]
[ 269.028719] [<ffffffbffc1fe658>] $x+0x130/0x278 [ip_vs]
[ 269.028757] [<ffffffbffc1eca08>] $x+0x6b8/0x828 [ip_vs]
[ 269.028795] [<ffffffbffc1ecbdc>] ip_vs_local_request4+0x2c/0x38 [ip_vs]
[ 269.028802] [<ffffffc001789ecc>] nf_iterate+0xc4/0xd8
[ 269.028807] [<ffffffc001789f6c>] nf_hook_slow+0x8c/0x138
[ 269.028814] [<ffffffc0017994e8>] __ip_local_out+0x90/0xb0
[ 269.028820] [<ffffffc001799528>] ip_local_out+0x20/0x48
[ 269.028825] [<ffffffc001799848>] ip_queue_xmit+0x118/0x370
[ 269.028832] [<ffffffc0017b09d8>] tcp_transmit_skb+0x438/0x8b0
[ 269.028837] [<ffffffc0017b231c>] tcp_connect+0x54c/0x638
[ 269.028844] [<ffffffc0017b50c8>] tcp_v4_connect+0x258/0x3b8
[ 269.028850] [<ffffffc0017cca50>] __inet_stream_connect+0x120/0x358
[ 269.028855] [<ffffffc0017cccd0>] inet_stream_connect+0x48/0x68
[ 269.028862] [<ffffffc00173bf80>] SyS_connect+0xc0/0xe8
I get the following error when running vagrant up --provision to set up my development environment with Vagrant:
==> default: [2014-12-08T20:33:51+00:00] ERROR: remote_file[http://nginx.org/download/nginx-1.7.8.tar.gz] (nginx::source line 58) had an error: Chef::Exceptions::ChecksumMismatch: Checksum on resource (0510af) does not match checksum on content (12f75e)
My Chef JSON has the following for nginx:
"nginx": {
"version": "1.7.8",
"user": "deploy",
"init_style": "init",
"modules": [
"http_stub_status_module",
"http_ssl_module",
"http_gzip_static_module"
],
"passenger": {
"version": "4.0.53",
"gem_binary": "/home/vagrant/.rbenv/shims/gem"
},
"configure_flags": [
"--add-module=/home/vagrant/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/passenger-3.0.18/ext/nginx"
],
"gzip_types": [
"text/plain",
"text/html",
"text/css",
"text/xml",
"text/javascript",
"application/json",
"application/x-javascript",
"application/xml",
"application/xml+rss"
]}
and Cheffile has the following cookbook:
cookbook 'nginx'
How do I resolve the Checksum mismatch?
The nginx cookbook requires you to edit the checksum attribute when using another version of nginx. The remote_file resource that is causing you an error is:
remote_file nginx_url do
  source nginx_url
  checksum node['nginx']['source']['checksum']
  path src_filepath
  backup false
end
You need to update the checksum value. Specifically node['nginx']['source']['checksum'].
So in your JSON, you would add this line:
"source": {"checksum": "insert checksum here" }
Edit: As pointed out in the comments, the checksum is SHA256. You can generate the checksum of the file like so:
shasum -a 256 nginx-1.7.8.tar.gz
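The first field of the output is the 64-character hex digest; that's what goes into the "checksum" value. Make sure you hash the exact tarball the cookbook downloads (the URL from the error message), then paste the result in — placeholder shown here, use the digest your machine prints:

$ curl -O http://nginx.org/download/nginx-1.7.8.tar.gz
$ shasum -a 256 nginx-1.7.8.tar.gz
<64-character sha256 digest>  nginx-1.7.8.tar.gz

"nginx": {
  "version": "1.7.8",
  "source": { "checksum": "<64-character sha256 digest>" }
}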