Spring Boot Micrometer Hierarchical Metric Namespace Concatenation - graphite

I have a Spring Boot Java app and am sending metrics to a hierarchical Graphite metrics system. I'm using management.metrics.export.graphite.tags-as-prefix to map the host and app tags as prefixes for my metrics. I have a metric with the namespace jvm.memory.committed, but the name is coming over the wire as host.app.jvmMemoryCommitted.*. So the dots (".") in the metric name are being dropped and the following pieces of the namespace are camelCased.
application.properties
management.metrics.export.graphite.tags-as-prefix=[host, app]
Customizer for the tags used as prefix:
@Bean
public MeterRegistryCustomizer<MeterRegistry> commonTags() {
    return r -> r.config().commonTags("host", "localhost", "app", "app");
}
When I look at the .../actuator/metrics/jvm.memory.committed endpoint I see the following:
"name": "jvm.memory.committed",
"description": "The amount of memory in bytes that is committed for the Java virtual machine to use",
"baseUnit": "bytes",
"measurements": [
{
"statistic": "VALUE",
"value": 759701504
}
],
"availableTags": [
{
"tag": "area",
"values": [
"heap",
"nonheap"
]
},
{
"tag": "app",
"values": [
"app"
]
},
{
"tag": "host",
"values": [
"localhost"
]
},
{
"tag": "id",
"values": [
"G1 Old Gen",
"CodeHeap 'non-profiled nmethods'",
"G1 Survivor Space",
"Compressed Class Space",
"Metaspace",
"G1 Eden Space",
"CodeHeap 'non-nmethods'"
]
}
]
}
However, when the metrics are sent, the metric names are changed from *.jvm.memory.committed.* to *.jvmMemoryCommitted.*. How can I preserve the metric namespace in dot notation?
See the tcpdump output below:
$ sudo tcpdump -i any -A -s0 -vv udp port 2003 | grep -i committed
tcpdump: data link type PKTAP
tcpdump: listening on any, link-type PKTAP (Apple DLT_PKTAP), capture size 262144 bytes
....E....5..#............E...p..localhost.app.jvmMemoryCommitted.area.heap.id.G1_Eden_Space 178257920.00 1628102627
....E....5..#............E...p..localhost.app.jvmMemoryCommitted.area.heap.id.G1_Eden_Space 178257920.00 1628102627
....E...o...#............E...m..localhost.app.jvmMemoryCommitted.area.heap.id.G1_Old_Gen 465567744.00 1628102627
....E...o...#............E...m..localhost.app.jvmMemoryCommitted.area.heap.id.G1_Old_Gen 465567744.00 1628102627
....E.......#............E...s..localhost.app.jvmMemoryCommitted.area.heap.id.G1_Survivor_Space 10485760.00 1628102627
....E.......#............E...s..localhost.app.jvmMemoryCommitted.area.heap.id.G1_Survivor_Space 10485760.00 1628102627
....E...m...#............E...{..localhost.app.jvmMemoryCommitted.area.nonheap.id.CodeHeap_'non-nmethods' 3604480.00 1628102627
....E...m...#............E...{..localhost.app.jvmMemoryCommitted.area.nonheap.id.CodeHeap_'non-nmethods' 3604480.00 1628102627
....E....J..#............E......localhost.app.jvmMemoryCommitted.area.nonheap.id.CodeHeap_'non-profiled_nmethods' 10420224.00 1628102627
....E....J..#............E......localhost.app.jvmMemoryCommitted.area.nonheap.id.CodeHeap_'non-profiled_nmethods' 10420224.00 1628102627
....E.......#............E...z..localhost.app.jvmMemoryCommitted.area.nonheap.id.Compressed_Class_Space 9306112.00 1628102627
....E.......#............E...z..localhost.app.jvmMemoryCommitted.area.nonheap.id.Compressed_Class_Space 9306112.00 1628102627
....E....,..#............E...n..localhost.app.jvmMemoryCommitted.area.nonheap.id.Metaspace 69607424.00 1628102627
....E....,..#............E...n..localhost.app.jvmMemoryCommitted.area.nonheap.id.Metaspace 69607424.00 1628102627
^C444 packets captured
3200 packets received by filter
0 packets dropped by kernel
I think the problem is that I'm using tags with a hierarchical metrics system, but I can't figure out how to configure it properly. I can't seem to find my folly.
Spring Boot 2.5.2
Micrometer Core and Micrometer Registry Graphite 1.7.2

Graphite uses a HierarchicalNameMapper to convert the metric names and tags into a hierarchical string.
See https://micrometer.io/docs/registry/graphite#_hierarchical_name_mapping
I'm not certain why your mapper is camelCasing your metric names, but you can set the HierarchicalNameMapper when you construct your own GraphiteMeterRegistry and have fine-grained control over how the hierarchical names are generated.
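For example, here is a minimal sketch of that approach. It assumes Spring Boot's Graphite auto-configuration still supplies GraphiteConfig and Clock beans and backs off when you declare your own GraphiteMeterRegistry, and that the host and app common tags are always present; the mapper below is hypothetical, not a Micrometer built-in:
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.util.HierarchicalNameMapper;
import io.micrometer.graphite.GraphiteConfig;
import io.micrometer.graphite.GraphiteMeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GraphiteNamingConfig {

    @Bean
    public GraphiteMeterRegistry graphiteMeterRegistry(GraphiteConfig config, Clock clock) {
        // Hypothetical mapper: emit the prefix tags first, then the metric name with
        // its dots intact, then the remaining tags as key/value path segments.
        // Assumes the "host" and "app" tags are present on every meter.
        HierarchicalNameMapper dottedMapper = (id, convention) -> {
            StringBuilder sb = new StringBuilder();
            sb.append(id.getTag("host")).append('.')
              .append(id.getTag("app")).append('.')
              .append(id.getName());                     // e.g. jvm.memory.committed
            for (Tag tag : id.getTags()) {
                if (!"host".equals(tag.getKey()) && !"app".equals(tag.getKey())) {
                    sb.append('.').append(tag.getKey())
                      .append('.').append(tag.getValue().replace(' ', '_'));
                }
            }
            return sb.toString();
        };
        return new GraphiteMeterRegistry(config, clock, dottedMapper);
    }
}
As far as I can tell, the camel-casing comes from the naming convention that gets applied to the name rather than from the mapper itself, which is why the sketch reads id.getName() directly instead of id.getConventionName(convention). Micrometer also ships a GraphiteHierarchicalNameMapper that takes the prefix tag names in its constructor, if you only need to tweak the tag handling.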

Related

How to connect to OpenVPN from Chromebook when using tls-crypt?

Hi there!
I'm very, very new to Chromebook and the ONC file format, so my apologies if this has already been asked and answered.
I'm running OpenVPN v2.4.9 Server and everything works just fine from Mac/Linux/Windows using a .ovpn formatted client configuration file. On the server side, I'm using tls-crypt (as opposed to tls-auth) as per the newer recommendation, and it looks like that's where it's failing from the Chromebook when using the ONC file.
This is my server configuration:
auth SHA256
auth-nocache
ca /etc/openvpn/server/ca.crt
cert /etc/openvpn/server/server.crt
cipher AES-256-GCM
client-config-dir /etc/openvpn/client
compress lz4-v2
dev tun
dh /etc/openvpn/server/dh2048.pem
explicit-exit-notify 1
ifconfig-pool-persist /etc/openvpn/server/ipp.txt
keepalive 10 120
key /etc/openvpn/server/server.key
log /var/log/openvpn/connection.log
log-append /var/log/openvpn/connection.log
max-clients 10
ncp-ciphers AES-256-GCM
persist-key
persist-tun
plugin /usr/lib64/openvpn/plugins/openvpn-plugin-auth-pam.so login
port 1194
proto udp4
push "compress lz4-v2"
push "dhcp-option DNS 8.8.8.8"
push "redirect-gateway def1 bypass-dhcp"
push "route 10.0.0.0 255.255.0.0"
remote-cert-eku "TLS Web Client Authentication"
server 192.168.10.0 255.255.255.0
sndbuf 2097152
status /var/log/openvpn/status.log
tls-crypt /etc/openvpn/server/ta.key
tls-version-min 1.2
verb 3
And this is my client ONC config:
{
"Type": "UnencryptedConfiguration",
"Certificates": [
{
"GUID": "Bootstrap-Server-CA",
"Type": "Authority",
"X509": "MIIGITCCBAmgAw.....MAYsw8ZLPlmJNN/wA=="
},
{
"GUID": "Bootstrap-Root-CA",
"Type": "Authority",
"X509": "MIIGDDCCA/SgAf.....TbtcIBMrAiSlsOwHg=="
},
{
"GUID": "Bootstrap-User-Cert",
"Type": "Client",
"PKCS12": "MIILvQIBAzCC.....srrOGmHY3h7MPauIlD3"
}
],
"NetworkConfigurations": [
{
"GUID": "BOOTSTRAP_CONN_1",
"Name": "bootstrap_vpn",
"Type": "VPN",
"VPN": {
"Type": "OpenVPN",
"Host": "xx.xxx.xx.xxx",
"OpenVPN": {
"Auth": "SHA256",
"Cipher": "AES-256-GCM",
"ClientCertRef": "Bootstrap-User-Cert",
"ClientCertType": "Ref",
"IgnoreDefaultRoute": true,
"KeyDirection": "1",
"Port": 1194,
"Proto": "udp4",
"RemoteCertEKU": "TLS Web Client Authentication",
"RemoteCertTLS": "server",
"UseSystemCAs": true,
"ServerCARefs": [
"Bootstrap-Server-CA",
"Bootstrap-Root-CA",
],
"TLSAuthContents": "-----BEGIN OpenVPN Static key V1-----\n....\n.....\n-----END OpenVPN Static key V1-----\n",
"UserAuthenticationType": "Password"
}
}
}
]
}
It fails with no useful message on the client side (apart from saying: Failed to connect to the network..), but on the server it's reported as:
Wed Sep 23 17:44:15 2020 us=591576 tls-crypt unwrap error: packet authentication failed
Wed Sep 23 17:44:15 2020 us=591631 TLS Error: tls-crypt unwrapping failed from [AF_INET]xx.xx.xx.xx:64762
Wed Sep 23 17:44:44 2020 us=359795 tls-crypt unwrap error: packet authentication failed
Wed Sep 23 17:44:44 2020 us=359858 TLS Error: tls-crypt unwrapping failed from [AF_INET]xx.xx.xx.xx:19733
Any idea what I am doing wrong or missing? I'd really appreciate it if anyone could point me in the right direction.
-S
As far as I know, the ONC format does not support tls-crypt. If your Chromebook supports Android apps, you can use the unofficial OpenVPN Android app (blinkt.de), which does support it.

How to fix VirtualBox redhat-7 eth0 ONBOOT=no connectivity issue with vboxmanage tools?

I am creating a VirtualBox Red Hat box with Packer using the template attached below. Everything is fine except that when the host is created and rebooted, the eth0 network adapter does not start, as it is created with ONBOOT=no in /etc/sysconfig/network-scripts. However, if I open the UI of the box and manually trigger ifup eth0, it starts fine, ssh becomes available, and the process completes as expected. I need to use this in a Jenkins pipeline, so there is no option for someone to go and start the network interface manually. The question: is there any way to change the ONBOOT option to yes for the network adapter with VBoxManage commands, or to trigger the ifup eth0 command somehow? Either option may solve the problem.
{
"variables": {
"build_base": ".",
"isref_machine":"create-ova-caf",
"build_name":"virtual-box-jenkins",
"output_name":"packer-virtual-box",
"disk_size":"40000",
"ram":"1024",
"disk_adapter":"ide"
},
"builders":[
{
"name": "{{user `build_name`}}",
"type": "virtualbox-iso",
"guest_os_type": "Other_64",
"iso_url": "rhelis74_1710051533.iso",
"iso_checksum": "",
"iso_checksum_type": "none",
"hard_drive_interface":"{{user `disk_adapter`}}",
"ssh_username": "root",
"ssh_password": "Secret1.0",
"shutdown_command": "shutdown -P now",
"guest_additions_mode":"disable",
"boot_wait": "3s",
"boot_command": [ "auto<enter>"],
"ssh_timeout": "40m",
"headless":
"true",
"vm_name": "{{user `output_name`}}",
"disk_size": "{{user `disk_size`}}",
"output_directory":"{{user `build_base`}}/output-{{build_name}}",
"format": "ovf",
"vrdp_bind_address": "0.0.0.0",
"vboxmanage": [
["modifyvm", "{{.Name}}","--nictype1","virtio"],
["modifyvm", "{{.Name}}","--memory","{{ user `ram`}}"]
],
"skip_export":true,
"keep_registered": true
}
],
"provisioners": [
{
"type":"shell",
"inline": ["ls"]
}
]
}
To change the network interface boot setting to ONBOOT=yes, you need to create an Anaconda kickstart script, or copy one from an existing machine and change the configuration in it, and pass it in the boot command:
"boot_command": [ "<esc><wait>",
"vmlinuz initrd=initrd.img net.ifnames=0 biosdevname=0 ",
"ks=hd:fd0:/anaconda-ks.cfg",
"<enter>"
],
and in the Anaconda kickstart file:
network --bootproto=dhcp --device=eth0 --onboot=on --ipv6=auto --activate

Unable to read XBee Api frame on Raspberry Pi 3 with Node-RED v0.17.3

I hope you'll help me to find out what's wrong with my configuration.
I have two ZigBee nodes, one connected via USB to my Mac and one connected via the TX/RX pins to the Raspberry Pi 3.
I wrote two scripts, one that sends XBee API frame packets (from the Mac) and one that reads packets (on the Pi). Both scripts are based on the python-xbee library.
The scripts are the following. On the Mac:
import serial
from xbee import XBee, ZigBee
serial_port = serial.Serial('/dev/tty.usbserial-A5025UGJ', 9600)
xbee = ZigBee(serial_port, escaped=True)
# coordinator = 00 13 A2 00 40 8B B1 5A
while True:
    try:
        # Send AT packet
        xbee.send('tx', frame_id='A', dest_addr_long='\x00\x13\xA2\x00\x40\x8B\xB1\x5A', data='test')
        parameter = xbee.wait_read_frame()
        print 'parameter='
        print parameter
    except KeyboardInterrupt:
        break
serial_port.close()
On Pi:
import serial
from xbee import XBee, ZigBee
serial_port = serial.Serial('/dev/serial0', 9600)
xbee = ZigBee(serial_port, escaped=True)
while True:
    try:
        # Receive AT packet
        parameter = xbee.wait_read_frame()
        print 'parameter='
        print parameter
    except KeyboardInterrupt:
        break
serial_port.close()
The output of the first script is the following (the sender):
parameter=
{'retries': '\x00', 'frame_id': 'A', 'deliver_status': '\x00',
'dest_addr': '\x00\x00', 'discover_status': '\x00', 'id': 'tx_status'}
The output of the second script is the following (the receiver):
parameter= {'source_addr_long': '\x00\x13\xa2\x00#\x8b\xb1L',
'rf_data': 'test', 'source_addr': '\xa3\x19', 'id': 'rx',
'options': '\x01'}
Now, if I start Node-RED 0.17.3 and use the "serial in" node connected to a debug output node, I cannot see anything incoming if the split is based on the newline char "\n". The port is the same as in the script (/dev/serial0).
[
{
"id": "e6aa5379.9fd8c",
"type": "debug",
"z": "35e84ae.5ae88b6",
"name": "",
"active": true,
"console": "false",
"complete": "true",
"x": 432.5,
"y": 213,
"wires": []
},
{
"id": "63563843.bba178",
"type": "serial in",
"z": "35e84ae.5ae88b6",
"name": "",
"serial": "fbf0b4fa.9b2918",
"x": 209.5,
"y": 201,
"wires": [
[
"e6aa5379.9fd8c"
]
]
},
{
"id": "fbf0b4fa.9b2918",
"type": "serial-port",
"z": "",
"serialport": "/dev/serial0",
"serialbaud": "9600",
"databits": "8",
"parity": "none",
"stopbits": "1",
"newline": "\\n",
"bin": "false",
"out": "char",
"addchar": false
}
]
If I change the configuration of the "serial in" node, setting the split to "after a timeout of 5000 ms" and the delivery to "binary buffers", this is the result in the debug view:
[126,0,125,49,144,0,125,51,162,0,64,139,177,76,163,25,1,112,114,111,118,97,13]
Does anyone know how to find the correct way to split input with XBee API frames?
I don't know anything about Node-RED, but you need to parse the stream of bytes to extract the frames. It would require more work to escape and unescape the data going in and out, but I think you can use API mode 2 (ATAP = 2), where the start-of-frame byte (0x7E) is escaped when it appears inside a frame, so you could potentially split on that byte.
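Just to illustrate the framing idea, here is a minimal sketch (language-agnostic in spirit, not tied to Node-RED or python-xbee) that splits a captured byte stream at each 0x7E delimiter, assuming API mode 2 so that 0x7E never appears unescaped inside a frame:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class XBeeFrameSplitter {

    // Split a raw byte stream at each 0x7E start delimiter. In API mode 2
    // (ATAP=2) a 0x7E inside a frame is escaped, so every literal 0x7E marks
    // the beginning of a new frame. Bytes before the first delimiter (a
    // partial frame) are discarded.
    public static List<byte[]> splitFrames(byte[] stream) {
        List<byte[]> frames = new ArrayList<>();
        int start = -1;
        for (int i = 0; i < stream.length; i++) {
            if ((stream[i] & 0xFF) == 0x7E) {
                if (start >= 0) {
                    frames.add(Arrays.copyOfRange(stream, start, i));
                }
                start = i;
            }
        }
        if (start >= 0) {
            frames.add(Arrays.copyOfRange(stream, start, stream.length));
        }
        return frames;
    }
}
After splitting you would still need to unescape the 0x7D escape sequences and verify each frame's length and checksum before handing it on.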

Data ingestion task : Hadoop running in local instead of remote Hadoop EMR cluster

I have setup a multi-node druid cluster with:
1) 1 node running as coordinator and overlord (m4.xl)
2) 2 nodes each running historical and middle managers both. (r3.2xl)
3) 1 node running broker (r3.2xl)
Now I have an EMR cluster running which I want to use for ingestion tasks. The problem is that whenever I try to submit a job via the curl command, the job always starts as a local Hadoop job on both middle managers instead of being submitted to the remote EMR cluster. My data lies in S3, and S3 is also configured for deep storage.
I have also copied all the jars from EMR master to hadoop-dependencies/hadoop-client/2.7.3/
Druid version: 0.9.2
EMR version: 5.2
Please find below the indexing job, the common runtime properties, and the middle manager runtime properties.
Q1) How do I get the job to submit to the remote EMR cluster?
Q2) Logs for the indexing task are not showing up on overlord:8090; how do I enable them?
File: data_index.json
{
"type": "index_hadoop",
"spec": {
"ioConfig": {
"type": "hadoop",
"inputSpec": {
"type": "static",
"paths": "s3n://<kjcnskd>smallTest"
}
},
"dataSchema": {
"dataSource": "multi_value_test_01",
"granularitySpec": {
"type": "uniform",
"segmentGranularity": "day",
"queryGranularity": "none",
"intervals": [
"2011-09-12/2017-09-13"
]
},
"parser": {
"type": "string",
"parseSpec": {
"format": "tsv",
"delimiter": "\u0001",
"listDelimiter": "|",
"columns": [
"article_type",
"brand",
"gender",
"brand_type",
"master_category",
"supply_type",
"business_unit",
"testdim",
"date",
"week",
"month",
"year",
"style_id",
"live_styles",
"non_live_styles",
"broken_style",
"new_season_styles",
"live_styles_qty",
"non_live_styles_qty",
"broken_style_qty",
"new_season_styles_qty"
],
"dimensionsSpec": {
"dimensions": [
"article_type",
"brand",
"gender",
"brand_type",
"master_category",
"supply_type",
"business_unit",
"testdim",
"week",
"month",
"year",
"style_id"
]
},
"timestampSpec": {
"column": "date",
"format": "yyyyMMdd"
}
}
},
"metricsSpec": [
{
"name": "live_styles",
"type": "doubleSum",
"fieldName": "live_styles"
},
{
"name": "non_live_styles",
"type": "doubleSum",
"fieldName": "non_live_styles"
},
{
"name": "broken_style",
"type": "doubleSum",
"fieldName": "broken_style"
},
{
"name": "new_season_styles",
"type": "doubleSum",
"fieldName": "new_season_styles"
},
{
"name": "live_styles_qty",
"type": "doubleSum",
"fieldName": "live_styles_qty"
},
{
"name": "broken_style_qty",
"type": "doubleSum",
"fieldName": "broken_style_qty"
},
{
"name": "new_season_styles_qty",
"type": "doubleSum",
"fieldName": "new_season_styles_qty"
}
]
},
"tuningConfig": {
"type": "hadoop",
"partitionsSpec": {
"type": "hashed",
"targetPartitionSize": 5000000
},
"jobProperties": {
"fs.s3.awsAccessKeyId": "XXXXXXXXXXXXXX",
"fs.s3.awsSecretAccessKey": "XXXXXXXXXXXXXX",
"fs.s3.impl": "org.apache.hadoop.fs.s3native.NativeS3FileSystem",
"fs.s3n.awsAccessKeyId": "XXXXXXXXXXXXXX",
"fs.s3n.awsSecretAccessKey": "XXXXXXXXXXXXXX",
"fs.s3n.impl": "org.apache.hadoop.fs.s3native.NativeS3FileSystem",
"io.compression.codecs": "org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec"
}
}
}
}
File: common.runtime.properties
#
# Licensed to Metamarkets Group Inc. (Metamarkets) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. Metamarkets licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
#
# Extensions
#
# This is not the full list of Druid extensions, but common ones that people often use. You may need to change this list
# based on your particular setup.
druid.extensions.loadList=["druid-kafka-eight", "druid-s3-extensions", "druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "mysql-metadata-storage"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
druid.extensions.hadoopDependenciesDir=hadoop-dependencies/hadoop-client/2.7.3
#
# Logging
#
# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true
#
# Zookeeper
#
druid.zk.service.host=10.0.1.152
druid.zk.paths.base=/druid
#
# Metadata storage
#
# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
#druid.metadata.storage.type=derby
#druid.metadata.storage.connector.connectURI=jdbc:derby://metadata.store.ip:1527/var/druid/metadata.db;create=true
#druid.metadata.storage.connector.host=metadata.store.ip
#druid.metadata.storage.connector.port=1527
# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://10.0.1.140:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=druid123
# For PostgreSQL (make sure to additionally include the Postgres extension):
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...
#
# Deep storage
#
# For local disk (only viable in a cluster if this is a network mount):
#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments
# For HDFS (make sure to include the HDFS extension and that your Hadoop config files in the cp):
#druid.storage.type=hdfs
#druid.storage.storageDirectory=/druid/segments
# For S3:
druid.storage.type=s3
druid.storage.bucket=asfvdcs
druid.storage.baseKey=druid/segments
druid.s3.accessKey=XXXXXXXXXXXX
druid.s3.secretKey=XXXXXXXXXXXX
#
# Indexing service logs
#
# For local disk (only viable in a cluster if this is a network mount):
druid.indexer.logs.type=file
druid.indexer.logs.directory=var/druid/indexing-logs
# For HDFS (make sure to include the HDFS extension and that your Hadoop config files in the cp):
#druid.indexer.logs.type=hdfs
#druid.indexer.logs.directory=/druid/indexing-logs
# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=testashutosh
#druid.indexer.logs.s3Prefix=druid/indexing-logs
#
# Service discovery
#
druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator
#
# Monitoring
#
druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=info
File: middle manager runtime.properties
druid.service=druid/middleManager
druid.port=8091
# Number of tasks per middleManager
druid.worker.capacity=3
# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
# HTTP server threads
druid.server.http.numThreads=25
# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=2
# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=hdfs://ip-10-0-1-xxx.ap-southeast-1.compute.internal:8020/tmp/druid-indexing
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.7.3"]
druid.indexer.runner.type=remote
You need to tell Druid about the Hadoop cluster. To quote the manual:
Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) on the classpath of your Druid nodes. You can do this by copying them into conf/druid/_common/core-site.xml, conf/druid/_common/hdfs-site.xml, and so on.
If you have already done that, then it would indicate an issue with one of the config files (this happened to me).

Chef::Exceptions::ChecksumMismatch when installing nginx-1.7.8 from source

I get the following error when running vagrant up --provision to set up my development environment with vagrant...
==> default: [2014-12-08T20:33:51+00:00] ERROR: remote_file[http://nginx.org/download/nginx-1.7.8.tar.gz] (nginx::source line 58) had an error: Chef::Exceptions::ChecksumMismatch: Checksum on resource (0510af) does not match checksum on content (12f75e)
My chef JSON has the following for nginx:
"nginx": {
"version": "1.7.8",
"user": "deploy",
"init_style": "init",
"modules": [
"http_stub_status_module",
"http_ssl_module",
"http_gzip_static_module"
],
"passenger": {
"version": "4.0.53",
"gem_binary": "/home/vagrant/.rbenv/shims/gem"
},
"configure_flags": [
"--add-module=/home/vagrant/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/passenger-3.0.18/ext/nginx"
],
"gzip_types": [
"text/plain",
"text/html",
"text/css",
"text/xml",
"text/javascript",
"application/json",
"application/x-javascript",
"application/xml",
"application/xml+rss"
]}
and Cheffile has the following cookbook:
cookbook 'nginx'
How do I resolve the Checksum mismatch?
The nginx cookbook requires you to edit the checksum attribute when using another version of nginx. The remote_file resource that is causing the error is:
remote_file nginx_url do
  source nginx_url
  checksum node['nginx']['source']['checksum']
  path src_filepath
  backup false
end
You need to update the checksum value. Specifically node['nginx']['source']['checksum'].
So in your JSON, you would add this line:
"source": {"checksum": "insert checksum here" }
Edit: As pointed out in the comments, the checksum is SHA256. You can generate the checksum of the file like so:
shasum -a 256 nginx-1.7.8.tar.gz
