How do I bridge 2 Telegram --> WhatsApp groups?

I'm bridging two Telegram groups with groups on WhatsApp.
I made it work for one group, but I'm having issues with multiple groups or channels.
According to the docs this should work.
My matterbridge.toml file (tokens aren't valid):
[whatsapp.dummygroup]
Number="+4475611"
SessionFile="sessi0281.gob"
QrOnWhiteTerminal=false
RemoteNickFormat="{NICK} via WhatsApp: "
[telegram.dummygroup]
Token="536IjmE"
Server="mark"
RemoteNickFormat="{NICK} via WhatsApp: "
MessageFormat="HTMLNick"
[whatsapp.seconddummygroup]
Number="+447581"
SessionFile="sessi0281.gob"
QrOnWhiteTerminal=false
RemoteNickFormat="{NICK} via Telegram: "
[telegram.seconddummygroup]
Token="522697RnIbMwQ"
Server="mark"
RemoteNickFormat="{NICK} via WhatsApp: "
MessageFormat="HTMLNick"
[[gateway]]
name="gateway1"
enable=true
[[gateway.inout]]
account="telegram.dummygroup"
channel="-7576"
[[gateway.inout]]
account="whatsapp.dummygroup"
channel="120#g.us"
[[gateway]]
name="gateway2"
enable=true
[[gateway.inout]]
account="telegram.seconddummygroup"
channel="-7985"
[[gateway.inout]]
account="whatsapp.seconddummygroup"
channel="12307#g.us"
Output in debug mode:
SendMessage failed: failed to get group members: websocket not connected
Each gateway works when the other one is commented out. How do I make both gateways work?
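One detail worth checking (an observation on my part, not a confirmed fix): both [whatsapp.*] accounts above point at the same SessionFile, and each WhatsApp account generally needs its own session. A minimal sketch with separate session files (filenames illustrative):
[whatsapp.dummygroup]
Number="+4475611"
SessionFile="session_dummygroup.gob"   # one session file per WhatsApp account
[whatsapp.seconddummygroup]
Number="+447581"
SessionFile="session_seconddummygroup.gob"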

Related

Replication factor: 3 larger than available brokers: 1 in @EmbeddedKafka

I want to test a Kafka transaction.
kafkaTemplate.executeInTransaction { tx ->
tx.sendDefault("abacaba") // Should I do .get() ??
tx.sendDefault("abacaba")
}
And I get the following log when the test starts:
org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
2023-01-27 16:18:17.831 INFO 81975 --- [quest-handler-4] kafka.server.ZkAdminManager
: [Admin Manager on Broker 0]: Error processing create topic request
CreatableTopic(name='__transaction_state', numPartitions=50, replicationFactor=3,
assignments=[], configs=[CreateableTopicConfig(name='compression.type',
value='uncompressed'), CreateableTopicConfig(name='cleanup.policy', value='compact'),
CreateableTopicConfig(name='min.insync.replicas', value='2'),
CreateableTopicConfig(name='segment.bytes', value='104857600'),
CreateableTopicConfig(name='unclean.leader.election.enable', value='false')])
I tried setting the replication factor but it doesn't work :(
Help me, please.
You didn't say in your question that you're dealing with @EmbeddedKafka. See its JavaDocs for more info:
/**
 * @return the number of brokers
 */
@AliasFor("value")
int count() default 1;
When you have enough brokers in the cluster, you can ask for a replication factor on a given topic, but not more than the number of brokers, of course.
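Two ways out follow from this: raise the embedded broker count to 3, or keep one broker and lower the internal transaction-state topic's requirements. A minimal Kotlin sketch, assuming Spring Kafka's test support (class name illustrative):
import org.springframework.boot.test.context.SpringBootTest
import org.springframework.kafka.test.context.EmbeddedKafka

// Either use @EmbeddedKafka(count = 3) so replicationFactor=3 can be satisfied,
// or keep a single broker and let __transaction_state be created with factor 1:
@SpringBootTest
@EmbeddedKafka(
    count = 1,
    brokerProperties = [
        "transaction.state.log.replication.factor=1",
        "transaction.state.log.min.isr=1"
    ]
)
class TransactionalKafkaTest {
    // kafkaTemplate.executeInTransaction { ... } should now run without
    // InvalidReplicationFactorException
}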

Lua - encrypt / decrypt commands with AES CBC to pair with a Panasonic TV

I'm trying to rework a script I found online to control a Panasonic TV, which requires a secure/encrypted pairing to occur so I can control it remotely. (The full code is here: https://forum.logicmachine.net/showthread.php?tid=232&pid=16580#pid16580)
Because it seems to be built on LuaJIT and has some other proprietary Lua elements, I'm trying to find alternatives that will allow it to work with the Lua 5.1 install on a Vera Home Automation controller (a relatively closed system).
Also, and perhaps most importantly for me, I'd love as much of the converted code as possible to have minimal requirements on external modules. I should add that I've only recently started learning Lua, but one way I like to learn is to convert/repurpose code I find online.
So far I've managed to find alternatives for a number of the modules being used, e.g.:
encdec.base64dec -> Lua Base64 Encode
lmcore.hextostr -> https://github.com/tst2005/binascii/blob/master/binascii.lua
storage.set -> Alternative found in Vera Home Controllers
storage.get -> Alternative found in Vera Home Controllers
bit.band -> Bitware module in Vera Home Controllers
bit.bxor -> Bitware module in Vera Home Controllers
Where I'm stuck is with the following:
aes:new
aes.cipher
user.aes
encdec.hmacsha256
Here’s an extract of the code where the above are used.
function encrypt_soap_payload(data, key, hmac_key, iv)
  -- 12 ASCII zeros plus the 4-byte big-endian length of the data form a 16-byte header
  payload = '000000000000'
  n = #data
  payload = payload .. string.char(bit.band(bit.rshift(n, 24), 0xFF))
  payload = payload .. string.char(bit.band(bit.rshift(n, 16), 0xFF))
  payload = payload .. string.char(bit.band(bit.rshift(n, 8), 0xFF))
  payload = payload .. string.char(bit.band(n, 0xFF))
  payload = payload .. data
  -- AES-128-CBC encrypt, append an HMAC-SHA256 signature, then base64-encode
  aes_cbc, err = aes:new(key, nil, aes.cipher(128, 'cbc'), { iv = iv }, nil, 1)
  ciphertext = aes_cbc:encrypt(payload)
  sig = encdec.hmacsha256(ciphertext, hmac_key, true)
  encrypted_payload = encdec.base64enc(ciphertext .. sig)
  return encrypted_payload
end

function decrypt_soap_payload(data, key, hmac_key, iv)
  -- base64-decode and AES-128-CBC decrypt, then drop the 16-byte header (32 hex chars)
  aes_cbc, err = aes:new(key, nil, aes.cipher(128, 'cbc'), { iv = iv }, nil, 0)
  decrypted = aes_cbc:decrypt(encdec.base64dec(data))
  decrypted = string.gsub(string.sub(lmcore.strtohex(decrypted), 33), '%x%x',
    function(value) return string.char(tonumber(value, 16)) end)
  return decrypted
end
I can get to the point where I can create the parameters for the payload encrypt request (example below); it's the encryption/decryption I can't do:
data="1234"
key="\\S„ßÍ}/Ìa5!"
hmac_key="¹jz¹2¸F\r}òcžÎ„ 臧.ª˜¹=¤µæŸ"
iv=" {¬£áæ‚2žâ3ÐÞË€ú "
I've found an aes.lua module online, but that requires loads of other modules, most notably ffi.lua. Ideally I'd like to avoid using that. I also came across this aes128.lua, but I'm not sure how that handles all the other parameters, e.g. CBC etc. Finally, there's this aes256ecb.lua script; could that be converted to AES-128-CBC and then used in the above?
Is anyone aware of (or maybe has) a Lua script that can handle the AES CBC requirements above?
Many thanks!
In the end I found out that I could do AES CBC by calling openssl from the command line, e.g.:
local payload = "ENTER HERE"
local key = "ENTER HERE"
local iv = "ENTER HERE"
local buildsslcommand = "openssl enc -aes-128-cbc -nosalt -e -a -A "..payload.." -K "..key.." -iv "..iv
-- print("Command to send = " ..buildsslcommand)
local file = assert(io.popen(buildsslcommand, 'r'))
local output = file:read('*all')
file:close()
-- print(string.len(output)) --> just count what's returned
-- print(output) --> prints the output of the command
FYI - it looks like I could do encdec.hmacsha256 via openssl as well, but I've not been able to get that working :-(
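A sketch of how the HMAC might be done the same way through the openssl CLI (an assumption on my part: hex-encoded inputs, and xxd available to turn hex back into bytes; names are illustrative):
-- Sketch: HMAC-SHA256 via the openssl CLI, mirroring the AES-CBC approach above
local function hmac_sha256_hex(data_hex, key_hex)
  local cmd = "echo -n " .. data_hex .. " | xxd -r -p | openssl dgst -sha256 -mac HMAC -macopt hexkey:" .. key_hex
  local f = assert(io.popen(cmd, "r"))
  local out = f:read("*all")
  f:close()
  -- openssl prints "(stdin)= <digest>"; keep only the trailing hex digest
  return out:match("(%x+)%s*$")
end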

data channel lock error while configuring flume with multiple channels

I have tried to fan out the flow from one source to two channels. I also specified different dataDirs and checkpointDirs properties for each channel, as in the "channel lock error while configuring flume's multiple sources using FILE channels" question. I have used a multiplexing channel selector. I get the following error:
18/08/23 16:21:37 ERROR file.FileChannel: Failed to start the file channel [channel=fileChannel1_2]
java.io.IOException: Cannot lock /root/.flume/file-channel/data. The directory is already locked. [channel=fileChannel1_2]
at org.apache.flume.channel.file.Log.lock(Log.java:1169)
at org.apache.flume.channel.file.Log.<init>(Log.java:336)
at org.apache.flume.channel.file.Log.<init>(Log.java:76)
at org.apache.flume.channel.file.Log$Builder.build(Log.java:276)
at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:281)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) .....
My configuration file is as follows.
agent1.sinks=hdfs-sink1_1 hdfs-sink1_2
agent1.sources=source1_1
agent1.channels=fileChannel1_1 fileChannel1_2
agent1.channels.fileChannel1_1.type=file
agent1.channels.fileChannel1_1.checkpointDir=/home/Flume/alpha/001
agent1.channels.fileChannel1_1.dataDir=/mnt/alpha_data/
agent1.channels.fileChannel1_1.checkpointOnClose=true
agent1.channels.fileChannel1_1.dataOnClose=true
agent1.sources.source1_1.type=spooldir
agent1.sources.source1_1.spoolDir=/home/ABC/
agent1.sources.source1_1.recursiveDirectorySearch=true
agent1.sources.source1_1.fileSuffix=.COMPLETED
agent1.sources.source1_1.basenameHeader = true
agent1.sinks.hdfs-sink1_1.type=hdfs
agent1.sinks.hdfs-sink1_1.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_1.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/CA
agent1.sinks.hdfs-sink1_1.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_1.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_1.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_1.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_1.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_1.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_1.hdfs.useLocalTimeStamp=false
agent1.channels.fileChannel1_2.type=file
agent1.channels.fileChannel1_2.capacity=200000
agent1.channels.fileChannel1_2.transactionCapacity=1000
agent1.channels.fileChannel1_2.checkpointDir=/home/Flume/beta/001
agent1.channels.fileChannel1_2.dataDir=/mnt/beta_data/
agent1.channels.fileChannel1_2.checkpointOnClose=true
agent1.channels.fileChannel1_2.dataOnClose=true
agent1.sinks.hdfs-sink1_2.type=hdfs
agent1.sinks.hdfs-sink1_2.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_2.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/AZ
agent1.sinks.hdfs-sink1_2.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_2.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_2.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_2.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_2.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_2.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_2.hdfs.useLocalTimeStamp=false
agent1.sources.source1_1.channels=fileChannel1_1 fileChannel1_2
agent1.sinks.hdfs-sink1_1.channel=fileChannel1_1
agent1.sinks.hdfs-sink1_2.channel=fileChannel1_2
agent1.sources.source1_1.selector.type=multiplexing
agent1.sources.source1_1.selector.header=basenameHeader
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
Can someone give a solution for this?
Try setting a channel for the default property in the multiplexing selector:
agent1.sources.source1_1.selector.default=fileChannel1_1
The data channel lock error was corrected, but I still couldn't get the multiplexing to work. The code follows.
agent1.sinks=hdfs-sink1_1 hdfs-sink1_2 hdfs-sink1_3
agent1.sources=source1_1
agent1.channels=fileChannel1_1 fileChannel1_2 fileChannel1_3
agent1.channels.fileChannel1_1.type=file
agent1.channels.fileChannel1_1.capacity=200000
agent1.channels.fileChannel1_1.transactionCapacity=1000
agent1.channels.fileChannel1_1.checkpointDir=/home/Flume/alpha/001
agent1.channels.fileChannel1_1.dataDirs=/home/Flume/alpha_data
agent1.channels.fileChannel1_1.checkpointOnClose=true
agent1.channels.fileChannel1_1.dataOnClose=true
agent1.sources.source1_1.type=spooldir
agent1.sources.source1_1.spoolDir=/home/ABC/
agent1.sources.source1_1.recursiveDirectorySearch=true
agent1.sources.source1_1.fileSuffix=.COMPLETED
agent1.sources.source1_1.basenameHeader = true
agent1.sources.source1_1.basenameHeaderKey = basename
agent1.sinks.hdfs-sink1_1.type=hdfs
agent1.sinks.hdfs-sink1_1.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_1.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/CA
agent1.sinks.hdfs-sink1_1.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_1.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_1.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_1.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_1.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_1.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_1.hdfs.useLocalTimeStamp=false
agent1.channels.fileChannel1_2.type=file
agent1.channels.fileChannel1_2.capacity=200000
agent1.channels.fileChannel1_2.transactionCapacity=1000
agent1.channels.fileChannel1_2.checkpointDir=/home/Flume/beta/001
agent1.channels.fileChannel1_2.dataDirs=/home/Flume/beta_data
agent1.channels.fileChannel1_2.checkpointOnClose=true
agent1.channels.fileChannel1_2.dataOnClose=true
agent1.sinks.hdfs-sink1_2.type=hdfs
agent1.sinks.hdfs-sink1_2.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_2.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/AZ
agent1.sinks.hdfs-sink1_2.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_2.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_2.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_2.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_2.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_2.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_2.hdfs.useLocalTimeStamp=false
agent1.channels.fileChannel1_3.type=file
agent1.channels.fileChannel1_3.capacity=200000
agent1.channels.fileChannel1_3.transactionCapacity=10
agent1.channels.fileChannel1_3.checkpointDir=/home/Flume/gamma/001
agent1.channels.fileChannel1_3.dataDirs=/home/Flume/gamma_data
agent1.channels.fileChannel1_3.checkpointOnClose=true
agent1.channels.fileChannel1_3.dataOnClose=true
agent1.sinks.hdfs-sink1_3.type=hdfs
agent1.sinks.hdfs-sink1_3.hdfs.filePrefix = %{basename}
agent1.sinks.hdfs-sink1_3.hdfs.path=hdfs://10.44.209.44:9000/flume_sink/KT
agent1.sinks.hdfs-sink1_3.hdfs.batchSize=1000
agent1.sinks.hdfs-sink1_3.hdfs.rollSize=268435456
agent1.sinks.hdfs-sink1_3.hdfs.rollInterval=0
agent1.sinks.hdfs-sink1_3.hdfs.rollCount=50000000
agent1.sinks.hdfs-sink1_3.hdfs.fileType=DataStream
agent1.sinks.hdfs-sink1_3.hdfs.writeFormat=Text
agent1.sinks.hdfs-sink1_3.hdfs.useLocalTimeStamp=false
agent1.sources.source1_1.channels=fileChannel1_1 fileChannel1_2 fileChannel1_3
agent1.sinks.hdfs-sink1_1.channel=fileChannel1_1
agent1.sinks.hdfs-sink1_2.channel=fileChannel1_2
agent1.sinks.hdfs-sink1_3.channel=fileChannel1_3
agent1.sources.source1_1.selector.type=replicating
agent1.sources.source1_1.selector.header=basename
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
agent1.sources.source1_1.selector.default=fileChannel1_3
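A closing observation (my reading, not from the thread): selector.type=replicating copies every event to all channels and ignores the mapping.* lines, so header-based routing needs the selector to stay multiplexing. A sketch of the selector block under that assumption:
agent1.sources.source1_1.selector.type=multiplexing
agent1.sources.source1_1.selector.header=basename
agent1.sources.source1_1.selector.mapping.CA=fileChannel1_1
agent1.sources.source1_1.selector.mapping.AZ=fileChannel1_2
agent1.sources.source1_1.selector.default=fileChannel1_3
Note that the multiplexing selector does an exact match on the header value, and the basename header carries the full file name, so mappings on CA/AZ only fire if the header value is literally CA or AZ.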

Resource 7bed8adc-9ed9-49dc-b15e-6660e2fc3285 transitioned to failure state ERROR when using openstacksdk to create_server

When I create the OpenStack server, I get the exception below:
Resource 7bed8adc-9ed9-49dc-b15e-6660e2fc3285 transitioned to failure state ERROR
My code is below:
server_args = {
    "name": server_name,
    "image_id": image_id,
    "flavor_id": flavor_id,
    "networks": [{"uuid": network.id}],
    "admin_password": admin_password,
}
try:
    server = user_conn.conn.compute.create_server(**server_args)
    server = user_conn.conn.compute.wait_for_server(server)
except Exception as e:  # here is where I catch the exception
    raise e
When calling create_server, my server_args data is as below:
{'flavor_id': 'd4424892-4165-494e-bedc-71dc97a73202', 'networks': [{'uuid': 'da4e3433-2b21-42bb-befa-6e1e26808a99'}], 'admin_password': '123456', 'name': '133456', 'image_id': '60f4005e-5daf-4aef-a018-4c6b2ff06b40'}
My openstacksdk version is 0.9.18.
In the end, I found that the flavor was too big for the OpenStack compute node; after I changed it to a smaller flavor, the server was created successfully.
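In case it helps with the same diagnosis: a short sketch (assuming conn is an authenticated openstack.connection.Connection) to list the available flavors and pick a smaller one:
# Sketch: enumerate flavors so a smaller one can be chosen for create_server
for flavor in conn.compute.flavors():
    print(flavor.name, flavor.vcpus, flavor.ram, flavor.disk)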

Munin alerts/notifications not being executed

I have a custom notification script which I want to supply with data from Munin whenever a critical event occurs. Unfortunately I was not able to get it working while following the official docs. The notification script itself is tested and works fine when called from the shell with fake data. Its permissions are 755, so execution shouldn't be an issue either. Hence the contact hook is probably not being called.
What I did is this:
# /etc/munin/munin-conf.d/custom.cnf
[...]
contact.slack.command MUNIN_SERVICESTATE="${var:worst}" MUNIN_HOST="${var:host}" MUNIN_SERVICE="${var:graph_title}" MUNIN_GROUP=${var:group} /usr/local/bin/notify_slack_munin
contact.slack.always_send warning critical
contact.slack.text ${if:cfields \u000A* CRITICALs:${loop<,>:cfields ${var:label} is ${var:value} (outside range [${var:crange}])${if:extinfo : ${var:extinfo}}}.}${if:wfields \u000A* WARNINGs:${loop<,>:wfields ${var:label} is ${var:value} (outside range [${var:wrange}])${if:extinfo : ${var:extinfo}}}.}${if:ufields \u000A* UNKNOWNs:${loop<,>:ufields ${var:label} is ${var:value}${if:extinfo : ${var:extinfo}}}.}${if:fofields \u000A* OKs:${loop<,>:fofields ${var:label} is ${var:value}${if:extinfo : ${var:extinfo}}}.}
Above those lines the nodes and dirs are defined, which works fine. But the notification isn't going out. Do you have any idea what it could be?
I just played with the same script (originally found on a gist) and got it working after fixing two trivial mistakes:
make sure that you put the contacts in the config before the host tree. I put it at the end of the configuration file and it wasn't called at all until I moved it to where the sample contacts are placed
make sure that the command in the Munin configuration uses the full filename: if the config references /usr/local/bin/notify_slack_munin, then check that the actual file has no file extension
As written in the Munin Guide, watch the munin-limits.log to see what is happening.
Final note: if you have contact.slack.always_send warning critical, it will push a notification repeatedly, not just on severity changes.
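To watch the munin-limits.log mentioned above, for example (path may vary by distribution):
tail -f /var/log/munin/munin-limits.log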
For reference (in case the gist link breaks), the script that calls the Slack webhook is the following (slightly modified for clarity):
#!/bin/bash
# Slack notification script for Munin
# Mark Matienzo (@anarchivist)
# https://gist.github.com/anarchivist/58a905515b2eb2b42fe6
#
# To use:
# 1) Create a new incoming webhook for Slack
# 2) Edit the configuration variables that start with "SLACK_" below
# 3) Add the following to your munin configuration before the host tree
# in the part where sample contacts are listed:
#
# # -- Slack contact configuration
# # notify_slack_munin.sh is the full file name
# contact.slack.command MUNIN_SERVICESTATE="${var:worst}" MUNIN_HOST="${var:host}" MUNIN_SERVICE="${var:graph_title}" MUNIN_GROUP=${var:group} /usr/local/bin/notify_slack_munin.sh
# # This line will spam Slack with notifications even if no state change happens
# contact.slack.always_send warning critical
# # note: This has to be on one line for munin to parse properly
# contact.slack.text ${if:cfields \u000A* CRITICALs:${loop<,>:cfields ${var:label} is ${var:value} (outside range [${var:crange}])${if:extinfo : ${var:extinfo}}}.}${if:wfields \u000A* WARNINGs:${loop<,>:wfields ${var:label} is ${var:value} (outside range [${var:wrange}])${if:extinfo : ${var:extinfo}}}.}${if:ufields \u000A* UNKNOWNs:${loop<,>:ufields ${var:label} is ${var:value}${if:extinfo : ${var:extinfo}}}.}${if:fofields \u000A* OKs:${loop<,>:fofields ${var:label} is ${var:value}${if:extinfo : ${var:extinfo}}}.}
SLACK_CHANNEL="#insert-your-channel"
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/insert/your/hookURL"
SLACK_USERNAME="munin"
SLACK_ICON_EMOJI=":munin:"
# If you want to test the script from a terminal, you may have to comment this out to avoid a hanging console
input=`cat`
# Set the message icon based on service state
if [ "$MUNIN_SERVICESTATE" = "CRITICAL" ]; then
    ICON=":exclamation:"
    COLOR="danger"
elif [ "$MUNIN_SERVICESTATE" = "WARNING" ]; then
    ICON=":warning:"
    COLOR="warning"
elif [ "$MUNIN_SERVICESTATE" = "ok" ] || [ "$MUNIN_SERVICESTATE" = "OK" ]; then
    ICON=":white_check_mark:"
    COLOR="good"
elif [ "$MUNIN_SERVICESTATE" = "UNKNOWN" ]; then
    ICON=":question:"
    COLOR="#00CCCC"
else
    ICON=":white_medium_square:"
    COLOR="#CCCCCC"
fi
# Generate the JSON payload
PAYLOAD="{\"channel\": \"${SLACK_CHANNEL}\", \"username\": \"${SLACK_USERNAME}\", \"icon_emoji\": \"${SLACK_ICON_EMOJI}\", \"attachments\": [{\"color\": \"${COLOR}\", \"fallback\": \"Munin alert - ${MUNIN_SERVICESTATE}: ${MUNIN_SERVICE} on ${MUNIN_HOST}\", \"pretext\": \"${ICON} Munin alert - ${MUNIN_SERVICESTATE}: ${MUNIN_SERVICE} on ${MUNIN_HOST} in ${MUNIN_GROUP} - <http://central/munin/|View Munin>\", \"fields\": [{\"title\": \"Severity\", \"value\": \"${MUNIN_SERVICESTATE}\", \"short\": \"true\"}, {\"title\": \"Service\", \"value\": \"${MUNIN_SERVICE}\", \"short\": \"true\"}, {\"title\": \"Host\", \"value\": \"${MUNIN_HOST}\", \"short\": \"true\"}, {\"title\": \"Current Values\", \"value\": \"${input}\", \"short\": \"false\"}]}]}"
#Send message to Slack
curl -sX POST -o /dev/null --data "payload=${PAYLOAD}" $SLACK_WEBHOOK_URL 2>&1
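To test the script by hand with fake data (values illustrative; the script reads the "Current Values" text from stdin via input=`cat`):
echo "load is 12.3" | MUNIN_SERVICESTATE=CRITICAL MUNIN_HOST=web01 MUNIN_SERVICE="Load average" MUNIN_GROUP=servers /usr/local/bin/notify_slack_munin.sh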
