How to append to a file in Puppet - nginx

I have Puppet code for nginx.conf.
The file is created by source => puppet://path to file, which points to a file containing the required contents.
I don't want to disturb this file because it holds the default settings.
I need to append to this nginx.conf on the specific nodes where it is required, so I have written a separate module that is responsible for the new changes.
But this module depends on the previous module, which contains the nginx.conf file.
if ! defined(File['/etc/nginx/nginx.conf']) {
  file { '/etc/nginx/nginx.conf':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    source  => 'puppet:///modules/path/to/file/nginx_default.conf',
    require => Package['nginx'],
    notify  => Service['nginx'],
  }
}
How could I append to the nginx.conf file without disturbing the above code?

I would recommend using the Nginx modules from Puppet Forge. The main benefit of these modules is that you don't have to reinvent the wheel; you can reuse them or adapt them to your needs.
This still allows you to have a default nginx.conf (as a template), and by using classes you can repurpose the nginx.conf template to your liking.
For example:
host_1.pp:
class { 'nginx':
  # Fix for "upstream sent too big header ..." errors
  fastcgi_buffers     => '8 8k',
  fastcgi_buffer_size => '8k',
  ssl_ciphers         => 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256',
  upstream            => {
    fpmbackend => 'server unix:/var/run/php-fpm-www.sock',
  },
}
host_2.pp:
class { 'nginx':
  # Fix for "upstream sent too big header ..." errors
  fastcgi_buffers     => '8 8k',
  fastcgi_buffer_size => '36k',
  upstream            => {
    fpmbackend => 'server unix:/var/run/php-fpm-host2.sock',
  },
}
However, if you still want to use your own modules, you can set up nginx.conf as a template and have it populated with variables of your choosing based on the environment/host; this requires the fewest changes to your code (a sketch follows below).
Although, IMO, in the long run using the right community modules will pay off better for you and your team.
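To illustrate the template approach, here is a minimal sketch; the module name, template path, and the $fastcgi_buffers parameter are placeholders I made up, not taken from the question:
# Hypothetical class: render nginx.conf from an ERB template so per-host
# values can be injected without a second module appending to the file
class mymodule::nginx_conf (
  $fastcgi_buffers = '8 8k',  # example value consumed by the template
) {
  file { '/etc/nginx/nginx.conf':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => template('mymodule/nginx.conf.erb'),
    require => Package['nginx'],
    notify  => Service['nginx'],
  }
}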

I ended up using an exec to append to the file, as there were many restrictions on trying other approaches, such as adding a new module.
I created a file containing the lines to append and then removed it.
include existing::module

if ! defined(File['/new/path/for/temp/file/nginx_append.conf']) {
  file { '/new/path/for/temp/file/nginx_append.conf':
    ensure => present,
    mode   => '0755',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/module-name/nginx_append.conf',
  }
}
exec {"nginx.conf":
cwd => '/new/path/for/tenter code hereemp/file',
command => "/bin/cat /new/path/for/temp/file/nginx_append.conf >> /etc/nginx/nginx.conf && rm /new/path/for/temp/file/nginx_append.conf",
require => [ Service["nginx"]],
}
Thanks MichalT for your support...

Related

DDEV and D8, httpClient used for internal request fails to connect

I have a multisite installation of Drupal 8. The "main" website exposes some REST web services. Locally I have trouble testing them, because there's no way for the various sites to see each other when I try to do something like this:
try {
    $response = $this->httpClient->get($this->baseUri . '/myendpoint', [
        'headers' => [
            'Accept' => 'application/json',
            'Content-type' => 'application/hal+json',
        ],
        'query' => [
            '_format' => 'json',
            'myparameters' => 'value',
        ],
    ]);
    $results = $this->serializer->decode($response->getBody(), 'json');
}
catch (RequestException $e) {
    $this->logger->warning($e->getMessage());
}
return $results;
I always receive a timeout and there's no way I can make it work. My main website has the usual URL project.ddev.site (and $this->baseUri is https://myproject.ddev.site), and all the other websites are in the form subsite.ddev.local.
If I ssh into the project and run ping myproject.ddev.site, I see 172.19.0.6.
I don't understand why they cannot see each other...
Just for other people who may have a similar problem: my issue was with Xdebug. I have it set to autoconnect, so when the request from the subsite to the main site was made, it got stuck somewhere (PhpStorm didn't stop anywhere, by the way), which made the request time out.
By disabling it, or configuring it only for the subdomain and preventing it from accepting external connections from unconfigured servers (in PhpStorm), it started working. I still have some work to do, since I need to debug "both sides" of the request, but this way I can work with it.
I hadn't thought of disabling Xdebug before because it simply didn't come to mind...
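For reference, the setting involved looks roughly like this; these are the Xdebug 2.x directive names (Xdebug 3 renamed them to xdebug.mode / xdebug.start_with_request), and whether you put this in a php.ini override or DDEV's PHP config depends on your setup:
; Keep the debugger available, but stop it from auto-starting a session on
; every request, which is what made the internal site-to-site call time out
xdebug.remote_enable = 1
xdebug.remote_autostart = 0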

How to fix curl_error: SSL: no alternative certificate subject name matches target host name 'api.telegram.org'

I am using telegram.php to connect my bot. When I use sendMessage, everything looks fine in my logs, but I do not receive anything from the bot.
When I check my log there is a problem like this:
ok: False
curl_error_code: 51
curl_error: SSL: no alternative certificate subject name matches target host name 'api.telegram.org'
I don't know what to do to fix it.
I don't know this Telegram bot library, but I see that it uses GuzzleHttp.
During initialization it doesn't accept any configuration (Request::initialize()):
public static function initialize(Telegram $telegram)
{
    if (!($telegram instanceof Telegram)) {
        throw new TelegramException('Invalid Telegram pointer!');
    }

    self::$telegram = $telegram;
    self::setClient(new Client(['base_uri' => self::$api_base_uri]));
}
You should check its documentation; there are a lot of setters that let you overwrite the default settings.
What you need is to set \GuzzleHttp\RequestOptions::VERIFY to false in the client config:
$this->client = new \GuzzleHttp\Client([
    'base_uri' => 'someAccessPoint',
    \GuzzleHttp\RequestOptions::HEADERS => [
        'User-Agent' => 'some-special-agent',
    ],
    'defaults' => [
        \GuzzleHttp\RequestOptions::CONNECT_TIMEOUT => 5,
        \GuzzleHttp\RequestOptions::ALLOW_REDIRECTS => true,
    ],
    \GuzzleHttp\RequestOptions::VERIFY => false,
]);
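That client then has to be handed back to the library, presumably through the same setter used in Request::initialize() above; whether setClient() is publicly callable here is an assumption on my part:
// Hypothetical wiring: hand the relaxed-verification client back to the library
Request::setClient($this->client);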
To fix this problem, open this URL in your browser to set the webhook:
https://api.telegram.org/botTOKEN/setWebhook?url=https://yourwebsite.com
Solution 2 for the error:
Let's follow these simple steps:
Download this bundle of root certificates: https://curl.haxx.se/ca/cacert.pem
Put it in any location on your server.
Open php.ini and add this line:
curl.cainfo = "[the_location]\cacert.pem"
Restart your webserver.
That’s it. 🙂

Migrating MySQL data into Elasticsearch using Logstash for Kibana

I'm new to Kibana. I am working on migrating data from MySQL to Elasticsearch. How can I do this? Is using the jdbc input plugin the only way?
Here is the logstash.conf file where I specified the input and output:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/kibana"
    jdbc_user => "xxx"
    jdbc_password => "xxxxx"
    jdbc_driver_library => "/root/mysql-connector-java-5.1.30/mysql-connector-java-5.1.30-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM datalog"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout { codec => rubydebug }
}
After running the above file with ./logstash -f logstash.conf, we get the output below:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
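Those two messages are warnings about missing settings files rather than pipeline errors. As the first one says, you can point Logstash at its settings directory with --path.settings; the /etc/logstash path below is the usual location for a package install, so adjust it to your layout:
./logstash -f logstash.conf --path.settings /etc/logstash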

Silverstripe 3 Extending Error Mail with HTTP_X_FORWARDED_FOR

By default, the error mail only includes these server variables from log.php:
protected static $log_globals = array(
    '_SERVER' => array(
        'HTTP_ACCEPT',
        'HTTP_ACCEPT_CHARSET',
        'HTTP_ACCEPT_ENCODING',
        'HTTP_ACCEPT_LANGUAGE',
        'HTTP_REFERRER',
        'HTTP_USER_AGENT',
        'HTTPS',
        'REMOTE_ADDR',
    ),
);
How do I add 'HTTP_X_FORWARDED_FOR' to my error e-mails without modifying the core files?
This is actually possible via the new configuration system in SilverStripe. Create a YAML config file with the following:
SS_Log:
  log_globals:
    '_SERVER':
      - 'HTTP_X_FORWARDED_FOR'
This adds HTTP_X_FORWARDED_FOR to the _SERVER array on the log_globals static variable.
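To check that the value was picked up, you can dump the resolved config; this assumes SilverStripe 3's Config API, and the call below is illustrative rather than taken from the original answer:
// Inspect the resolved log_globals configuration for SS_Log
var_dump(Config::inst()->get('SS_Log', 'log_globals'));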

laravel development environment sqlite database does not exist

Trying to use SQLite in the development environment. It seems to detect the environment correctly, but when I try to migrate to development.sqlite I get the exception "database does not exist".
artisan command
php artisan migrate --env=development
bootstrap/start.php
$env = $app->detectEnvironment(array(
    'development' => array('localhost'),
));
app/config/development/database.php
<?php

return array(
    'default' => 'sqlite',
    'connections' => array(
        'sqlite' => array(
            'driver'   => 'sqlite',
            'database' => __DIR__.'/../database/development.sqlite',
            'prefix'   => '',
        ),
    ),
);
As far as I know, Laravel is supposed to create the file if it does not exist, but since it didn't, I tried manually creating the file and still got the exception.
UPDATE: Maybe something is not right with the env, because the same thing happens if I try ':memory' for the database.
UPDATE 2: I tried running the sample unit test, but added this to TestCase.php:
/**
 * Default preparation for each test
 */
public function setUp()
{
    parent::setUp(); // Don't forget this!
    $this->prepareForTests();
}

/**
 * Creates the application.
 *
 * @return Symfony\Component\HttpKernel\HttpKernelInterface
 */
public function createApplication()
{
    $unitTesting = true;
    $testEnvironment = 'testing';

    return require __DIR__.'/../../bootstrap/start.php';
}

/**
 * Migrates the database and sets the mailer to 'pretend'.
 * This will cause the tests to run quickly.
 */
private function prepareForTests()
{
    Artisan::call('migrate');
    Mail::pretend(true);
}
And this too gives the same exception, even though the testing env ships with Laravel by default. So I'll see if I can find any new issues on that.
Wow, typos and wrong paths.
Copying the sqlite array from config/database.php into config/development/database.php, I forgot to change the path to the development.sqlite file from
__DIR__.'/../database/development.sqlite'
to
__DIR__.'/../../database/development.sqlite'
And for the in memory test it should have been
':memory:'
instead of
':memory'
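Putting both fixes together, the sqlite entry in app/config/development/database.php ends up as the same array from the question with only the path corrected:
'sqlite' => array(
    'driver'   => 'sqlite',
    'database' => __DIR__.'/../../database/development.sqlite',
    'prefix'   => '',
),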
I noticed that my database.php file had the following
'sqlite' => [
    'driver' => 'sqlite',
    'database' => env('DB_DATABASE', database_path('database.sqlite')),
    'prefix' => '',
],
I changed it to read the following, and it worked just fine.
'sqlite' => [
    'driver' => 'sqlite',
    'database' => database_path('database.sqlite'),
    'prefix' => '',
],
One of the problems I faced was that I ran "touch storage/database.sqlite" in the terminal, so the database was created in the storage folder instead of the database folder.
In my config/database.php the path is database_path('database.sqlite'):
'sqlite' => [
    'driver' => 'sqlite',
    'database' => database_path('database.sqlite'),
    'prefix' => '',
],
Then I ran "php artisan migrate", which gave me the error "Database (/Applications/MAMP/htdocs/FOLDER_NAME/database/database.sqlite) does not exist."
So obviously the database file was not in the database folder, since it had been generated in the storage folder. Either copy "database.sqlite" from the storage folder or run "touch database/database.sqlite".
Hope that helps!
Well, my answer is somewhat outdated, but anyway. I faced the same problem, but with Laravel 5; I am using Windows 7 x64. First I manually created an SQLite database called 'db' and placed it in the storage directory, then fixed my .env file like this:
APP_ENV=local
APP_DEBUG=true
APP_KEY=oBxQMkpqbENPb07bLccw6Xv7opAiG3Jp
DB_HOST=localhost
DB_DATABASE='db'
DB_USERNAME=''
DB_PASSWORD=''
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=sync
MAIL_DRIVER=smtp
MAIL_HOST=mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
I thought this would fix my problems, but the command line kept telling me that the database doesn't exist. Then I checked the path to the db in my database.php file, which is why I had put the database file into the storage directory. But nothing changed. Finally I checked the db's extension: it was .db, not .sqlite, the default you see in the sqlite block of database.php. So this is how I reconfigured the sqlite piece:
'sqlite' => [
    'driver' => 'sqlite',
    'database' => storage_path().'/db.db',
    'prefix' => '',
],
And of course, don't forget to set sqlite as the default database in your database.php file. Good luck!
For me, the path to the database had to be '/var/www/html' plus the location of the database in the project. In my case the database was stored in database/db.sqlite, so DB_DATABASE='/var/www/html/database/db.sqlite'.
I had the same error while running a GitHub Actions test workflow.
For me the solution was to define the relative path to the database file in the workflow file:
on:
  ...

env:
  DB_CONNECTION: sqlite
  DB_DATABASE: database/database.sqlite

jobs:
  laravel-tests:
    ...
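Note that the test job also needs that SQLite file to actually exist before the migrations run; borrowing the touch command from the earlier answer, a step along these lines can be added under the job's steps (the step name and placement are just an illustration):
      - name: Create SQLite database file
        run: touch database/database.sqlite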
I think the previous answers understate the importance of the config; most likely the developers intended the database file to be resolved like this:
'sqlite' => [
    'driver' => 'sqlite',
    'url' => env('DATABASE_URL'),
    'database' => database_path(env('DB_DATABASE', 'database').'.sqlite'), // <- like this
    'prefix' => '',
    'foreign_key_constraints' => env('DB_FOREIGN_KEYS', true),
],
Tested on Laravel 9.x
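With that version of the config, the .env then only needs the base name of the file; the values below are just an illustration:
# database_path(env('DB_DATABASE', 'database').'.sqlite') resolves to database/database.sqlite
DB_CONNECTION=sqlite
DB_DATABASE=database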
