JFrog Xray ancestors for Docker - Artifactory

I have deployed two artifacts, A and B, in Artifactory, and both are indexed in Xray. B is the base image of A. When I look at B in the Xray UI, the Ancestors tab does not list A as an ancestor.
Please let me know if I am missing anything.
Edit:
Basically my aim is to fetch the ancestors of certain artifacts to ascertain component relations, for some UI regression tests. It currently deals with these types: npm, Maven, PyPI, and Docker. Any pointers towards similar issues with these types are appreciated.

What you see in the UI is related to the way Xray indexes Docker images; I'll explain.
When a Docker image is indexed in Xray, the manifest.json (an abstraction of the Docker image) is indexed as the root parent and the layers as its descendants.
Note: the ancestor/descendants view you see in the UI is a relation based on checksum.
If Docker B has 2 layers, once it becomes the base of Docker A it will be shown as 1 layer in Docker A (with a different checksum).
In the example you provided above, you have:

         Docker-A (manifest.json)                         Docker-B (manifest.json)
               /              \                                 /             \
            1 /                \ 2                           1 /               \ 2
  Base Layer(B)+Another Layer(B)    Another Layer(A)    Base Layer(B)    Another Layer(B)
The manifest of Docker-A is not an ancestor of the manifest of Docker-B, therefore you will not see it in the UI.
If, for example, Docker-B has only one layer and it is the base layer of Docker-A, then the same layer with the same checksum will be shown in Docker-A:
         Docker-A (manifest.json)              Docker-B (manifest.json)
               /            \                           |
            1 /              \ 2                      1 |
    Base Layer(B)      Another Layer(A)          Base Layer(B)
In this case, if you check the Ancestors tab of Base Layer(B), you'll see both Docker-A and Docker-B.
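Since the relation is purely checksum-based, a quick local sketch shows why the squashed base layer in Docker-A no longer links back to Docker-B (the files below are hypothetical stand-ins for image layers, not real Docker artifacts):

```shell
# Identical bytes yield identical digests -- that is how Xray maps the
# same layer under two images. A squashed/merged layer gets a new digest.
printf 'base layer bytes' > layer_in_B
printf 'base layer bytes' > layer_in_A              # same content, same checksum
printf 'base layer bytesmore bytes' > squashed_in_A # merged content, new checksum

sha256sum layer_in_B layer_in_A squashed_in_A
```

The first two digests match, so that layer node would list both images as ancestors; the squashed one differs, which breaks the link between the manifests.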

Update WSO2 deployment.toml configuration at Docker runtime

I am new to Docker.
I can create an image using a Dockerfile and successfully call the WSO2 API.
I have hardcoded configuration in the deployment.toml file.
I want to update this information at Docker runtime for different environments - DEV, QA, etc.
deployment.toml file content -
[server]
offset = 22
How do I update the .toml file config at runtime?
https://ei.docs.wso2.com/en/7.2.0/micro-integrator/setup/dynamic_server_configurations/#environment-variables
Here it says you can write:
offset = "${VariableName}"
but what do I put in my Dockerfile to update these variables at runtime?
I want to update this information at the docker runtime for different env - DEV,QA etc
There are multiple ways to achieve this; here are at least two that we commonly use in our deployments.
Using a template for the config files
Basically the idea is to mount the deployment.toml (or other config files/folders) as ConfigMap values in Kubernetes, or as a volume in pure Docker.
For each environment you can template the configuration using any deployment tool (Maven, Puppet, Ansible, any cloud DevOps tooling, ...). This approach lets you update the configuration templates without needing a new image.
Template the configuration in the entrypoint
Create an entrypoint script which templates the configuration based on env variables (e.g. using the sed utility) and then starts the application. Then use that entrypoint in the Dockerfile.
This approach doesn't need external configuration (volumes, templates) but if the template needs to be updated, you need a new image.
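As a minimal sketch of that second approach (the file path and SERVER_OFFSET variable are assumptions for illustration, not WSO2 conventions), the entrypoint could rewrite the value with sed before starting the server:

```shell
# Stand-in for the deployment.toml baked into the image
printf '[server]\noffset = 22\n' > deployment.toml

# Value injected at container runtime, with a fallback default
SERVER_OFFSET="${SERVER_OFFSET:-33}"

# Template the config in place; a real entrypoint would then exec the server
sed -i "s/^offset = .*/offset = ${SERVER_OFFSET}/" deployment.toml
cat deployment.toml
```

A real entrypoint would end with something like `exec /path/to/server-start.sh` so that signals reach the server process directly.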
Edit:
I haven't seen env variables used in deployment.toml before, as referred to in the question; it must be something new for WSO2. But if it is supported, it can make your life easier to just specify the env variables on the pod. (This may be what you are missing.)
Specify the ENV value in the Dockerfile for the default value.
Run the container with your defined value (the -e parameter for pure Docker, or a defined environment in the Compose or deployment config).
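To see what the ${VariableName} resolution amounts to, here is a rough emulation with sed (WSO2 itself performs the substitution at server startup per the linked docs; the variable name and value below are placeholders):

```shell
# deployment.toml with an env placeholder, as in the WSO2 docs
printf '[server]\noffset = "${VariableName}"\n' > deployment.toml

# At runtime the server reads VariableName from the environment;
# we mimic that resolution with a plain text substitution.
VariableName=44
sed "s/\"\${VariableName}\"/${VariableName}/" deployment.toml > resolved.toml
cat resolved.toml
```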
Define the variable using the ARG option in the Dockerfile.
Example:
ARG VariableName
Now the value can be given at image build time as below (note that ARG values apply when building the image, not when running the container):
docker build --build-arg VariableName=0 .
For more details on how to use ARG in a Dockerfile, please refer to https://docs.docker.com/engine/reference/builder/#arg

Flow: resolving modules in a monorepo that uses Yarn workspaces

We have a monorepo that uses Yarn’s ‘workspaces’ feature, meaning that whenever possible, Yarn will hoist dependencies to the monorepo's root node_modules directory rather than keep them in the individual package's node_modules dir. This relies on Node’s module resolving algorithm, which continues to search for modules in node_modules directories up the dir tree until it finds the required module.
When using Flow types in a file that imports another package (internal or external to the monorepo), running Flow inside the package that contains that file causes a Cannot resolve <package-name> error to be thrown. It seems like Flow uses a different module resolving algorithm, and fails since the installed modules are hoisted to the root dir and Flow does not continue to search up the dir tree.
Is there a way around this other than running Flow from the root? Running from the root is less than optimal because it does not allow different settings for different packages in the monorepo.
Node version: 10.8.0
flow-bin version: 0.78.0
I also ran into this problem.
To fix it, you need to update .flowconfig:
[include]
../../node_modules/
FS struct:
/project_root
--/node_modules
--/packages
----/module1
------.flowconfig
Pick the components to be excluded from hoisting by hand with a directive like:
"nohoist": ["**/npm-package", "**/npm-package/**"]
or select them with an exclusion glob:
"nohoist": [
"**/!(my-site|my-cms|someones-components)"
]
See my answer to another question for more information.
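For context, the nohoist list lives under the workspaces field of the root package.json; a hypothetical layout (package names are placeholders, flow-bin chosen to match the question) would be:

```json
{
  "private": true,
  "workspaces": {
    "packages": ["packages/*"],
    "nohoist": ["**/flow-bin", "**/flow-bin/**"]
  }
}
```

Keeping flow-bin unhoisted means each package resolves it locally, at the cost of duplicating it per package.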

Is it possible to use diff and patch tools to achieve the desired result?

I have directories D-1.0.3 and D-1.0.5, each with files A and B (and other subdirectories and files within D-x.x.x), with the following version tree for files A and B (the other subdirs and files are versioned alike):
1.0.3 --- 1.0.5
  |
1.0.3.1 (head)
I would like to apply the changes from 1.0.5 to 1.0.3.1 using the diff and patch tools, as I don't have access to the git or svn tooling associated with these files.
Is this possible using the unified diff format (or any other)? How can I achieve it, if possible (the set of commands I need to execute)?
I have checked that there are no file additions, deletions, or renames associated with the changes.
Many thanks!
Yes, you can use diff to capture the difference between two directory trees. There are a few pitfalls:
in case you have new files (not in the older tree), you should use the -N option
if directories were added or removed, diff will not tell you about the files within those directories. I use a Perl script, makepatch (an older version than the one on CPAN), which works around that problem.
For the simple case (no added/removed directories), you would use
diff -Naur olddirectory newdirectory >myfile.diff
and I would apply it in the top-level of the directory to be patched,
patch -p1 < myfile.diff
to eliminate problems due to the actual name of that directory. The -p1 option discards the name of the top-level directory.
To recap, assuming that these are the names of your directories, all subdirectories of the current directory:
diff -Naur 1.0.3 1.0.5 >mydiff.diff
cd 1.0.3.1 && patch -p1 <mydiff.diff
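Putting those commands together, here is a self-contained round trip with toy contents (the directory names match the answer; the file contents are made up for the demo):

```shell
cd "$(mktemp -d)"
mkdir -p 1.0.3 1.0.5 1.0.3.1

printf 'old A\n'            > 1.0.3/A
printf 'old B\n'            > 1.0.3/B
printf 'new A\n'            > 1.0.5/A     # changed between 1.0.3 and 1.0.5
printf 'old B\n'            > 1.0.5/B
printf 'old A\nlocal fix\n' > 1.0.3.1/A   # 1.0.3.1 carries its own local change
printf 'old B\n'            > 1.0.3.1/B

diff -Naur 1.0.3 1.0.5 > mydiff.diff || true   # diff exits 1 when files differ
(cd 1.0.3.1 && patch -p1 < ../mydiff.diff)

cat 1.0.3.1/A   # now contains both the 1.0.5 change and the local fix
```

Note the diff is written outside the tree being patched, so it is read with `< ../mydiff.diff` after cd-ing into 1.0.3.1.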

How to move artifacts in Artifactory?

Can someone help me find a way to move an artifact from one Artifactory location to another within the same repository? I understand there are ways to move artifacts between repositories, but my requirement is to move x1 to module-2, as in the folder structure below.
+
+- repository
+- com
+- module-1
| +- x1
+- module-2
+- x2
Thank you very much in advance.
The easiest way is using the REST API. Here's an example:
curl -v -X POST -d "" -u username "http://host:port/artifactory/api/move/repository/com/module-1/x1?to=/repository/com/module-2/x1"
You can also use the JFrog CLI. The usage is something like:
jfrog rt move repository/com/module-1/x1 repository/com/module-2/x1
The CLI is especially useful when you want to move multiple files.
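If you actually want a copy rather than a move (the question mentions both), Artifactory exposes a parallel copy endpoint. The sketch below only assembles the URL; host, port, and credentials are placeholders, and the curl call is left commented for running against a real instance:

```shell
# Same path layout as the move call, but against api/copy
BASE="http://host:port/artifactory/api/copy"
SRC="repository/com/module-1/x1"
DST="/repository/com/module-2/x1"
URL="${BASE}/${SRC}?to=${DST}"
echo "$URL"

# Against a real instance:
# curl -v -X POST -u username "$URL"
```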

With SBT, how do I a specify an alternate project root other than the current directory for running a main class?

Normally SBT looks for the build files at ./build.sbt and ./project/Build.scala. Is it possible to specify an alternate project root, so I can build a project not in the current working directory? I'm looking essentially for an equivalent to mvn -f /path/to/pom.xml but the docs have not provided me with any obvious answers.
(I should note that I want to do this at runtime rather than compile time. Essentially, I want to use sbt run-main to run arbitrary classes from my project, since I hate manual classpath wrangling. For various boring reasons I may need to do this from arbitrary location since the code may need the working directory to be something other than the project directory, so using cd might not do what I want. It's so trivial in Maven - I just assumed, perhaps unfairly, that there would be a simple equivalent in SBT)
I have something like this: project definitions at X/build.sbt, X/MyOtherDefinitionWithSpecialThing/build.sbt, and X/MySuperPublishConfig/build.sbt.
But my point of view on the problem is the opposite. Instead of specifying the location of ./build.sbt and ./project/Build.scala, I specify the path to the resources. The result is the same. It looks like:
sourceDirectory <<= (baseDirectory) (_ / ".." / "src")
target <<= (baseDirectory) (_ / ".." / "target")
This allows creating a single project with multiple build definitions. It works with nested/hierarchical projects, though I use symbolic links (on Linux) for the hierarchical ones.
Here is the file tree of one of my SBT plugins: multiple build definitions and only one src/...
.
|-build.sbt
|-project
|---project
|-----target
|-------...
|---target
|-----...
|-project-0.11
|---build.sbt
|---project
|-----project
|-------target
|---------...
|-----target
|-------...
|-project-0.12
|---build.sbt
|---project
|-----project
|-------target
|---------...
|-----target
|-------...
|-...
|-src
|---main
|-----scala
|-------org
|---------...
|---sbt-test
|-----...
|-target
|---...
If this doesn't solve your problem, please elaborate on why you don't want to use the 'cd' command ;-)
For the updated use case:
I use a shell wrapper and have a symlink to it in every SBT project:
#!/bin/sh
#
here=$(cd "$(dirname "$0")"; pwd)
if [ ! -e "${here}/build.sbt" ]
then
    echo "build.sbt lost"
    exit 1
fi
cd "${here}"
LOCAL_BUILD=true sbt-0.12 "$@"
I simply run /path/to/my/project/sbt 'show name', for example, or /path/to/my/project/sbt run-main in your case.
As I discovered from this other answer, the sbt-start-script plugin is a better tool for this than sbt run-main. You can simply run sbt stage and you get an invocation script, with classpaths resolved, at target/start. According to the documentation, it needs to be run from the build root directory for inter-project dependencies to work, but for my simple use cases, this doesn't seem to be a problem.