I'm trying to make the camera follow the contours of a heightmap landscape.
I've added a raycaster that points down, but it is not reporting any changes in the intersection.
I've had to detach the raycaster from the camera, as the camera would rotate the raycaster.
Can anyone tell me what I'm doing wrong here?
How do I get the distance from the raycaster to the terrain?
Full code:
Live demo - the blue vertical line is the raycaster
<script>
// A custom follow-the-terrain component.
//
// The idea is to use a raycaster pointing down to work out the distance
// between the camera and the terrain, and then adjust the camera's y value
// accordingly.
AFRAME.registerComponent('collider-check', {
  dependencies: ['raycaster'],
  init: function () {
    var myHeight = 2.0;
    var cam = this.el.object3D;
    this.el.addEventListener('raycaster-intersected', function (evt) {
      // I've got the camera here and the intersection, so I should be able
      // to adjust the camera to match the terrain?
      var dist = evt.detail.intersection.distance;
      // These values do not change :(
      console.log('Raycaster (camera y, distance to terrain, terrain y)',
                  cam.position.y, dist, evt.detail.intersection.point.y);
    });
  }
});

// When we move the camera, we drag the raycaster entity with us - it's not
// attached to the camera, so the camera won't rotate the ray.
AFRAME.registerComponent('moving', {
  schema: { type: 'selector', default: '#theray' },
  init: function () {
    // this.data is the raycaster entity (resolved from the selector).
  },
  tick: function () {
    // Camera position.
    var c = this.el.object3D.position;
    // Set the raycaster position to match the camera - shifted over a bit
    // so we can see it.
    this.data.setAttribute('position', '' + (c.x - 2.0) + ' ' + (c.y - 2.0) + ' ' + c.z);
  }
});
</script>
<a-scene>
<!-- place camera in the middle of our map -->
<a-camera position="6 0.2 6" rotation="0 90 0" moving>
<a-cursor color="#4CC3D9" fuse="true" fuse-timeout="100"></a-cursor>
</a-camera>
<!-- if I attach this raycaster to the camera, it will rotate with the camera - and that's not what we want -->
<a-entity collider-check id='theray' rotation="0 0 0" position="6 1 6" visible="true">
<!-- the aframe inspector barfs on this -->
<a-entity raycaster="objects:.walkonthis;direction:0 -1 0;showLine:true;origin:0 1 0" line="start:0 0 0;end:0 -5 0;color:red;opacity:1.0"></a-entity>
</a-entity>
<!-- the landscape -->
<a-entity heightgrid='xdimension: 12; zdimension: 10; yscale: 0.5; heights:
5 4 3 2 1 1 1 1 2 3 3 6
5 4 3 2 1 1 1 1 2 3 3 3
3 3 0 0 1 1 1 1 2 3 3 3
3 3 1 0 1 1 1 1 2 3 3 3
3 3 2 1 1 1 1 1 2 3 3 3
3 3 2 1 1 1 1 1 2 3 3 3
3 3 2 1 1 1 1 1 2 3 3 3
3 3 1 0 1 1 1 1 2 3 3 3
3 3 0 0 1 1 1 1 2 3 3 3
3 3 0 0 1 1 1 1 2 3 3 6
;
' material="color: #ccc" class='walkonthis'></a-entity>
</a-scene>
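For reference, the height adjustment the component is aiming for can be sketched independently of A-Frame. This is just the intended math, not the A-Frame API; the hypothetical helper below assumes the ray points straight down from the camera, and the eye height of 2.0 comes from myHeight in the code above.

```python
def follow_terrain_y(ray_origin_y, hit_distance, eye_height=2.0):
    # The terrain height under the camera is the ray origin minus the hit
    # distance (the ray points straight down); keep the camera eye_height
    # above it.
    terrain_y = ray_origin_y - hit_distance
    return terrain_y + eye_height

# Ray starts at y=1 and hits the ground 0.5 below it -> camera goes to 2.5.
print(follow_terrain_y(1.0, 0.5))   # 2.5
```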
Given a bunch of 2D points and a polygon, I want to evaluate which points are on the boundary of the polygon, and which are strictly inside/outside of the polygon.
The 2D points are:
> grp2
x2 y2
1 -5.233762 1.6213203
2 -1.107843 -7.9349705
3 4.918313 8.9073019
4 7.109651 -3.9571781
5 7.304966 -4.3280168
6 6.080564 -3.5817545
7 8.382685 0.4638735
8 6.812215 6.1610483
9 -4.773094 -3.4260797
10 -3.269638 1.1299852
and the vertices of the polygon are:
> dfC
px py
1 7.304966 -4.3280167
2 8.382685 0.4638735
3 6.812215 6.1610483
4 5.854366 7.5499780
5 2.385478 7.0895268
6 -5.233762 1.6213203
7 -4.773094 -3.4260797
8 -1.107843 -7.9349705
The plot of the situation looks like the following:
Clearly, there are 3 points inside the polygon, 1 point outside and 6 points on the edge (as is also evident from the data points).
Now I am using point.in.polygon to estimate this. According to the documentation of package sp, this should return 'integer array; values are: 0: point is strictly exterior to pol; 1: point is strictly interior to pol; 2: point lies on the relative interior of an edge of pol; 3: point is a vertex of pol.'
But my code is not able to detect the points that are vertices of the polygon:
> point.in.polygon(grp2$x2,grp2$y2,dfC$px,dfC$py)
[1] 0 0 0 1 0 1 0 0 0 1
How can I resolve this problem?
The points are not equal. For example, grp2$x2[1] == -5.23376158438623 but dfC$px[6] == -5.23376157160271. As the comments suggest, you will have more luck if you round the values:
grp3 <- round(grp2, 3)
dfC3 <- round(dfC, 3)
point.in.polygon(grp3$x2,grp3$y2,dfC3$px,dfC3$py)
# [1] 3 3 0 1 3 1 3 3 3 1
Now
grp3[1, ]
# x2 y2
# 1 -5.234 1.621
dfC3[6, ]
# px py
# 6 -5.234 1.621
Changing the number of decimals to 4 or 5 gives the same results as 3. For floating point numbers to compare equal, they must match exactly - here, over all 14 decimal places.
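The same effect is easy to reproduce in Python, using the two coordinates quoted above:

```python
# The two coordinates differ only around the 8th decimal place, so an exact
# equality test fails; rounding both sides makes them comparable.
x_point  = -5.23376158438623   # grp2$x2[1]
x_vertex = -5.23376157160271   # dfC$px[6]

print(x_point == x_vertex)                      # False
print(round(x_point, 3) == round(x_vertex, 3))  # True
```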
I am trying to visualize a block of two different color particles and I want to add transparency to the points. I write a file (cubo.d) with this data (example):
1 1 1 ar
1 1 2 ar
1 2 1 ar
1 2 2 ar
2 1 1 ab
2 1 2 ab
2 2 1 ab
2 2 2 ab
And then I read it with this code in gnuplot:
set border 0
unset tics
unset key
set view equal xyz
set style fill transparent solid 0.2 noborder
spin(name) = name[1:2] eq "ar" ? 0xcdca00 : name[1:2] eq "ab" ? 0x000000 : 0x888888
splot 'cubo.d' using 1:2:3:(spin(strcol(4))) w p pt 5 ps 2
But the points don't have any transparency.
I tried adding fs solid 0.2 and also lc rgb 0x888888 after w p pt 5 ps 2, but neither works either.
The four yellow particles are hiding part of the black particles behind them.
The scheme for transparent colors is
lc rgb 0xaarrggbb
where the leading aa byte is the alpha channel (0x00 is fully opaque, 0xff fully transparent).
Check help colorspec.
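For example, putting an alpha byte in front of the question's yellow can be sketched like this (gnuplot itself just takes the literal 0xAARRGGBB value; the helper name here is made up for illustration):

```python
def argb(alpha, rgb):
    # Pack an alpha byte and a 24-bit RRGGBB color into gnuplot's
    # 0xAARRGGBB form. In gnuplot, alpha 0x00 means fully opaque
    # and 0xff fully transparent.
    return (alpha << 24) | rgb

print(hex(argb(0x80, 0xcdca00)))   # 0x80cdca00 - half-transparent yellow
```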
I'm new to gnuplot and I would like to replicate this plot: https://images.app.goo.gl/DqygL2gfk3jZ7jsK6
I have a file.dat with continuous values between 0 and 100 and I would like to plot them subdivided into intervals (pident > 98, 90 < pident < 100, etc.), with the total occurrences on the y-axis.
I have looked everywhere for a way to do this but still cannot.
Thank you!
sample of the data, with the value and the counts:
33.18 5
43.296 1
33.19 1
27.168 5
71.429 11
30.698 9
47.934 1
43.299 3
30.699 3
37.092 2
24.492 2
24.493 2
24.494 7
47.938 1
24.497 1
37.097 8
37.099 2
33.824 7
51.111 15
59.025 2
62.553 2
62.554 2
57.867 2
33.826 2
62.555 1
33.827 5
62.556 2
33.828 1
59.028 1
46.429 11
51.117 1
75.158 2
27.621 1
27.623 1
27.624 2
37.5 113
37.6 2
32.313 8
27.626 3
37.7 3
32.314 1
67.797 3
27.628 2
32.316 2
37.9 1
61.044 1
43.81 5
32.317 8
32.318 2
43.82 4
32.319 2
43.83 2
37.551 3
61.048 1
48.993 6
29.43 2
This is the code I have tried so far (where I also calculate the mean):
#!/usr/bin/gnuplot -persist
set noytics
# Find the mean
mean= system("awk '{sum+=$1*$2; tot+=$2} END{print sum/tot}' hist.dat")
set arrow 1 from mean,0 to mean, graph 1 nohead ls 1 lc rgb "blue"
set label 1 sprintf(" Mean: %s", mean) at mean, screen 0.1
# Histogram
binwidth=10
bin(x,width)=width*floor(x/width)
plot 'hist.dat' using (bin($1,binwidth)):(1.0) smooth freq with boxes
This is the result:
The following script takes your data and sums up the second column within the defined bins.
If you have values equal to 100 in the first column, they will fall into the bin 100-<110.
With Bin(x) = floor(x/BinWidth)*BinWidth + BinWidth*0.5, the bins are shifted by half a bin width so that the boxes on the x-axis range from the beginning of the bin to the end of the bin (rather than being centered at the beginning of the respective bin).
If you explicitly want xtic labels like in the example graph you've shown, i.e. 10-<20, 20-<30, etc., you would have to fiddle around with the xtic labels.
Edit: I forgot the mean value. There is no need to call awk; gnuplot can do this for you as well, check help stats.
Code:
### create histogram
reset session
$Data <<EOD
33.18 5
43.296 1
33.19 1
27.168 5
71.429 11
30.698 9
47.934 1
43.299 3
30.699 3
37.092 2
24.492 2
24.493 2
24.494 7
47.938 1
24.497 1
37.097 8
37.099 2
33.824 7
51.111 15
59.025 2
62.553 2
62.554 2
57.867 2
33.826 2
62.555 1
33.827 5
62.556 2
33.828 1
59.028 1
46.429 11
51.117 1
75.158 2
27.621 1
27.623 1
27.624 2
37.5 113
37.6 2
32.313 8
27.626 3
37.7 3
32.314 1
67.797 3
27.628 2
32.316 2
37.9 1
61.044 1
43.81 5
32.317 8
32.318 2
43.82 4
32.319 2
43.83 2
37.551 3
61.048 1
48.993 6
29.43 2
EOD
# Histogram
BinWidth = 10
Bin(x) = floor(x/BinWidth)*BinWidth + BinWidth*0.5
# Mean
stats $Data u ($1*$2):2 nooutput
mean = STATS_sum_x/STATS_sum_y
set arrow 1 from mean, graph 0 to mean, graph 1 nohead lw 2 lc rgb "red" front
set label 1 sprintf("Mean: %.1f", mean) at mean, graph 1 offset 1,-0.7
set xlabel "Identity / %"
set xrange [0:100]
set xtics 10 out
set ylabel "The number of blast hits"
set style fill solid 0.3
set boxwidth BinWidth
set key noautotitle
set grid x,y
plot $Data using (Bin($1)):2 smooth freq with boxes lc "blue"
### end of code
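The binning and the weighted mean can be cross-checked outside gnuplot. A minimal Python sketch of the same Bin(x) and stats logic, using a few rows from the data as an assumption-free sample:

```python
import math
from collections import defaultdict

def bin_center(x, width=10):
    # Same as the gnuplot Bin(x): bin start plus half a width, so a box
    # drawn at this x spans the whole bin.
    return math.floor(x / width) * width + width * 0.5

pairs = [(33.18, 5), (43.296, 1), (37.5, 113), (71.429, 11)]

# Sum the counts per bin, like `smooth freq` over using (Bin($1)):2.
hist = defaultdict(int)
for value, count in pairs:
    hist[bin_center(value)] += count
print(dict(hist))   # {35.0: 118, 45.0: 1, 75.0: 11}

# Count-weighted mean, like STATS_sum_x/STATS_sum_y in the script.
mean = sum(v * c for v, c in pairs) / sum(c for _, c in pairs)
```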
Result:
I am very new to R, so I apologise if this looks simple to someone.
I am trying to join two files and then perform a one-sided Fisher's exact test to determine whether there is a greater burden of qualifying variants in casefile compared to controlfile.
casefile:
GENE CASE_COUNT_HET CASE_COUNT_CH CASE_COUNT_HOM CASE_TOTAL_AC
ENSG00000124209 1 0 0 1
ENSG00000064703 1 1 0 9
ENSG00000171408 1 0 0 1
ENSG00000110514 1 1 1 12
ENSG00000247077 1 1 1 7
controlfile:
GENE CASE_COUNT_HET CASE_COUNT_CH CASE_COUNT_HOM CASE_TOTAL_AC
ENSG00000124209 1 0 0 1
ENSG00000064703 1 1 0 9
ENSG00000171408 1 0 0 1
ENSG00000110514 1 1 1 12
ENSG00000247077 1 1 1 7
ENSG00000174776 1 1 0 2
ENSG00000076864 1 0 1 13
ENSG00000086015 1 0 1 25
I have this script:
#!/usr/bin/env Rscript
library("argparse")
suppressPackageStartupMessages(library("argparse"))
parser <- ArgumentParser()
parser$add_argument("--casefile", action="store")
parser$add_argument("--casesize", action="store", type="integer")
parser$add_argument("--controlfile", action="store")
parser$add_argument("--controlsize", action="store", type="integer")
parser$add_argument("--outfile", action="store")
args <- parser$parse_args()
case.dat<-read.delim(args$casefile, header=T, stringsAsFactors=F, sep="\t")
names(case.dat)[1]<-"GENE"
control.dat<-read.delim(args$controlfile, header=T, stringsAsFactors=F, sep="\t")
names(control.dat)[1]<-"GENE"
dat<-merge(case.dat, control.dat, by="GENE", all.x=T, all.y=T)
dat[is.na(dat)]<-0
dat$P_DOM<-0
dat$P_REC<-0
for(i in 1:nrow(dat)){
#Dominant model
case_count<-dat[i,]$CASE_COUNT_HET+dat[i,]$CASE_COUNT_HOM
control_count<-dat[i,]$CONTROL_COUNT_HET+dat[i,]$CONTROL_COUNT_HOM
if(case_count>args$casesize){
case_count<-args$casesize
}else if(case_count<0){
case_count<-0
}
if(control_count>args$controlsize){
control_count<-args$controlsize
}else if(control_count<0){
control_count<-0
}
mat<-cbind(c(case_count, (args$casesize-case_count)), c(control_count, (args$controlsize-control_count)))
dat[i,]$P_DOM<-fisher.test(mat, alternative="greater")$p.value
and the problem starts here:
case_count<-dat[i,]$CASE_COUNT_HET+dat[i,]$CASE_COUNT_HOM
control_count<-dat[i,]$CONTROL_COUNT_HET+dat[i,]$CONTROL_COUNT_HOM
case_count and control_count come out as NULL, even though the corresponding columns in both input files are NOT empty.
When I ran the script above with absolute numbers (1000 and 2000) assigned to case_count and control_count, it worked without issues.
The main purpose of the code:
https://github.com/mhguo1/TRAPD
Run burden testing
This script will run the actual burden testing. It performs a one-sided Fisher's exact test to determine if there is a greater burden of qualifying variants in cases as compared to controls for each gene. It will perform this burden testing under a dominant and a recessive model.
It requires R; the script was tested using R v3.1, but any version of R should work. The script should be run as:
Rscript burden.R --casefile casecounts.txt --casesize 100 --controlfile controlcounts.txt --controlsize 60000 --output burden.out.txt
The script has 5 required options:
--casefile: Path to the counts file for the cases, as generated in Step 2A
--casesize: Number of cases that were tested in Step 2A
--controlfile: Path to the counts file for the controls, as generated in Step 2B
--controlsize: Number of controls that were tested in Step 2B. If using ExAC or gnomAD, please refer to the respective documentation for total sample size
--output: Output file path/name
Output: A tab delimited file with 10 columns:
#GENE: Gene name
CASE_COUNT_HET: Number of cases carrying heterozygous qualifying variants in a given gene
CASE_COUNT_CH: Number of cases carrying potentially compound heterozygous qualifying variants in a given gene
CASE_COUNT_HOM: Number of cases carrying homozygous qualifying variants in a given gene
CASE_TOTAL_AC: Total AC for a given gene
CONTROL_COUNT_HET: Approximate number of controls carrying heterozygous qualifying variants in a given gene
CONTROL_COUNT_HOM: Number of controls carrying homozygous qualifying variants in a given gene
CONTROL_TOTAL_AC: Total AC for a given gene
P_DOM: p-value under the dominant model
P_REC: p-value under the recessive model
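The dominant-model test described above reduces to a one-sided Fisher's exact test on a 2x2 table of carriers vs. non-carriers. The R script uses fisher.test for this; as a cross-check, here is a minimal standard-library Python sketch of the same hypergeometric tail sum (function name and example counts are made up for illustration):

```python
from math import comb

def fisher_greater(a, b, c, d):
    # One-sided (alternative="greater") Fisher's exact p-value for the
    # 2x2 table [[a, b], [c, d]]: with the margins fixed, sum the
    # hypergeometric probabilities of every table whose top-left cell
    # is at least a.
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, col1)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(row1, k) * comb(n - row1, col1 - k) / denom
    return p

# Example: 2 carriers among 2 cases vs 0 carriers among 2 controls.
print(fisher_greater(2, 0, 0, 2))   # 0.1666... (= 1/6)
```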
I am trying to run a genetic variant burden test with vcf files and external gnomAD controls. I found this repo suitable and am now trying to fix bugs in it.
As a newbie in R statistics, I will be happy about any suggestion. Thank you!
If you want all rows from the two files, you can use a full join with by = "GENE" and whatever suffixes you wish:
library(dplyr)
z <- full_join(case_file, control_file, by = "GENE", suffix = c(".CASE", ".CONTROL"))
GENE CASE_COUNT_HET.CASE CASE_COUNT_CH.CASE CASE_COUNT_HOM.CASE CASE_TOTAL_AC.CASE
1 ENSG00000124209 1 0 0 1
2 ENSG00000064703 1 1 0 9
3 ENSG00000171408 1 0 0 1
4 ENSG00000110514 1 1 1 12
5 ENSG00000247077 1 1 1 7
6 ENSG00000174776 NA NA NA NA
7 ENSG00000076864 NA NA NA NA
8 ENSG00000086015 NA NA NA NA
CASE_COUNT_HET.CONTROL CASE_COUNT_CH.CONTROL CASE_COUNT_HOM.CONTROL CASE_TOTAL_AC.CONTROL
1 1 0 0 1
2 1 1 0 9
3 1 0 0 1
4 1 1 1 12
5 1 1 1 7
6 1 1 0 2
7 1 0 1 13
8 1 0 1 25
If you want only the genes that are in both files, use inner_join:
z <- inner_join(case_file, control_file, by = "GENE", suffix = c(".CASE", ".CONTROL"))
GENE CASE_COUNT_HET.CASE CASE_COUNT_CH.CASE CASE_COUNT_HOM.CASE CASE_TOTAL_AC.CASE
1 ENSG00000124209 1 0 0 1
2 ENSG00000064703 1 1 0 9
3 ENSG00000171408 1 0 0 1
4 ENSG00000110514 1 1 1 12
5 ENSG00000247077 1 1 1 7
CASE_COUNT_HET.CONTROL CASE_COUNT_CH.CONTROL CASE_COUNT_HOM.CONTROL CASE_TOTAL_AC.CONTROL
1 1 0 0 1
2 1 1 0 9
3 1 0 0 1
4 1 1 1 12
5 1 1 1 7
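The full-join behaviour above (genes present in only one file get NA in the other file's columns) can be sketched in plain Python; the helper and the two tiny tables below are made up for illustration, with each row as a dict:

```python
def full_join(case_rows, control_rows, key="GENE"):
    # Minimal sketch of a full outer join on one key column, analogous to
    # dplyr::full_join with suffixes: every key from either side appears
    # in the output; missing columns are simply absent (NA in R).
    out = {}
    for row in case_rows:
        out.setdefault(row[key], {}).update(
            {k + ".CASE": v for k, v in row.items() if k != key})
    for row in control_rows:
        out.setdefault(row[key], {}).update(
            {k + ".CONTROL": v for k, v in row.items() if k != key})
    return out

case = [{"GENE": "ENSG00000124209", "CASE_COUNT_HET": 1}]
control = [{"GENE": "ENSG00000124209", "CASE_COUNT_HET": 1},
           {"GENE": "ENSG00000174776", "CASE_COUNT_HET": 1}]
joined = full_join(case, control)
# ENSG00000174776 has only .CONTROL columns - the counterpart of NA in R.
```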
I have a data that looks like this:
Cluster_Combined Cluster_1 Cluster_2 Cluster_3 Cluster_4 Cluster_6 Cluster_10
G-protein coupled receptor signaling pathway (15) 2 6 0 4 3 1 0
GTP catabolic process (69) 1 0 0 0 2 0 0
activin receptor signaling pathway (17) 0 2 0 0 0 0 0
acute inflammatory response (7) 2 1 0 0 1 0 0
acute-phase response (8) 5 2 1 0 2 0 0
aging (5) 2 1 2 0 1 0 1
From this I want to create a heat map based on the values above, where the columns are the cluster names and the rows the ontology terms.
Now I have the code below
library(gplots);
dat <- read.table("http://dpaste.com/1505883/plain/",sep="\t",header=T);
hmcols <- rev(redgreen(2750));
heatmap.2(as.matrix(dat),scale="row",cols=hmcols,trace="none",dendrogram="none",keysize=1);
Although it does generate the plot, it gives me the following error:
Error in csep + 0.5 : non-numeric argument to binary operator
Furthermore, I cannot see the red-green effect in the plot.
How can I remove the error?
There is no "cols=" argument to heatmap.2(...). Try col=hmcols.
heatmap.2(as.matrix(dat),scale="row",col=hmcols,trace="none",dendrogram="none",keysize=1)