Compare commits


45 Commits

Author SHA1 Message Date
Matthieu Baumann
bc1096fce3 Aladin lite: v3.2.0
Features:
- The use of Vite as the project build tool
- Enhanced MOC rendering performance, plus the possibility to draw only the MOC perimeter
- Many fixes, such as the footprint rendering for all sky projections
- A line rasterizer enabling changes to the grid line thickness
(see the usage sketch after this entry)
2023-08-13 14:10:44 +02:00
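A minimal sketch of how the new MOC rendering options are used from the JavaScript API, based on the updated MOC examples further down in this compare; the import path, MOC URL and option values are illustrative only, and the grid-thickness parameter is omitted because its JS-side name does not appear in these diffs:

    import A from 'aladin-lite';

    A.init.then(() => {
        const aladin = A.aladin('#aladin-lite-div', { target: 'M 45', fov: 60, showCooGrid: true });
        // New in v3.2.0: draw only the MOC perimeter instead of every cell edge,
        // and optionally fill the coverage with a dedicated fillColor.
        const moc = A.MOCFromURL('https://example.org/coverage.moc.fits', {
            color: '#00ff00',
            lineWidth: 3,
            perimeter: true,      // new option
            fill: true,           // new option
            fillColor: '#aabbcc', // new option
            opacity: 0.2
        });
        aladin.addMOC(moc);
    });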
Matthieu Baumann
7d5696228d remove engines in package json 2023-08-12 19:14:44 +02:00
Matthieu Baumann
08699a9bd5 Merge branch 'develop' into develop 2023-08-12 19:00:28 +02:00
Matthieu Baumann
fc6a09e373 Fix rebase 2023-08-12 15:26:54 +02:00
Matthieu Baumann
a8a86a2952 Merge branch 'develop' into develop 2023-08-12 15:18:38 +02:00
Matthieu Baumann
cc958bfa2d Corrected ESASky link in README
Redo PR #104 commit directly on develop

Co-authored-by: imbasimba <https://www.henriknorman.com>
2023-08-12 15:06:13 +02:00
Matthieu Baumann
6c4ddce6b0 Merge pull request #113 from cds-astro/features/lineRasterizer
Features/line rasterizer
2023-08-12 15:04:17 +02:00
Matthieu Baumann
a4e4ec85af fix tests 2023-08-12 14:57:41 +02:00
Matthieu Baumann
b31bb18027 points to votable repo 2023-08-12 14:51:56 +02:00
Matthieu Baumann
de66d28061 points to forked github repo 2023-08-12 14:20:54 +02:00
Matthieu Baumann
ffdeb0ac2a fix footprints example 2023-08-11 18:16:05 +02:00
Matthieu Baumann
56e6fa80d5 fix lat=0.0 parallel 2023-08-11 15:52:46 +02:00
Matthieu Baumann
31348c12c6 fix some cuts default init problems 2023-08-10 11:35:23 +02:00
onekiloparsec
c1b2bd24b9 Reverting the insertion of /.idea into the gitignore.
It can be managed individually.
2023-08-03 13:31:34 +02:00
Manon
46573a23da fix draw 2023-08-01 10:08:15 +02:00
Matthieu Baumann
3bba90f3d1 adapt tests 2023-07-28 13:31:34 +02:00
Matthieu Baumann
c58876e21d remove some logs 2023-07-28 13:31:34 +02:00
Matthieu Baumann
9ef1f2ac09 cargo fix 2023-07-28 13:31:34 +02:00
Matthieu Baumann
a2a09c7506 fix fill MOC 2023-07-28 13:31:34 +02:00
Matthieu Baumann
bd9845fab1 Enhance moc render with new optional parameter: perimeter, edge (default) and fill, with a fillColor javascript param 2023-07-28 13:31:34 +02:00
Matthieu Baumann
bb7513a959 wip perimeter moc draw 2023-07-28 13:31:34 +02:00
Matthieu Baumann
526cf51c4c enhance grid wip 2023-07-28 13:31:34 +02:00
Matthieu Baumann
163dd7d762 first commit 2023-07-28 13:31:34 +02:00
Cédric Foellmi
121f4345bc Update src/js/gui/ContextMenu.js
Fixed the missing canvas parameter of the refactored `relMouseCoords` function.

Co-authored-by: Matthieu Baumann <baumannmatthieu0@gmail.com>
2023-07-19 04:01:47 +02:00
Cédric Foellmi
0665f2b65f Update src/js/View.js
Fixed the missing canvas parameter of the refactored `relMouseCoords` function.

Co-authored-by: Matthieu Baumann <baumannmatthieu0@gmail.com>
2023-07-19 04:01:38 +02:00
Cédric Foellmi
a58fb1dd8a Update src/js/View.js
Fixed the missing canvas parameter of the refactored `relMouseCoords` function.

Co-authored-by: Matthieu Baumann <baumannmatthieu0@gmail.com>
2023-07-19 04:01:29 +02:00
Cédric Foellmi
466472a1a7 Update src/js/View.js
Fixed the missing canvas parameter of the refactored `relMouseCoords` function.

Co-authored-by: Matthieu Baumann <baumannmatthieu0@gmail.com>
2023-07-19 04:01:14 +02:00
Cédric Foellmi
540f4e33be Update src/js/View.js
Fixed the missing canvas parameter of the refactored `relMouseCoords` function.

Co-authored-by: Matthieu Baumann <baumannmatthieu0@gmail.com>
2023-07-19 04:01:04 +02:00
Cédric Foellmi
0b92b6d1db Update src/js/GenericPointer.js
Fixed the missing canvas parameter of the refactored `relMouseCoords` function.

Co-authored-by: Matthieu Baumann <baumannmatthieu0@gmail.com>
2023-07-19 04:00:38 +02:00
Cédric Foellmi
06dcc126f9 Fixed import
Signed-off-by: Cédric Foellmi <cedric@onekiloparsec.dev>
2023-07-18 12:11:16 +02:00
Cédric Foellmi
04e552b7c3 Making a first test pass!
Signed-off-by: Cédric Foellmi <cedric@onekiloparsec.dev>
2023-07-18 12:11:16 +02:00
Cédric Foellmi
1bee9c8b77 Ignoring .idea (PyCharm/WebStorm)
Signed-off-by: Cédric Foellmi <cedric@onekiloparsec.dev>
2023-07-18 12:11:16 +02:00
Cédric Foellmi
c77f2aeda8 Making a dedicated dependency-free Constants file to avoid Utils importing Aladin.js!
Signed-off-by: Cédric Foellmi <cedric@onekiloparsec.dev>
2023-07-18 12:11:16 +02:00
Cédric Foellmi
57c1b8423d Linting
Signed-off-by: Cédric Foellmi <cedric@onekiloparsec.dev>
2023-07-18 12:11:16 +02:00
Cédric Foellmi
ebf2d06f31 Using the Utils methods also in the examples
Signed-off-by: Cédric Foellmi <cedric@onekiloparsec.dev>
2023-07-18 12:11:16 +02:00
Cédric Foellmi
5d0ec40612 Fixing imports
Signed-off-by: Cédric Foellmi <cedric@onekiloparsec.dev>
2023-07-18 12:11:16 +02:00
Cédric Foellmi
82b2eb0423 Using the new Utils method
Signed-off-by: Cédric Foellmi <cedric@onekiloparsec.dev>
2023-07-18 12:11:16 +02:00
Cédric Foellmi
2dc6f17c7d Renaming Utils.js into Utils.ts and correcting related imports.
Note that the content of Utils.ts has also been changed. More precisely:

- `relMouseCoords` has been made a function of the Utils object instead of being attached to the canvas prototype. The old approach was very ad hoc, made testing complicated to run, and provided no real value.

- Removed the jQuery dependency and made `urlParam` a function of the Utils object.

- Added some types where possible / easy. TS already reveals the misuse of some functions such as `parseInt` and `parseFloat` (which act on strings, not numbers). See the call-pattern sketch after this entry.

Signed-off-by: Cédric Foellmi <cedric@onekiloparsec.dev>
2023-07-18 12:11:15 +02:00
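A minimal sketch of the call-site change described in this entry, matching the example updates further down in this compare; the canvas id and event handler are illustrative, and the `urlParam` signature is assumed from the commit message:

    import { Utils } from '../src/js/Utils';

    const canvas = document.getElementById('my-canvas');

    canvas.addEventListener('mousedown', (e) => {
        // Before: the method was attached to the canvas prototype
        //   const pos = canvas.relMouseCoords(e);
        // After: it is a plain function on the Utils object
        const pos = Utils.relMouseCoords(canvas, e);
        console.log(pos);
    });

    // urlParam is now also a plain Utils function (no jQuery), e.g.:
    // const fov = Utils.urlParam('fov'); // signature assumed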
Cédric Foellmi
402e270015 Introducing typescript & vitest
In order to raise the quality of the JavaScript code, I propose to introduce TypeScript and Vitest for writing unit tests.

This first commit simply introduces the right dependencies and configuration.

Note the presence of the “happy-dom” dev dependency for manipulating the window object (which is involved in the first test written on Utils); a test sketch follows this entry.

Signed-off-by: Cédric Foellmi <cedric@onekiloparsec.dev>
2023-07-18 12:07:21 +02:00
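A minimal sketch of what a first Vitest unit test could look like under this setup; the file name, the assertion, and the happy-dom environment annotation are assumptions for illustration, not part of this commit:

    // tests/Utils.test.js (hypothetical file name)
    /** @vitest-environment happy-dom */
    import { describe, it, expect } from 'vitest';
    import { Utils } from '../src/js/Utils';

    describe('Utils', () => {
        it('exposes relMouseCoords as a plain function', () => {
            expect(typeof Utils.relMouseCoords).toBe('function');
        });
    });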
Matthieu Baumann
7cfbb83883 Add the possibility to download the PNG export from the JavaScript API 2023-07-11 10:53:07 +02:00
MARCHAND MANON
ba8acf4a99 fix color of context menu 2023-07-04 18:25:12 +02:00
MARCHAND MANON
d570301b1d add the target button in svg format 2023-07-04 18:25:12 +02:00
Matthieu Baumann
41f381ab2b add filter option for Catalog object #80 2023-06-06 10:26:37 +02:00
Matthieu Baumann
dbf8c8dbeb remove terser from dev dep 2023-05-26 18:51:07 +02:00
Matthieu Baumann
5ef2991310 Conditional export for CJS (require)
Requested in #93

Co-authored-by: diego-ge
2023-05-26 18:47:57 +02:00
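A minimal sketch of what this conditional export enables, based on the package.json diff further down; it assumes the UMD build exposes the same `A` entry point as the ES module, and the container id and options are illustrative:

    // ES module consumers keep using the "import" entry (./dist/aladin.js):
    //   import A from 'aladin-lite';

    // CommonJS consumers can now require the package through the new
    // "require" entry (./dist/aladin.umd.cjs):
    const A = require('aladin-lite');

    A.init.then(() => {
        const aladin = A.aladin('#aladin-lite-div', { survey: 'CDS/P/DSS2/color', fov: 1 });
    });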
138 changed files with 7975 additions and 5410 deletions

View File

@@ -32,4 +32,6 @@ jobs:
run: |
npm run build
- name: "Run some tests"
run: npm test
run: |
npm run test:build
npm run test:unit

.gitignore (4 changed lines)
View File

@@ -8,4 +8,6 @@ package-lock.json
src/core/target/
src/core/Cargo.lock
AladinLiteAssets.tar.gz
aladin-lite*.tgz
.vscode

View File

@@ -6,7 +6,7 @@ Aladin Lite is a Web application which enables HiPS visualization from the brows
See [A&A 578, A114 (2015)](https://arxiv.org/abs/1505.02291) and [IVOA HiPS Recommendation](http://ivoa.net/documents/HiPS/index.html) for more details about the HiPS standard.
Aladin Lite is built to be easily embeddable in any web page. It powers astronomical portals like [ESASky](https://almascience.eso.org/asax/), [ESO Science Archive portal](http://archive.eso.org/scienceportal/) and [ALMA Portal](https://almascience.eso.org/asax/).
Aladin Lite is built to be easily embeddable in any web page. It powers astronomical portals like [ESASky](https://sky.esa.int/), [ESO Science Archive portal](http://archive.eso.org/scienceportal/) and [ALMA Portal](https://almascience.eso.org/asax/).
More details on [Aladin Lite documentation page](http://aladin.u-strasbg.fr/AladinLite/doc/).

assets/target.svg (new file, 100 lines)
View File

@@ -0,0 +1,100 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="5.4163427mm"
height="5.4163427mm"
viewBox="0 0 5.4163429 5.4163429"
version="1.1"
id="svg5"
xml:space="preserve"
inkscape:export-filename="target.png"
inkscape:export-xdpi="500"
inkscape:export-ydpi="500"
inkscape:version="1.2.2 (b0a8486541, 2022-12-01)"
sodipodi:docname="target.svg"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><sodipodi:namedview
id="namedview7"
pagecolor="#ffffff"
bordercolor="#000000"
borderopacity="0.25"
inkscape:showpageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#d1d1d1"
inkscape:document-units="mm"
showgrid="false"
inkscape:zoom="20.70196"
inkscape:cx="3.7919116"
inkscape:cy="9.8541396"
inkscape:window-width="2560"
inkscape:window-height="1367"
inkscape:window-x="2560"
inkscape:window-y="0"
inkscape:window-maximized="1"
inkscape:current-layer="layer1" /><defs
id="defs2"><inkscape:path-effect
effect="powerclip"
id="path-effect4310"
is_visible="true"
lpeversion="1"
inverse="true"
flatten="false"
hide_clip="false"
message="Utilise la règle de remplissage « fill-rule: evenodd » de la boîte de dialogue &lt;b&gt;Fond et contour&lt;/b&gt; en l'absence de résultat de mise à plat après une conversion en chemin." /><inkscape:path-effect
effect="powerclip"
id="path-effect4302"
is_visible="true"
lpeversion="1"
inverse="true"
flatten="false"
hide_clip="false"
message="Utilise la règle de remplissage « fill-rule: evenodd » de la boîte de dialogue &lt;b&gt;Fond et contour&lt;/b&gt; en l'absence de résultat de mise à plat après une conversion en chemin." /></defs><g
inkscape:label="Calque 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(-140.08642,-154.46187)"><circle
style="fill:none;fill-opacity:1;stroke:#000000;stroke-width:0.33835;stroke-dasharray:none;stroke-opacity:1"
id="path2746"
cx="142.79459"
cy="157.17004"
r="2.0865982" /><circle
style="display:none;fill:none;fill-opacity:1;stroke:#000000;stroke-width:0.164818;stroke-dasharray:none;stroke-opacity:1"
id="circle4300"
cx="142.79459"
cy="157.17004"
r="0.50933534" /><circle
style="display:none;fill:none;fill-opacity:1;stroke:#000000;stroke-width:0.164818;stroke-dasharray:none;stroke-opacity:1"
id="circle4308"
cx="142.79459"
cy="157.17004"
r="0.50933534" /><circle
style="fill:none;fill-opacity:1;stroke:#000000;stroke-width:0.3084;stroke-dasharray:none;stroke-opacity:1"
id="path4262"
cx="142.79459"
cy="157.17004"
r="0.95304745" /><path
style="fill:none;fill-opacity:1;stroke:#000000;stroke-width:0.365001;stroke-dasharray:none;stroke-opacity:1"
d="m 143.43355,154.60018 v 2.66686"
id="path308"
clip-path="none"
transform="matrix(0.84492788,0,0,0.84492788,21.603582,23.835872)"
sodipodi:nodetypes="cc" /><path
style="fill:none;fill-opacity:1;stroke:#000000;stroke-width:0.3084;stroke-dasharray:none;stroke-opacity:1"
d="m 145.50276,157.17004 h -2.2533"
id="path406"
clip-path="none"
sodipodi:nodetypes="cc" /><path
style="fill:none;fill-opacity:1;stroke:#000000;stroke-width:0.3084;stroke-dasharray:none;stroke-opacity:1"
d="m 142.33972,157.17004 h -2.2533"
id="path408"
clip-path="none"
sodipodi:nodetypes="cc" /><path
style="fill:none;fill-opacity:1;stroke:#000000;stroke-width:0.3084;stroke-dasharray:none;stroke-opacity:1"
d="m 142.79459,157.62491 v 2.2533"
id="path410"
clip-path="none"
sodipodi:nodetypes="cc" /></g></svg>


View File

@@ -0,0 +1,45 @@
<!doctype html>
<html>
<head>
</head>
<body>
<script src="https://code.jquery.com/jquery-1.10.1.min.js"></script>
<div id="aladin-lite-div" style="width: 1024px; height: 768px"></div>
Show sources with proper motion greater than:
<input id='slider' style='vertical-align:middle;width:60vw;' step='1' min='0' max='10' type='range' value='0'>
<span id='pmVal' >0 mas/yr</span><br><br><div id='aladin-lite-div' style='width: 500px;height: 500px;'></div>
<script type="module">
import A from '../src/js/A.js';
let aladin;
A.init.then(() => {
var pmThreshold = 0;
var slider = document.getElementById('slider');
slider.oninput = function() {
pmThreshold = this.value;
$('#pmVal').html(pmThreshold + ' mas/yr');
cat.reportChange();
}
var myFilterFunction = function(source) {
var hpmag = parseFloat(source.data['Hpmag']);
if (isNaN(hpmag)) {
return false;
}
return hpmag>pmThreshold;
}
aladin = A.aladin('#aladin-lite-div', {target: 'M 45', fov: 5});
var cat = A.catalogFromVizieR('I/311/hip2', 'M 45', 5, {onClick: 'showTable', filter: myFilterFunction});
aladin.addCatalog(cat);
});
</script>
</body>
</html>

View File

@@ -21,7 +21,9 @@
<body>
<script src="https://code.jquery.com/jquery-1.10.1.min.js"></script>
<script type="text/javascript">
let aladin;
</script>
<div id="aladin-lite-div" style="width:100vw;height:100vh;">
<div id="calibCircle" style="display: none;"></div>
<div id="explain" class="aladin-box"></div>
@@ -225,8 +227,8 @@
<script type="module">
import A from '../src/js/A.js';
import {Utils} from '../src/js/Utils';
let aladin;
A.init.then(() => {
var hipsDir="http://alasky.u-strasbg.fr/CDS_P_Coronelli";
aladin = A.aladin("#aladin-lite-div", {showSimbadPointerControl: true, realFullscreen: true, fov: 100, allowFullZoomout: true, showReticle: false });
@@ -495,7 +497,7 @@
deleteOverlayTimeout = undefined;
}
isDrawing = true;
points.push([drawOverlayCanvas.relMouseCoords(e)]);
points.push([Utils.relMouseCoords(drawOverlayCanvas.imageCanvas, e)]);
});
@@ -504,7 +506,7 @@
e.preventDefault();
drawOverlayCtx.clearRect(0, 0, drawOverlayCtx.canvas.width, drawOverlayCtx.canvas.height);
points[points.length-1].push(drawOverlayCanvas.relMouseCoords(e));
points[points.length-1].push(Utils.relMouseCoords(drawOverlayCanvas.imageCanvas, e));
drawOverlayCtx.beginPath();

View File

@@ -11,7 +11,7 @@
import A from '../src/js/A.js';
let aladin;
A.init.then(() => {
aladin = A.aladin('#aladin-lite-div', {target: 'M 45', fov: 5});
aladin = A.aladin('#aladin-lite-div', {target: 'M 45', fov: 5, showContextMenu: true});
const cat = A.catalogFromVizieR('I/311/hip2', 'M 45', 5, {onClick: 'showTable'});
aladin.addCatalog(cat);
});

View File

@@ -11,7 +11,7 @@
let aladin;
A.init.then(() => {
// Start up Aladin Lite
aladin = A.aladin('#aladin-lite-div', {survey: "CDS/P/DSS2/color", target: 'M 31', fov: 0.2});
aladin = A.aladin('#aladin-lite-div', {survey: "CDS/P/DSS2/color", target: 'M 31', fov: 3});
var overlay = A.graphicOverlay({color: '#ee2345', lineWidth: 3});
aladin.addOverlay(overlay);
overlay.addFootprints([

View File

@@ -0,0 +1,23 @@
<!doctype html>
<html>
<head>
</head>
<body>
<div id="aladin-lite-div" style="width: 500px; height: 400px"></div>
<script type="module">
import A from '../src/js/A.js';
var vmc_cepheids = 'https://archive.eso.org/tap_cat/sync?REQUEST=doQuery&LANG=ADQL&MAXREC=401&FORMAT=votable&QUERY=SELECT%20*%20from%20vmc_er4_yjks_cepheidCatMetaData_fits_V3%20where%20%20CONTAINS(POINT(%27%27,RA2000,DEC2000),%20CIRCLE(%27%27,80.894167,-69.756111,2.7))=1';
var pessto = 'https://archive.eso.org/tap_cat/sync?REQUEST=doQuery&LANG=ADQL&MAXREC=3&FORMAT=votable&QUERY=SELECT%20*%20from%20safcat.PESSTO_TRAN_CAT_V3%20where%20CONTAINS(POINT(%27%27,TRANSIENT_RAJ2000,TRANSIENT_DECJ2000),%20CIRCLE(%27%27,80.894167,-69.756111,2.7))=1';
var aladin;
A.init.then(() => {
aladin = A.aladin('#aladin-lite-div', {survey: 'P/DSS2/red', target: 'LMC', fov: 5});
aladin.addCatalog(A.catalogFromURL('https://vizier.u-strasbg.fr/viz-bin/votable?-source=HIP2&-c=LMC&-out.add=_RAJ,_DEJ&-oc.form=dm&-out.meta=DhuL&-out.max=9999&-c.rm=180', {sourceSize:12, color: '#f08080'}));
aladin.addCatalog(A.catalogFromURL(vmc_cepheids, {onClick: 'showTable', sourceSize:14, color: '#fff080'}));
aladin.addCatalog(A.catalogFromURL(pessto, {onClick: 'showPopup', sourceSize:14, color: '#00f080'}), undefined, true);
});
</script>
</body>
</html>

View File

@@ -13,11 +13,11 @@
var aladin = A.aladin('#aladin-lite-div', {target: '05 37 58 +08 17 35', fov: 12, backgroundColor: 'rgb(120, 0, 0)'});
var cat = A.catalog({sourceSize: 20});
aladin.addCatalog(cat);
cat.addSources([A.source(83.784490, 09.934156, {name: 'Meissa'}), A.source(88.792939, 7.407064, {name: 'Betelgeuse'}), A.source(81.282764, 6.349703, {name: 'Bellatrix'})]);
cat.addSources([A.source(83.784490, 9.934156, {name: 'Meissa'}), A.source(88.792939, 7.407064, {name: 'Betelgeuse'}), A.source(81.282764, 6.349703, {name: 'Bellatrix'})]);
var msg;
// define function triggered when a source is hovered
aladin.on('objectHovered', function(object) {
var msg;
if (object) {
msg = 'You hovered object ' + object.data.name + ' located at ' + object.ra + ', ' + object.dec;
}
@@ -37,7 +37,6 @@
// define function triggered when an object is clicked
var objClicked;
aladin.on('objectClicked', function(object) {
var msg;
if (object) {
objClicked = object;
object.select();

View File

@@ -14,11 +14,11 @@
<form class="pure-form pure-form-stacked">
<fieldset>
<label for="option-gdr3-flux-color-map" class="pure-radio">
<input id="option-gdr3-flux-color-map" type="radio" name="img-hips" value="CDS/P/DM/flux-color-Rp-G-Bp/I/350/gaiaedr3" checked>
<input id="option-gdr3-flux-color-map" type="radio" name="img-hips" value="CDS/P/DM/flux-color-Rp-G-Bp/I/350/gaiaedr3">
Gaia DR3 flux map
</label>
<label for="option-gdr3-density-map" class="pure-radio">
<input id="option-gdr3-density-map" type="radio" name="img-hips" value="CDS/P/DM/I/350/gaiaedr3">
<input id="option-gdr3-density-map" type="radio" name="img-hips" value="CDS/P/DM/I/350/gaiaedr3" checked>
Gaia DR3 density map
</label>
<label for="option-DSS-map" class="pure-radio">
@@ -50,7 +50,7 @@
const fluxMap = aladin.createImageSurvey('gdr3-color-flux-map', 'Gaia DR3 flux map', 'https://alasky.u-strasbg.fr/ancillary/GaiaEDR3/color-Rp-G-Bp-flux-map', 'equatorial', 7);
const densityMap = aladin.createImageSurvey('gdr3-density-map', 'Gaia DR3 density map', 'https://alasky.u-strasbg.fr/ancillary/GaiaEDR3/density-map', 'equatorial', 7, {imgFormat: 'fits'});
aladin.setImageSurvey(fluxMap);
aladin.setImageSurvey(densityMap);
var hipsCats = {
//'gdr3': A.catalogHiPS('https://axel.u-strasbg.fr/HiPSCatService/I/355/gaiadr3', { name: 'Gaia DR3 sources', shape: 'circle', sourceSize: 8, color: '#d66bae' }),
@@ -62,7 +62,7 @@
aladin.addCatalog(hipsCats['simbad']);
//aladin.addCatalog(hipsCats['gdr3']);
cmDensMapChanged = false;
//cmDensMapChanged = false;
// listen changes on HiPS image background selection
$('input[type=radio][name=img-hips]').change(function () {
if (this.value == 'CDS/P/DM/I/350/gaiaedr3') {

View File

@@ -8,12 +8,12 @@
import A from '../src/js/A.js';
let aladin;
A.init.then(() => {
aladin = A.aladin('#aladin-lite-div', {projection: "TAN", target: '15 16 57.636 -60 55 7.49', showCooGrid: true, fov: 90});
aladin = A.aladin('#aladin-lite-div', {projection: "TAN", target: '15 16 57.636 -60 55 7.49', showCooGrid: true, fov: 90, fullScreen: true});
var moc_0_99 = A.MOCFromURL("./gw/gw_0.9.fits",{ name: "GW 90%", color: "#ff0000", opacity: 0.5, lineWidth: 1, adaptativeDisplay: true});
var moc_0_95 = A.MOCFromURL("./gw/gw_0.6.fits",{ name: "GW 60%", color: "#00ff00", opacity: 0.5, lineWidth: 1, adaptativeDisplay: true});
var moc_0_5 = A.MOCFromURL("./gw/gw_0.3.fits",{ name: "GW 30%", color: "#00ffff", opacity: 0.5, lineWidth: 1, adaptativeDisplay: false});
var moc_0_2 = A.MOCFromURL("./gw/gw_0.1.fits",{ name: "GW 10%", color: "#ff00ff", opacity: 0.5, lineWidth: 1, adaptativeDisplay: false});
var moc_0_99 = A.MOCFromURL("./gw/gw_0.9.fits",{ name: "GW 90%", color: "#ff0000", opacity: 0.7, lineWidth: 5, perimeter: true});
var moc_0_95 = A.MOCFromURL("./gw/gw_0.6.fits",{ name: "GW 60%", color: "#00ff00", opacity: 0.8, lineWidth: 5, perimeter: true});
var moc_0_5 = A.MOCFromURL("./gw/gw_0.3.fits",{ name: "GW 30%", color: "#00ffff", opacity: 1.0, lineWidth: 5, perimeter: true});
var moc_0_2 = A.MOCFromURL("./gw/gw_0.1.fits",{ name: "GW 10%", color: "#ff00ff", opacity: 1.0, lineWidth: 5, perimeter: true});
aladin.addMOC(moc_0_99);
aladin.addMOC(moc_0_95);

View File

@@ -12,8 +12,8 @@
A.init.then(() => {
aladin = A.aladin('#aladin-lite-div', {target: 'galactic center'});
let hsc = aladin.createImageSurvey('hips gaia', "hips gaia name", "./hips/gaia/", undefined, undefined, {colormap:"viridis"});
aladin.setBaseImageLayer(hsc);
let survey = aladin.createImageSurvey('hips gaia', "hips gaia name", "./hips/gaia", undefined, undefined, {colormap:"viridis"});
aladin.setBaseImageLayer(survey);
});
</script>

View File

@@ -21,6 +21,9 @@
"7":[131423,131439,131443,131523,131556,131557,131580,131581,132099,132612,132613,132624,132625,132627,132637,
132680,132681,132683,132709,132720,132721,132904,132905,132948,132952,132964,132968,133008,133009,133012,135252,135256,135268,135316,135320,135332,135336,148143,148152,148154,149507,149520
,149522,149523,149652,149654,149660,149662,149684,149686,149692,149694,149695,150120,150122,150208,150210,150216,150218,150240,150242,150243,155748,155752,155796,155800,155812,155816]};
//var json = {"3":[517],
//"4":[2065, 2067]};
var moc = A.MOCFromJSON(json, {opacity: 0.25, color: 'magenta', lineWidth: 1, adaptativeDisplay: false});
aladin.addMOC(moc);
});

View File

@@ -12,14 +12,14 @@
let aladin;
A.init.then(() => {
aladin = A.aladin('#aladin-lite-div', {target: '00 00 00 +07 00 00', fov: 130, survey: 'P/Mellinger/color'});
var moc11 = A.MOCFromURL('http://skies.esac.esa.int/HST/NICMOS/Moc.fits', {color: '#84f', lineWidth: 1, opacity: 1.0}, (moc) => {
var moc11 = A.MOCFromURL('http://skies.esac.esa.int/HST/NICMOS/Moc.fits', {color: '#84f', lineWidth: 3, opacity: 1.0}, (moc) => {
// moc is ready
console.log(moc.contains(205.9019247, +2.4492764));
console.log(moc.contains(-205.9019247, +2.4492764));
});
var moc10 = A.MOCFromURL('https://alasky.unistra.fr/MocServer/query?ivorn=ivo%3A%2F%2FCDS%2FV%2F139%2Fsdss9&get=moc&order=11&fmt=fits', {color: '#aabbcc', opacity: 0.1, lineWidth: 1});
var moc9 = A.MOCFromURL('https://alasky.unistra.fr/MocServer/query?ivorn=ivo%3A%2F%2FCDS%2FV%2F139%2Fsdss9&get=moc&order=4&fmt=fits', {color: '#00ff00', opacity: 0.5, lineWidth: 1});
var moc10 = A.MOCFromURL('https://alasky.unistra.fr/MocServer/query?ivorn=ivo%3A%2F%2FCDS%2FV%2F139%2Fsdss9&get=moc&order=11&fmt=fits', {color: '#ffffff', perimeter: true, fillColor: '#aabbcc', opacity: 0.1, fill: true, lineWidth: 3});
var moc9 = A.MOCFromURL('https://alasky.unistra.fr/MocServer/query?ivorn=ivo%3A%2F%2FCDS%2FV%2F139%2Fsdss9&get=moc&order=4&fmt=fits', {color: '#00ff00', opacity: 0.5, lineWidth: 3, perimeter: true});
aladin.addMOC(moc11);
aladin.addMOC(moc10);

View File

@@ -12,7 +12,7 @@
A.init.then(() => {
aladin = A.aladin('#aladin-lite-div', {target: '14 18 16.868 +56 44 29.37', fov: 360, projection: 'AIT', showContextMenu: true});
const c1 = A.catalogFromURL('https://raw.githubusercontent.com/bmatthieu3/SKA-Discovery-Service-Mockup/aladin/ObsCore/ObsCore_003.xml', {onClick: 'showTable'});
const c1 = A.catalogFromURL('https://raw.githubusercontent.com/VisIVOLab/SKA-Discovery-Service-Mockup/main/ObsCore/ObsCore_003.xml', {onClick: 'showTable'});
aladin.addCatalog(c1);
const c2 = A.catalogFromVizieR('B/assocdata/obscore', '14 18 16.868 +56 44 29.37', 100, {onClick: 'showTable', limit: 1000});

View File

@@ -11,7 +11,7 @@
A.init.then(() => {
// Start up Aladin Lite
aladin = A.aladin('#aladin-lite-div', {survey: "CDS/P/DSS2/color", target: 'Sgr a*', fov: 0.5});
let aladin = A.aladin('#aladin-lite-div', {survey: "CDS/P/DSS2/color", target: 'Sgr a*', fov: 0.5});
// This table contains a s_region column containing stcs expressed regions
// that are automatically parsed
aladin.addCatalog(A.catalogFromURL('https://aladin.cds.unistra.fr/AladinLite/doc/API/examples/data/alma-footprints.xml', {onClick: 'showTable'}));

View File

@@ -2,7 +2,7 @@
"homepage": "https://aladin.u-strasbg.fr/",
"name": "aladin-lite",
"type": "module",
"version": "3.1.0",
"version": "3.2.0",
"description": "An astronomical HiPS visualizer in the browser",
"author": "Thomas Boch and Matthieu Baumann",
"license": "GPL-3",
@@ -15,7 +15,8 @@
],
"exports": {
".": {
"import": "./dist/aladin.js"
"import": "./dist/aladin.js",
"require": "./dist/aladin.umd.cjs"
}
},
"repository": {
@@ -33,22 +34,25 @@
"scripts": {
"wasm": "wasm-pack build ./src/core --target web --release --out-name core -- --features webgl2",
"predeploy": "npm run build && npm pack",
"deploy": "./deploy.sh",
"deploy": "./deploy-dbg.sh",
"build": "npm run wasm && vite build && cp examples/index.html dist/index.html",
"dev": "npm run build && vite",
"serve": "npm run dev",
"preview": "vite preview",
"test": "cd src/core && cargo test --release --features webgl2"
"test:build": "cd src/core && cargo test --release --features webgl2",
"test:unit": "vitest run"
},
"devDependencies": {
"npm": "^8.19.2",
"terser": "^5.17.6",
"happy-dom": "^8.9.0",
"npm": "^9.8.1",
"typescript": "^5.0.4",
"vite": "^4.3.8",
"vite-plugin-css-injected-by-js": "^3.1.1",
"vite-plugin-glsl": "^1.1.2",
"vite-plugin-top-level-await": "^1.3.1",
"vite-plugin-wasm": "^3.2.2",
"vite-plugin-wasm-pack": "^0.1.12"
"vite-plugin-wasm-pack": "^0.1.12",
"vitest": "^0.32.2"
},
"dependencies": {
"autocompleter": "^6.1.3",

View File

@@ -3,7 +3,7 @@ name = "aladin-lite"
description = "Aladin Lite v3 introduces a new graphical engine written in Rust with the use of WebGL"
license = "BSD-3-Clause"
repository = "https://github.com/cds-astro/aladin-lite"
version = "3.1.1"
version = "3.2.0"
authors = ["baumannmatthieu0@gmail.com", "matthieu.baumann@astro.unistra.fr"]
edition = "2018"
@@ -18,17 +18,16 @@ members = [
crate-type = ["cdylib"]
[dependencies]
getrandom = {version="0.2", features = ["js"]}
rand = {version = "0.8.5", features = ["getrandom"]}
futures = "0.3.12"
js-sys = "0.3.47"
wasm-bindgen-futures = "0.4.20"
cgmath = "*"
cdshealpix = "0.6.4"
moclib = { package = "moc", version = "0.10.1" }
serde = { version = "^1.0.59", features = ["derive"] }
serde_json = "1.0"
serde-wasm-bindgen = "0.4"
healpix = { package = "cdshealpix", git = "https://github.com/bmatthieu3/cds-healpix-rust", branch = "polygonIntersectVertices" }
#moclib = { package = "moc", git = "https://github.com/cds-astro/cds-moc-rust", branch = "main" }
moclib = { package = "moc", git = "https://github.com/bmatthieu3/cds-moc-rust", branch = "cellsWithUnidirectionalNeigs" }
serde = { version = "^1.0.183", features = ["derive"] }
serde_json = "1.0.104"
serde-wasm-bindgen = "0.5"
console_error_panic_hook = "0.1.7"
enum_dispatch = "0.3.8"
wasm-bindgen = "0.2.79"
@@ -42,7 +41,8 @@ fitsrs = "0.2.9"
wcs = "0.2.8"
colorgrad = "0.6.2"
image-decoder = { package = "image", version = "0.24.2", default-features = false, features = ["jpeg", "png"] }
votable = "0.2.3"
votable = { package = "votable", git = "https://github.com/cds-astro/cds-votable-rust", branch = "main"}
lyon = "1.0.1"
[features]
webgl1 = [

View File

@@ -1,6 +1,6 @@
use wasm_bindgen::prelude::*;
#[wasm_bindgen(raw_module = "../../js/Color")]
#[wasm_bindgen(raw_module = "../../js/Color.js")]
extern "C" {
pub type Color;
@@ -8,10 +8,11 @@ extern "C" {
pub fn hexToRgb(hex: String) -> JsValue;
#[wasm_bindgen(static_method_of = Color)]
pub fn hexToRgba(hex: String) -> JsValue;
#[wasm_bindgen(static_method_of = Color)]
pub fn rgbToHex(r: u8, g: u8, b: u8) -> String;
}
#[derive(Debug, Clone, Copy)]
#[derive(Deserialize, Serialize)]
#[derive(Debug, Clone, Copy, Deserialize, Serialize)]
#[wasm_bindgen]
pub struct ColorRGB {
pub r: f32,
@@ -20,8 +21,7 @@ pub struct ColorRGB {
}
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Copy)]
#[derive(Deserialize, Serialize)]
#[derive(Debug, Clone, Copy, Deserialize, Serialize)]
#[serde(rename_all = "camelCase")]
#[wasm_bindgen]
pub struct ColorRGBA {
@@ -45,7 +45,6 @@ impl<'a> Mul<f32> for &'a ColorRGB {
}
}
/*
#[wasm_bindgen]
impl Color {
@@ -96,4 +95,4 @@ impl TryFrom<JsValue> for ColorRGBA {
Ok(c)
}
}
}

View File

@@ -106,7 +106,7 @@ pub const NUM_COOSYSTEM: usize = 2;
impl CooSystem {
#[inline]
pub fn to<S>(&self, coo_system: &Self) -> &Matrix4<S>
pub fn to<S>(&self, coo_system: Self) -> &Matrix4<S>
where
S: BaseFloat + CooBaseFloat,
{

View File

@@ -12,6 +12,8 @@ use super::color::ColorRGB;
pub struct GridCfg {
#[serde(default = "default_color")]
pub color: Option<ColorRGB>,
#[serde(default = "default_thickness")]
pub thickness: Option<f32>,
pub opacity: Option<f32>,
#[serde(default = "default_labels")]
pub show_labels: Option<bool>,
@@ -39,6 +41,10 @@ fn default_label_size() -> Option<f32> {
None
}
fn default_thickness() -> Option<f32> {
None
}
fn default_fmt() -> Option<AngleSerializeFmt> {
None
}

View File

@@ -47,6 +47,7 @@ pub struct HiPSProperties {
tile_size: i32,
formats: Vec<ImageExt>,
dataproduct_subtype: Option<Vec<String>>,
is_planetary_body: Option<bool>,
bitpix: Option<i32>,
@@ -63,62 +64,62 @@ pub struct HiPSProperties {
}
impl HiPSProperties {
#[inline]
#[inline(always)]
pub fn get_url(&self) -> &str {
&self.url
}
#[inline]
#[inline(always)]
pub fn get_max_order(&self) -> u8 {
self.max_order
}
#[inline]
#[inline(always)]
pub fn get_min_order(&self) -> Option<u8> {
self.min_order
}
#[inline]
#[inline(always)]
pub fn get_bitpix(&self) -> Option<i32> {
self.bitpix
}
#[inline]
#[inline(always)]
pub fn get_formats(&self) -> &[ImageExt] {
&self.formats[..]
}
#[inline]
#[inline(always)]
pub fn get_tile_size(&self) -> i32 {
self.tile_size
}
#[inline]
#[inline(always)]
pub fn get_frame(&self) -> CooSystem {
self.frame
}
#[inline]
#[inline(always)]
pub fn get_sky_fraction(&self) -> Option<f32> {
self.sky_fraction
}
#[inline]
#[inline(always)]
pub fn get_initial_fov(&self) -> Option<f64> {
self.hips_initial_fov
}
#[inline]
#[inline(always)]
pub fn get_initial_ra(&self) -> Option<f64> {
self.hips_initial_ra
}
#[inline]
#[inline(always)]
pub fn get_initial_dec(&self) -> Option<f64> {
self.hips_initial_dec
}
#[inline]
#[inline(always)]
pub fn get_dataproduct_subtype(&self) -> &Option<Vec<String>> {
&self.dataproduct_subtype
}
@@ -131,7 +132,7 @@ pub enum ImageExt {
Fits,
Jpeg,
Png,
Webp
Webp,
}
impl std::fmt::Display for ImageExt {
@@ -140,7 +141,7 @@ impl std::fmt::Display for ImageExt {
ImageExt::Fits => write!(f, "fits"),
ImageExt::Png => write!(f, "png"),
ImageExt::Jpeg => write!(f, "jpg"),
ImageExt::Webp => write!(f, "webp")
ImageExt::Webp => write!(f, "webp"),
}
}
}
@@ -192,7 +193,7 @@ use crate::colormap::CmapLabel;
pub struct HiPSColor {
// transfer function called before evaluating the colormap
pub stretch: TransferFunction,
// low cut
// low cut
pub min_cut: Option<f32>,
// high cut
pub max_cut: Option<f32>,

View File

@@ -1,76 +1,90 @@
use wasm_bindgen::prelude::wasm_bindgen;
use super::color::{Color, ColorRGB};
use super::color::{Color, ColorRGBA};
#[derive(Clone, Debug)]
#[wasm_bindgen]
pub struct MOC {
uuid: String,
opacity: f32,
line_width: f32,
is_showing: bool,
color: ColorRGB,
adaptative_display: bool,
pub line_width: f32,
pub perimeter: bool,
pub filled: bool,
pub edges: bool,
pub show: bool,
pub color: ColorRGBA,
pub fill_color: ColorRGBA,
}
use crate::{color::ColorRGB, Abort};
use std::convert::TryInto;
use crate::Abort;
#[wasm_bindgen]
impl MOC {
#[wasm_bindgen(constructor)]
pub fn new(uuid: String, opacity: f32, line_width: f32, is_showing: bool, hex_color: String, adaptative_display: bool) -> Self {
let color = Color::hexToRgb(hex_color);
let color = color.try_into().unwrap_abort();
pub fn new(
uuid: String,
opacity: f32,
line_width: f32,
perimeter: bool,
filled: bool,
edges: bool,
show: bool,
hex_color: String,
fill_color: String,
) -> Self {
let parse_color = |color_hex_str: String, opacity: f32| -> ColorRGBA {
let rgb = Color::hexToRgb(color_hex_str);
let rgb: ColorRGB = rgb.try_into().unwrap_abort();
ColorRGBA {
r: rgb.r,
g: rgb.g,
b: rgb.b,
a: opacity,
}
};
let color = parse_color(hex_color, 1.0);
let fill_color = parse_color(fill_color, opacity);
Self {
uuid,
opacity,
line_width,
perimeter,
filled,
fill_color,
edges,
color,
is_showing,
adaptative_display
show,
}
}
#[wasm_bindgen(setter)]
pub fn set_is_showing(&mut self, is_showing: bool) {
self.is_showing = is_showing;
}
}
impl MOC {
pub fn get_uuid(&self) -> &String {
&self.uuid
}
pub fn get_color(&self) -> &ColorRGB {
&self.color
}
pub fn get_opacity(&self) -> f32 {
self.opacity
}
pub fn get_line_width(&self) -> f32 {
self.line_width
}
pub fn is_showing(&self) -> bool {
self.is_showing
}
pub fn is_adaptative_display(&self) -> bool {
self.adaptative_display
}
}
impl Default for MOC {
fn default() -> Self {
Self {
uuid: String::from("moc"),
opacity: 1.0,
line_width: 1.0,
is_showing: true,
color: ColorRGB {r: 1.0, g: 0.0, b: 0.0},
adaptative_display: true,
perimeter: false,
edges: true,
filled: false,
show: true,
color: ColorRGBA {
r: 1.0,
g: 0.0,
b: 0.0,
a: 1.0,
},
fill_color: ColorRGBA {
r: 1.0,
g: 0.0,
b: 0.0,
a: 1.0,
},
}
}
}
}

View File

@@ -1,8 +1,8 @@
pub mod bitmap;
pub mod canvas;
pub mod fits;
pub mod format;
pub mod html;
pub mod canvas;
pub mod raw;
pub trait ArrayBuffer: AsRef<js_sys::Object> + std::fmt::Debug {
@@ -179,10 +179,10 @@ impl ArrayBuffer for ArrayF64 {
}
}
use self::html::HTMLImage;
use self::canvas::Canvas;
use wasm_bindgen::JsValue;
use self::html::HTMLImage;
use super::Texture2DArray;
use wasm_bindgen::JsValue;
pub trait Image {
fn tex_sub_image_3d(
&self,
@@ -211,7 +211,7 @@ where
}
}
use std::{rc::Rc, io::Cursor};
use std::{io::Cursor, rc::Rc};
impl<I> Image for Rc<I>
where
I: Image,
@@ -252,7 +252,7 @@ where
}
#[cfg(feature = "webgl2")]
use crate::image::format::{R16I, R32I, R8UI, R64F};
use crate::image::format::{R16I, R32I, R64F, R8UI};
use crate::image::format::{R32F, RGB8U, RGBA8U};
use bitmap::Bitmap;
@@ -298,16 +298,17 @@ impl Image for ImageType {
offset: &Vector3<i32>,
) -> Result<(), JsValue> {
match self {
ImageType::FitsImage { raw_bytes: raw_bytes_buf } => {
ImageType::FitsImage {
raw_bytes: raw_bytes_buf,
} => {
let num_bytes = raw_bytes_buf.length() as usize;
let mut raw_bytes = Vec::with_capacity(num_bytes);
unsafe { raw_bytes.set_len(num_bytes); }
let mut raw_bytes = vec![0; num_bytes];
raw_bytes_buf.copy_to(&mut raw_bytes[..]);
let mut bytes_reader = Cursor::new(raw_bytes.as_slice());
let fits_img = Fits::from_byte_slice(&mut bytes_reader)?;
fits_img.tex_sub_image_3d(textures, offset)?
},
}
ImageType::Canvas { canvas } => canvas.tex_sub_image_3d(textures, offset)?,
ImageType::ImageRgba8u { image } => image.tex_sub_image_3d(textures, offset)?,
ImageType::ImageRgb8u { image } => image.tex_sub_image_3d(textures, offset)?,
@@ -323,4 +324,4 @@ impl Image for ImageType {
Ok(())
}
}
}

View File

@@ -4,8 +4,6 @@ extern crate serde_json;
extern crate futures;
extern crate wasm_streams;
pub mod text;
pub mod image;
mod object;
pub mod shader;
@@ -14,6 +12,7 @@ pub mod webgl_ctx;
#[macro_use]
pub mod log;
pub use log::log;
pub mod colormap;
pub use colormap::{Colormap, Colormaps};

View File

@@ -1,171 +0,0 @@
#[derive(Serialize, Deserialize)]
pub struct LetterTexPosition {
pub x_min: u32,
pub x_max: u32,
pub y_min: u32,
pub y_max: u32,
pub x_advance: u32,
pub y_advance: u32,
pub w: u32,
pub h: u32,
pub bound_xmin: f32,
pub bound_ymin: f32,
}
use std::collections::HashMap;
use serde::{Serialize, Deserialize};
pub struct Font {
pub bitmap: Vec<u8>,
pub letters: HashMap<char, LetterTexPosition>,
}
pub const TEX_SIZE: usize = 256;
mod tests {
#[test]
pub fn rasterize_font() {
#[derive(PartialEq)]
struct Letter {
pub l: char,
pub w: u32,
pub h: u32,
pub x_advance: u32,
pub y_advance: u32,
pub bitmap: Vec<u8>,
pub bounds: fontdue::OutlineBounds,
}
use std::cmp::Ordering;
impl PartialOrd for Letter {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
let w_cmp = other.w.cmp(&self.w);
if Ordering::Equal == w_cmp {
Some(other.h.cmp(&self.h))
} else {
Some(w_cmp)
}
}
}
use super::TEX_SIZE;
use super::LetterTexPosition;
use std::collections::HashMap;
use std::io::Write;
// Read the font data.
let font = include_bytes!("../resources/arial.ttf") as &[u8];
// Parse it into the font type.
let font = fontdue::Font::from_bytes(font, fontdue::FontSettings::default()).unwrap();
// Rasterize and get the layout metrics for the letter 'g' at 17px.
let mut w = 0;
let mut h = 0;
let mut letters = Vec::new();
for c in 0_u8..255_u8 {
let (metrics, bitmap) = font.rasterize(c as char, 16.0);
letters.push(Letter {
w: metrics.width as u32,
h: metrics.height as u32,
x_advance: metrics.advance_width as u32,
y_advance: metrics.advance_height as u32,
bounds: metrics.bounds,
l: c as char,
bitmap,
});
h += metrics.height;
w = std::cmp::max(w, metrics.width);
}
letters.sort_unstable_by(|l, r| {
let w_cmp = r.w.cmp(&l.w);
if Ordering::Equal == w_cmp {
r.h.cmp(&l.h)
} else {
w_cmp
}
});
let mut letters_tex = HashMap::new();
let mut x_min = 0;
let mut y_min = 0;
let mut size_col = letters[0].w;
let mut img = vec![0; TEX_SIZE * TEX_SIZE * 4];
for Letter {
l,
w,
h,
x_advance,
y_advance,
bitmap,
bounds,
} in letters.into_iter()
{
let mut i = 0;
let mut y_max = y_min + h;
if y_max >= TEX_SIZE as u32 {
y_min = 0;
y_max = h;
x_min += size_col;
size_col = w;
}
// Draw here the letter in the tex
let x_max = x_min + w;
letters_tex.insert(
l,
LetterTexPosition {
x_min,
x_max,
y_min,
y_max,
x_advance,
y_advance,
w: x_max - x_min,
h: y_max - y_min,
bound_xmin: bounds.xmin,
bound_ymin: bounds.ymin,
},
);
for y in (y_min as usize)..(y_max as usize) {
for x in (x_min as usize)..(x_max as usize) {
img[4 * (x + TEX_SIZE * y)] = bitmap[i];
img[4 * (x + TEX_SIZE * y) + 1] = bitmap[i];
img[4 * (x + TEX_SIZE * y) + 2] = bitmap[i];
img[4 * (x + TEX_SIZE * y) + 3] = bitmap[i];
i += 1;
}
}
y_min += h;
}
/* Save the jpeg file */
use std::fs::File;
use std::io::BufWriter;
let file = File::create("letters.png").unwrap();
let ref mut w = BufWriter::new(file);
let mut encoder = png::Encoder::new(w, TEX_SIZE as u32, TEX_SIZE as u32); // Width is 2 pixels and height is 1.
encoder.set_color(png::ColorType::Rgba);
encoder.set_depth(png::BitDepth::Eight);
let mut writer = encoder.write_header().unwrap();
writer.write_image_data(&img).unwrap(); // Save
/* Save the letters position */
let letters_tex_serialized = serde_json::to_string(&letters_tex).unwrap();
let mut file = File::create("letters.json").unwrap();
write!(file, "{}", letters_tex_serialized).unwrap();
}
}

View File

@@ -11,7 +11,9 @@ js-sys = "0.3.47"
wasm-bindgen-futures = "0.4.20"
cgmath = "*"
itertools-num = "0.1.3"
healpix = { package = "cdshealpix", git = 'https://github.com/cds-astro/cds-healpix-rust', branch = 'master' }
#healpix = { package = "cdshealpix", git = 'https://github.com/cds-astro/cds-healpix-rust', branch = 'master' }
cdshealpix = { path = "../../../cds-healpix-rust" }
serde = { version = "^1.0.59", features = ["derive"] }
serde_json = "1.0"
serde-wasm-bindgen = "0.4"

File diff suppressed because it is too large.

View File

@@ -6,11 +6,13 @@
use al_task_exec::Executor;
pub type TaskExecutor = Executor<TaskType, TaskResult>;
pub use crate::renderable::catalog::Source;
use crate::math::lonlat::LonLat;
use crate::math::lonlat::LonLatT;
pub enum TaskResult {
TableParsed {
name: String,
sources: Box<[Source]>,
sources: Box<[LonLatT<f32>]>,
},
/*TileSentToGPU {
tile: Tile,
@@ -55,6 +57,7 @@ where
}
use serde::de::DeserializeOwned;
use std::pin::Pin;
use std::task::{Context, Poll};
impl<T> Stream for ParseTableTask<T>
@@ -93,19 +96,25 @@ where
/*use rand::rngs::StdRng;
use rand::Rng;
use rand::SeedableRng;*/
pub struct BuildCatalogIndex {
pub sources: Vec<Source>,
pub struct BuildCatalogIndex<T>
where
T: LonLat<f32> + Clone,
{
pub sources: Vec<T>,
num_sorted_sources: usize,
i: usize,
j: usize,
merging: bool,
new_sorted_sources: Vec<Source>,
new_sorted_sources: Vec<T>,
ready: bool,
chunk_size: usize,
prev_num_sorted_sources: usize,
}
impl BuildCatalogIndex {
pub fn new(sources: Vec<Source>) -> Self {
impl<T> BuildCatalogIndex<T>
where
T: LonLat<f32> + Clone,
{
pub fn new(sources: Vec<T>) -> Self {
let num_sorted_sources = 0;
let merging = false;
let new_sorted_sources = vec![];
@@ -130,7 +139,12 @@ impl BuildCatalogIndex {
const CHUNK_OF_SOURCES_TO_SORT: usize = 1000;
const CHUNK_OF_SORTED_SOURCES_TO_MERGE: usize = 20000;
use crate::Abort;
impl Stream for BuildCatalogIndex {
impl<T> Unpin for BuildCatalogIndex<T> where T: LonLat<f32> + Clone {}
impl<T> Stream for BuildCatalogIndex<T>
where
T: LonLat<f32> + Clone,
{
type Item = ();
/// Attempt to resolve the next item in the stream.
@@ -149,11 +163,16 @@ impl Stream for BuildCatalogIndex {
//let mut rng = StdRng::seed_from_u64(0);
// Get the chunk to sort
(&mut self.sources[a..b]).sort_unstable_by(|s1, s2| {
let (s1_lon, s1_lat) = s1.lonlat();
let (s2_lon, s2_lat) = s2.lonlat();
let s1_lonlat = s1.lonlat();
let s2_lonlat = s2.lonlat();
let idx1 = cdshealpix::nested::hash(7, s1_lon as f64, s1_lat as f64);
let idx2 = cdshealpix::nested::hash(7, s2_lon as f64, s2_lat as f64);
let (s1_lon, s1_lat) =
(s1_lonlat.lon().to_radians(), s1_lonlat.lat().to_radians());
let (s2_lon, s2_lat) =
(s2_lonlat.lon().to_radians(), s2_lonlat.lat().to_radians());
let idx1 = healpix::nested::hash(7, s1_lon as f64, s1_lat as f64);
let idx2 = healpix::nested::hash(7, s2_lon as f64, s2_lat as f64);
let ordering = idx1.partial_cmp(&idx2).unwrap_abort();
match ordering {
@@ -197,11 +216,16 @@ impl Stream for BuildCatalogIndex {
} else {
let s1 = &self.sources[self.j];
let s2 = &self.sources[self.i];
let (s1_lon, s1_lat) = s1.lonlat();
let (s2_lon, s2_lat) = s2.lonlat();
let s1_lonlat = s1.lonlat();
let s2_lonlat = s2.lonlat();
let p1 = cdshealpix::nested::hash(7, s1_lon as f64, s1_lat as f64);
let p2 = cdshealpix::nested::hash(7, s2_lon as f64, s2_lat as f64);
let (s1_lon, s1_lat) =
(s1_lonlat.lon().to_radians(), s1_lonlat.lat().to_radians());
let (s2_lon, s2_lat) =
(s2_lonlat.lon().to_radians(), s2_lonlat.lat().to_radians());
let p1 = healpix::nested::hash(7, s1_lon as f64, s1_lat as f64);
let p2 = healpix::nested::hash(7, s2_lon as f64, s2_lat as f64);
if p1 <= p2 {
let v = self.sources[self.j].clone();
self.j += 1;
@@ -260,29 +284,29 @@ where
texture: &Texture,
image: I,
texture_array: Rc<Texture2DArray>,
conf: &HiPSConfig,
cfg: &HiPSConfig,
) -> ImageTile2GpuTask<I> {
// Index of the texture in the total set of textures
let texture_idx = texture.idx();
// Index of the slice of textures
let num_textures_by_slice = conf.num_textures_by_slice();
let num_textures_by_slice = cfg.num_textures_by_slice();
let idx_slice = texture_idx / num_textures_by_slice;
// Index of the texture in its slice
let idx_in_slice = texture_idx % num_textures_by_slice;
// Index of the column of the texture in its slice
let num_textures_by_side_slice = conf.num_textures_by_side_slice();
let num_textures_by_side_slice = cfg.num_textures_by_side_slice();
let idx_col_in_slice = idx_in_slice / num_textures_by_side_slice;
// Index of the row of the texture in its slice
let idx_row_in_slice = idx_in_slice % num_textures_by_side_slice;
// Row and column indexes of the tile in its texture
let (idx_col_in_tex, idx_row_in_tex) = cell.get_offset_in_texture_cell(conf);
let (idx_col_in_tex, idx_row_in_tex) = cell.get_offset_in_texture_cell(cfg.delta_depth());
// The size of the global texture containing the tiles
let texture_size = conf.get_texture_size();
let texture_size = cfg.get_texture_size();
// The size of a tile in its texture
let tile_size = conf.get_tile_size();
let tile_size = cfg.get_tile_size();
// Offset in the slice in pixels
let offset = Vector3::new(
@@ -303,4 +327,4 @@ where
.tex_sub_image_3d(&self.texture_array, &self.offset)?;
Ok(true)
}
}
}

View File

@@ -1,13 +1,20 @@
use cgmath::{Vector2, Vector4, Matrix4};
use cgmath::{Matrix4, Vector2};
use crate::math::projection::coo_space::{XYNDC, XYZWWorld, XYZWModel};
use crate::math::spherical::FieldOfViewType;
use crate::math::projection::coo_space::{XYZWModel, XYZWWorld, XYNDC};
use crate::math::sph_geom::region::{Intersection, PoleContained, Region};
use crate::math::{projection::Projection, sph_geom::bbox::BoundingBox};
use crate::LonLatT;
use cgmath::Vector3;
use crate::ProjectionType;
use std::iter;
fn ndc_to_world(
ndc_coo: &[XYNDC],
ndc_to_clip: &Vector2<f64>,
clip_zoom_factor: f64,
projection: &ProjectionType
projection: &ProjectionType,
) -> Option<Vec<XYZWWorld>> {
// Deproject the FOV from ndc to the world space
let mut world_coo = Vec::with_capacity(ndc_coo.len());
@@ -48,30 +55,28 @@ fn linspace(a: f64, b: f64, num: usize) -> Vec<f64> {
res
}
const NUM_VERTICES_WIDTH: usize = 4;
const NUM_VERTICES_HEIGHT: usize = 4;
const NUM_VERTICES_WIDTH: usize = 3;
const NUM_VERTICES_HEIGHT: usize = 3;
const NUM_VERTICES: usize = 4 + 2 * NUM_VERTICES_WIDTH + 2 * NUM_VERTICES_HEIGHT;
// This struct belongs to the CameraViewPort
pub struct FieldOfViewVertices {
ndc_coo: Vec<XYNDC>,
world_coo: Option<Vec<XYZWWorld>>,
model_coo: Option<Vec<XYZWModel>>,
pub struct FieldOfView {
// Vertices
ndc_vertices: Vec<XYNDC>,
world_vertices: Option<Vec<XYZWWorld>>,
model_vertices: Option<Vec<XYZWModel>>,
// Meridians and parallels contained
// in the field of view
great_circles: FieldOfViewType,
//moc: [Option<HEALPixCoverage>; al_api::coo_system::NUM_COOSYSTEM],
//depth: u8,
reg: Region,
}
use crate::ProjectionType;
impl FieldOfViewVertices {
impl FieldOfView {
pub fn new(
// ndc to clip parameters
ndc_to_clip: &Vector2<f64>,
clip_zoom_factor: f64,
mat: &Matrix4<f64>,
center: &Vector4<f64>,
projection: &ProjectionType
// rotation
rotation_mat: &Matrix4<f64>,
// projection
projection: &ProjectionType,
) -> Self {
let mut x_ndc = linspace(-1., 1., NUM_VERTICES_WIDTH + 2);
@@ -88,83 +93,143 @@ impl FieldOfViewVertices {
y_ndc.extend(linspace(1., -1., NUM_VERTICES_HEIGHT + 2));
y_ndc.pop();
let mut ndc_coo = Vec::with_capacity(NUM_VERTICES);
let mut ndc_vertices = Vec::with_capacity(NUM_VERTICES);
for idx_vertex in 0..NUM_VERTICES {
ndc_coo.push(Vector2::new(x_ndc[idx_vertex], y_ndc[idx_vertex]));
ndc_vertices.push(Vector2::new(x_ndc[idx_vertex], y_ndc[idx_vertex]));
}
let world_coo = ndc_to_world(&ndc_coo, ndc_to_clip, clip_zoom_factor, projection);
let model_coo = world_coo
let world_vertices = ndc_to_world(&ndc_vertices, ndc_to_clip, clip_zoom_factor, projection);
let model_vertices = world_vertices
.as_ref()
.map(|world_coo| world_to_model(world_coo, mat));
.map(|world_vertex| world_to_model(world_vertex, rotation_mat));
let great_circles = if let Some(vertices) = &model_coo {
FieldOfViewType::new_polygon(vertices, center)
let reg = if let Some(vertices) = &model_vertices {
Region::from_vertices(vertices, &rotation_mat.z)
} else {
FieldOfViewType::Allsky
Region::AllSky
};
FieldOfViewVertices {
ndc_coo,
world_coo,
model_coo,
great_circles,
// Allsky case
FieldOfView {
ndc_vertices,
world_vertices,
model_vertices,
reg,
}
}
pub fn set_fov(
// Update the vertices
pub fn set_aperture(
&mut self,
ndc_to_clip: &Vector2<f64>,
clip_zoom_factor: f64,
w2m: &Matrix4<f64>,
center: &Vector4<f64>,
projection: &ProjectionType
rotate_mat: &Matrix4<f64>,
projection: &ProjectionType,
) {
self.world_coo = ndc_to_world(&self.ndc_coo, ndc_to_clip, clip_zoom_factor, projection);
self.set_rotation(w2m, center);
self.world_vertices = ndc_to_world(
&self.ndc_vertices,
ndc_to_clip,
clip_zoom_factor,
projection,
);
self.set_rotation(rotate_mat);
}
pub fn set_rotation(
&mut self,
w2m: &Matrix4<f64>,
center: &Vector4<f64>,
) {
if let Some(world_coo) = &self.world_coo {
self.model_coo = Some(world_to_model(world_coo, w2m));
pub fn set_rotation(&mut self, rotate_mat: &Matrix4<f64>) {
if let Some(world_vertices) = &self.world_vertices {
self.model_vertices = Some(world_to_model(world_vertices, rotate_mat));
} else {
self.model_coo = None;
self.model_vertices = None;
}
self.set_great_circles(center);
}
fn set_great_circles(&mut self, center: &Vector4<f64>) {
if let Some(vertices) = &self.model_coo {
self.great_circles = FieldOfViewType::new_polygon(vertices, center);
if let Some(vertices) = &self.model_vertices {
self.reg = Region::from_vertices(vertices, &rotate_mat.z);
} else {
self.great_circles = FieldOfViewType::Allsky;
self.reg = Region::AllSky;
}
}
/*pub fn get_depth(&self) -> u8 {
self.depth
}*/
// Interface over the region object
pub fn contains(&self, lonlat: &LonLatT<f64>) -> bool {
self.reg.contains(lonlat)
}
pub fn intersects_parallel(&self, lat: f64) -> Intersection {
self.reg.intersects_parallel(lat)
}
pub fn intersects_meridian(&self, lon: f64) -> Intersection {
self.reg.intersects_meridian(lon)
}
pub fn intersects_great_circle(&self, n: &Vector3<f64>) -> Intersection {
self.reg.intersects_great_circle(n)
}
pub fn intersects_great_circle_arc(
&self,
lonlat1: &LonLatT<f64>,
lonlat2: &LonLatT<f64>,
) -> Intersection {
self.reg.intersects_great_circle_arc(lonlat1, lonlat2)
}
// Accessors
pub fn get_bounding_box(&self) -> &BoundingBox {
match &self.reg {
Region::AllSky => &crate::math::sph_geom::bbox::ALLSKY_BBOX,
Region::Polygon { bbox, .. } => bbox,
}
}
pub fn get_vertices(&self) -> Option<&Vec<XYZWModel>> {
self.model_coo.as_ref()
self.model_vertices.as_ref()
}
pub fn get_bounding_box(&self) -> &BoundingBox {
self.great_circles.get_bounding_box()
pub fn is_intersecting_zero_meridian(&self) -> bool {
match &self.reg {
Region::AllSky => true,
Region::Polygon {
is_intersecting_zero_meridian,
..
} => *is_intersecting_zero_meridian,
}
}
pub fn is_allsky(&self) -> bool {
matches!(self.reg, Region::AllSky)
}
pub fn contains_pole(&self) -> bool {
self.great_circles.contains_pole()
match &self.reg {
Region::AllSky => true,
Region::Polygon { poles, .. } => *poles != PoleContained::None,
}
}
pub fn _type(&self) -> &FieldOfViewType {
&self.great_circles
pub fn contains_north_pole(&self) -> bool {
match &self.reg {
Region::AllSky => true,
Region::Polygon { poles, .. } => {
*poles == PoleContained::North || *poles == PoleContained::Both
}
}
}
pub fn contains_south_pole(&self) -> bool {
match &self.reg {
Region::AllSky => true,
Region::Polygon { poles, .. } => {
*poles == PoleContained::South || *poles == PoleContained::Both
}
}
}
pub fn contains_both_poles(&self) -> bool {
match &self.reg {
Region::AllSky => true,
Region::Polygon { poles, .. } => *poles == PoleContained::Both,
}
}
}
use crate::math::{projection::Projection, spherical::BoundingBox};
use std::iter;

View File

@@ -1,5 +1,63 @@
pub mod viewport;
use crate::math::lonlat::LonLat;
use crate::math::projection::coo_space::XYZWModel;
pub use viewport::{CameraViewPort, UserAction};
pub mod fov;
pub use fov::FieldOfViewVertices;
pub use fov::FieldOfView;
pub mod view_hpx_cells;
use crate::CooSystem;
use crate::HEALPixCoverage;
use crate::ProjectionType;
pub fn build_fov_coverage(
depth: u8,
fov: &FieldOfView,
camera_center: &XYZWModel,
camera_frame: CooSystem,
frame: CooSystem,
proj: &ProjectionType,
) -> HEALPixCoverage {
if let Some(vertices) = fov.get_vertices() {
// The vertices coming from the camera are in a specific coo sys
// but cdshealpix accepts them to be given in ICRS coo sys
let vertices_iter = vertices
.iter()
.map(|v| crate::coosys::apply_coo_system(camera_frame, frame, v));
// Check if the polygon is too small with respect to the angular size
// of a cell at depth order
let fov_bbox = fov.get_bounding_box();
let d_lon = fov_bbox.get_lon_size();
let d_lat = fov_bbox.get_lat_size();
let size_hpx_cell = crate::healpix::utils::MEAN_HPX_CELL_RES[depth as usize];
if d_lon < size_hpx_cell && d_lat < size_hpx_cell {
// Polygon is small and this may result in a moc having only a few cells
// One can build the moc from a list of cells
// This particular case avoids falling into a panic in cdshealpix
// See https://github.com/cds-astro/cds-moc-rust/issues/3
let hpx_idxs_iter = vertices_iter.map(|v| {
let (lon, lat) = crate::math::lonlat::xyzw_to_radec(&v);
::healpix::nested::hash(depth, lon.0, lat.0)
});
HEALPixCoverage::from_fixed_hpx_cells(depth, hpx_idxs_iter, Some(vertices.len()))
} else {
// The polygon is not too small for the depth asked
let inside_vertex = crate::coosys::apply_coo_system(camera_frame, frame, camera_center);
// Prefer to query from_polygon with depth >= 2
let moc =
HEALPixCoverage::from_3d_coos(depth, vertices_iter, &inside_vertex.truncate());
moc
}
} else {
let biggest_fov_rad = proj.aperture_start().to_radians();
let lonlat = camera_center.lonlat();
HEALPixCoverage::from_cone(&lonlat, biggest_fov_rad * 0.5, depth)
}
}

View File

@@ -0,0 +1,288 @@
use crate::healpix::cell::HEALPixCell;
use crate::healpix::cell::MAX_HPX_DEPTH;
use crate::camera::XYZWModel;
use crate::math::projection::*;
use crate::HEALPixCoverage;
use std::ops::Range;
use al_api::cell::HEALPixCellProjeted;
pub fn project(
cell: HEALPixCellProjeted,
camera: &CameraViewPort,
projection: &ProjectionType,
) -> Option<HEALPixCellProjeted> {
match projection {
ProjectionType::Hpx(_) => {
let tri_idx_in_collignon_zone = |x: f64, y: f64| -> u8 {
let zoom_factor = camera.get_clip_zoom_factor() as f32;
let x = (((x as f32) / camera.get_width()) - 0.5) * zoom_factor;
let y = (((y as f32) / camera.get_height()) - 0.5) * zoom_factor;
let x_zone = ((x + 0.5) * 4.0).floor() as u8;
x_zone + 4 * ((y > 0.0) as u8)
};
let is_in_collignon = |_x: f64, y: f64| -> bool {
let y = (((y as f32) / camera.get_height()) - 0.5)
* (camera.get_clip_zoom_factor() as f32);
!(-0.25..=0.25).contains(&y)
};
if is_in_collignon(cell.vx[0], cell.vy[0])
&& is_in_collignon(cell.vx[1], cell.vy[1])
&& is_in_collignon(cell.vx[2], cell.vy[2])
&& is_in_collignon(cell.vx[3], cell.vy[3])
{
let all_vertices_in_same_collignon_region =
tri_idx_in_collignon_zone(cell.vx[0], cell.vy[0])
== tri_idx_in_collignon_zone(cell.vx[1], cell.vy[1])
&& (tri_idx_in_collignon_zone(cell.vx[0], cell.vy[0])
== tri_idx_in_collignon_zone(cell.vx[2], cell.vy[2]))
&& (tri_idx_in_collignon_zone(cell.vx[0], cell.vy[0])
== tri_idx_in_collignon_zone(cell.vx[3], cell.vy[3]));
if !all_vertices_in_same_collignon_region {
None
} else {
Some(cell)
}
} else {
Some(cell)
}
}
_ => Some(cell),
}
}
pub(super) struct ViewHpxCells {
hpx_cells: [HpxCells; NUM_COOSYSTEM],
reg_frames: [u8; NUM_COOSYSTEM],
}
impl ViewHpxCells {
pub(super) fn new() -> Self {
let reg_frames = [0; NUM_COOSYSTEM];
let hpx_cells = [
HpxCells::new(CooSystem::ICRS),
HpxCells::new(CooSystem::GAL),
];
Self {
hpx_cells,
reg_frames,
}
}
pub(super) fn register_frame(
&mut self,
camera_depth: u8,
fov: &FieldOfView,
center: &XYZWModel,
camera_frame: CooSystem,
proj: &ProjectionType,
// survey frame
frame: CooSystem,
) {
self.reg_frames[frame as usize] += 1;
if self.reg_frames[frame as usize] == 1 {
// a new frame has been added
self.update(camera_depth, fov, center, camera_frame, proj);
}
}
pub(super) fn unregister_frame(
&mut self,
camera_depth: u8,
fov: &FieldOfView,
center: &XYZWModel,
camera_frame: CooSystem,
proj: &ProjectionType,
// survey frame
frame: CooSystem,
) {
if self.reg_frames[frame as usize] > 0 {
self.reg_frames[frame as usize] -= 1;
}
if self.reg_frames[frame as usize] == 0 {
// a frame has been deleted
self.update(camera_depth, fov, center, camera_frame, proj);
}
}
pub(super) fn update(
&mut self,
camera_depth: u8,
fov: &FieldOfView,
center: &XYZWModel,
camera_frame: CooSystem,
proj: &ProjectionType,
) {
for (frame, num_req) in self.reg_frames.iter().enumerate() {
// if there are surveys/camera requesting the coverage
if *num_req > 0 {
self.hpx_cells[frame].update(camera_depth, fov, center, camera_frame, proj);
}
}
}
pub(super) fn get_cells<'a>(
&'a mut self,
depth: u8,
frame: CooSystem,
) -> impl Iterator<Item = &'a HEALPixCell> {
self.hpx_cells[frame as usize].get_cells(depth)
}
pub(super) fn get_cov(&self, frame: CooSystem) -> &HEALPixCoverage {
self.hpx_cells[frame as usize].get_cov()
}
}
// Contains the cells being in the FOV for a specific
pub struct HpxCells {
frame: CooSystem,
// the set of cells all depth
cells: Vec<HEALPixCell>,
// An index vector referring to the indices of each depth cells
idx_rng: [Option<Range<usize>>; MAX_HPX_DEPTH as usize + 1],
// Coverage created in the frame
cov: HEALPixCoverage,
}
impl Default for HpxCells {
fn default() -> Self {
Self::new(CooSystem::ICRS)
}
}
use crate::camera::CameraViewPort;
use al_api::coo_system::{CooSystem, NUM_COOSYSTEM};
use super::FieldOfView;
impl HpxCells {
pub fn new(frame: CooSystem) -> Self {
let cells = Vec::new();
let cov = HEALPixCoverage::empty(29);
let idx_rng = Default::default();
Self {
cells,
idx_rng,
cov,
frame,
}
}
// This method is called whenever the user does an action
// that moves the camera.
// Everytime the user moves or zoom, the views must be updated
// The new cells obtained are used for sending new requests
fn update(
&mut self,
camera_depth: u8,
fov: &FieldOfView,
center: &XYZWModel,
camera_frame: CooSystem,
proj: &ProjectionType,
) {
// Compute the new coverage for that frame
self.cov =
super::build_fov_coverage(camera_depth, fov, center, camera_frame, self.frame, proj);
// Clear the old cells
self.cells.clear();
self.idx_rng = Default::default();
// Compute the cells at the tile_depth
let tile_depth_cells_iter = self
.cov
.flatten_to_fixed_depth_cells()
.map(|idx| HEALPixCell(camera_depth, idx));
let num_past = self.cells.len();
self.cells.extend(tile_depth_cells_iter);
let num_cur = self.cells.len();
self.idx_rng[camera_depth as usize] = Some(num_past..num_cur);
}
// Accessors
// depth MUST be < the camera tile depth
pub fn get_cells<'a>(&'a mut self, depth: u8) -> impl Iterator<Item = &'a HEALPixCell> {
let Range { start, end } = if let Some(idx) = self.idx_rng[depth as usize].as_ref() {
idx.start..idx.end
} else {
// compute the cells from the coverage
let degraded_moc = self.cov.degraded(depth);
let cells_iter = degraded_moc
.flatten_to_fixed_depth_cells()
.map(|idx| HEALPixCell(depth, idx));
// add them and store the cells for later reuse
let num_past = self.cells.len();
self.cells.extend(cells_iter);
let num_cur = self.cells.len();
self.idx_rng[depth as usize] = Some(num_past..num_cur);
num_past..num_cur
};
self.cells[start..end].iter()
}
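// Editor's note (not part of the diff): cells at a coarser depth than the camera tile
// depth are derived lazily by degrading the coverage MOC, and the resulting slice is
// memoized in `idx_rng` so later calls at the same depth reuse the stored range.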
/*
#[inline(always)]
pub fn num_of_cells(&self, depth: u8) -> usize {
if let Some(rng) = &self.idx_rng[depth as usize] {
rng.end - rng.start
} else {
0
}
}
*/
/*#[inline]
pub fn get_depth(&self) -> u8 {
self.depth
}*/
/*#[inline]
pub fn get_frame(&self) -> &CooSystem {
&self.frame
}*/
/*#[inline]
pub fn is_new(&self, cell: &HEALPixCell) -> bool {
if let Some(&is_cell_new) = self.cells.get(cell) {
is_cell_new
} else {
false
}
}*/
#[inline(always)]
pub fn get_cov(&self) -> &HEALPixCoverage {
&self.cov
}
/*#[inline]
pub fn is_there_new_cells_added(&self) -> bool {
//self.new_cells.is_there_new_cells_added()
self.is_new_cells_added
}*/
/*#[inline]
pub fn has_view_changed(&self) -> bool {
//self.new_cells.is_there_new_cells_added()
!self.view_unchanged
}*/
}

View File

@@ -6,14 +6,13 @@ pub enum UserAction {
Starting = 4,
}
use super::fov::FieldOfViewVertices;
use crate::math::{
projection::coo_space::XYZWModel,
spherical::BoundingBox,
projection::domain::sdf::ProjDef
};
use cgmath::{Matrix4, Vector2};
use super::{fov::FieldOfView, view_hpx_cells::ViewHpxCells};
use crate::healpix::cell::HEALPixCell;
use crate::healpix::coverage::HEALPixCoverage;
use crate::math::{projection::coo_space::XYZWModel, projection::domain::sdf::ProjDef};
use cgmath::{Matrix4, Vector2};
pub struct CameraViewPort {
// The field of view angle
aperture: Angle<f64>,
@@ -45,7 +44,10 @@ pub struct CameraViewPort {
// The vertices in model space of the camera
// This is useful for computing views according
// to different image surveys
vertices: FieldOfViewVertices,
fov: FieldOfView,
// A data structure storing HEALPix cells contained in the fov
// for different frames and depths
view_hpx_cells: ViewHpxCells,
// A flag telling whether the camera has been moved during the frame
moved: bool,
@@ -63,7 +65,7 @@ pub struct CameraViewPort {
// A reference to the WebGL2 context
gl: WebGlContext,
system: CooSystem,
coo_sys: CooSystem,
reversed_longitude: bool,
}
use al_api::coo_system::CooSystem;
@@ -71,7 +73,7 @@ use al_core::WebGlContext;
use crate::{
coosys,
math::{angle::Angle, projection::Projection, rotation::Rotation, spherical::FieldOfViewType},
math::{angle::Angle, projection::Projection, rotation::Rotation},
};
use crate::LonLatT;
@@ -79,12 +81,16 @@ use cgmath::{SquareMatrix, Vector4};
use wasm_bindgen::JsCast;
const MAX_DPI_LIMIT: f32 = 3.0;
use crate::Abort;
use crate::math;
use crate::time::Time;
use crate::Abort;
use crate::ArcDeg;
impl CameraViewPort {
pub fn new(gl: &WebGlContext, system: CooSystem, projection: &ProjectionType) -> CameraViewPort {
pub fn new(
gl: &WebGlContext,
coo_sys: CooSystem,
projection: &ProjectionType,
) -> CameraViewPort {
let last_user_action = UserAction::Starting;
let aperture = Angle(projection.aperture_start());
@@ -120,7 +126,7 @@ impl CameraViewPort {
let ndc_to_clip = Vector2::new(1.0, (height as f64) / (width as f64));
let clip_zoom_factor = 1.0;
let vertices = FieldOfViewVertices::new(&ndc_to_clip, clip_zoom_factor, &w2m, &center, projection);
let fov = FieldOfView::new(&ndc_to_clip, clip_zoom_factor, &w2m, projection);
let gl = gl.clone();
let is_allsky = true;
@@ -130,7 +136,8 @@ impl CameraViewPort {
let tile_depth = 0;
let camera = CameraViewPort {
let view_hpx_cells = ViewHpxCells::new();
CameraViewPort {
// The field of view angle
aperture,
center,
@@ -153,10 +160,9 @@ impl CameraViewPort {
// Internal variable used for projection purposes
ndc_to_clip,
clip_zoom_factor,
// The vertices in model space of the camera
// This is useful for computing views according
// to different image surveys
vertices,
// The field of view
fov,
view_hpx_cells,
// A flag telling whether the camera has been moved during the frame
moved,
// A flag telling if the camera has zoomed during the frame
@@ -173,13 +179,44 @@ impl CameraViewPort {
// A reference to the WebGL2 context
gl,
// coo system
system,
coo_sys,
// a flag telling if the viewport has a reversed longitude axis
reversed_longitude,
};
camera.set_canvas_size();
}
}
camera
pub fn register_view_frame(&mut self, frame: CooSystem, proj: &ProjectionType) {
self.view_hpx_cells.register_frame(
self.tile_depth,
&self.fov,
&self.center,
self.coo_sys,
proj,
frame,
);
}
pub fn unregister_view_frame(&mut self, frame: CooSystem, proj: &ProjectionType) {
self.view_hpx_cells.unregister_frame(
self.tile_depth,
&self.fov,
&self.center,
self.coo_sys,
proj,
frame,
);
}
pub fn get_cov(&self, frame: CooSystem) -> &HEALPixCoverage {
self.view_hpx_cells.get_cov(frame)
}
pub fn get_hpx_cells<'a>(
&'a mut self,
depth: u8,
frame: CooSystem,
) -> impl Iterator<Item = &'a HEALPixCell> {
self.view_hpx_cells.get_cells(depth, frame)
}
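// Editor's sketch (not part of the diff) of how a survey is expected to use this API,
// assuming a `camera: &mut CameraViewPort` and a GAL-frame HiPS:
//
//     camera.register_view_frame(CooSystem::GAL, &proj);
//     let cells: Vec<_> = camera.get_hpx_cells(depth, CooSystem::GAL).cloned().collect();
//     // ... request the corresponding tiles ...
//     camera.unregister_view_frame(CooSystem::GAL, &proj);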
fn recompute_scissor(&self) {
@@ -211,27 +248,15 @@ impl CameraViewPort {
let h = (br_s.y - tr_s.y).min(self.height as f64);
// Specify a scissor here
self.gl.scissor((tl_s.x as i32).max(0), (tl_s.y as i32).max(0), w as i32, h as i32);
self.gl.scissor(
(tl_s.x as i32).max(0),
(tl_s.y as i32).max(0),
w as i32,
h as i32,
);
}
fn set_canvas_size(&self) {
let canvas = self.gl
.canvas()
.unwrap_abort()
.dyn_into::<web_sys::HtmlCanvasElement>()
.unwrap_abort();
canvas.set_width(self.width as u32);
canvas.set_height(self.height as u32);
// Once the canvas size is changed, we have to set the viewport as well
self.gl.viewport(0, 0, self.width as i32, self.height as i32);
}
pub fn contains_pole(&self) -> bool {
self.vertices.contains_pole()
}
pub fn set_screen_size(&mut self, width: f32, height: f32, projection: &ProjectionType) {
fn set_canvas_size(&self, width: f32, height: f32) {
let canvas = self
.gl
.canvas()
@@ -239,8 +264,15 @@ impl CameraViewPort {
.dyn_into::<web_sys::HtmlCanvasElement>()
.unwrap_abort();
self.width = (width as f32) * self.dpi;
self.height = (height as f32) * self.dpi;
// grid canvas
let document = web_sys::window().unwrap_abort().document().unwrap_abort();
let grid_canvas = document
// Inside it, retrieve the canvas
.get_elements_by_class_name("aladin-gridCanvas")
.get_with_index(0)
.unwrap_abort()
.dyn_into::<web_sys::HtmlCanvasElement>()
.unwrap_abort();
canvas
.style()
@@ -250,17 +282,38 @@ impl CameraViewPort {
.style()
.set_property("height", &format!("{}px", height))
.unwrap_abort();
grid_canvas
.style()
.set_property("width", &format!("{}px", width))
.unwrap_abort();
grid_canvas
.style()
.set_property("height", &format!("{}px", height))
.unwrap_abort();
canvas.set_width(self.width as u32);
canvas.set_height(self.height as u32);
grid_canvas.set_width(self.width as u32);
grid_canvas.set_height(self.height as u32);
// Once the canvas size is changed, we have to set the viewport as well
self.gl
.viewport(0, 0, self.width as i32, self.height as i32);
}
pub fn set_screen_size(&mut self, width: f32, height: f32, projection: &ProjectionType) {
self.width = (width as f32) * self.dpi;
self.height = (height as f32) * self.dpi;
self.aspect = width / height;
// Compute the new clip zoom factor
self.compute_ndc_to_clip_factor(projection);
self.vertices.set_fov(
self.fov.set_aperture(
&self.ndc_to_clip,
self.clip_zoom_factor,
&self.w2m,
&self.center,
projection,
);
let proj_area = projection.get_area();
@@ -269,22 +322,16 @@ impl CameraViewPort {
self,
));
// Update the size of the canvas
self.set_canvas_size();
self.set_canvas_size(width, height);
// Once it is done, recompute the scissor
self.recompute_scissor();
}
pub fn compute_ndc_to_clip_factor(&mut self, proj: &ProjectionType) {
self.ndc_to_clip = if self.height < self.width {
Vector2::new(
1.0,
(self.height as f64) / (self.width as f64)
)
Vector2::new(1.0, (self.height as f64) / (self.width as f64))
} else {
Vector2::new(
(self.width as f64) / (self.height as f64),
1.0,
)
Vector2::new((self.width as f64) / (self.height as f64), 1.0)
};
let bounds_size_ratio = proj.bounds_size_ratio();
@@ -310,8 +357,15 @@ impl CameraViewPort {
};
let can_unzoom_more = match proj {
ProjectionType::Tan(_) | ProjectionType::Mer(_) | ProjectionType::Air(_) | ProjectionType::Stg(_) | ProjectionType::Car(_) | ProjectionType::Cea(_) | ProjectionType::Cyp(_) | ProjectionType::Hpx(_) => false,
_ => true
ProjectionType::Tan(_)
| ProjectionType::Mer(_)
| ProjectionType::Air(_)
| ProjectionType::Stg(_)
| ProjectionType::Car(_)
| ProjectionType::Cea(_)
| ProjectionType::Cyp(_)
| ProjectionType::Hpx(_) => false,
_ => true,
};
let aperture_start: Angle<f64> = ArcDeg(proj.aperture_start()).into();
@@ -326,7 +380,7 @@ impl CameraViewPort {
self.clip_zoom_factor = if let Some(p0) = proj.world_to_clip_space(&v0) {
if let Some(p1) = proj.world_to_clip_space(&v1) {
(0.5*(p1.x - p0.x).abs()).min(1.0)
(0.5 * (p1.x - p0.x).abs()).min(1.0)
} else {
(aperture / aperture_start).0
}
@@ -346,14 +400,10 @@ impl CameraViewPort {
// Project this vertex into the screen
self.moved = true;
self.zoomed = true;
self.time_last_move = Time::now();
self.vertices.set_fov(
&self.ndc_to_clip,
self.clip_zoom_factor,
&self.w2m,
&self.center,
proj
);
self.fov
.set_aperture(&self.ndc_to_clip, self.clip_zoom_factor, &self.w2m, proj);
let proj_area = proj.get_area();
self.is_allsky = !proj_area.is_in(&math::projection::ndc_to_clip_space(
&Vector2::new(-1.0, -1.0),
@@ -364,6 +414,15 @@ impl CameraViewPort {
// recompute the scissor with the new aperture
self.recompute_scissor();
// compute the hpx cells
self.view_hpx_cells.update(
self.tile_depth,
&self.fov,
&self.center,
self.get_coo_system(),
proj,
);
}
fn compute_tile_depth(&mut self) {
@@ -394,64 +453,70 @@ impl CameraViewPort {
self.tile_depth
}
pub fn rotate(&mut self, axis: &cgmath::Vector3<f64>, angle: Angle<f64>) {
pub fn rotate(
&mut self,
axis: &cgmath::Vector3<f64>,
angle: Angle<f64>,
proj: &ProjectionType,
) {
// Rotate the axis:
let drot = Rotation::from_axis_angle(axis, angle);
self.w2m_rot = drot * self.w2m_rot;
self.update_rot_matrices();
self.update_rot_matrices(proj);
}
pub fn set_center(&mut self, lonlat: &LonLatT<f64>, system: &CooSystem) {
pub fn set_center(&mut self, lonlat: &LonLatT<f64>, coo_sys: CooSystem, proj: &ProjectionType) {
let icrs_pos: Vector4<_> = lonlat.vector();
let view_pos = coosys::apply_coo_system(
system,
self.get_system(),
&icrs_pos,
);
let view_pos = coosys::apply_coo_system(coo_sys, self.get_coo_system(), &icrs_pos);
let rot = Rotation::from_sky_position(&view_pos);
// Apply the rotation to the camera to go
// to the next lonlat
self.set_rotation(&rot);
self.set_rotation(&rot, proj);
}
fn set_rotation(&mut self, rot: &Rotation<f64>) {
fn set_rotation(&mut self, rot: &Rotation<f64>, proj: &ProjectionType) {
self.w2m_rot = *rot;
self.update_rot_matrices();
self.update_rot_matrices(proj);
}
pub fn get_field_of_view(&self) -> &FieldOfViewType {
self.vertices._type()
pub fn get_field_of_view(&self) -> &FieldOfView {
&self.fov
}
/*pub fn get_coverage(&mut self, hips_frame: &CooSystem) -> &HEALPixCoverage {
self.vertices.get_coverage(&self.system, hips_frame, &self.center)
}*/
pub fn set_coo_system(&mut self, new_system: CooSystem) {
pub fn set_coo_system(&mut self, new_coo_sys: CooSystem, proj: &ProjectionType) {
// Compute the center position according to the new coordinate frame system
let new_center = coosys::apply_coo_system(&self.system, &new_system, &self.center);
let new_center = coosys::apply_coo_system(self.coo_sys, new_coo_sys, &self.center);
// Create a rotation object from that position
let new_rotation = Rotation::from_sky_position(&new_center);
// Apply it to the center of the view
self.set_rotation(&new_rotation);
self.set_rotation(&new_rotation, proj);
// unregister the coo sys
//self.view_hpx_cells.unregister_frame(self.coo_sys);
// register the new one
//self.view_hpx_cells.register_frame(new_coo_sys);
// recompute the coverage if necessary
self.view_hpx_cells
.update(self.tile_depth, &self.fov, &self.center, new_coo_sys, proj);
// Record the new system
self.system = new_system;
self.coo_sys = new_coo_sys;
}
pub fn set_longitude_reversed(&mut self, reversed_longitude: bool) {
pub fn set_longitude_reversed(&mut self, reversed_longitude: bool, proj: &ProjectionType) {
if self.reversed_longitude != reversed_longitude {
self.rotation_center_angle = -self.rotation_center_angle;
self.update_rot_matrices();
self.update_rot_matrices(proj);
}
self.reversed_longitude = reversed_longitude;
// The camera is reversed => it has moved
self.moved = true;
self.time_last_move = Time::now();
}
pub fn get_longitude_reversed(&self) -> bool {
@@ -492,7 +557,7 @@ impl CameraViewPort {
}
pub fn get_vertices(&self) -> Option<&Vec<XYZWModel>> {
self.vertices.get_vertices()
self.fov.get_vertices()
}
pub fn get_screen_size(&self) -> Vector2<f32> {
@@ -537,10 +602,6 @@ impl CameraViewPort {
&self.center
}
pub fn get_bounding_box(&self) -> &BoundingBox {
self.vertices.get_bounding_box()
}
pub fn is_allsky(&self) -> bool {
self.is_allsky
}
@@ -549,38 +610,45 @@ impl CameraViewPort {
self.time_last_move
}
pub fn get_system(&self) -> &CooSystem {
&self.system
pub fn get_coo_system(&self) -> CooSystem {
self.coo_sys
}
pub fn set_rotation_around_center(&mut self, theta: Angle<f64>) {
pub fn set_rotation_around_center(&mut self, theta: Angle<f64>, proj: &ProjectionType) {
self.rotation_center_angle = theta;
self.update_rot_matrices();
self.update_rot_matrices(proj);
}
pub fn get_rotation_around_center(&self) -> &Angle<f64> {
&self.rotation_center_angle
}
}
use cgmath::Matrix;
use crate::ProjectionType;
use cgmath::Matrix;
//use crate::coo_conversion::CooBaseFloat;
impl CameraViewPort {
// private methods
fn update_rot_matrices(&mut self) {
fn update_rot_matrices(&mut self, proj: &ProjectionType) {
self.w2m = (&(self.w2m_rot)).into();
self.m2w = self.w2m.transpose();
// Update the center with the new rotation
self.update_center();
// Rotate the fov vertices
self.vertices
.set_rotation(&self.w2m, &self.center);
self.fov.set_rotation(&self.w2m);
self.time_last_move = Time::now();
self.last_user_action = UserAction::Moving;
self.moved = true;
// compute the hpx cells
self.view_hpx_cells.update(
self.tile_depth,
&self.fov,
&self.center,
self.get_coo_system(),
proj,
);
}
fn update_center(&mut self) {
@@ -593,6 +661,7 @@ impl CameraViewPort {
// Re-update the model matrix to take into account the rotation
// by theta around the center axis
self.final_rot = center_rot * self.w2m_rot;
self.w2m = (&self.final_rot).into();
self.m2w = self.w2m.transpose();
}

View File

@@ -8,7 +8,7 @@ use al_api::coo_system::CooSystem;
/// The core projections are always performed in ICRS J2000,
/// so one must call these methods to convert coordinates to ICRS beforehand.
#[inline]
pub fn apply_coo_system<S>(c1: &CooSystem, c2: &CooSystem, v1: &Vector4<S>) -> Vector4<S>
pub fn apply_coo_system<S>(c1: CooSystem, c2: CooSystem, v1: &Vector4<S>) -> Vector4<S>
where
S: BaseFloat + CooBaseFloat,
{
@@ -28,15 +28,14 @@ mod tests {
#[test]
fn j2000_to_gal() {
use crate::LonLatT;
use crate::ArcDeg;
use crate::math::lonlat::LonLat;
use super::CooSystem;
use crate::math::lonlat::LonLat;
use crate::ArcDeg;
use crate::LonLatT;
let lonlat: LonLatT<f64> = LonLatT::new(ArcDeg(0.0).into(), ArcDeg(0.0).into());
let gal_lonlat =
super::apply_coo_system(&CooSystem::ICRS, &CooSystem::GAL, &lonlat.vector())
.lonlat();
super::apply_coo_system(CooSystem::ICRS, CooSystem::GAL, &lonlat.vector()).lonlat();
let gal_lon_deg = gal_lonlat.lon().0 * 360.0 / (2.0 * std::f64::consts::PI);
let gal_lat_deg = gal_lonlat.lat().0 * 360.0 / (2.0 * std::f64::consts::PI);
@@ -47,15 +46,14 @@ mod tests {
#[test]
fn gal_to_j2000() {
use crate::LonLatT;
use crate::ArcDeg;
use crate::math::lonlat::LonLat;
use super::CooSystem;
use crate::math::lonlat::LonLat;
use crate::ArcDeg;
use crate::LonLatT;
let lonlat: LonLatT<f64> = LonLatT::new(ArcDeg(0.0).into(), ArcDeg(0.0).into());
let j2000_lonlat =
super::apply_coo_system(&CooSystem::GAL, &CooSystem::ICRS, &lonlat.vector())
.lonlat();
super::apply_coo_system(CooSystem::GAL, CooSystem::ICRS, &lonlat.vector()).lonlat();
let j2000_lon_deg = j2000_lonlat.lon().0 * 360.0 / (2.0 * std::f64::consts::PI);
let j2000_lat_deg = j2000_lonlat.lat().0 * 360.0 / (2.0 * std::f64::consts::PI);
@@ -65,18 +63,17 @@ mod tests {
#[test]
fn j2000_gal_roundtrip() {
use crate::LonLatT;
use crate::ArcDeg;
use crate::math::lonlat::LonLat;
use super::CooSystem;
use crate::math::lonlat::LonLat;
use crate::ArcDeg;
use crate::LonLatT;
let gal_lonlat: LonLatT<f64> = LonLatT::new(ArcDeg(0.0).into(), ArcDeg(0.0).into());
let icrs_pos =
super::apply_coo_system(&CooSystem::GAL, &CooSystem::ICRS, &gal_lonlat.vector());
super::apply_coo_system(CooSystem::GAL, CooSystem::ICRS, &gal_lonlat.vector());
let gal_lonlat =
super::apply_coo_system(&CooSystem::ICRS, &CooSystem::GAL, &icrs_pos);
let gal_lonlat = super::apply_coo_system(CooSystem::ICRS, CooSystem::GAL, &icrs_pos);
let gal_lon_deg = gal_lonlat.lon().0 * 360.0 / (2.0 * std::f64::consts::PI);
let gal_lat_deg = gal_lonlat.lat().0 * 360.0 / (2.0 * std::f64::consts::PI);

View File

@@ -20,7 +20,6 @@ use crate::fifo_cache::Cache;
use query::Query;
use request::{RequestType, Resource};
impl Downloader {
pub fn new() -> Downloader {
let requests = Vec::with_capacity(32);
@@ -42,7 +41,7 @@ impl Downloader {
{
let url = query.url();
if self.cache.contains(url) {
self.queried_cached_urls.push(url.clone());
//self.queried_cached_urls.push(url.clone());
false
} else {
let query_id = query.id();
@@ -52,11 +51,11 @@ impl Downloader {
// The cell is not already requested
if not_already_requested {
self.queried_list.insert(query_id);
let request = T::Request::from(query);
self.requests.push(request.into());
}
not_already_requested
}
}
@@ -95,9 +94,11 @@ impl Downloader {
rscs
}
pub fn cache_rsc(&mut self, rsc: Resource) {
//pub fn get_cached_resources(&mut self) -> Vec<Resource> {}
/*pub fn cache_rsc(&mut self, rsc: Resource) {
self.cache.insert(rsc.url().clone(), rsc);
}
}*/
pub fn delay_rsc(&mut self, rsc: Resource) {
self.queried_cached_urls.push(rsc.url().clone());

View File

@@ -1,6 +1,6 @@
pub type Url = String;
use super::request::{RequestType};
use super::request::RequestType;
pub trait Query: Sized {
type Request: From<Self> + Into<RequestType>;
@@ -11,6 +11,7 @@ pub trait Query: Sized {
pub type QueryId = (&'static str, Url);
use al_core::image::format::ImageFormatType;
#[derive(Eq, Hash, PartialEq, Clone)]
pub struct Tile {
pub cell: HEALPixCell,
pub format: ImageFormatType,
@@ -22,9 +23,9 @@ pub struct Tile {
use crate::{healpix::cell::HEALPixCell, survey::config::HiPSConfig};
impl Tile {
pub fn new(cell: &HEALPixCell, cfg: &HiPSConfig) -> Self {
let hips_url = cfg.get_root_url().clone();
let format = cfg.get_format();
pub fn new(cell: &HEALPixCell, hips_url: String, format: ImageFormatType) -> Self {
//let hips_url = cfg.get_root_url().clone();
//let format = cfg.get_format();
let ext = format.get_ext_file();
let HEALPixCell(depth, idx) = *cell;
@@ -111,7 +112,6 @@ impl Query for Allsky {
}
}
/* ---------------------------------- */
pub struct PixelMetadata {
pub format: ImageFormatType,
@@ -158,10 +158,7 @@ pub struct Moc {
}
impl Moc {
pub fn new(url: String, params: al_api::moc::MOC) -> Self {
Moc {
url,
params,
}
Moc { url, params }
}
}
@@ -176,4 +173,4 @@ impl Query for Moc {
fn id(&self) -> QueryId {
("MOC", self.url().to_string())
}
}
}

View File

@@ -1,5 +1,5 @@
use crate::{healpix::cell::HEALPixCell};
use al_core::image::format::{ChannelType, ImageFormatType, RGBA8U, RGB8U};
use crate::healpix::cell::HEALPixCell;
use al_core::image::format::{ChannelType, ImageFormatType, RGB8U, RGBA8U};
use crate::downloader::query;
use al_core::image::ImageType;
@@ -33,14 +33,10 @@ async fn query_html_image(url: &str) -> Result<HtmlImageElement, JsValue> {
&mut (Box::new(move |resolve, reject| {
// Ask for CORS permissions
image_cloned.set_cross_origin(Some(""));
image_cloned.set_onload(
Some(&resolve)
);
image_cloned.set_onerror(
Some(&reject)
);
image_cloned.set_onload(Some(&resolve));
image_cloned.set_onerror(Some(&reject));
image_cloned.set_src(&url);
}) as Box<dyn FnMut(js_sys::Function, js_sys::Function)>)
}) as Box<dyn FnMut(js_sys::Function, js_sys::Function)>),
);
let _ = JsFuture::from(html_img_elt_promise).await?;
@@ -48,12 +44,12 @@ async fn query_html_image(url: &str) -> Result<HtmlImageElement, JsValue> {
Ok(image)
}
use al_core::image::html::HTMLImage;
use wasm_bindgen::JsValue;
use crate::renderable::Url;
use wasm_bindgen_futures::JsFuture;
use web_sys::{RequestInit, RequestMode, Response, HtmlImageElement};
use al_core::image::html::HTMLImage;
use wasm_bindgen::JsCast;
use wasm_bindgen::JsValue;
use wasm_bindgen_futures::JsFuture;
use web_sys::{HtmlImageElement, RequestInit, RequestMode, Response};
impl From<query::Tile> for TileRequest {
// Create a tile request associated with a HiPS
fn from(query: query::Tile) -> Self {
@@ -82,7 +78,6 @@ impl From<query::Tile> for TileRequest {
debug_assert!(resp_value.is_instance_of::<Response>());
let resp: Response = resp_value.dyn_into()?;*/
/*/// Bitmap version
let blob = JsFuture::from(resp.blob()?).await?.into();
let image = JsFuture::from(window.create_image_bitmap_with_blob(&blob)?)
@@ -93,7 +88,7 @@ impl From<query::Tile> for TileRequest {
Ok(ImageType::JpgImageRgb8u { image })*/
/*
/// Raw image decoding
let buf = JsFuture::from(resp.array_buffer()?).await?;
let raw_bytes = js_sys::Uint8Array::new(&buf).to_vec();
let image = ImageBuffer::<RGB8U>::from_raw_bytes(&raw_bytes[..], 512, 512)?;
@@ -103,7 +98,9 @@ impl From<query::Tile> for TileRequest {
// HTMLImageElement
let image = query_html_image(&url_clone).await?;
// The image has been resolved
Ok(ImageType::HTMLImageRgb8u { image: HTMLImage::<RGB8U>::new(image) })
Ok(ImageType::HTMLImageRgb8u {
image: HTMLImage::<RGB8U>::new(image),
})
}),
ChannelType::RGBA8U => Request::new(async move {
/*let mut opts = RequestInit::new();
@@ -116,7 +113,6 @@ impl From<query::Tile> for TileRequest {
debug_assert!(resp_value.is_instance_of::<Response>());
let resp: Response = resp_value.dyn_into()?;*/
/*/// Bitmap version
let blob = JsFuture::from(resp.blob()?).await?.into();
let image = JsFuture::from(window.create_image_bitmap_with_blob(&blob)?)
@@ -125,7 +121,7 @@ impl From<query::Tile> for TileRequest {
let image = Bitmap::new(image);
Ok(ImageType::PngImageRgba8u { image })*/
/*
/// Raw image decoding
let buf = JsFuture::from(resp.array_buffer()?).await?;
@@ -137,14 +133,21 @@ impl From<query::Tile> for TileRequest {
// HTMLImageElement
let image = query_html_image(&url_clone).await?;
// The image has been resolved
Ok(ImageType::HTMLImageRgba8u { image: HTMLImage::<RGBA8U>::new(image) })
Ok(ImageType::HTMLImageRgba8u {
image: HTMLImage::<RGBA8U>::new(image),
})
}),
ChannelType::R32F | ChannelType::R64F | ChannelType::R32I | ChannelType::R16I | ChannelType::R8UI => Request::new(async move {
ChannelType::R32F
| ChannelType::R64F
| ChannelType::R32I
| ChannelType::R16I
| ChannelType::R8UI => Request::new(async move {
let mut opts = RequestInit::new();
opts.method("GET");
opts.mode(RequestMode::Cors);
let request = web_sys::Request::new_with_str_and_init(&url_clone, &opts).unwrap_abort();
let request =
web_sys::Request::new_with_str_and_init(&url_clone, &opts).unwrap_abort();
let resp_value = JsFuture::from(window.fetch_with_request(&request)).await?;
// `resp_value` is a `Response` object.
debug_assert!(resp_value.is_instance_of::<Response>());
@@ -163,7 +166,9 @@ impl From<query::Tile> for TileRequest {
Ok(ImageType::FitsImage { raw_bytes })
} else {
Err(JsValue::from_str("Response status code not between 200-299."))
Err(JsValue::from_str(
"Response status code not between 200-299.",
))
}
}),
_ => todo!(),
@@ -193,21 +198,30 @@ pub struct Tile {
use crate::Abort;
impl Tile {
#[inline(always)]
pub fn missing(&self) -> bool {
self.image.lock().unwrap_abort().is_none()
}
#[inline(always)]
pub fn get_hips_url(&self) -> &Url {
&self.hips_url
}
#[inline(always)]
pub fn get_url(&self) -> &Url {
&self.url
}
#[inline(always)]
pub fn cell(&self) -> &HEALPixCell {
&self.cell
}
#[inline(always)]
pub fn query(&self) -> query::Tile {
query::Tile::new(&self.cell, self.hips_url.clone(), self.format)
}
}
impl<'a> From<&'a TileRequest> for Option<Tile> {

src/core/src/grid/label.rs Normal file
View File

@@ -0,0 +1,153 @@
use crate::math::PI;
use cgmath::Vector3;
use crate::ProjectionType;
use crate::CameraViewPort;
use crate::LonLatT;
use cgmath::InnerSpace;
use crate::math::angle::SerializeFmt;
use crate::math::TWICE_PI;
use crate::grid::XYScreen;
use crate::math::lonlat::LonLat;
use crate::math::angle::ToAngle;
use core::ops::Range;
use cgmath::Vector2;
const OFF_TANGENT: f64 = 35.0;
const OFF_BI_TANGENT: f64 = 5.0;
pub enum LabelOptions {
Centered,
OnSide,
}
#[derive(Debug)]
pub struct Label {
// The position
pub position: XYScreen,
// the string content
pub content: String,
// in radians
pub rot: f64,
}
impl Label {
pub fn from_meridian(
lon: f64,
lat: &Range<f64>,
options: LabelOptions,
camera: &CameraViewPort,
projection: &ProjectionType,
fmt: &SerializeFmt
) -> Option<Self> {
let fov = camera.get_field_of_view();
let d = if fov.contains_north_pole() {
Vector3::new(0.0, 1.0, 0.0)
} else if fov.contains_south_pole() {
Vector3::new(0.0, -1.0, 0.0)
} else {
Vector3::new(0.0, 1.0, 0.0)
};
let lonlat = match options {
LabelOptions::Centered => {
let mut lat = camera.get_center().lat().to_radians();
if lat.abs() > 70.0_f64.to_radians() {
lat = lat.signum() * 70.0_f64.to_radians();
}
LonLatT::new(lon.to_angle(), lat.to_angle())
}
LabelOptions::OnSide => LonLatT::new(lon.to_angle(), lat.start.to_angle())
};
let m1: Vector3<_> = lonlat.vector();
let m2 = (m1 + d * 1e-3).normalize();
//let s1 = projection.model_to_screen_space(&(system.to_icrs_j2000::<f64>() * m1), camera, reversed_longitude)?;
let d1 = projection.model_to_screen_space(&m1.extend(1.0), camera)?;
let d2 = projection.model_to_screen_space(&m2.extend(1.0), camera)?;
//let s2 = projection.model_to_screen_space(&(system.to_icrs_j2000::<f64>() * m2), camera, reversed_longitude)?;
let dt = (d2 - d1).normalize();
let db = Vector2::new(dt.y.abs(), dt.x.abs());
let mut lon = m1.lon().to_radians();
if lon < 0.0 {
lon += TWICE_PI;
}
let content = fmt.to_string(lon.to_angle());
let position = if !fov.is_allsky() {
d1 + OFF_TANGENT * dt - OFF_BI_TANGENT * db
} else {
d1
};
// rot is between -PI and +PI
let rot = dt.y.signum() * dt.x.acos();
Some(Label {
position,
content,
rot,
})
}
pub fn from_parallel(
lat: f64,
lon: &Range<f64>,
options: LabelOptions,
camera: &CameraViewPort,
projection: &ProjectionType,
) -> Option<Self> {
let lonlat = match options {
LabelOptions::Centered => {
let lon = camera.get_center().lon();
LonLatT::new(lon, lat.to_angle())
}
LabelOptions::OnSide => LonLatT::new(lon.start.to_angle(), lat.to_angle())
};
let m1: Vector3<_> = lonlat.vector();
let mut t = Vector3::new(-m1.z, 0.0, m1.x).normalize();
let center = camera.get_center().truncate();
let dot_t_center = center.dot(t);
if dot_t_center.abs() < 1e-4 {
t = -t;
} else {
t = dot_t_center.signum() * t;
}
let m2 = (m1 + t * 1e-3).normalize();
let d1 = projection.model_to_screen_space(&m1.extend(1.0), camera)?;
let d2 = projection.model_to_screen_space(&m2.extend(1.0), camera)?;
let dt = (d2 - d1).normalize();
let db = Vector2::new(dt.y.abs(), dt.x.abs());
let content = SerializeFmt::DMS.to_string(lonlat.lat());
let fov = camera.get_field_of_view();
let position = if !fov.is_allsky() && !fov.contains_pole() {
d1 + OFF_TANGENT * dt - OFF_BI_TANGENT * db
} else {
d1
};
// rot is between -PI and +PI
let rot = dt.y.signum() * dt.x.acos() + PI;
Some(Label {
position,
content,
rot,
})
}
}
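// Editor's sketch (not part of the diff): a small test showing how the `rot` formula
// above recovers the screen-space angle of a unit direction, in [-PI, PI].
#[cfg(test)]
mod label_rot_tests {
    use cgmath::{InnerSpace, Vector2};

    #[test]
    fn rot_of_unit_direction() {
        // a 45 degree up-right direction on screen
        let dt = Vector2::new(1.0_f64, 1.0).normalize();
        let rot = dt.y.signum() * dt.x.acos();
        assert!((rot - std::f64::consts::FRAC_PI_4).abs() < 1e-12);
    }
}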

View File

@@ -0,0 +1,254 @@
use super::label::{Label, LabelOptions};
use crate::math::lonlat::LonLat;
use crate::math::sph_geom::region::Intersection;
use crate::CameraViewPort;
use core::ops::Range;
use crate::math::MINUS_HALF_PI;
use crate::ProjectionType;
use crate::grid::angle::SerializeFmt;
use crate::math::HALF_PI;
pub fn get_intersecting_meridian(
lon: f64,
camera: &CameraViewPort,
projection: &ProjectionType,
fmt: &SerializeFmt,
) -> Option<Meridian> {
let fov = camera.get_field_of_view();
if fov.contains_both_poles() {
let meridian = Meridian::new(
lon,
&(-HALF_PI..HALF_PI),
LabelOptions::Centered,
camera,
projection,
fmt,
);
Some(meridian)
} else {
let i = fov.intersects_meridian(lon);
match i {
Intersection::Included => {
// Longitude fov >= PI
let meridian = Meridian::new(
lon,
&(-HALF_PI..HALF_PI),
LabelOptions::Centered,
camera,
projection,
fmt,
);
Some(meridian)
}
Intersection::Intersect { vertices } => {
let num_intersections = vertices.len();
let meridian = match num_intersections {
1 => {
let v1 = &vertices[0];
let lonlat1 = v1.lonlat();
let lat1 = lonlat1.lat().to_radians();
let lat = if fov.contains_north_pole() {
lat1..HALF_PI
} else {
lat1..MINUS_HALF_PI
};
Meridian::new(lon, &lat, LabelOptions::OnSide, camera, projection, fmt)
}
2 => {
// full intersection
let v1 = &vertices[0];
let v2 = &vertices[1];
let lat1 = v1.lat().to_radians();
let lat2 = v2.lat().to_radians();
Meridian::new(
lon,
&(lat1..lat2),
LabelOptions::OnSide,
camera,
projection,
fmt,
)
}
_ => {
/*let mut vertices = vertices.into_vec();
// One segment over two will be in the field of view
vertices.push(Vector4::new(0.0, 1.0, 0.0, 1.0));
vertices.push(Vector4::new(0.0, -1.0, 0.0, 1.0));
vertices.sort_by(|i1, i2| {
i1.y.total_cmp(&i2.y)
});
let v1 = &vertices[0];
let v2 = &vertices[1];
// meridian are part of great circles so the mean between v1 & v2 also lies on it
let vm = (v1 + v2).truncate().normalize();
let vertices = if !fov.contains_south_pole() {
&vertices[1..]
} else {
&vertices
};
let line_vertices = vertices.iter().zip(vertices.iter().skip(1))
.step_by(2)
.map(|(i1, i2)| {
line::great_circle_arc::project(
lon,
i1.lat().to_radians(),
lon,
i2.lat().to_radians(),
camera,
projection
)
})
.flatten()
.collect::<Vec<_>>();
let label = Label::from_meridian(&v1.lonlat(), camera, projection, fmt);
*/
Meridian::new(
lon,
&(-HALF_PI..HALF_PI),
LabelOptions::OnSide,
camera,
projection,
fmt,
)
}
};
Some(meridian)
}
Intersection::Empty => None,
}
}
}
pub struct Meridian {
// List of vertices
vertices: Vec<[f32; 2]>,
// Line vertices indices
indices: Vec<Range<usize>>,
label: Option<Label>,
}
impl Meridian {
pub fn new(
lon: f64,
lat: &Range<f64>,
label_options: LabelOptions,
camera: &CameraViewPort,
projection: &ProjectionType,
fmt: &SerializeFmt,
) -> Self {
let label = Label::from_meridian(lon, lat, label_options, camera, projection, fmt);
// Project the meridian arc between lat.start and lat.end
let vertices = crate::renderable::line::great_circle_arc::project(
lon, lat.start, lon, lat.end, camera, projection,
)
.into_iter()
.map(|v| [v.x as f32, v.y as f32])
.collect::<Vec<_>>();
let mut start_idx = 0;
let mut indices = if vertices.len() >= 3 {
let v_iter = (1..(vertices.len() - 1)).map(|i| &vertices[i]);
v_iter
.clone()
.zip(v_iter.skip(1))
.enumerate()
.step_by(2)
.filter_map(|(i, (v1, v2))| {
if v1 == v2 {
None
} else {
let res = Some(start_idx..(i + 2));
start_idx = i + 2;
res
}
})
.collect()
} else {
vec![]
};
indices.push(start_idx..vertices.len());
/*let mut prev_v = [vertices[0].x as f32, vertices[0].y as f32];
let vertices: Vec<_> = std::iter::once(prev_v)
.chain(
vertices.into_iter().skip(1)
.filter_map(|v| {
let cur_v = [v.x as f32, v.y as f32];
if cur_v == prev_v {
None
} else {
prev_v = cur_v;
Some(cur_v)
}
})
)
.collect();
// Create subsets of vertices referring to different lines
let indices = if vertices.len() >= 3 {
let mut indices = vec![];
let mut v0 = 0;
let mut v1 = 1;
let mut v2 = 2;
let mut s = 0;
let n_segment = vertices.len() - 1;
for i in 0..n_segment {
if Triangle::new(&vertices[v0], &vertices[v1], &vertices[v2]).is_valid(camera) {
indices.push(s..(i+1));
s = i;
}
v0 = v1;
v1 = v2;
v2 = (v2 + 1) % vertices.len();
}
//indices.push(start_line_i..vertices.len());
//vec![0..vertices.len()]
vec![0..2]
} else {
vec![0..vertices.len()]
};*/
Self {
vertices,
indices,
label,
}
}
#[inline]
pub fn get_lines_vertices(&self) -> Vec<&[[f32; 2]]> {
self.indices
.iter()
.map(|r| &self.vertices[r.start..r.end])
.collect()
}
#[inline]
pub fn get_label(&self) -> Option<&Label> {
self.label.as_ref()
}
}

src/core/src/grid/mod.rs Normal file
View File

@@ -0,0 +1,321 @@
pub mod label;
pub mod meridian;
pub mod parallel;
use crate::math::projection::coo_space::XYScreen;
use crate::Abort;
use crate::camera::CameraViewPort;
use crate::math::angle;
use crate::math::HALF_PI;
use crate::renderable::line;
use crate::renderable::line::PathVertices;
use crate::renderable::Renderer;
use crate::ProjectionType;
use al_api::color::ColorRGBA;
use al_api::grid::GridCfg;
use crate::grid::label::Label;
pub struct ProjetedGrid {
// Properties
pub color: ColorRGBA,
pub show_labels: bool,
pub enabled: bool,
pub label_scale: f32,
thickness: f32,
// Render Text Manager
text_renderer: TextRenderManager,
fmt: angle::SerializeFmt,
line_style: line::Style,
}
use crate::shader::ShaderManager;
use wasm_bindgen::JsValue;
use crate::renderable::line::RasterizedLineRenderer;
use crate::renderable::text::TextRenderManager;
impl ProjetedGrid {
pub fn new() -> Result<ProjetedGrid, JsValue> {
let text_renderer = TextRenderManager::new()?;
let color = ColorRGBA {
r: 0.0,
g: 1.0,
b: 0.0,
a: 0.5,
};
let show_labels = true;
let enabled = false;
let label_scale = 1.0;
let line_style = line::Style::None;
let fmt = angle::SerializeFmt::DMS;
let thickness = 3.0;
let grid = ProjetedGrid {
color,
line_style,
show_labels,
enabled,
label_scale,
thickness,
text_renderer,
fmt,
};
// Initialize the vertices & labels
//grid.force_update(camera, projection, line_renderer);
Ok(grid)
}
pub fn set_cfg(
&mut self,
new_cfg: GridCfg,
_camera: &CameraViewPort,
_projection: &ProjectionType,
) -> Result<(), JsValue> {
let GridCfg {
color,
opacity,
thickness,
show_labels,
label_size,
enabled,
fmt,
} = new_cfg;
if let Some(color) = color {
self.color = ColorRGBA {
r: color.r,
g: color.g,
b: color.b,
a: self.color.a,
};
self.text_renderer.set_color(&color);
}
if let Some(opacity) = opacity {
self.color.a = opacity;
}
if let Some(thickness) = thickness {
// thickness is given in pixels; the conversion to NDC happens at draw time
self.thickness = thickness;
}
if let Some(show_labels) = show_labels {
self.show_labels = show_labels;
}
if let Some(fmt) = fmt {
self.fmt = fmt.into();
}
if let Some(label_size) = label_size {
self.label_scale = label_size;
self.text_renderer.set_font_size(label_size as u32);
}
if let Some(enabled) = enabled {
self.enabled = enabled;
if !self.enabled {
self.text_renderer.clear_text_canvas();
}
}
Ok(())
}
// Update the grid whenever the camera moved
fn update(
&mut self,
camera: &CameraViewPort,
projection: &ProjectionType,
rasterizer: &mut RasterizedLineRenderer,
) -> Result<(), JsValue> {
let fov = camera.get_field_of_view();
let bbox = fov.get_bounding_box();
let max_dim_px = camera.get_width().max(camera.get_height()) as f64;
let step_line_px = max_dim_px * 0.2;
// update meridians
let meridians = {
// Select a suitable step with a binary search
let step_lon_precised =
(bbox.get_lon_size() as f64) * step_line_px / (camera.get_width() as f64);
let step_lon = select_fixed_step(step_lon_precised);
// Add meridians
let start_lon = bbox.lon_min() - (bbox.lon_min() % step_lon);
let mut stop_lon = bbox.lon_max();
if bbox.all_lon() {
stop_lon -= 1e-3;
}
let mut meridians = vec![];
let mut lon = start_lon;
while lon < stop_lon {
if let Some(p) =
meridian::get_intersecting_meridian(lon, camera, projection, &self.fmt)
{
meridians.push(p);
}
lon += step_lon;
}
meridians
};
let parallels = {
let step_lat_precised =
(bbox.get_lat_size() as f64) * step_line_px / (camera.get_height() as f64);
let step_lat = select_fixed_step(step_lat_precised);
let mut start_lat = bbox.lat_min() - (bbox.lat_min() % step_lat);
if start_lat == -HALF_PI {
start_lat += step_lat;
}
let stop_lat = bbox.lat_max();
let mut lat = start_lat;
let mut parallels = vec![];
while lat < stop_lat {
if let Some(p) = parallel::get_intersecting_parallel(lat, camera, projection) {
parallels.push(p);
}
lat += step_lat;
}
parallels
};
// update the line buffers
let paths = meridians
.iter()
.map(|meridian| meridian.get_lines_vertices())
.chain(
parallels
.iter()
.map(|parallel| parallel.get_lines_vertices()),
)
.flatten()
.map(|vertices| PathVertices {
closed: false,
vertices,
});
rasterizer.add_stroke_paths(
paths,
self.thickness * 2.0 / camera.get_width(),
&self.color,
&self.line_style,
);
// update labels
{
let labels = meridians
.iter()
.filter_map(|m| m.get_label())
.chain(parallels.iter().filter_map(|p| p.get_label()));
let dpi = camera.get_dpi();
self.text_renderer.begin();
for Label {
content,
position,
rot,
} in labels
{
let position = position.cast::<f32>().unwrap_abort() * dpi;
self.text_renderer
.add_label(&content, &position, cgmath::Rad(*rot as f32))?;
}
self.text_renderer.end();
}
Ok(())
}
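// Editor's note (not part of the diff): `step_line_px = 0.2 * max(width, height)` aims for
// roughly five grid lines across the larger screen dimension; the resulting angular step
// is then snapped to the nearest entry of GRID_STEPS (see select_fixed_step below).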
pub fn draw(
&mut self,
camera: &CameraViewPort,
_shaders: &mut ShaderManager,
projection: &ProjectionType,
rasterizer: &mut RasterizedLineRenderer,
) -> Result<(), JsValue> {
if self.enabled {
self.update(camera, projection, rasterizer)?;
}
Ok(())
}
}
const GRID_STEPS: &[f64] = &[
0.0000000000048481367,
0.000000000009696274,
0.000000000024240685,
0.000000000048481369,
0.000000000096962737,
0.00000000024240683,
0.00000000048481364,
0.0000000009696274,
0.0000000024240686,
0.000000004848138,
0.000000009696275,
0.000000024240685,
0.00000004848138,
0.00000009696275,
0.00000024240687,
0.0000004848138,
0.0000009696275,
0.0000024240686,
0.000004848138,
0.000009696275,
0.000024240685,
0.000048481369,
0.000072722055,
0.00014544412,
0.00029088823,
0.00058177644,
0.0014544412,
0.0029088823,
0.004363324,
0.008726647,
0.017453293,
0.034906586,
0.08726647,
0.17453293,
0.34906585,
std::f64::consts::FRAC_PI_4,
];
fn select_fixed_step(fov: f64) -> f64 {
match GRID_STEPS.binary_search_by(|v| {
v.partial_cmp(&fov)
.expect("Couldn't compare values, maybe because the fov given is NaN")
}) {
Ok(idx) => GRID_STEPS[idx],
Err(idx) => {
if idx == 0 {
GRID_STEPS[0]
} else if idx == GRID_STEPS.len() {
GRID_STEPS[idx - 1]
} else {
let a = GRID_STEPS[idx];
let b = GRID_STEPS[idx - 1];
if a - fov > fov - b {
b
} else {
a
}
}
}
}
}
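// Editor's sketch (not part of the diff): a test in this module illustrating the
// nearest-step selection. 0.03 rad falls between the 1° and 2° steps and is closer to
// 2°, so the 2° step (0.034906586 rad) is returned.
#[cfg(test)]
mod grid_step_tests {
    use super::select_fixed_step;

    #[test]
    fn picks_nearest_step() {
        assert_eq!(select_fixed_step(0.03), 0.034906586);
    }
}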

View File

@@ -0,0 +1,175 @@
use super::label::Label;
use crate::math::projection::ProjectionType;
use crate::math::sph_geom::region::Intersection;
use crate::CameraViewPort;
use crate::math::lonlat::LonLat;
use crate::math::{PI, TWICE_PI};
use crate::renderable::line;
use core::ops::Range;
pub fn get_intersecting_parallel(
lat: f64,
camera: &CameraViewPort,
projection: &ProjectionType,
) -> Option<Parallel> {
let fov = camera.get_field_of_view();
if fov.get_bounding_box().get_lon_size() > PI {
// Longitude fov >= PI
let camera_center = camera.get_center();
let lon_start = camera_center.lon().to_radians();
Some(Parallel::new(
lat,
&(lon_start..(lon_start + TWICE_PI)),
camera,
LabelOptions::Centered,
projection,
))
} else {
// Longitude fov < PI
let i = fov.intersects_parallel(lat);
match i {
Intersection::Included => {
let camera_center = camera.get_center();
let lon_start = camera_center.lon().to_radians();
Some(Parallel::new(
lat,
&(lon_start..(lon_start + TWICE_PI)),
camera,
LabelOptions::Centered,
projection,
))
}
Intersection::Intersect { vertices } => {
let v1 = &vertices[0];
let v2 = &vertices[1];
let mut lon1 = v1.lon().to_radians();
let mut lon2 = v2.lon().to_radians();
let lon_len = crate::math::sph_geom::distance_from_two_lon(lon1, lon2);
let _len_vert = vertices.len();
// The fov should span less than PI in longitude
if lon_len >= PI {
std::mem::swap(&mut lon1, &mut lon2);
}
Some(Parallel::new(
lat,
&(lon1..lon2),
camera,
LabelOptions::OnSide,
projection,
))
}
Intersection::Empty => None,
}
}
}
pub struct Parallel {
// List of vertices
vertices: Vec<[f32; 2]>,
// Line vertices indices
indices: Vec<Range<usize>>,
label: Option<Label>,
}
use super::label::LabelOptions;
impl Parallel {
pub fn new(
lat: f64,
lon: &Range<f64>,
camera: &CameraViewPort,
label_options: LabelOptions,
projection: &ProjectionType,
) -> Self {
let label = Label::from_parallel(lat, lon, label_options, camera, projection);
// Draw the full parallel
let vertices = if lon.end - lon.start > PI {
let mut vertices =
line::parallel_arc::project(lat, lon.start, lon.start + PI, camera, projection);
vertices.append(&mut line::parallel_arc::project(
lat,
lon.start + PI,
lon.end,
camera,
projection,
));
vertices
} else {
line::parallel_arc::project(lat, lon.start, lon.end, camera, projection)
};
/*let mut prev_v = [vertices[0].x as f32, vertices[0].y as f32];
let vertices: Vec<_> = std::iter::once(prev_v)
.chain(
vertices.into_iter().skip(1)
.filter_map(|v| {
let cur_v = [v.x as f32, v.y as f32];
if cur_v == prev_v {
None
} else {
prev_v = cur_v;
Some(cur_v)
}
})
)
.collect();
let indices = vec![0..vertices.len()];
*/
let mut start_idx = 0;
let mut indices = if vertices.len() >= 3 {
let v_iter = (1..(vertices.len() - 1)).map(|i| &vertices[i]);
v_iter
.clone()
.zip(v_iter.skip(1))
.enumerate()
.step_by(2)
.filter_map(|(i, (v1, v2))| {
if v1 == v2 {
None
} else {
let res = Some(start_idx..(i + 2));
start_idx = i + 2;
res
}
})
.collect()
} else {
vec![]
};
indices.push(start_idx..vertices.len());
Self {
vertices,
indices,
label,
}
}
#[inline]
pub fn get_lines_vertices(&self) -> Vec<&[[f32; 2]]> {
self.indices
.iter()
.map(|range| &self.vertices[range.start..range.end])
.collect()
}
#[inline]
pub fn get_label(&self) -> Option<&Label> {
self.label.as_ref()
}
}

View File

@@ -3,12 +3,24 @@ use std::cmp::Ordering;
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub struct HEALPixCell(pub u8, pub u64);
use crate::survey::config::HiPSConfig;
use crate::Abort;
#[derive(Debug)]
pub struct CellVertices {
pub vertices: Vec<Box<[(f64, f64)]>>,
pub closed: bool,
}
const BIT_MASK_ALL_ONE_EXCEPT_FIRST: u32 = !0x1;
use healpix::compass_point::Cardinal;
use healpix::compass_point::MainWind;
use healpix::compass_point::Ordinal;
use healpix::compass_point::OrdinalMap;
use crate::utils;
use crate::Abort;
impl HEALPixCell {
// Build the parent cell
#[inline]
#[inline(always)]
pub fn parent(self) -> HEALPixCell {
let depth = self.depth();
if depth == 0 {
@@ -20,6 +32,7 @@ impl HEALPixCell {
}
}
#[inline(always)]
pub fn ancestor(self, delta_depth: u8) -> HEALPixCell {
let HEALPixCell(depth, idx) = self;
let delta_depth = std::cmp::min(delta_depth, depth);
@@ -28,16 +41,18 @@ impl HEALPixCell {
}
// Get the texture cell in which the tile is
pub fn get_texture_cell(&self, config: &HiPSConfig) -> HEALPixCell {
let delta_depth_to_texture = config.delta_depth();
#[inline(always)]
pub fn get_texture_cell(&self, delta_depth_to_texture: u8) -> HEALPixCell {
self.ancestor(delta_depth_to_texture)
}
pub fn get_offset_in_texture_cell(&self, config: &HiPSConfig) -> (u32, u32) {
let texture_cell = self.get_texture_cell(config);
#[inline(always)]
pub fn get_offset_in_texture_cell(&self, delta_depth_to_texture: u8) -> (u32, u32) {
let texture_cell = self.get_texture_cell(delta_depth_to_texture);
self.offset_in_parent(&texture_cell)
}
#[inline(always)]
pub fn offset_in_parent(&self, parent_cell: &HEALPixCell) -> (u32, u32) {
let HEALPixCell(depth, idx) = *self;
let HEALPixCell(parent_depth, parent_idx) = *parent_cell;
@@ -55,31 +70,92 @@ impl HEALPixCell {
(x, y)
}
#[inline]
#[inline(always)]
pub fn uniq(&self) -> i32 {
let HEALPixCell(depth, idx) = *self;
((16 << (depth << 1)) | idx) as i32
}
#[inline]
#[inline(always)]
pub fn idx(&self) -> u64 {
self.1
}
#[inline]
#[inline(always)]
pub fn depth(&self) -> u8 {
self.0
}
#[inline]
pub fn is_root(&self) -> bool {
#[inline(always)]
pub fn is_root(&self, _delta_depth_to_texture: u8) -> bool {
self.depth() == 0
}
// Returns the tile cells being contained into self
// Find the smallest HEALPix cell containing both self and another cell
// Returns None if the 2 HEALPix cells are not located in the same base HEALPix cell
#[inline]
pub fn get_tile_cells(&self, config: &HiPSConfig) -> impl Iterator<Item = HEALPixCell> {
let delta_depth = config.delta_depth();
pub fn smallest_common_ancestor(&self, other: &HEALPixCell) -> Option<HEALPixCell> {
// We want the smallest common ancestor of self and another HEALPix cell.
// For this, we find by how many bits both cell indices must be right-shifted
// so that they become equal.
// First, bring both cells to the same depth (the shallower of the two)
let mut c1 = *self;
let mut c2 = *other;
if c1.depth() > c2.depth() {
std::mem::swap(&mut c1, &mut c2);
}
let HEALPixCell(d1, idx1) = c1;
let HEALPixCell(d2, idx2) = c2.ancestor(c2.depth() - d1);
// idx1 and idx2 now belong to the same order
// c1 and c2 do not belong to the same base (order 0) HEALPix cell
if idx1 >> (2 * d1) != idx2 >> (2 * d2) {
None
} else {
// XOR the indices to find the bits where they differ
let xor = idx1 ^ idx2;
// The position of the highest differing bit, rounded to an even number of bits, gives
// how far the indices must be right-shifted to reach the common ancestor
let xor_lz = xor.leading_zeros() & BIT_MASK_ALL_ONE_EXCEPT_FIRST;
let msb = ((std::mem::size_of::<u64>() * 8) as u32 - xor_lz) as u8;
// There is a common ancestor
Some(HEALPixCell(d1 - (msb >> 1), idx1 >> msb))
}
}
#[inline]
pub fn smallest_common_ancestors<'a>(
mut cells: impl Iterator<Item = &'a HEALPixCell>,
) -> Option<HEALPixCell> {
let (first_cell, second_cell) = (cells.next(), cells.next());
match (first_cell, second_cell) {
(Some(c1), Some(c2)) => {
let mut smallest_ancestor = c1.smallest_common_ancestor(c2);
while let (Some(ancestor), Some(cell)) = (smallest_ancestor, cells.next()) {
smallest_ancestor = ancestor.smallest_common_ancestor(&cell);
}
smallest_ancestor
}
(None, Some(_c2)) => {
// cannot happen: an iterator cannot yield a second item
// without having yielded a first one
unreachable!();
}
(Some(c1), None) => Some(*c1),
(None, None) => None,
}
}
// Returns the tile cells contained in self
// delta_depth is the depth difference between the stored texture cells and the tile cells
#[inline]
pub fn get_tile_cells(&self, delta_depth: u8) -> impl Iterator<Item = HEALPixCell> {
self.get_children_cells(delta_depth)
}
@@ -99,68 +175,228 @@ impl HEALPixCell {
(0_u64..(npix as u64)).map(move |pix| HEALPixCell(depth, pix))
}
#[inline]
#[inline(always)]
pub fn center(&self) -> (f64, f64) {
cdshealpix::nested::center(self.0, self.1)
healpix::nested::center(self.0, self.1)
}
#[inline]
#[inline(always)]
pub fn vertices(&self) -> [(f64, f64); 4] {
cdshealpix::nested::vertices(self.0, self.1)
healpix::nested::vertices(self.0, self.1)
}
#[inline]
#[inline(always)]
pub fn neighbor(&self, wind: MainWind) -> Option<HEALPixCell> {
let HEALPixCell(d, idx) = *self;
healpix::nested::neighbours(d, idx, false)
.get(wind)
.map(|idx| HEALPixCell(d, *idx))
}
#[inline(always)]
pub fn is_on_pole(&self) -> bool {
let two_times_depth = 2*self.depth();
let idx_d0 = self.idx() >> two_times_depth;
let HEALPixCell(depth, idx) = *self;
let two_times_depth = 2 * depth;
let idx_d0 = idx >> two_times_depth;
match idx_d0 {
0..=3 => {
(((idx_d0 + 1) << two_times_depth) - 1) == self.idx()
},
0..=3 => (((idx_d0 + 1) << two_times_depth) - 1) == idx,
4..=7 => false,
8..=11 => {
(idx_d0 << two_times_depth) == self.idx()
},
_ => unreachable!()
8..=11 => (idx_d0 << two_times_depth) == idx,
_ => unreachable!(),
}
}
// Given in ICRS(J2000)
#[inline]
pub fn new(depth: u8, theta: f64, delta: f64) -> Self {
let pix = cdshealpix::nested::hash(depth, theta, delta);
let pix = healpix::nested::hash(depth, theta, delta);
HEALPixCell(depth, pix)
}
#[inline]
pub fn path_along_cell_edge(
&self,
n_segments_by_side: u32
) -> Box<[(f64, f64)]> {
cdshealpix::nested::path_along_cell_edge(
pub fn path_along_cell_edge(&self, n_segments_by_side: u32) -> Box<[(f64, f64)]> {
healpix::nested::path_along_cell_edge(
self.depth(),
self.idx(),
&cdshealpix::compass_point::Cardinal::S,
&healpix::compass_point::Cardinal::S,
false,
n_segments_by_side
n_segments_by_side,
)
}
#[inline]
pub fn grid(
pub fn path_along_cell_side(
&self,
n_segments_by_side: u32
from_vertex: Cardinal,
to_vertex: Cardinal,
include_to_vertex: bool,
n_segments: u32,
) -> Box<[(f64, f64)]> {
cdshealpix::nested::grid(
healpix::nested::path_along_cell_side(
self.depth(),
self.idx(),
n_segments_by_side as u16
&from_vertex,
&to_vertex,
include_to_vertex,
n_segments,
)
}
pub fn path_along_sides(&self, sides: &OrdinalMap<u32>) -> Option<CellVertices> {
let se = sides.get(Ordinal::SE);
let sw = sides.get(Ordinal::SW);
let ne = sides.get(Ordinal::NE);
let nw = sides.get(Ordinal::NW);
let chain_edge_vertices = |card: &[Cardinal], n_segments: &[u32]| -> Box<[(f64, f64)]> {
let mut vertices = vec![];
let num_edges = card.len() - 1;
for (idx, (from_vertex, to_vertex)) in
(card.iter().zip(card.iter().skip(1))).enumerate()
{
let mut edge_vertices = self
.path_along_cell_side(
*from_vertex,
*to_vertex,
num_edges - 1 == idx,
n_segments[idx],
)
.into_vec();
vertices.append(&mut edge_vertices);
}
vertices.into_boxed_slice()
};
// N -> W, W -> S, S -> E, E -> N
match (nw, sw, se, ne) {
// all edges case
(Some(nw), Some(sw), Some(se), Some(ne)) => Some(CellVertices {
vertices: vec![
self.path_along_cell_side(Cardinal::N, Cardinal::W, false, *nw),
self.path_along_cell_side(Cardinal::W, Cardinal::S, false, *sw),
self.path_along_cell_side(Cardinal::S, Cardinal::E, false, *se),
self.path_along_cell_side(Cardinal::E, Cardinal::N, true, *ne),
],
closed: true,
}),
// no edges
(None, None, None, None) => None,
// 1 edge found
(Some(s), None, None, None) => Some(CellVertices {
vertices: vec![self.path_along_cell_side(Cardinal::N, Cardinal::W, true, *s)],
closed: false,
}),
(None, Some(s), None, None) => Some(CellVertices {
vertices: vec![self.path_along_cell_side(Cardinal::W, Cardinal::S, true, *s)],
closed: false,
}),
(None, None, Some(s), None) => Some(CellVertices {
vertices: vec![self.path_along_cell_side(Cardinal::S, Cardinal::E, true, *s)],
closed: false,
}),
(None, None, None, Some(s)) => Some(CellVertices {
vertices: vec![self.path_along_cell_side(Cardinal::E, Cardinal::N, true, *s)],
closed: false,
}),
// 2 edges cases
(Some(nw), Some(sw), None, None) => Some(CellVertices {
vertices: vec![chain_edge_vertices(
&[Cardinal::N, Cardinal::W, Cardinal::S],
&[*nw, *sw],
)],
closed: false,
}),
(Some(nw), None, Some(se), None) => Some(CellVertices {
vertices: vec![
self.path_along_cell_side(Cardinal::N, Cardinal::W, true, *nw),
self.path_along_cell_side(Cardinal::S, Cardinal::E, true, *se),
],
closed: false,
}),
(Some(nw), None, None, Some(ne)) => Some(CellVertices {
vertices: vec![chain_edge_vertices(
&[Cardinal::E, Cardinal::N, Cardinal::W],
&[*ne, *nw],
)],
closed: false,
}),
(None, Some(sw), Some(se), None) => Some(CellVertices {
vertices: vec![chain_edge_vertices(
&[Cardinal::W, Cardinal::S, Cardinal::E],
&[*sw, *se],
)],
closed: false,
}),
(None, Some(sw), None, Some(ne)) => Some(CellVertices {
vertices: vec![
self.path_along_cell_side(Cardinal::W, Cardinal::S, true, *sw),
self.path_along_cell_side(Cardinal::E, Cardinal::N, true, *ne),
],
closed: false,
}),
(None, None, Some(se), Some(ne)) => Some(CellVertices {
vertices: vec![chain_edge_vertices(
&[Cardinal::S, Cardinal::E, Cardinal::N],
&[*se, *ne],
)],
closed: false,
}),
// 3 edges cases
(Some(nw), Some(sw), Some(se), None) => Some(CellVertices {
vertices: vec![chain_edge_vertices(
&[Cardinal::N, Cardinal::W, Cardinal::S, Cardinal::E],
&[*nw, *sw, *se],
)],
closed: false,
}),
(Some(nw), Some(sw), None, Some(ne)) => Some(CellVertices {
vertices: vec![chain_edge_vertices(
&[Cardinal::E, Cardinal::N, Cardinal::W, Cardinal::S],
&[*ne, *nw, *sw],
)],
closed: false,
}),
(Some(nw), None, Some(se), Some(ne)) => Some(CellVertices {
vertices: vec![chain_edge_vertices(
&[Cardinal::S, Cardinal::E, Cardinal::N, Cardinal::W],
&[*se, *ne, *nw],
)],
closed: false,
}),
(None, Some(sw), Some(se), Some(ne)) => Some(CellVertices {
vertices: vec![chain_edge_vertices(
&[Cardinal::W, Cardinal::S, Cardinal::E, Cardinal::N],
&[*sw, *se, *ne],
)],
closed: false,
}),
}
}
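// Editor's note (not part of the diff): path_along_sides builds the drawable perimeter of
// a cell from whichever of its four edges are present in `sides`; edges sharing a corner
// are chained into a single open path, and only the all-edges case yields a closed path.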
#[inline]
pub fn grid(&self, n_segments_by_side: u32) -> Box<[(f64, f64)]> {
healpix::nested::grid(self.depth(), self.idx(), n_segments_by_side as u16)
}
#[inline(always)]
pub fn z_29(&self) -> u64 {
self.1 << ((29 - self.0) << 1)
}
#[inline(always)]
pub fn z_29_rng(&self) -> Range<u64> {
let start = self.1 << ((29 - self.0) << 1);
let end = (self.1 + 1) << ((29 - self.0) << 1);
start..end
}
}
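// Editor's sketch (not part of the diff): a test illustrating the depth-29 z-order
// mapping used for ordering cells and for the coverage range test.
#[cfg(test)]
mod z29_tests {
    use super::HEALPixCell;

    #[test]
    fn z29_of_depth1_cell() {
        let c = HEALPixCell(1, 3);
        // shift by (29 - 1) * 2 = 56 bits
        assert_eq!(c.z_29(), 3u64 << 56);
        // the range covers every depth-29 descendant of the cell
        assert_eq!(c.z_29_rng(), (3u64 << 56)..(4u64 << 56));
    }
}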
pub const MAX_HPX_DEPTH: u8 = 29;
pub const NUM_HPX_TILES_DEPTH_ZERO: usize = 12;
pub const ALLSKY_HPX_CELLS_D0: &[HEALPixCell; NUM_HPX_TILES_DEPTH_ZERO] = &[
HEALPixCell(0, 0),
@@ -214,12 +450,10 @@ impl Iterator for HEALPixTilesIter {
}
}
// Follow the z-order curve
impl PartialOrd for HEALPixCell {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
let n1 = self.1 << ((29 - self.0) << 1);
let n2 = other.1 << ((29 - other.0) << 1);
n1.partial_cmp(&n2)
self.z_29().partial_cmp(&other.z_29())
}
}
impl Ord for HEALPixCell {
@@ -227,3 +461,70 @@ impl Ord for HEALPixCell {
self.partial_cmp(other).unwrap_abort()
}
}
#[cfg(test)]
mod tests {
use super::HEALPixCell;
fn test_ancestor(c1: HEALPixCell, c2: HEALPixCell) {
let test = dbg!(c1.smallest_common_ancestor(&c2));
let gnd_true = dbg!(get_common_ancestor(c1, c2));
assert_eq!(test, gnd_true);
}
fn get_common_ancestor(mut c1: HEALPixCell, mut c2: HEALPixCell) -> Option<HEALPixCell> {
if c1.depth() > c2.depth() {
std::mem::swap(&mut c1, &mut c2);
}
c2 = c2.ancestor(c2.depth() - c1.depth());
while c2 != c1 && c1.depth() > 0 {
c2 = c2.parent();
c1 = c1.parent();
}
if c1 == c2 {
Some(c1)
} else {
None
}
}
#[test]
fn test_smallest_common_ancestor() {
test_ancestor(HEALPixCell(1, 2), HEALPixCell(1, 3));
test_ancestor(HEALPixCell(3, 0), HEALPixCell(3, 192));
test_ancestor(HEALPixCell(5, 6814), HEALPixCell(11, 27910909));
test_ancestor(HEALPixCell(2, 41), HEALPixCell(2, 37));
assert_eq!(
HEALPixCell(2, 159).smallest_common_ancestor(&HEALPixCell(2, 144)),
Some(HEALPixCell(0, 9))
);
assert_eq!(
HEALPixCell(2, 144).smallest_common_ancestor(&HEALPixCell(2, 159)),
Some(HEALPixCell(0, 9))
);
assert_eq!(
HEALPixCell(3, 0).smallest_common_ancestor(&HEALPixCell(3, 192)),
None
);
test_ancestor(HEALPixCell(3, 0), HEALPixCell(3, 15));
test_ancestor(HEALPixCell(6, 27247), HEALPixCell(11, 27912704));
assert_eq!(
HEALPixCell(9, 1048575).smallest_common_ancestor(&HEALPixCell(9, 786432)),
Some(HEALPixCell(0, 3))
);
assert_eq!(
HEALPixCell(9, 786432).smallest_common_ancestor(&HEALPixCell(9, 1048575)),
Some(HEALPixCell(0, 3))
);
assert_eq!(
HEALPixCell(1, 0).smallest_common_ancestor(&HEALPixCell(1, 0)),
Some(HEALPixCell(1, 0))
);
}
}

View File

@@ -1,69 +1,91 @@
use crate::math;
use moclib::{
moc::range::RangeMOC,
qty::Hpx
};
use cgmath::{Vector3, Vector4};
use crate::math::lonlat::LonLatT;
use crate::math::PI;
use cgmath::{Vector3, Vector4};
use moclib::{moc::range::RangeMOC, qty::Hpx, ranges::SNORanges};
pub type Smoc = RangeMOC<u64, Hpx<u64>>;
use crate::healpix::cell::HEALPixCell;
#[derive(Clone, Debug)]
pub struct HEALPixCoverage(pub Smoc);
use moclib::elemset::range::MocRanges;
impl HEALPixCoverage {
pub fn new(
pub fn from_3d_coos<'a>(
// The depth of the smallest HEALPix cells contained in it
depth: u8,
// The vertices of the polygon delimiting the coverage
vertices: &[Vector4<f64>],
vertices_iter: impl Iterator<Item = Vector4<f64>>,
// A vertex being inside the coverage,
// typically the center of projection
inside: &Vector3<f64>,
) -> Self {
let lonlat = vertices
.iter()
let lonlat = vertices_iter
.map(|vertex| {
let (lon, lat) = math::lonlat::xyzw_to_radec(vertex);
let (lon, lat) = math::lonlat::xyzw_to_radec(&vertex);
(lon.0, lat.0)
})
.collect::<Vec<_>>();
let (inside_lon, inside_lat) = math::lonlat::xyz_to_radec(inside);
let moc = RangeMOC::from_polygon_with_control_point(&lonlat[..], (inside_lon.0, inside_lat.0), depth);
let moc = RangeMOC::from_polygon_with_control_point(
&lonlat[..],
(inside_lon.0, inside_lat.0),
depth,
);
HEALPixCoverage(moc)
}
pub fn from_hpx_cells(depth: u8, hpx_idx: impl Iterator<Item = u64>, cap: Option<usize>) -> Self {
pub fn from_fixed_hpx_cells(
depth: u8,
hpx_idx: impl Iterator<Item = u64>,
cap: Option<usize>,
) -> Self {
let moc = RangeMOC::from_fixed_depth_cells(depth, hpx_idx, cap);
HEALPixCoverage(moc)
}
pub fn from_hpx_cells<'a>(
depth: u8,
hpx_cell_it: impl Iterator<Item = &'a HEALPixCell>,
cap: Option<usize>,
) -> Self {
let cells_it = hpx_cell_it.map(|HEALPixCell(depth, idx)| (*depth, *idx));
let moc = RangeMOC::from_cells(depth, cells_it, cap);
HEALPixCoverage(moc)
}
pub fn from_cone(lonlat: &LonLatT<f64>, rad: f64, depth: u8) -> Self {
if rad >= PI {
Self::allsky(depth)
} else {
HEALPixCoverage(RangeMOC::from_cone(
lonlat.lon().to_radians(),
lonlat.lat().to_radians(),
rad,
depth,
0,
))
}
}
pub fn allsky(depth_max: u8) -> Self {
let moc = RangeMOC::new_full_domain(depth_max);
HEALPixCoverage(moc)
}
pub fn contains_coo(&self, vertex: &Vector4<f64>) -> bool {
let (lon, lat) = math::lonlat::xyzw_to_radec(vertex);
pub fn contains_coo(&self, coo: &Vector4<f64>) -> bool {
let (lon, lat) = math::lonlat::xyzw_to_radec(coo);
self.0.is_in(lon.0, lat.0)
}
pub fn contains(&self, cell: &HEALPixCell) -> bool {
let HEALPixCell(depth, idx) = *cell;
// O(log2(N))
pub fn intersects_cell(&self, cell: &HEALPixCell) -> bool {
let z29_rng = cell.z_29_rng();
let start_idx = idx << (2*(29 - depth));
let end_idx = (idx + 1) << (2*(29 - depth));
let moc = RangeMOC::new(
29,
MocRanges::<u64, moclib::qty::Hpx<u64>>::new_unchecked(
vec![start_idx..end_idx],
)
);
self.is_intersecting(&HEALPixCoverage(moc))
self.0.moc_ranges().intersects_range(&z29_rng)
}
pub fn is_intersecting(&self, other: &Self) -> bool {
@@ -73,6 +95,14 @@ impl HEALPixCoverage {
pub fn depth(&self) -> u8 {
self.0.depth_max()
}
pub fn sky_fraction(&self) -> f64 {
self.0.coverage_percentage()
}
pub fn empty(depth: u8) -> Self {
HEALPixCoverage(RangeMOC::new_empty(depth))
}
}
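// Hedged usage sketch, not part of the diff: exercising the constructors and queries
// above. The cone radius is expressed in radians; the degree-to-radian conversion on
// the caller side is an assumption.
#[allow(dead_code)]
fn coverage_usage_sketch(center: &LonLatT<f64>) {
    // Cone of 1 degree radius, with MOC cells no deeper than depth 7
    let cov = HEALPixCoverage::from_cone(center, 1.0_f64.to_radians(), 7);
    // O(log2(N)) intersection test against the z-29 range of a single cell
    let _hit = cov.intersects_cell(&HEALPixCell(7, 0));
    // Fraction of the sky covered, as reported by the underlying MOC
    let _frac = cov.sky_fraction();
}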
use core::ops::Deref;

View File

@@ -0,0 +1,157 @@
use crate::{
healpix::cell::HEALPixCell,
math::sph_geom::great_circle_arc::{GreatCircleArc, HEALPixBBox},
};
use std::ops::Range;
#[derive(Debug)]
pub struct IdxVec(Box<[(u32, u32)]>);
use crate::math::lonlat::LonLat;
impl IdxVec {
/// Build a coordinate index vector from a list of sky coordinates, sorting them in place by their depth-7 HEALPix hash
pub fn from_coo<T>(coos: &mut [T]) -> Self
where
T: LonLat<f32>,
{
coos.sort_unstable_by(|c1, c2| {
let ll1 = c1.lonlat();
let ll2 = c2.lonlat();
let h1 = healpix::nested::hash(
7,
ll1.lon().to_radians() as f64,
ll1.lat().to_radians() as f64,
);
let h2 = healpix::nested::hash(
7,
ll2.lon().to_radians() as f64,
ll2.lat().to_radians() as f64,
);
h1.cmp(&h2)
});
let mut coo_idx_vector = vec![(u32::MAX, u32::MAX); 196608];
for (idx, s) in coos.iter().enumerate() {
let lonlat = s.lonlat();
let hash = healpix::nested::hash(
7,
lonlat.lon().to_radians() as f64,
lonlat.lat().to_radians() as f64,
) as usize;
if coo_idx_vector[hash].0 == u32::MAX {
let idx_u32 = idx as u32;
coo_idx_vector[hash] = (idx_u32, idx_u32 + 1);
} else {
coo_idx_vector[hash].1 += 1;
}
}
let mut idx_source = 0;
for coo_idx in coo_idx_vector.iter_mut() {
if coo_idx.0 == u32::MAX {
*coo_idx = (idx_source, idx_source);
} else {
idx_source = coo_idx.1;
}
}
IdxVec(coo_idx_vector.into_boxed_slice())
}
// Create an index vector from a list of great-circle arcs, sorting them in place by the z-order of their bounding HEALPix cell
pub fn from_great_circle_arc(arcs: &mut [GreatCircleArc]) -> Self {
arcs.sort_unstable_by(|a1, a2| {
let bbox1 = a1.get_containing_hpx_cell();
let bbox2 = a2.get_containing_hpx_cell();
bbox1.cmp(&bbox2)
});
// At this point the arcs are sorted by the z-order curve of their
// HEALPix cell bbox
let zorder_hpx_cell_iter = arcs.iter().filter_map(|arc| {
let hpx_bbox = arc.get_containing_hpx_cell();
match hpx_bbox {
HEALPixBBox::AllSky => None,
HEALPixBBox::Cell(cell) => Some(cell),
}
});
Self::from_hpx_cells(zorder_hpx_cell_iter)
}
// Create an index vector from a list of healpix cells sorted by z-order curve
pub fn from_hpx_cells<'a>(zorder_hpx_cell_iter: impl Iterator<Item = &'a HEALPixCell>) -> Self {
let mut hpx_idx_vector = vec![(u32::MAX, u32::MAX); 196608];
for (idx, hpx_cell) in zorder_hpx_cell_iter.enumerate() {
let HEALPixCell(hpx_cell_depth, hpx_cell_idx) = *hpx_cell;
let hpx_cells_7 = if hpx_cell_depth >= 7 {
let hpx_cell_7_start = hpx_cell_idx >> (2 * (hpx_cell_depth - 7));
let hpx_cell_7_end = hpx_cell_7_start + 1;
(hpx_cell_7_start as usize)..(hpx_cell_7_end as usize)
} else {
let shift = 2 * (7 - hpx_cell_depth);
let hpx_cell_7_start = hpx_cell_idx << shift;
let hpx_cell_7_end = (hpx_cell_idx + 1) << shift;
(hpx_cell_7_start as usize)..(hpx_cell_7_end as usize)
};
for hash in hpx_cells_7 {
if hpx_idx_vector[hash].0 == u32::MAX {
let idx_u32 = idx as u32;
hpx_idx_vector[hash] = (idx_u32, idx_u32 + 1);
} else {
hpx_idx_vector[hash].1 += 1;
}
}
}
let mut idx_hash = 0;
for item in hpx_idx_vector.iter_mut() {
if item.0 == u32::MAX {
*item = (idx_hash, idx_hash);
} else {
idx_hash = item.1;
}
}
IdxVec(hpx_idx_vector.into_boxed_slice())
}
#[inline]
pub fn get_item_indices_inside_hpx_cell(&self, cell: &HEALPixCell) -> Range<usize> {
let HEALPixCell(depth, idx) = *cell;
if depth <= 7 {
let off = 2 * (7 - depth);
let healpix_idx_start = (idx << off) as usize;
let healpix_idx_end = ((idx + 1) << off) as usize;
let idx_start_sources = self.0[healpix_idx_start].0;
let idx_end_sources = self.0[healpix_idx_end - 1].1;
(idx_start_sources as usize)..(idx_end_sources as usize)
} else {
// depth > 7
// Get the sources that are contained in parent cell of depth 7
let off = 2 * (depth - 7);
let idx_start = (idx >> off) as usize;
let idx_start_sources = self.0[idx_start].0;
let idx_end_sources = self.0[idx_start].1;
(idx_start_sources as usize)..(idx_end_sources as usize)
}
}
}
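// Hedged usage sketch, not part of the diff: the vector above maps each of the
// 196608 (= 12 * 4^7) depth-7 HEALPix cells to a (start, end) range into the sorted
// slice, so querying any cell reduces to one or two lookups.
#[allow(dead_code)]
fn idx_vec_usage_sketch<T: LonLat<f32>>(coos: &mut [T]) {
    let idx = IdxVec::from_coo(coos);
    // All sources whose depth-7 cell lies under this depth-3 cell
    let range = idx.get_item_indices_inside_hpx_cell(&HEALPixCell(3, 42));
    for source in &coos[range] {
        let _lonlat = source.lonlat();
    }
}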

View File

@@ -1,3 +1,4 @@
pub mod cell;
pub mod coverage;
pub mod utils;
pub mod utils;
pub mod index_vector;

View File

@@ -10,7 +10,7 @@ use crate::math::{angle::Angle, lonlat::LonLatT};
use cgmath::BaseFloat;
#[allow(dead_code)]
pub fn vertices_lonlat<S: BaseFloat>(cell: &HEALPixCell) -> [LonLatT<S>; 4] {
let (lon, lat): (Vec<_>, Vec<_>) = cdshealpix::nested::vertices(cell.depth(), cell.idx())
let (lon, lat): (Vec<_>, Vec<_>) = healpix::nested::vertices(cell.depth(), cell.idx())
.iter()
.map(|(lon, lat)| {
// Risky wrapping here
@@ -32,7 +32,7 @@ use crate::Abort;
/// Get the grid
pub fn grid_lonlat<S: BaseFloat>(cell: &HEALPixCell, n_segments_by_side: u16) -> Vec<LonLatT<S>> {
debug_assert!(n_segments_by_side > 0);
cdshealpix::nested::grid(cell.depth(), cell.idx(), n_segments_by_side)
healpix::nested::grid(cell.depth(), cell.idx(), n_segments_by_side)
.into_iter()
.map(|(lon, lat)| {
// Risky wrapping here
@@ -45,7 +45,7 @@ pub fn grid_lonlat<S: BaseFloat>(cell: &HEALPixCell, n_segments_by_side: u16) ->
}
pub fn hash_with_dxdy(depth: u8, lonlat: &LonLatT<f64>) -> (u64, f64, f64) {
cdshealpix::nested::hash_with_dxdy(depth, lonlat.lon().0, lonlat.lat().0)
healpix::nested::hash_with_dxdy(depth, lonlat.lon().0, lonlat.lat().0)
}
pub const MEAN_HPX_CELL_RES: &[f64; 30] = &[
@@ -78,5 +78,5 @@ pub const MEAN_HPX_CELL_RES: &[f64; 30] = &[
0.00000001524875622908,
0.00000000762437811454,
0.00000000381218905727,
0.00000000190609452864
0.00000000190609452864,
];
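// Hedged sketch, not part of the diff: the entries above appear to be the mean angular
// resolution of a HEALPix cell per depth, i.e. the square root of the mean cell area
// 4*PI / (12 * 4^depth) in radians; for depth 29 this gives ~1.906e-9 rad, matching the
// last entry, so the table simply caches the precomputed values.
#[allow(dead_code)]
fn mean_hpx_cell_res(depth: u32) -> f64 {
    (4.0 * std::f64::consts::PI / (12.0 * 4_f64.powi(depth as i32))).sqrt()
}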

View File

@@ -2,6 +2,7 @@ use cgmath::Vector3;
use crate::camera::CameraViewPort;
use crate::math::angle::ToAngle;
use crate::math::projection::ProjectionType;
use crate::time::Time;
/// State for inertia
@@ -21,11 +22,11 @@ impl Inertia {
time_start: Time::now(),
ampl: ampl,
speed: ampl,
axis: axis
axis: axis,
}
}
pub fn apply(&mut self, camera: &mut CameraViewPort) {
pub fn apply(&mut self, camera: &mut CameraViewPort, proj: &ProjectionType) {
let t = ((Time::now() - self.time_start).as_millis() / 1000.0) as f64;
// Undamped angular frequency of the oscillator
// From wiki: https://en.wikipedia.org/wiki/Harmonic_oscillator
@@ -40,7 +41,7 @@ impl Inertia {
/*let alpha = 1_f32 + (0_f32 - 1_f32) * (10_f32 * t + 1_f32) * (-10_f32 * t).exp();
let alpha = alpha * alpha;
let fov = start_fov * (1_f32 - alpha) + goal_fov * alpha;*/
camera.rotate(&self.axis, self.speed.to_angle())
camera.rotate(&self.axis, self.speed.to_angle(), proj)
}
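// Hedged sketch, not part of the diff: the speed decay in `apply` presumably follows a
// critically damped oscillator response, in the spirit of the commented-out snippet
// above (where w0 = 10); the constants actually used are not visible in this hunk, so
// this only shows the general shape of such a response:
//   x(t) = x0 * (1 + w0 * t) * exp(-w0 * t)
#[allow(dead_code)]
fn critically_damped(x0: f64, w0: f64, t: f64) -> f64 {
    x0 * (1.0 + w0 * t) * (-w0 * t).exp()
}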
pub fn get_start_ampl(&self) -> f64 {

View File

@@ -20,12 +20,14 @@ use std::panic;
pub trait Abort {
type Item;
fn unwrap_abort(self) -> Self::Item where Self: Sized;
fn unwrap_abort(self) -> Self::Item
where
Self: Sized;
}
impl<T> Abort for Option<T> {
type Item = T;
#[inline]
fn unwrap_abort(self) -> Self::Item {
use std::process;
@@ -80,42 +82,45 @@ mod camera;
mod coosys;
mod downloader;
mod fifo_cache;
mod grid;
mod healpix;
pub mod line;
mod inertia;
pub mod math;
pub mod renderable;
mod shader;
mod survey;
mod tile_fetcher;
mod time;
mod fifo_cache;
mod inertia;
use crate::{
camera::CameraViewPort, math::lonlat::LonLatT, shader::ShaderManager, time::DeltaTime,
healpix::coverage::HEALPixCoverage,
};
use crate::downloader::request::moc::from_fits_hpx;
use moclib::deser::fits::MocQtyType;
use moclib::deser::fits::MocIdxType;
use crate::{
camera::CameraViewPort, healpix::coverage::HEALPixCoverage, math::lonlat::LonLatT,
shader::ShaderManager, time::DeltaTime,
};
use moclib::deser::fits;
use moclib::deser::fits::MocIdxType;
use moclib::deser::fits::MocQtyType;
use std::io::Cursor;
use al_api::hips::HiPSProperties;
use al_api::coo_system::CooSystem;
use al_api::color::{Color, ColorRGBA};
use al_api::coo_system::CooSystem;
use al_api::hips::FITSCfg;
use al_api::hips::HiPSProperties;
use al_core::Colormap;
use al_core::{WebGlContext};
use al_core::colormap::Colormaps;
use al_core::Colormap;
use al_core::WebGlContext;
use app::App;
use cgmath::{Vector2};
use cgmath::Vector2;
use math::angle::ArcDeg;
use moclib::{qty::Hpx, moc::{CellMOCIterator, CellMOCIntoIterator, RangeMOCIterator}};
use moclib::{
moc::{CellMOCIntoIterator, CellMOCIterator, RangeMOCIterator},
qty::Hpx,
};
#[wasm_bindgen]
pub struct WebClient {
@@ -155,12 +160,7 @@ impl WebClient {
// Event listeners callbacks
let callback_position_changed = js_sys::Function::new_no_args("");
let app = App::new(
&gl,
shaders,
resources,
callback_position_changed,
)?;
let app = App::new(&gl, shaders, resources, callback_position_changed)?;
let dt = DeltaTime::zero();
@@ -181,7 +181,7 @@ impl WebClient {
/// * `dt` - The time elapsed from the last frame update
/// * `force` - Forces the update of some elements
/// even if the camera has not moved
///
///
/// # Return
/// Whether the view is moving or not
pub fn update(&mut self, dt: f32) -> Result<bool, JsValue> {
@@ -216,43 +216,75 @@ impl WebClient {
pub fn set_projection(&mut self, projection: &str) -> Result<(), JsValue> {
match projection {
// Zenithal
"TAN" => self.app.set_projection(ProjectionType::Tan(mapproj::zenithal::tan::Tan::new())), /* Gnomonic projection */
"STG" => self.app.set_projection(ProjectionType::Stg(mapproj::zenithal::stg::Stg::new())), /* Stereographic projection */
"SIN" => self.app.set_projection(ProjectionType::Sin(mapproj::zenithal::sin::Sin::new())), /* Orthographic */
"ZEA" => self.app.set_projection(ProjectionType::Zea(mapproj::zenithal::zea::Zea::new())), /* Equal-area */
"FEYE" => self.app.set_projection(ProjectionType::Feye(mapproj::zenithal::feye::Feye::new())),
"TAN" => self
.app
.set_projection(ProjectionType::Tan(mapproj::zenithal::tan::Tan::new())), /* Gnomonic projection */
"STG" => self
.app
.set_projection(ProjectionType::Stg(mapproj::zenithal::stg::Stg::new())), /* Stereographic projection */
"SIN" => self
.app
.set_projection(ProjectionType::Sin(mapproj::zenithal::sin::Sin::new())), /* Orthographic */
"ZEA" => self
.app
.set_projection(ProjectionType::Zea(mapproj::zenithal::zea::Zea::new())), /* Equal-area */
"FEYE" => self
.app
.set_projection(ProjectionType::Feye(mapproj::zenithal::feye::Feye::new())),
"AIR" => {
let air_proj = mapproj::zenithal::air::Air::new();
//air_proj.set_n_iter(10);
//air_proj.set_eps(1e-12);
self.app.set_projection(ProjectionType::Air(air_proj))
},
}
//"AZP",
"ARC" => self.app.set_projection(ProjectionType::Arc(mapproj::zenithal::arc::Arc::new())),
"NCP" => self.app.set_projection(ProjectionType::Ncp(mapproj::zenithal::ncp::Ncp::new())),
"ARC" => self
.app
.set_projection(ProjectionType::Arc(mapproj::zenithal::arc::Arc::new())),
"NCP" => self
.app
.set_projection(ProjectionType::Ncp(mapproj::zenithal::ncp::Ncp::new())),
// Cylindrical
"MER" => self.app.set_projection(ProjectionType::Mer(mapproj::cylindrical::mer::Mer::new())),
"CAR" => self.app.set_projection(ProjectionType::Car(mapproj::cylindrical::car::Car::new())),
"CEA" => self.app.set_projection(ProjectionType::Cea(mapproj::cylindrical::cea::Cea::new())),
"CYP" => self.app.set_projection(ProjectionType::Cyp(mapproj::cylindrical::cyp::Cyp::new())),
"MER" => self
.app
.set_projection(ProjectionType::Mer(mapproj::cylindrical::mer::Mer::new())),
"CAR" => self
.app
.set_projection(ProjectionType::Car(mapproj::cylindrical::car::Car::new())),
"CEA" => self
.app
.set_projection(ProjectionType::Cea(mapproj::cylindrical::cea::Cea::new())),
"CYP" => self
.app
.set_projection(ProjectionType::Cyp(mapproj::cylindrical::cyp::Cyp::new())),
// Pseudo-cylindrical
"AIT" => self.app.set_projection(ProjectionType::Ait(mapproj::pseudocyl::ait::Ait::new())),
"PAR" => self.app.set_projection(ProjectionType::Par(mapproj::pseudocyl::par::Par::new())),
"SFL" => self.app.set_projection(ProjectionType::Sfl(mapproj::pseudocyl::sfl::Sfl::new())),
"AIT" => self
.app
.set_projection(ProjectionType::Ait(mapproj::pseudocyl::ait::Ait::new())),
"PAR" => self
.app
.set_projection(ProjectionType::Par(mapproj::pseudocyl::par::Par::new())),
"SFL" => self
.app
.set_projection(ProjectionType::Sfl(mapproj::pseudocyl::sfl::Sfl::new())),
"MOL" => {
let mut mol_proj = mapproj::pseudocyl::mol::Mol::new();
mol_proj.set_n_iter(10);
mol_proj.set_epsilon(1e-12);
self.app.set_projection(ProjectionType::Mol(mol_proj))
},
// Conic
"COD" => self.app.set_projection(ProjectionType::Cod(mapproj::conic::cod::Cod::new())),
// Hybrid
"HPX" => self.app.set_projection(ProjectionType::Hpx(mapproj::hybrid::hpx::Hpx::new())),
_ => {
Err(JsValue::from_str("Not a valid projection name. AIT, ARC, SIN, TAN, MOL, HPX and MER are accepted"))
}
// Conic
"COD" => self
.app
.set_projection(ProjectionType::Cod(mapproj::conic::cod::Cod::new())),
// Hybrid
"HPX" => self
.app
.set_projection(ProjectionType::Hpx(mapproj::hybrid::hpx::Hpx::new())),
_ => Err(JsValue::from_str(
"Not a valid projection name. Accepted values: TAN, STG, SIN, ZEA, FEYE, AIR, ARC, NCP, MER, CAR, CEA, CYP, AIT, PAR, SFL, MOL, COD, HPX",
)),
}
}
@@ -349,7 +381,11 @@ impl WebClient {
}
#[wasm_bindgen(js_name = swapLayers)]
pub fn swap_layers(&mut self, first_layer: String, second_layer: String) -> Result<(), JsValue> {
pub fn swap_layers(
&mut self,
first_layer: String,
second_layer: String,
) -> Result<(), JsValue> {
// Deserialize the survey objects that compose the survey
self.app.swap_layers(&first_layer, &second_layer)
}
@@ -568,19 +604,30 @@ impl WebClient {
/// * `lat` - A latitude in degrees
#[wasm_bindgen(js_name = worldToScreen)]
pub fn world_to_screen(&self, lon: f64, lat: f64) -> Option<Box<[f64]>> {
self.app.world_to_screen(lon, lat)
self.app
.world_to_screen(lon, lat)
.map(|v| Box::new([v.x, v.y]) as Box<[f64]>)
}
#[wasm_bindgen(js_name = screenToClip)]
pub fn screen_to_clip(&self, x: f64, y: f64) -> Box<[f64]> {
let v = self.app.screen_to_clip(&Vector2::new(x, y));
Box::new([v.x, v.y]) as Box<[f64]>
}
#[wasm_bindgen(js_name = worldToScreenVec)]
pub fn world_to_screen_vec(&self, lon: &[f64], lat: &[f64]) -> Box<[f64]> {
let vertices = lon.iter()
let vertices = lon
.iter()
.zip(lat.iter())
.map(|(&lon, &lat)| {
let xy = self.app.world_to_screen(lon, lat)
let xy = self
.app
.world_to_screen(lon, lat)
.map(|v| [v.x, v.y])
.unwrap_or([0.0, 0.0]);
xy
})
.flatten()
@@ -590,9 +637,7 @@ impl WebClient {
}
#[wasm_bindgen(js_name = setCatalog)]
pub fn set_catalog(&self, catalog: &Catalog) {
}
pub fn set_catalog(&self, _catalog: &Catalog) {}
/// Screen to world unprojection
///
@@ -602,7 +647,8 @@ impl WebClient {
/// * `pos_y` - The y screen coordinate in pixels
#[wasm_bindgen(js_name = screenToWorld)]
pub fn screen_to_world(&self, pos_x: f64, pos_y: f64) -> Option<Box<[f64]>> {
self.app.screen_to_world(&Vector2::new(pos_x, pos_y))
self.app
.screen_to_world(&Vector2::new(pos_x, pos_y))
.map(|lonlat| {
let lon_deg: ArcDeg<f64> = lonlat.lon().into();
let lat_deg: ArcDeg<f64> = lonlat.lat().into();
@@ -629,6 +675,14 @@ impl WebClient {
Ok(())
}
/// Signal the backend that the mouse has moved.
#[wasm_bindgen(js_name = moveMouse)]
pub fn move_mouse(&mut self, s1x: f32, s1y: f32, s2x: f32, s2y: f32) -> Result<(), JsValue> {
self.app.move_mouse(s1x, s1y, s2x, s2y);
Ok(())
}
/// Add a catalog rendered as a heatmap.
///
/// # Arguments
@@ -745,7 +799,9 @@ impl WebClient {
/// in core/img/colormaps/colormaps.png
#[wasm_bindgen(js_name = getAvailableColormapList)]
pub fn get_available_colormap_list(&self) -> Result<Vec<JsValue>, JsValue> {
let colormaps = self.app.get_colormaps()
let colormaps = self
.app
.get_colormaps()
.get_list_available_colormaps()
.iter()
.map(|s| JsValue::from_str(s))
@@ -755,20 +811,24 @@ impl WebClient {
}
#[wasm_bindgen(js_name = createCustomColormap)]
pub fn add_custom_colormap(&mut self, label: String, hex_colors: Vec<JsValue>) -> Result<(), JsValue> {
pub fn add_custom_colormap(
&mut self,
label: String,
hex_colors: Vec<JsValue>,
) -> Result<(), JsValue> {
let rgba_colors: Result<Vec<_>, JsValue> = hex_colors
.into_iter()
.map(|hex_color| {
let hex_color = serde_wasm_bindgen::from_value(hex_color)?;
let color = Color::hexToRgba(hex_color);
let color_rgba: ColorRGBA = color.try_into()?;
Ok(colorgrad::Color::new(
color_rgba.r as f64,
color_rgba.g as f64,
color_rgba.b as f64,
color_rgba.a as f64)
)
color_rgba.a as f64,
))
})
.collect();
@@ -823,7 +883,11 @@ impl WebClient {
}
#[wasm_bindgen(js_name = addJSONMoc)]
pub fn add_json_moc(&mut self, params: &al_api::moc::MOC, data: &JsValue) -> Result<(), JsValue> {
pub fn add_json_moc(
&mut self,
params: &al_api::moc::MOC,
data: &JsValue,
) -> Result<(), JsValue> {
let str: String = js_sys::JSON::stringify(data)?.into();
let moc = moclib::deser::json::from_json_aladin::<u64, Hpx<u64>>(&str)
@@ -839,8 +903,9 @@ impl WebClient {
#[wasm_bindgen(js_name = parseVOTable)]
pub fn parse_votable(&mut self, s: &str) -> Result<JsValue, JsValue> {
let votable: VOTableWrapper<votable::impls::mem::InMemTableDataRows> = votable::votable::VOTableWrapper::from_ivoa_xml_str(s)
.map_err(|err| JsValue::from_str(&format!("Error parsing votable: {:?}", err)))?;
let votable: VOTableWrapper<votable::impls::mem::InMemTableDataRows> =
votable::votable::VOTableWrapper::from_ivoa_xml_str(s)
.map_err(|err| JsValue::from_str(&format!("Error parsing votable: {:?}", err)))?;
let votable = serde_wasm_bindgen::to_value(&votable)
.map_err(|_| JsValue::from_str("cannot convert votable to js type"))?;
@@ -851,11 +916,15 @@ impl WebClient {
#[wasm_bindgen(js_name = addFITSMoc)]
pub fn add_fits_moc(&mut self, params: &al_api::moc::MOC, data: &[u8]) -> Result<(), JsValue> {
//let bytes = js_sys::Uint8Array::new(array_buffer).to_vec();
let moc = match fits::from_fits_ivoa_custom(Cursor::new(&data[..]), false).map_err(|e| JsValue::from_str(&e.to_string()))? {
MocIdxType::U16(MocQtyType::<u16, _>::Hpx(moc)) => Ok(crate::downloader::request::moc::from_fits_hpx(moc)),
let moc = match fits::from_fits_ivoa_custom(Cursor::new(&data[..]), false)
.map_err(|e| JsValue::from_str(&e.to_string()))?
{
MocIdxType::U16(MocQtyType::<u16, _>::Hpx(moc)) => {
Ok(crate::downloader::request::moc::from_fits_hpx(moc))
}
MocIdxType::U32(MocQtyType::<u32, _>::Hpx(moc)) => Ok(from_fits_hpx(moc)),
MocIdxType::U64(MocQtyType::<u64, _>::Hpx(moc)) => Ok(from_fits_hpx(moc)),
_ => Err(JsValue::from_str("MOC not supported. Must be a HPX MOC"))
_ => Err(JsValue::from_str("MOC not supported. Must be a HPX MOC")),
}?;
self.app.add_moc(params.clone(), HEALPixCoverage(moc))?;
@@ -871,25 +940,31 @@ impl WebClient {
}
#[wasm_bindgen(js_name = setMocParams)]
pub fn set_moc_params(&mut self, params: &al_api::moc::MOC) -> Result<(), JsValue> {
self.app.set_moc_params(params.clone())?;
pub fn set_moc_cfg(&mut self, cfg: &al_api::moc::MOC) -> Result<(), JsValue> {
self.app.set_moc_cfg(cfg.clone())?;
Ok(())
}
#[wasm_bindgen(js_name = mocContains)]
pub fn moc_contains(&mut self, params: &al_api::moc::MOC, lon: f64, lat: f64) -> Result<bool, JsValue> {
let moc = self.app.get_moc(params).ok_or_else(|| JsValue::from(js_sys::Error::new("MOC not found")))?;
pub fn moc_contains(
&mut self,
_params: &al_api::moc::MOC,
_lon: f64,
_lat: f64,
) -> Result<bool, JsValue> {
/*let moc = self.app.get_moc(params).ok_or_else(|| JsValue::from(js_sys::Error::new("MOC not found")))?;
let location = LonLatT::new(ArcDeg(lon).into(), ArcDeg(lat).into());
Ok(moc.is_in(location.lon().0, location.lat().0))
Ok(moc.is_in(location.lon().0, location.lat().0))*/
Ok(false)
}
#[wasm_bindgen(js_name = mocSkyFraction)]
pub fn moc_sky_fraction(&mut self, params: &al_api::moc::MOC) -> Result<f32, JsValue> {
let moc = self.app.get_moc(params).ok_or_else(|| JsValue::from(js_sys::Error::new("MOC not found")))?;
pub fn moc_sky_fraction(&mut self, _params: &al_api::moc::MOC) -> Result<f32, JsValue> {
//let moc = self.app.get_moc(params).ok_or_else(|| JsValue::from(js_sys::Error::new("MOC not found")))?;
//Ok(moc.coverage_percentage() as f32)
Ok(moc.coverage_percentage() as f32)
Ok(0.0)
}
}

View File

@@ -1,143 +0,0 @@
use crate::math::angle::Angle;
use cgmath::Vector2;
use crate::ProjectionType;
use crate::CameraViewPort;
use cgmath::Zero;
use cgmath::InnerSpace;
use crate::math::angle::ToAngle;
pub fn project_along_longitudes_and_latitudes(
mut start_lon: f64,
mut start_lat: f64,
mut end_lon: f64,
mut end_lat: f64,
camera: &CameraViewPort,
projection: &ProjectionType
) -> Vec<Vector2<f64>> {
if start_lat >= end_lat {
std::mem::swap(&mut start_lat, &mut end_lat);
}
if start_lon >= end_lon {
std::mem::swap(&mut start_lon, &mut end_lon);
}
let num_point_max = if camera.is_allsky() {
12
} else {
let one_deg: Angle<f64> = ArcDeg(40.0).into();
if camera.get_aperture() < one_deg && !camera.contains_pole() {
2
} else {
6
}
};
let delta_lon = (end_lon - start_lon) / ((num_point_max - 1) as f64);
let delta_lat = (end_lat - start_lat) / ((num_point_max - 1) as f64);
let mut s_vert: Vec<Vector2<f64>> = vec![];
let mut start = true;
let mut prev = (0.0, 0.0, Vector2::zero());
for i in 0..num_point_max {
let (lon, lat) = (start_lon + (i as f64) * delta_lon, start_lat + (i as f64) * delta_lat);
if let Some(p) = crate::math::lonlat::proj(LonLatT::new(lon.to_angle(), lat.to_angle()), projection, camera) {
if start {
prev = (lon, lat, p);
start = false;
} else {
let cur = (lon, lat, p);
subdivide_along_longitude_and_latitudes(&mut s_vert, prev, cur, camera, projection, 0);
prev = cur;
}
} else if !start {
start = true;
}
}
s_vert
}
use crate::ArcDeg;
use crate::LonLatT;
const MAX_ANGLE_BEFORE_SUBDIVISION: Angle<f64> = Angle(0.10943951023); // 12 degrees
const MAX_ITERATION: usize = 3;
pub fn subdivide_along_longitude_and_latitudes(
vertices: &mut Vec<Vector2<f64>>,
(lon_s, lat_s, p_s): (f64, f64, Vector2<f64>),
(lon_e, lat_e, p_e): (f64, f64, Vector2<f64>),
camera: &CameraViewPort,
projection: &ProjectionType,
iter: usize,
) {
if iter > MAX_ITERATION {
vertices.push(p_s);
vertices.push(p_e);
return;
}
// Project them. We are always facing the camera
let lon_m = (lon_s + lon_e)*0.5;
let lat_m = (lat_s + lat_e)*0.5;
if let Some(p_m) = crate::math::lonlat::proj(LonLatT::new(lon_m.to_angle(), lat_m.to_angle()), projection, camera) {
let ab = p_m - p_s;
let bc = p_e - p_m;
let ab_l = ab.magnitude2();
let bc_l = bc.magnitude2();
if ab_l < 1e-5 || bc_l < 1e-5 {
return;
}
let ab = ab.normalize();
let bc = bc.normalize();
let theta = crate::math::vector::angle2(&ab, &bc);
let vectors_nearly_colinear = theta.abs() < MAX_ANGLE_BEFORE_SUBDIVISION;
if vectors_nearly_colinear {
// Check if ab and bc are colinear
if crate::math::vector::det(&ab, &bc).abs() < 1e-2 {
vertices.push(p_s);
vertices.push(p_e);
} else {
// not colinear
vertices.push(p_s);
vertices.push(p_m);
vertices.push(p_m);
vertices.push(p_e);
}
} else if ab_l.min(bc_l) / ab_l.max(bc_l) < 0.1 {
if ab_l < bc_l {
vertices.push(p_s);
vertices.push(p_m);
} else {
vertices.push(p_m);
vertices.push(p_e);
}
} else {
// Subdivide a->b and b->c
subdivide_along_longitude_and_latitudes(
vertices,
(lon_s, lat_s, p_s),
(lon_m, lat_m, p_m),
camera,
projection,
iter + 1
);
subdivide_along_longitude_and_latitudes(
vertices,
(lon_m, lat_m, p_m),
(lon_e, lat_e, p_e),
camera,
projection,
iter + 1
);
}
}
}

View File

@@ -31,10 +31,6 @@ where
LonLatT(lon, lat)
}
pub fn from_radians(lon: Rad<S>, lat: Rad<S>) -> LonLatT<S> {
LonLatT::new(lon.into(), lat.into())
}
#[inline]
pub fn lon(&self) -> Angle<S> {
self.0
@@ -45,8 +41,8 @@ where
self.1
}
pub fn vector<VectorT: LonLat<S>>(&self) -> VectorT {
VectorT::from_lonlat(self)
pub fn vector<T: LonLat<S>>(&self) -> T {
T::from_lonlat(self)
}
}
@@ -223,8 +219,9 @@ use crate::CameraViewPort;
use crate::ProjectionType;
use super::projection::coo_space::XYNDC;
use super::projection::coo_space::XYScreen;
#[inline]
pub fn proj(lonlat: LonLatT<f64>, projection: &ProjectionType, camera: &CameraViewPort) -> Option<XYNDC> {
pub fn proj(lonlat: &LonLatT<f64>, projection: &ProjectionType, camera: &CameraViewPort) -> Option<XYNDC> {
let xyzw = lonlat.vector();
projection.model_to_normalized_device_space(&xyzw, camera)
}
@@ -234,3 +231,15 @@ pub fn unproj(ndc_xy: &XYNDC, projection: &ProjectionType, camera: &CameraViewPo
projection.normalized_device_to_model_space(&ndc_xy, camera)
.map(|model_pos| model_pos.lonlat())
}
#[inline]
pub fn proj_to_screen(lonlat: &LonLatT<f64>, projection: &ProjectionType, camera: &CameraViewPort) -> Option<XYScreen> {
let xyzw = lonlat.vector();
projection.model_to_screen_space(&xyzw, camera)
}
#[inline]
pub fn unproj_from_screen(xy: &XYScreen, projection: &ProjectionType, camera: &CameraViewPort) -> Option<LonLatT<f64>> {
projection.screen_to_model_space(&xy, camera)
.map(|model_pos| model_pos.lonlat())
}
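// Hedged sketch, not part of the diff: round-tripping through the screen-space helpers
// above. When the position projects onto the screen, unprojecting the resulting pixel
// coordinates recovers a lon/lat close to the input, up to the numerical precision of
// the projection.
#[allow(dead_code)]
fn proj_roundtrip_sketch(
    lonlat: &LonLatT<f64>,
    projection: &ProjectionType,
    camera: &CameraViewPort,
) -> Option<LonLatT<f64>> {
    let xy = proj_to_screen(lonlat, projection, camera)?;
    unproj_from_screen(&xy, projection, camera)
}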

View File

@@ -1,13 +1,18 @@
pub const TWICE_PI: f64 = std::f64::consts::TAU;
pub const PI: f64 = std::f64::consts::PI;
pub const HALF_PI: f64 = std::f64::consts::PI * 0.5;
pub const MINUS_HALF_PI: f64 = -std::f64::consts::PI * 0.5;
pub const TWO_SQRT_TWO: f64 = 2.82842712475;
pub const SQRT_TWO: f64 = 1.41421356237;
pub const ZERO: f64 = 0.0;
pub mod angle;
pub mod lonlat;
pub mod projection;
pub mod rotation;
pub mod spherical;
pub mod sph_geom;
pub mod utils;
pub mod vector;

View File

@@ -9,4 +9,5 @@ pub type XYNDC = Vector2<f64>;
pub type XYClip = Vector2<f64>;
pub type XYZWorld = Vector3<f64>;
pub type XYZWWorld = Vector4<f64>;
pub type XYZWModel = Vector4<f64>;
pub type XYZWModel = Vector4<f64>;
pub type XYZModel = Vector3<f64>;

View File

@@ -8,27 +8,18 @@
// World space
use crate::camera::CameraViewPort;
use coo_space::XYZWModel;
use crate::domain::sdf::ProjDefType;
use crate::LonLatT;
use coo_space::XYZWModel;
//use crate::num_traits::FloatConst;
use crate::math::PI;
use crate::math::{
rotation::Rotation,
HALF_PI
};
use crate::math::{rotation::Rotation, HALF_PI};
use cgmath::Vector2;
pub mod coo_space;
pub mod domain;
use domain::{
full::FullScreen,
hpx::Hpx,
par::Par,
cod::Cod,
basic,
};
use domain::{basic, cod::Cod, full::FullScreen, hpx::Hpx, par::Par};
pub fn screen_to_ndc_space(
pos_screen_space: &Vector2<f64>,
@@ -130,7 +121,7 @@ pub enum ProjectionType {
Ncp(mapproj::zenithal::ncp::Ncp),
// Pseudo-cylindrical projections
/* AIT, Aitoff */
/* AIT, Aitoff */
Ait(mapproj::pseudocyl::ait::Ait),
// MOL, Mollweide */
Mol(mapproj::pseudocyl::mol::Mol),
@@ -241,9 +232,7 @@ impl ProjectionType {
camera: &CameraViewPort,
) -> Option<Vector2<f64>> {
self.view_to_normalized_device_space(pos_model_space, camera)
.map(|ndc_pos| {
crate::ndc_to_screen_space(&ndc_pos, camera)
})
.map(|ndc_pos| crate::ndc_to_screen_space(&ndc_pos, camera))
}
pub fn view_to_normalized_device_space(
@@ -251,7 +240,7 @@ impl ProjectionType {
pos_view_space: &Vector4<f64>,
camera: &CameraViewPort,
) -> Option<Vector2<f64>> {
let view_coosys = camera.get_system();
let view_coosys = camera.get_coo_system();
let c = CooSystem::ICRS.to::<f64>(view_coosys);
let m2w = camera.get_m2w();
@@ -264,7 +253,7 @@ impl ProjectionType {
pos_view_space: &Vector4<f64>,
camera: &CameraViewPort,
) -> Vector2<f64> {
let view_coosys = camera.get_system();
let view_coosys = camera.get_coo_system();
let c = CooSystem::ICRS.to::<f64>(view_coosys);
let m2w = camera.get_m2w();
@@ -282,6 +271,16 @@ impl ProjectionType {
self.world_to_normalized_device_space(&pos_world_space, camera)
}
pub fn model_to_clip_space(
&self,
pos_model_space: &XYZWModel,
camera: &CameraViewPort,
) -> Option<XYClip> {
let m2w = camera.get_m2w();
let pos_world_space = m2w * pos_model_space;
self.world_to_clip_space(&pos_world_space)
}
/// World to screen space projection
/// World to screen space transformation
@@ -346,6 +345,16 @@ impl ProjectionType {
.map(|pos_normalized_device| ndc_to_screen_space(&pos_normalized_device, camera))
}
pub(crate) fn is_allsky(&self) -> bool {
match self {
ProjectionType::Sin(_)
| ProjectionType::Tan(_)
| ProjectionType::Feye(_)
| ProjectionType::Ncp(_) => false,
_ => true,
}
}
pub fn bounds_size_ratio(&self) -> f64 {
match self {
// Zenithal projections
@@ -369,7 +378,7 @@ impl ProjectionType {
ProjectionType::Ncp(_) => 1.0,
// Pseudo-cylindrical projections
/* AIT, Aitoff */
/* AIT, Aitoff */
ProjectionType::Ait(_) => 2.0,
// MOL, Mollweide */
ProjectionType::Mol(_) => 2.0,
@@ -420,7 +429,7 @@ impl ProjectionType {
ProjectionType::Ncp(_) => 180.0,
// Pseudo-cylindrical projections
/* AIT, Aitoff */
/* AIT, Aitoff */
ProjectionType::Ait(_) => 360.0,
// MOL, Mollweide */
ProjectionType::Mol(_) => 360.0,
@@ -455,56 +464,56 @@ impl ProjectionType {
ProjectionType::Tan(_) => {
const FULL_SCREEN: ProjDefType = ProjDefType::FullScreen(FullScreen);
&FULL_SCREEN
},
}
/* STG, Stereographic projection */
ProjectionType::Stg(_) => {
const DISK: ProjDefType = ProjDefType::FullScreen(FullScreen);
&DISK
},
}
/* SIN, Orthographic */
ProjectionType::Sin(_) => {
const DISK: ProjDefType = ProjDefType::Disk(basic::disk::Disk { radius: 1.0 });
&DISK
},
}
/* ZEA, Equal-area */
ProjectionType::Zea(_) => {
const DISK: ProjDefType = ProjDefType::Disk(basic::disk::Disk { radius: 1.0 });
&DISK
},
}
/* FEYE, Fish-eyes */
ProjectionType::Feye(_) => {
const DISK: ProjDefType = ProjDefType::Disk(basic::disk::Disk { radius: 1.0 });
&DISK
},
}
/* AIR, */
ProjectionType::Air(_) => {
const DISK: ProjDefType = ProjDefType::FullScreen(FullScreen);
&DISK
},
}
//AZP: {fov: 180},
//Azp(mapproj::zenithal::azp::Azp),
/* ARC, */
ProjectionType::Arc(_) => {
const DISK: ProjDefType = ProjDefType::Disk(basic::disk::Disk { radius: 1.0 });
&DISK
},
}
/* NCP, */
ProjectionType::Ncp(_) => {
const DISK: ProjDefType = ProjDefType::Disk(basic::disk::Disk { radius: 1.0 });
&DISK
},
}
// Pseudo-cylindrical projections
/* AIT, Aitoff */
/* AIT, Aitoff */
ProjectionType::Ait(_) => {
const ELLIPSE: ProjDefType = ProjDefType::Disk(basic::disk::Disk { radius: 1.0 });
&ELLIPSE
},
}
// MOL, Mollweide */
ProjectionType::Mol(_) => {
const ELLIPSE: ProjDefType = ProjDefType::Disk(basic::disk::Disk { radius: 1.0 });
&ELLIPSE
},
}
// PAR, */
ProjectionType::Par(_) => {
const PAR: ProjDefType = ProjDefType::Par(Par);
@@ -521,22 +530,22 @@ impl ProjectionType {
ProjectionType::Mer(_) => {
const FULL_SCREEN: ProjDefType = ProjDefType::FullScreen(FullScreen);
&FULL_SCREEN
},
}
// CAR, */
ProjectionType::Car(_) => {
const FULL_SCREEN: ProjDefType = ProjDefType::FullScreen(FullScreen);
&FULL_SCREEN
},
}
// CEA, */
ProjectionType::Cea(_) => {
const FULL_SCREEN: ProjDefType = ProjDefType::FullScreen(FullScreen);
&FULL_SCREEN
},
}
// CYP, */
ProjectionType::Cyp(_) => {
const FULL_SCREEN: ProjDefType = ProjDefType::FullScreen(FullScreen);
&FULL_SCREEN
},
}
// Conic projections
// COD, */
@@ -578,7 +587,7 @@ impl Projection for ProjectionType {
ProjectionType::Ncp(ncp) => ncp.clip_to_world_space(xy),
// Pseudo-cylindrical projections
/* AIT, Aitoff */
/* AIT, Aitoff */
ProjectionType::Ait(ait) => ait.clip_to_world_space(xy),
// MOL, Mollweide */
ProjectionType::Mol(mol) => mol.clip_to_world_space(xy),
@@ -599,19 +608,18 @@ impl Projection for ProjectionType {
// Conic projections
// COD, */
ProjectionType::Cod(cod) => {
cod.clip_to_world_space(xy)
.map(|xyzw| {
let rot = Rotation::from_sky_position(&LonLatT::new(0.0_f64.to_angle(), (HALF_PI * 0.5).to_angle()).vector());
rot.inv_rotate(&xyzw)
})
},
ProjectionType::Cod(cod) => cod.clip_to_world_space(xy).map(|xyzw| {
let rot = Rotation::from_sky_position(
&LonLatT::new(0.0_f64.to_angle(), (HALF_PI * 0.5).to_angle()).vector(),
);
rot.inv_rotate(&xyzw)
}),
// HEALPix hybrid projection
ProjectionType::Hpx(hpx) => hpx.clip_to_world_space(xy),
}
}
// Projection
fn world_to_clip_space(&self, xyzw: &XYZWWorld) -> Option<XYClip> {
match self {
@@ -636,7 +644,7 @@ impl Projection for ProjectionType {
ProjectionType::Ncp(ncp) => ncp.world_to_clip_space(xyzw),
// Pseudo-cylindrical projections
/* AIT, Aitoff */
/* AIT, Aitoff */
ProjectionType::Ait(ait) => ait.world_to_clip_space(xyzw),
// MOL, Mollweide */
ProjectionType::Mol(mol) => mol.world_to_clip_space(xyzw),
@@ -658,9 +666,11 @@ impl Projection for ProjectionType {
// COD, */
ProjectionType::Cod(cod) => {
// The Cod projection is centered on (0, 45 deg)
let rot = Rotation::from_sky_position(&LonLatT::new(0.0_f64.to_angle(), (HALF_PI * 0.5).to_angle()).vector());
let rot = Rotation::from_sky_position(
&LonLatT::new(0.0_f64.to_angle(), (HALF_PI * 0.5).to_angle()).vector(),
);
cod.world_to_clip_space(&rot.rotate(&xyzw))
},
}
// HEALPix hybrid projection
ProjectionType::Hpx(hpx) => hpx.world_to_clip_space(xyzw),
}
@@ -692,7 +702,7 @@ use self::coo_space::XYNDC;
use super::angle::ToAngle;
impl<'a, P> Projection for &'a P
where
P: CanonicalProjection
P: CanonicalProjection,
{
/// Perform a clip to the world space deprojection
///
@@ -703,47 +713,29 @@ where
let proj_bounds = self.bounds();
// Scale the xy_clip space so that it maps the proj definition domain of mapproj
let xy_mapproj = {
let x_proj_bounds = proj_bounds.x_bounds()
.as_ref()
.unwrap_or(&(-PI..=PI));
let x_proj_bounds = proj_bounds.x_bounds().as_ref().unwrap_or(&(-PI..=PI));
let y_proj_bounds = proj_bounds.y_bounds()
.as_ref()
.unwrap_or(&(-PI..=PI));
let y_proj_bounds = proj_bounds.y_bounds().as_ref().unwrap_or(&(-PI..=PI));
let x_len = x_proj_bounds.end() - x_proj_bounds.start();
let y_len = y_proj_bounds.end() - y_proj_bounds.start();
let y_mean = (y_proj_bounds.end() + y_proj_bounds.start())*0.5;
let y_mean = (y_proj_bounds.end() + y_proj_bounds.start()) * 0.5;
let x_off = x_proj_bounds.start();
let y_off = y_proj_bounds.start();
ProjXY::new(
(xy_clip.x*0.5 + 0.5) * x_len + x_off,
(xy_clip.y*0.5 + 0.5) * y_len + y_off - y_mean,
(xy_clip.x * 0.5 + 0.5) * x_len + x_off,
(xy_clip.y * 0.5 + 0.5) * y_len + y_off - y_mean,
)
/*let x_len = x_proj_bounds.end().abs().max(x_proj_bounds.start().abs());
let y_len = y_proj_bounds.end().abs().max(y_proj_bounds.start().abs());
ProjXY::new(
xy_clip.x * x_len,
xy_clip.y * y_len,
)*/
};
self.unproj(&xy_mapproj)
.map(|xyz_mapproj| {
// Xmpp <-> Zal
// -Ympp <-> Xal
// Zmpp <-> Yal
Vector4::new(
-xyz_mapproj.y(),
xyz_mapproj.z(),
xyz_mapproj.x(),
1.0
)
})
self.unproj(&xy_mapproj).map(|xyz_mapproj| {
// Xmpp <-> Zal
// -Ympp <-> Xal
// Zmpp <-> Yal
Vector4::new(-xyz_mapproj.y(), xyz_mapproj.z(), xyz_mapproj.x(), 1.0)
})
}
/// World to the clipping space deprojection
///
@@ -760,31 +752,26 @@ where
pos_world_space.y,
);
self.proj(&xyz_mapproj)
.map(|xy_clip_mapproj| {
let proj_bounds = self.bounds();
// Scale the xy_clip space so that it maps the proj definition domain of mapproj
let x_proj_bounds = proj_bounds.x_bounds()
.as_ref()
.unwrap_or(&(-PI..=PI));
let y_proj_bounds = proj_bounds.y_bounds()
.as_ref()
.unwrap_or(&(-PI..=PI));
self.proj(&xyz_mapproj).map(|xy_clip_mapproj| {
let proj_bounds = self.bounds();
// Scale the xy_clip space so that it maps the proj definition domain of mapproj
let x_proj_bounds = proj_bounds.x_bounds().as_ref().unwrap_or(&(-PI..=PI));
let x_len = x_proj_bounds.end() - x_proj_bounds.start();
let y_len = y_proj_bounds.end() - y_proj_bounds.start();
let y_proj_bounds = proj_bounds.y_bounds().as_ref().unwrap_or(&(-PI..=PI));
let x_off = x_proj_bounds.start();
let y_off = y_proj_bounds.start();
let x_len = x_proj_bounds.end() - x_proj_bounds.start();
let y_len = y_proj_bounds.end() - y_proj_bounds.start();
let y_mean = (y_proj_bounds.end() + y_proj_bounds.start())*0.5;
let x_off = x_proj_bounds.start();
let y_off = y_proj_bounds.start();
XYClip::new(
((( xy_clip_mapproj.x() - x_off ) / x_len ) - 0.5 ) * 2.0,
((( xy_clip_mapproj.y() - y_off + y_mean ) / y_len ) - 0.5 ) * 2.0
)
})
let y_mean = (y_proj_bounds.end() + y_proj_bounds.start()) * 0.5;
XYClip::new(
(((xy_clip_mapproj.x() - x_off) / x_len) - 0.5) * 2.0,
(((xy_clip_mapproj.y() - y_off + y_mean) / y_len) - 0.5) * 2.0,
)
})
}
}
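// Hedged sketch, not part of the diff: the scaling used above maps the clip square
// [-1, 1] onto the mapproj definition domain [x_off, x_off + x_len] and back (the y
// axis gets an extra y_mean recentering):
//   x_mapproj = (x_clip * 0.5 + 0.5) * x_len + x_off
//   x_clip    = ((x_mapproj - x_off) / x_len - 0.5) * 2.0
#[allow(dead_code)]
fn clip_to_mapproj_x(x_clip: f64, x_off: f64, x_len: f64) -> f64 {
    (x_clip * 0.5 + 0.5) * x_len + x_off
}
#[allow(dead_code)]
fn mapproj_to_clip_x(x_mapproj: f64, x_off: f64, x_len: f64) -> f64 {
    ((x_mapproj - x_off) / x_len - 0.5) * 2.0
}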
@@ -825,28 +812,82 @@ mod tests {
}
// Zenithal
generate_projection_map("./../img/tan.png", ProjectionType::Tan(mapproj::zenithal::tan::Tan));
generate_projection_map("./../img/stg.png", ProjectionType::Stg(mapproj::zenithal::stg::Stg));
generate_projection_map("./../img/sin.png", ProjectionType::Sin(mapproj::zenithal::sin::Sin));
generate_projection_map("./../img/zea.png", ProjectionType::Zea(mapproj::zenithal::zea::Zea));
generate_projection_map("./../img/feye.png", ProjectionType::Feye(mapproj::zenithal::feye::Feye));
generate_projection_map("./../img/arc.png", ProjectionType::Arc(mapproj::zenithal::arc::Arc));
generate_projection_map("./../img/ncp.png", ProjectionType::Ncp(mapproj::zenithal::ncp::Ncp));
generate_projection_map("./../img/air.png", ProjectionType::Air(mapproj::zenithal::air::Air::new()));
generate_projection_map(
"./../img/tan.png",
ProjectionType::Tan(mapproj::zenithal::tan::Tan),
);
generate_projection_map(
"./../img/stg.png",
ProjectionType::Stg(mapproj::zenithal::stg::Stg),
);
generate_projection_map(
"./../img/sin.png",
ProjectionType::Sin(mapproj::zenithal::sin::Sin),
);
generate_projection_map(
"./../img/zea.png",
ProjectionType::Zea(mapproj::zenithal::zea::Zea),
);
generate_projection_map(
"./../img/feye.png",
ProjectionType::Feye(mapproj::zenithal::feye::Feye),
);
generate_projection_map(
"./../img/arc.png",
ProjectionType::Arc(mapproj::zenithal::arc::Arc),
);
generate_projection_map(
"./../img/ncp.png",
ProjectionType::Ncp(mapproj::zenithal::ncp::Ncp),
);
generate_projection_map(
"./../img/air.png",
ProjectionType::Air(mapproj::zenithal::air::Air::new()),
);
// Cylindrical
generate_projection_map("./../img/mer.png", ProjectionType::Mer(mapproj::cylindrical::mer::Mer));
generate_projection_map("./../img/car.png", ProjectionType::Car(mapproj::cylindrical::car::Car));
generate_projection_map("./../img/cea.png", ProjectionType::Cea(mapproj::cylindrical::cea::Cea::new()));
generate_projection_map("./../img/cyp.png", ProjectionType::Cyp(mapproj::cylindrical::cyp::Cyp::new()));
generate_projection_map(
"./../img/mer.png",
ProjectionType::Mer(mapproj::cylindrical::mer::Mer),
);
generate_projection_map(
"./../img/car.png",
ProjectionType::Car(mapproj::cylindrical::car::Car),
);
generate_projection_map(
"./../img/cea.png",
ProjectionType::Cea(mapproj::cylindrical::cea::Cea::new()),
);
generate_projection_map(
"./../img/cyp.png",
ProjectionType::Cyp(mapproj::cylindrical::cyp::Cyp::new()),
);
// Pseudo-cylindrical
generate_projection_map("./../img/mer.png", ProjectionType::Ait(mapproj::pseudocyl::ait::Ait));
generate_projection_map("./../img/car.png", ProjectionType::Par(mapproj::pseudocyl::par::Par));
generate_projection_map("./../img/cea.png", ProjectionType::Sfl(mapproj::pseudocyl::sfl::Sfl));
generate_projection_map("./../img/cyp.png", ProjectionType::Mol(mapproj::pseudocyl::mol::Mol::new()));
generate_projection_map(
"./../img/mer.png",
ProjectionType::Ait(mapproj::pseudocyl::ait::Ait),
);
generate_projection_map(
"./../img/car.png",
ProjectionType::Par(mapproj::pseudocyl::par::Par),
);
generate_projection_map(
"./../img/cea.png",
ProjectionType::Sfl(mapproj::pseudocyl::sfl::Sfl),
);
generate_projection_map(
"./../img/cyp.png",
ProjectionType::Mol(mapproj::pseudocyl::mol::Mol::new()),
);
// Conic
generate_projection_map("./../img/cod.png", ProjectionType::Cod(mapproj::conic::cod::Cod::new()));
generate_projection_map(
"./../img/cod.png",
ProjectionType::Cod(mapproj::conic::cod::Cod::new()),
);
// Hybrid
generate_projection_map("./../img/hpx.png", ProjectionType::Hpx(mapproj::hybrid::hpx::Hpx));
generate_projection_map(
"./../img/hpx.png",
ProjectionType::Hpx(mapproj::hybrid::hpx::Hpx),
);
}
}

View File

@@ -0,0 +1,155 @@
use super::super::{ZERO, PI, HALF_PI, TWICE_PI, MINUS_HALF_PI};
use crate::math::{sph_geom::region::PoleContained, lonlat::LonLatT};
pub const ALLSKY_BBOX: BoundingBox = BoundingBox {
lon: ZERO..TWICE_PI,
lat: MINUS_HALF_PI..HALF_PI,
};
use std::ops::Range;
#[derive(Debug)]
pub struct BoundingBox {
pub lon: Range<f64>,
pub lat: Range<f64>,
}
impl BoundingBox {
pub fn from_polygon(
pole_contained: &PoleContained,
mut lon: Vec<f64>,
lat: &[f64],
intersect_zero_meridian: bool,
) -> Self {
// The longitudes must be readjusted if the
// polygon crosses the 0deg meridian.
// We assume the polygon is not too big
// (i.e. it spans less than PI in longitude, so that it does not
// cross both the 0deg and 180deg meridians)
if intersect_zero_meridian {
lon = lon
.iter()
.map(|&lon| if lon > PI { lon - TWICE_PI } else { lon })
.collect();
}
let (lon, lat) = match pole_contained {
PoleContained::None => {
// The polygon does not contain any pole
// Meridian 0deg is not crossing the polygon
let (min_lat, max_lat) = lat
.iter()
.fold((std::f64::MAX, std::f64::MIN), |(min, max), &b| {
(min.min(b), max.max(b))
});
let (min_lon, max_lon) = lon
.iter()
.fold((std::f64::MAX, std::f64::MIN), |(min, max), &b| {
(min.min(b), max.max(b))
});
(min_lon..max_lon, min_lat..max_lat)
}
PoleContained::South => {
let max_lat = lat.iter().fold(std::f64::MIN, |a, b| a.max(*b));
(
if intersect_zero_meridian {
-PI..PI
} else {
ZERO..TWICE_PI
},
-HALF_PI..max_lat,
)
}
PoleContained::North => {
let min_lat = lat.iter().fold(std::f64::MAX, |a, b| a.min(*b));
(
if intersect_zero_meridian {
-PI..PI
} else {
ZERO..TWICE_PI
},
min_lat..HALF_PI,
)
}
PoleContained::Both => (
if intersect_zero_meridian {
-PI..PI
} else {
ZERO..TWICE_PI
},
-HALF_PI..HALF_PI,
),
};
BoundingBox { lon, lat }
}
#[inline]
pub fn get_lon_size(&self) -> f64 {
self.lon.end - self.lon.start
}
#[inline]
pub fn get_lat_size(&self) -> f64 {
self.lat.end - self.lat.start
}
#[inline]
pub fn all_lon(&self) -> bool {
(self.lon.end - self.lon.start) == TWICE_PI
}
#[inline]
pub fn lon_min(&self) -> f64 {
self.lon.start
}
#[inline]
pub fn lon_max(&self) -> f64 {
self.lon.end
}
#[inline]
pub fn lat_min(&self) -> f64 {
self.lat.start
}
#[inline]
pub fn lat_max(&self) -> f64 {
self.lat.end
}
#[inline]
pub fn get_lon(&self) -> Range<f64> {
self.lon.start..self.lon.end
}
#[inline]
pub fn get_lat(&self) -> Range<f64> {
self.lat.start..self.lat.end
}
#[inline]
pub fn contains_latitude(&self, lat: f64) -> bool {
self.lat.contains(&lat)
}
#[inline]
pub fn contains_meridian(&self, lon: f64) -> bool {
self.lon.contains(&lon)
}
#[inline]
pub fn contains_lonlat(&self, lonlat: &LonLatT<f64>) -> bool {
self.contains_meridian(lonlat.lon().to_radians()) && self.contains_latitude(lonlat.lat().to_radians())
}
#[inline]
pub const fn fullsky() -> Self {
BoundingBox {
lon: ZERO..TWICE_PI,
lat: MINUS_HALF_PI..HALF_PI,
}
}
}
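// Hedged usage sketch, not part of the diff: the full-sky bounding box spans the whole
// longitude range and the [-PI/2, PI/2] latitude range.
#[allow(dead_code)]
fn bbox_fullsky_sketch() {
    let bbox = BoundingBox::fullsky();
    assert!(bbox.all_lon());
    assert!((bbox.get_lat_size() - PI).abs() < 1e-12);
}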

View File

@@ -0,0 +1,70 @@
use crate::healpix::cell::MAX_HPX_DEPTH;
use crate::{healpix::cell::HEALPixCell, math::lonlat::LonLat};
use al_api::Abort;
use cgmath::BaseFloat;
use std::cmp::{Ord, Ordering};
#[derive(PartialEq, Eq)]
pub enum HEALPixBBox {
AllSky,
Cell(HEALPixCell),
}
impl PartialOrd for HEALPixBBox {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
match (self, other) {
(HEALPixBBox::AllSky, HEALPixBBox::AllSky) => Some(Ordering::Equal),
(HEALPixBBox::AllSky, HEALPixBBox::Cell(_)) => Some(Ordering::Greater),
(HEALPixBBox::Cell(_), HEALPixBBox::AllSky) => Some(Ordering::Less),
(HEALPixBBox::Cell(c1), HEALPixBBox::Cell(c2)) => c1.partial_cmp(c2),
}
}
}
impl Ord for HEALPixBBox {
fn cmp(&self, other: &Self) -> Ordering {
self.partial_cmp(other).unwrap_abort()
}
}
pub struct GreatCircleArc {
/// Smallest HEALPix cell containing the arc
hpx_bbox: HEALPixBBox,
}
impl GreatCircleArc {
pub fn new<S, T, U>(v1: T, v2: U) -> Self
where
S: BaseFloat,
T: LonLat<S>,
U: LonLat<S>,
{
// Compute the HPX bbox
let lonlat1 = v1.lonlat();
let lonlat2 = v2.lonlat();
let c1 = HEALPixCell::new(
MAX_HPX_DEPTH,
lonlat1.lon().to_radians().to_f64().unwrap_abort(),
lonlat1.lat().to_radians().to_f64().unwrap_abort(),
);
let c2 = HEALPixCell::new(
MAX_HPX_DEPTH,
lonlat2.lon().to_radians().to_f64().unwrap_abort(),
lonlat2.lat().to_radians().to_f64().unwrap_abort(),
);
let hpx_bbox = if let Some(common_ancestor) = c1.smallest_common_ancestor(&c2) {
HEALPixBBox::Cell(common_ancestor)
} else {
HEALPixBBox::AllSky
};
Self { hpx_bbox }
}
pub fn get_containing_hpx_cell(&self) -> &HEALPixBBox {
&self.hpx_bbox
}
}
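// Hedged sketch, not part of the diff: the ordering above sorts concrete cell bboxes
// before the AllSky bbox, so arcs spanning several base cells sink to the back when
// `IdxVec::from_great_circle_arc` sorts them (and are then dropped from the index).
#[allow(dead_code)]
fn hpx_bbox_order_sketch() {
    let small = HEALPixBBox::Cell(HEALPixCell(7, 0));
    assert!(small < HEALPixBBox::AllSky);
}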

View File

@@ -0,0 +1,61 @@
pub mod bbox;
pub mod region;
pub mod great_circle_arc;
use super::{PI, TWICE_PI};
#[inline]
pub fn is_in_lon_range(lon0: f64, lon1: f64, lon2: f64) -> bool {
// First version of the code:
// ((v2.lon() - v1.lon()).abs() > PI) != ((v2.lon() > coo.lon()) != (v1.lon() > coo.lon()))
//
// Let's note
// - lonA = v1.lon()
// - lonB = v2.lon()
// - lon0 = coo.lon()
// When (lonB - lonA).abs() <= PI
// => lonB > lon0 != lonA > lon0 like in PNPOLY
// A B lonA <= lon0 && lon0 < lonB
// --[++++[--
// B A lonB <= lon0 && lon0 < lonA
//
// But when (lonB - lonA).abs() > PI, then the test should be
// => lonA >= lon0 == lonB >= lon0
// <=> !(lonA >= lon0 != lonB >= lon0)
// A | B (lon0 < lonB) || (lonA <= lon0)
// --[++|++[--
// B | A (lon0 < lonA) || (lonB <= lon0)
//
// Instead of lonA > lon0 == lonB > lon0,
// i.e. !(lonA > lon0 != lonB > lon0).
// A | B (lon0 <= lonB) || (lonA < lon0)
// --]++|++]--
// B | A (lon0 <= lonA) || (lonB < lon0)
//
// So the previous code was buggy in this very specific case:
// - `lon0` has the same value as a vertex being part of:
// - one segment that does not cross RA=0
// - plus one segment crossing RA=0.
// - the point has an odd number of intersections with the polygon
// (since it would be counted 0 or 2 times instead of 1).
let dlon = lon2 - lon1;
if dlon < 0.0 {
(dlon >= -PI) == (lon2 <= lon0 && lon0 < lon1)
} else {
(dlon <= PI) == (lon1 <= lon0 && lon0 < lon2)
}
}
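// Hedged worked example, not part of the diff, for the wrap-around case described
// above: the segment going eastward from lon1 = 350deg to lon2 = 10deg crosses RA=0,
// so lon0 = 5deg lies inside it even though 5deg < 350deg, while lon0 = 180deg does not.
#[allow(dead_code)]
fn lon_range_wraparound_sketch() {
    let deg = PI / 180.0;
    assert!(is_in_lon_range(5.0 * deg, 350.0 * deg, 10.0 * deg));
    assert!(!is_in_lon_range(180.0 * deg, 350.0 * deg, 10.0 * deg));
}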
// Returns the eastward longitude distance from lon1 to lon2
// lon1 lies in [0; 2\pi[
// lon2 lies in [0; 2\pi[
#[inline]
pub fn distance_from_two_lon(lon1: f64, lon2: f64) -> f64 {
// lon1 > lon2 means the segment crosses the prime meridian
if lon1 > lon2 {
lon2 + TWICE_PI - lon1
} else {
lon2 - lon1
}
}

View File

@@ -0,0 +1,234 @@
use super::bbox::BoundingBox;
use crate::math::angle::ToAngle;
use crate::math::{lonlat::LonLatT, projection::coo_space::XYZWModel, MINUS_HALF_PI};
use cgmath::Vector3;
use healpix::sph_geom::coo3d::Vec3;
use healpix::sph_geom::coo3d::{Coo3D, UnitVect3};
use healpix::sph_geom::ContainsSouthPoleMethod;
use healpix::sph_geom::Polygon;
use mapproj::math::HALF_PI;
pub enum Region {
AllSky,
Polygon {
polygon: Polygon,
// A fast way to query if a position is contained
// is to check first the bounding box
bbox: BoundingBox,
// Some information about the poles
poles: PoleContained,
is_intersecting_zero_meridian: bool,
},
}
#[derive(PartialEq, Eq, Clone, Copy, Debug)]
pub enum PoleContained {
None,
South,
North,
Both,
}
#[derive(Debug)]
pub enum Intersection {
// The segment is fully included into the region
Included,
// The segment does not intersect the region
Empty,
// The segment does intersect the region
Intersect { vertices: Box<[XYZWModel]> },
}
impl Region {
pub fn from_vertices(vertices: &[XYZWModel], control_point: &XYZWModel) -> Self {
let (vertices, (lon, lat)): (Vec<_>, (Vec<_>, Vec<_>)) = vertices
.iter()
.map(|v| {
let coo = healpix::sph_geom::coo3d::Coo3D::from_vec3(v.z, v.x, v.y);
let (lon, lat) = coo.lonlat();
(coo, (lon, lat))
})
.unzip();
let polygon = Polygon::new_custom_vec3(
vertices.into_boxed_slice(),
&ContainsSouthPoleMethod::ControlPointIn(Coo3D::from_vec3(
control_point.z,
control_point.x,
control_point.y,
)),
);
let north_pole_coo = &Coo3D::from_sph_coo(0.0, HALF_PI);
let south_pole_coo = &Coo3D::from_sph_coo(0.0, -HALF_PI);
let north_pole_contained = polygon.contains(north_pole_coo);
let south_pole_contained = polygon.contains(south_pole_coo);
let poles = match (south_pole_contained, north_pole_contained) {
(false, false) => PoleContained::None,
(false, true) => PoleContained::North,
(true, false) => PoleContained::South,
(true, true) => PoleContained::Both,
};
// The arc length must be < PI, so we create an arc from [(0, -PI/2); (0, PI/2)[
// see the cdshealpix doc:
// https://docs.rs/cdshealpix/latest/cdshealpix/sph_geom/struct.Polygon.html#method.intersect_great_circle_arc
let is_intersecting_zero_meridian = polygon.is_intersecting_great_circle_arc(
&Coo3D::from_sph_coo(0.0, -HALF_PI),
&Coo3D::from_sph_coo(0.0, HALF_PI - 1e-6),
);
let bbox = BoundingBox::from_polygon(&poles, lon, &lat, is_intersecting_zero_meridian);
// Allsky case
Region::Polygon {
polygon,
bbox,
poles,
is_intersecting_zero_meridian,
}
}
pub fn intersects_parallel(&self, lat: f64) -> Intersection {
if lat == 0.0 {
self.intersects_great_circle(&Vector3::unit_y())
} else {
match self {
// The polygon is included inside the region
Region::AllSky => Intersection::Included,
Region::Polygon { polygon, .. } => {
let vertices = polygon
.intersect_parallel(lat)
.iter()
.map(|v| XYZWModel::new(v.y(), v.z(), v.x(), 1.0))
.collect::<Vec<_>>();
if !vertices.is_empty() {
Intersection::Intersect {
vertices: vertices.into_boxed_slice(),
}
// test whether a point on the parallel is included
} else if self.contains(&LonLatT::new(0.0.to_angle(), lat.to_angle())) {
Intersection::Included
} else {
Intersection::Empty
}
}
}
}
}
pub fn intersects_great_circle_arc(
&self,
lonlat1: &LonLatT<f64>,
lonlat2: &LonLatT<f64>,
) -> Intersection {
match self {
// The polygon is included inside the region
Region::AllSky => Intersection::Included,
Region::Polygon { polygon, .. } => {
let coo1 =
Coo3D::from_sph_coo(lonlat1.lon().to_radians(), lonlat1.lat().to_radians());
let coo2 =
Coo3D::from_sph_coo(lonlat2.lon().to_radians(), lonlat2.lat().to_radians());
let vertices: Vec<cgmath::Vector4<f64>> = polygon
.intersect_great_circle_arc(&coo1, &coo2)
.iter()
.map(|v| XYZWModel::new(v.y(), v.z(), v.x(), 1.0))
.collect::<Vec<_>>();
if !vertices.is_empty() {
Intersection::Intersect {
vertices: vertices.into_boxed_slice(),
}
// no intersection found: test whether an endpoint of the arc is inside the region
} else if self.contains(lonlat1) {
Intersection::Included
} else {
Intersection::Empty
}
}
}
}
pub fn intersects_meridian(&self, lon: f64) -> Intersection {
let n_pole_lonlat = LonLatT::new(lon.to_angle(), (HALF_PI - 1e-4).to_angle());
let s_pole_lonlat = LonLatT::new(lon.to_angle(), (MINUS_HALF_PI + 1e-4).to_angle());
self.intersects_great_circle_arc(&s_pole_lonlat, &n_pole_lonlat)
}
pub fn intersects_great_circle(&self, n: &Vector3<f64>) -> Intersection {
match self {
// The polygon is included inside the region
Region::AllSky => Intersection::Included,
Region::Polygon { polygon, .. } => {
let vertices: Vec<cgmath::Vector4<f64>> = polygon
.intersect_great_circle(&UnitVect3::new_unsafe(n.z, n.x, n.y))
.iter()
.map(|v| XYZWModel::new(v.y(), v.z(), v.x(), 1.0))
.collect::<Vec<_>>();
// Classify the result by the number of intersection vertices found
match vertices.len() {
0 => Intersection::Empty,
1 => Intersection::Included,
_ => Intersection::Intersect {
vertices: vertices.into_boxed_slice(),
},
}
}
}
}
pub fn contains(&self, lonlat: &LonLatT<f64>) -> bool {
match self {
Region::AllSky => true,
Region::Polygon { polygon, bbox, .. } => {
// Fast checking with the bbox
if !bbox.contains_lonlat(&lonlat) {
return false;
}
let coo = Coo3D::from_sph_coo(lonlat.lon().to_radians(), lonlat.lat().to_radians());
polygon.contains(&coo)
}
}
}
// Is intersecting API
pub fn is_intersecting_parallel(&self, lat: f64) -> bool {
match self {
// The polygon is included inside the region
Region::AllSky => true,
Region::Polygon { polygon, .. } => polygon.is_intersecting_parallel(lat),
}
}
pub fn is_intersecting_great_circle_arc(
&self,
lonlat1: &LonLatT<f64>,
lonlat2: &LonLatT<f64>,
) -> bool {
match self {
Region::AllSky => true,
Region::Polygon { polygon, .. } => {
let coo1 =
Coo3D::from_sph_coo(lonlat1.lon().to_radians(), lonlat1.lat().to_radians());
let coo2 =
Coo3D::from_sph_coo(lonlat2.lon().to_radians(), lonlat2.lat().to_radians());
polygon.is_intersecting_great_circle_arc(&coo1, &coo2)
}
}
}
pub fn is_intersecting_meridian(&self, lon: f64) -> bool {
let n_pole_lonlat = LonLatT::new(HALF_PI.to_angle(), lon.to_angle());
let s_pole_lonlat = LonLatT::new(MINUS_HALF_PI.to_angle(), lon.to_angle());
self.is_intersecting_great_circle_arc(&s_pole_lonlat, &n_pole_lonlat)
}
}
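// Hedged usage sketch, not part of the diff: the all-sky region reports any
// non-equatorial parallel as fully included (the equator itself goes through the
// great-circle branch above, which is also `Included` for the all-sky case).
#[allow(dead_code)]
fn region_allsky_sketch() {
    match Region::AllSky.intersects_parallel(0.5) {
        Intersection::Included => {}
        _ => unreachable!("the all-sky region contains every parallel"),
    }
}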

View File

@@ -1,354 +0,0 @@
use cgmath::Rad;
use cgmath::{Vector3, Vector4};
const PI: f64 = std::f64::consts::PI;
const ZERO: f64 = 0.0;
const TWICE_PI: f64 = std::f64::consts::PI * 2.0;
const HALF_PI: f64 = std::f64::consts::PI * 0.5;
const MINUS_HALF_PI: f64 = -std::f64::consts::PI * 0.5;
use crate::math::angle::Angle;
use crate::math::lonlat::LonLat;
use cdshealpix::sph_geom::{
coo3d::{Coo3D, Vec3},
ContainsSouthPoleMethod, Polygon,
};
pub enum FieldOfViewType {
Allsky,
Polygon {
poly: Polygon,
bbox: BoundingBox,
poles: PoleContained,
},
}
#[derive(PartialEq, Eq, Clone, Copy, Debug)]
pub enum PoleContained {
None,
South,
North,
Both,
}
use crate::math::lonlat::LonLatT;
//use cgmath::Vector2;
use crate::CameraViewPort;
impl FieldOfViewType {
pub fn new_polygon(vertices: &[Vector4<f64>], control_point: &Vector4<f64>) -> FieldOfViewType {
let (vertices, (lon, lat)): (Vec<_>, (Vec<_>, Vec<_>)) = vertices
.iter()
.map(|v| {
let coo = cdshealpix::sph_geom::coo3d::Coo3D::from_vec3(v.z, v.x, v.y);
let (lon, lat) = coo.lonlat();
(coo, (lon, lat))
})
.unzip();
let control_point = Coo3D::from_vec3(control_point.z, control_point.x, control_point.y);
let poly = Polygon::new_custom_vec3(
vertices.into_boxed_slice(),
&ContainsSouthPoleMethod::ControlPointIn(control_point),
);
let north_pole_coo = &Coo3D::from_sph_coo(0.0, HALF_PI);
let south_pole_coo = &Coo3D::from_sph_coo(0.0, -HALF_PI);
let north_pole_contained = poly.contains(north_pole_coo);
let south_pole_contained = poly.contains(south_pole_coo);
let poles = match (south_pole_contained, north_pole_contained) {
(false, false) => PoleContained::None,
(false, true) => PoleContained::North,
(true, false) => PoleContained::South,
(true, true) => PoleContained::Both,
};
// The arc length must be < PI, so we create an arc from [(0, -PI/2); (0, PI/2)[
// see the cdshealpix doc:
// https://docs.rs/cdshealpix/latest/cdshealpix/sph_geom/struct.Polygon.html#method.intersect_great_circle_arc
let poly_intersects_meridian = poly.is_intersecting_great_circle_arc(
&Coo3D::from_sph_coo(0.0, -HALF_PI),
&Coo3D::from_sph_coo(0.0, HALF_PI - 1e-6),
);
let bbox = BoundingBox::from_polygon(&poles, lon, &lat, poly_intersects_meridian);
FieldOfViewType::Polygon { poly, poles, bbox }
}
pub fn get_bounding_box(&self) -> &BoundingBox {
match self {
FieldOfViewType::Allsky => &ALLSKY_BBOX,
FieldOfViewType::Polygon { bbox, .. } => bbox,
}
}
pub fn intersect_meridian<LonT: Into<Rad<f64>>>(
&self,
lon: LonT,
camera: &CameraViewPort,
) -> Option<Vector3<f64>> {
let Rad::<f64>(lon) = lon.into();
match self {
FieldOfViewType::Allsky => {
// Allsky case
// We do an approx saying allsky fovs intersect all meridian
// but this is not true for example for the orthographic projection
// Some meridians may not be visible
let center = camera.get_center();
let pos: Vector3<f64> = LonLatT::new(Angle(lon), center.lat()).vector();
Some(pos)
}
FieldOfViewType::Polygon { poly, .. } => {
let lon = if lon < 0.0 { lon + TWICE_PI } else { lon };
// The arc length must be < PI, so we create an arc from [(lon, -PI/2); (lon, PI/2)[
// see the cdshealpix doc:
// https://docs.rs/cdshealpix/latest/cdshealpix/sph_geom/struct.Polygon.html#method.intersect_great_circle_arc
let a = Coo3D::from_sph_coo(lon, -HALF_PI);
let b = Coo3D::from_sph_coo(lon, HALF_PI - 1e-6);
// For those intersecting, perform the intersection
poly.intersect_great_circle_arc(&a, &b)
.map(|v| Vector3::new(v.y(), v.z(), v.x()))
.or_else(|| {
// If no intersection has been found, e.g. because the
// great circle is fully contained in the bounding box
let center = camera.get_center();
let pos: Vector3<f64> = LonLatT::new(Angle(lon), center.lat()).vector();
Some(pos)
})
}
}
}
pub fn intersect_parallel<LatT: Into<Rad<f64>>>(
&self,
lat: LatT,
camera: &CameraViewPort,
) -> Option<Vector3<f64>> {
let Rad::<f64>(lat) = lat.into();
match self {
FieldOfViewType::Allsky => {
let center = camera.get_center();
let pos: Vector3<f64> = LonLatT::new(center.lon(), Angle(lat)).vector();
Some(pos)
}
FieldOfViewType::Polygon { poly, bbox, .. } => {
// Prune parallels that do not intersect the fov
if bbox.contains_latitude(lat) {
// For those intersecting, perform the intersection
poly.intersect_parallel(lat)
.map(|v| Vector3::new(v.y(), v.z(), v.x()))
.or_else(|| {
// If no intersection has been found, e.g. because the
// great circle is fully contained in the bounding box
let center = camera.get_center();
let pos: Vector3<f64> = LonLatT::new(center.lon(), Angle(lat)).vector();
Some(pos)
})
} else {
None
}
}
}
}
pub fn is_allsky(&self) -> bool {
matches!(self, FieldOfViewType::Allsky)
}
pub fn contains_pole(&self) -> bool {
match self {
FieldOfViewType::Allsky => true,
FieldOfViewType::Polygon { poles, .. } => *poles != PoleContained::None,
}
}
pub fn contains_north_pole(&self) -> bool {
match self {
FieldOfViewType::Allsky => {
//let center = camera.get_center();
//center.y >= 0.0
true
}
FieldOfViewType::Polygon { poles, .. } => {
*poles == PoleContained::North || *poles == PoleContained::Both
}
}
}
pub fn contains_south_pole(&self) -> bool {
match self {
FieldOfViewType::Allsky => {
//let center = camera.get_center();
//center.y < 0.0
true
}
FieldOfViewType::Polygon { poles, .. } => {
*poles == PoleContained::South || *poles == PoleContained::Both
}
}
}
pub fn contains_both_poles(&self) -> bool {
match self {
FieldOfViewType::Allsky => {
true
}
FieldOfViewType::Polygon { poles, .. } => {
*poles == PoleContained::Both
}
}
}
}
const ALLSKY_BBOX: BoundingBox = BoundingBox {
lon: ZERO..TWICE_PI,
lat: MINUS_HALF_PI..HALF_PI,
};
use std::ops::Range;
#[derive(Debug)]
pub struct BoundingBox {
pub lon: Range<f64>,
pub lat: Range<f64>,
}
impl BoundingBox {
fn from_polygon(
pole_contained: &PoleContained,
mut lon: Vec<f64>,
lat: &[f64],
intersect_zero_meridian: bool,
) -> Self {
// The longitudes must be readjusted if the
// polygon crosses the 0deg meridian.
// We make the assumption that the polygon is not too big
// (i.e. < PI length in longitude, so that it does not
// cross both the 0 and 180deg meridians)
if intersect_zero_meridian {
lon = lon
.iter()
.map(|&lon| if lon > PI { lon - TWICE_PI } else { lon })
.collect();
}
let (lon, lat) = match pole_contained {
PoleContained::None => {
// The polygon does not contain any pole
// Meridian 0deg is not crossing the polygon
let (min_lat, max_lat) = lat
.iter()
.fold((std::f64::MAX, std::f64::MIN), |(min, max), &b| {
(min.min(b), max.max(b))
});
let (min_lon, max_lon) = lon
.iter()
.fold((std::f64::MAX, std::f64::MIN), |(min, max), &b| {
(min.min(b), max.max(b))
});
(min_lon..max_lon, min_lat..max_lat)
}
PoleContained::South => {
let max_lat = lat.iter().fold(std::f64::MIN, |a, b| a.max(*b));
(
if intersect_zero_meridian {
-PI..PI
} else {
ZERO..TWICE_PI
},
-HALF_PI..max_lat,
)
}
PoleContained::North => {
let min_lat = lat.iter().fold(std::f64::MAX, |a, b| a.min(*b));
(
if intersect_zero_meridian {
-PI..PI
} else {
ZERO..TWICE_PI
},
min_lat..HALF_PI,
)
}
PoleContained::Both => (
if intersect_zero_meridian {
-PI..PI
} else {
ZERO..TWICE_PI
},
-HALF_PI..HALF_PI,
),
};
BoundingBox { lon, lat }
}
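// A standalone sketch of the longitude readjustment performed above, assuming longitudes
// in radians in [0, 2*PI): when the polygon crosses the 0 deg meridian, values above PI
// are shifted by -2*PI so that taking min/max yields one tight contiguous range instead
// of an artificial box spanning almost the whole [0, 2*PI] interval.
fn remap_lon_across_zero(lon: &[f64]) -> Vec<f64> {
    use std::f64::consts::PI;
    lon.iter()
        .map(|&l| if l > PI { l - 2.0 * PI } else { l })
        .collect()
}
// e.g. [0.1, 6.2] rad (about 6 deg and 355 deg) becomes [0.1, -0.083],
// giving the tight range -0.083..0.1 rather than 0.1..6.2.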
#[inline]
pub fn get_lon_size(&self) -> f64 {
self.lon.end - self.lon.start
}
#[inline]
pub fn get_lat_size(&self) -> f64 {
self.lat.end - self.lat.start
}
#[inline]
pub fn all_lon(&self) -> bool {
(self.lon.end - self.lon.start) == TWICE_PI
}
#[inline]
pub fn lon_min(&self) -> f64 {
self.lon.start
}
#[inline]
pub fn lon_max(&self) -> f64 {
self.lon.end
}
#[inline]
pub fn lat_min(&self) -> f64 {
self.lat.start
}
#[inline]
pub fn lat_max(&self) -> f64 {
self.lat.end
}
#[inline]
pub fn get_lon(&self) -> Range<f64> {
self.lon.start..self.lon.end
}
#[inline]
pub fn get_lat(&self) -> Range<f64> {
self.lat.start..self.lat.end
}
#[inline]
pub fn contains_latitude(&self, lat: f64) -> bool {
self.lat.contains(&lat)
}
#[inline]
pub fn contains_meridian(&self, lon: f64) -> bool {
self.lon.contains(&lon)
}
#[inline]
pub const fn fullsky() -> Self {
BoundingBox {
lon: ZERO..TWICE_PI,
lat: MINUS_HALF_PI..HALF_PI,
}
}
}

View File

@@ -1,3 +1,5 @@
use cgmath::BaseFloat;
#[inline]
pub fn asinc_positive(x: f64) -> f64 {
debug_assert!(x >= 0.0);
@@ -105,11 +107,11 @@ pub fn lambert_wm1(x: f32) -> f32 {
#[inline]
pub fn ccw_tri(a: &[f32; 2], b: &[f32; 2], c: &[f32; 2]) -> bool {
pub fn ccw_tri<S: BaseFloat>(a: &[S; 2], b: &[S; 2], c: &[S; 2]) -> bool {
// From: https://math.stackexchange.com/questions/1324179/how-to-tell-if-3-connected-points-are-connected-clockwise-or-counter-clockwise
// | x1, y1, 1 |
// | x2, y2, 1 | > 0 => the triangle is given in anticlockwise order
// | x3, y3, 1 |
a[0]*b[1] + a[1]*c[0] + b[0]*c[1] - c[0]*b[1] - c[1]*a[0] - b[0]*a[1] >= 0.0
a[0]*b[1] + a[1]*c[0] + b[0]*c[1] - c[0]*b[1] - c[1]*a[0] - b[0]*a[1] >= S::zero()
}
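// A self-contained version of the signed-area test above: the 3x3 determinant
// | x1 y1 1 ; x2 y2 1 ; x3 y3 1 | expands to the expression below, and a non-negative
// value means that (a, b, c) are given in counter-clockwise order (the degenerate,
// collinear case counts as CCW here, as in the generic version of the source).
fn ccw_tri_f64(a: [f64; 2], b: [f64; 2], c: [f64; 2]) -> bool {
    a[0] * b[1] + a[1] * c[0] + b[0] * c[1] - c[0] * b[1] - c[1] * a[0] - b[0] * a[1] >= 0.0
}
// e.g. ccw_tri_f64([0.0, 0.0], [1.0, 0.0], [0.0, 1.0]) is true,
// while swapping the last two vertices yields false.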

View File

@@ -13,20 +13,30 @@ pub fn angle3<S: BaseFloat>(x: &Vector3<S>, y: &cgmath::Vector3<S>) -> Angle<S>
}
#[inline]
pub fn dist2(a: &Vector2<f64>, b: &Vector2<f64>) -> f64 {
let dx = a.x - b.x;
let dy = a.y - b.y;
dx*dx + dy*dy
pub fn dist2<S>(a: &[S; 2], b: &[S; 2]) -> S
where
S: BaseFloat,
{
let dx = a[0] - b[0];
let dy = a[1] - b[1];
dx * dx + dy * dy
}
#[inline]
pub fn ccw_tri<S: BaseFloat>(a: &Vector2<S>, b: &Vector2<S>, c: &Vector2<S>) -> bool {
pub fn ccw_tri<'a, S, V>(a: V, b: V, c: V) -> bool
where
S: BaseFloat + 'a,
V: AsRef<[S; 2]>,
{
let a: &[S; 2] = a.as_ref();
let b: &[S; 2] = b.as_ref();
let c: &[S; 2] = c.as_ref();
// From: https://math.stackexchange.com/questions/1324179/how-to-tell-if-3-connected-points-are-connected-clockwise-or-counter-clockwise
// | x1, y1, 1 |
// | x2, y2, 1 | > 0 => the triangle is given in anticlockwise order
// | x3, y3, 1 |
a.x*b.y + a.y*c.x + b.x*c.y - c.x*b.y - c.y*a.x - b.x*a.y >= S::zero()
a[0] * b[1] + a[1] * c[0] + b[0] * c[1] - c[0] * b[1] - c[1] * a[0] - b[0] * a[1] >= S::zero()
}
#[inline]
@@ -55,6 +65,7 @@ impl NormedVector2 {
}
}
use std::ops::Deref;
impl Deref for NormedVector2 {
type Target = Vector2<f64>;
@@ -72,4 +83,4 @@ impl<'a> Mul<f64> for &'a NormedVector2 {
fn mul(self, rhs: f64) -> Self::Output {
self.0 * rhs
}
}
}

View File

@@ -1,97 +0,0 @@
use crate::healpix::cell::HEALPixCell;
use std::ops::Range;
pub struct SourceIndices(Box<[Range<u32>]>);
use super::source::Source;
impl SourceIndices {
pub fn new(sources: &[Source]) -> Self {
let mut healpix_idx: Box<[Option<Range<u32>>]> = vec![None; 196608].into_boxed_slice();
for (idx_source, s) in sources.iter().enumerate() {
let (lon, lat) = s.lonlat();
let idx = cdshealpix::nested::hash(7, lon as f64, lat as f64) as usize;
if let Some(ref mut healpix_idx) = &mut healpix_idx[idx] {
healpix_idx.end += 1;
} else {
healpix_idx[idx] = Some((idx_source as u32)..((idx_source + 1) as u32));
}
}
let mut idx_source = 0;
let healpix_idx = healpix_idx
.iter()
.map(|idx| {
if let Some(r) = idx {
idx_source = r.end;
r.start..r.end
} else {
idx_source..idx_source
}
})
.collect::<Vec<_>>();
SourceIndices(healpix_idx.into_boxed_slice())
}
pub fn get_source_indices(&self, cell: &HEALPixCell) -> Range<u32> {
let HEALPixCell(depth, idx) = *cell;
if depth <= 7 {
let off = 2 * (7 - depth);
let healpix_idx_start = (idx << off) as usize;
let healpix_idx_end = ((idx + 1) << off) as usize;
let idx_start_sources = self.0[healpix_idx_start].start;
let idx_end_sources = self.0[healpix_idx_end - 1].end;
idx_start_sources..idx_end_sources
} else {
// depth > 7
// Get the sources that are contained in parent cell of depth 7
let off = 2 * (depth - 7);
let idx_start = (idx >> off) as usize;
let idx_start_sources = self.0[idx_start].start;
let idx_end_sources = self.0[idx_start].end;
idx_start_sources..idx_end_sources
}
}
// Returns k sources from a cell having depth <= 7
pub fn get_k_sources<'a>(
&self,
sources: &'a [f32],
cell: &HEALPixCell,
k: usize,
offset: usize,
) -> &'a [f32] {
let HEALPixCell(depth, idx) = *cell;
debug_assert!(depth <= 7);
let off = 2 * (7 - depth);
let healpix_idx_start = (idx << off) as usize;
let healpix_idx_end = ((idx + 1) << off) as usize;
let idx_start_sources = self.0[healpix_idx_start].start as usize;
let idx_end_sources = self.0[healpix_idx_end - 1].end as usize;
let num_sources = idx_end_sources - idx_start_sources;
let idx_sources = if (num_sources - offset) > k {
(idx_start_sources + offset)..(idx_start_sources + offset + k)
} else {
idx_start_sources..idx_end_sources
};
let idx_f32 =
(idx_sources.start * Source::num_f32())..(idx_sources.end * Source::num_f32());
&sources[idx_f32]
}
}

View File

@@ -1,19 +1,18 @@
use super::source::Source;
use crate::ShaderManager;
use al_api::coo_system::CooSystem;
use al_api::resources::Resources;
use al_core::FrameBufferObject;
use al_core::{
Texture2D, VecData, VertexArrayObject, WebGlContext,
};
use al_core::Colormaps;
use al_core::colormap::Colormap;
use al_core::Colormaps;
use al_core::FrameBufferObject;
use al_core::{Texture2D, VecData, VertexArrayObject, WebGlContext};
use std::collections::HashMap;
use std::iter::FromIterator;
use web_sys::WebGl2RenderingContext;
use crate::ProjectionType;
use std::collections::HashMap;
use web_sys::WebGl2RenderingContext;
#[derive(Debug)]
pub enum Error {
@@ -163,11 +162,10 @@ impl Manager {
pub fn add_catalog<P: Projection>(
&mut self,
name: String,
sources: Box<[Source]>,
sources: Box<[LonLatT<f32>]>,
colormap: Colormap,
_shaders: &mut ShaderManager,
_camera: &CameraViewPort,
_view: &HEALPixCellsInView,
camera: &mut CameraViewPort,
proj: &ProjectionType,
) {
// Create the HashMap storing the source indices with respect to the
// HEALPix cell at depth 7 in which they are contained
@@ -176,6 +174,7 @@ impl Manager {
// Update the number of sources loaded
//self.num_sources += num_instances_in_catalog as usize;
self.catalogs.insert(name, catalog);
camera.register_view_frame(CooSystem::ICRS, proj);
// At this point the source memory can be deallocated:
// the sources have been copied to the GPU so we do not need them anymore
@@ -185,6 +184,18 @@ impl Manager {
// at depth 7
}
pub fn remove_catalog<P: Projection>(
&mut self,
name: String,
camera: &mut CameraViewPort,
proj: &ProjectionType,
) {
// Update the number of sources loaded
//self.num_sources += num_instances_in_catalog as usize;
self.catalogs.remove(&name);
camera.unregister_view_frame(CooSystem::ICRS, proj);
}
pub fn set_kernel_size(&mut self, camera: &CameraViewPort) {
let size = camera.get_screen_size();
self.kernel_size = Vector2::new(32.0 / size.x, 32.0 / size.y);
@@ -196,7 +207,7 @@ impl Manager {
})
}
pub fn update(&mut self, camera: &CameraViewPort, view: &HEALPixCellsInView) {
pub fn update(&mut self, camera: &mut CameraViewPort) {
// Render only the sources in the current field of view
// Cells that are of depth > 7 are not handled by the hashmap (limited to depth 7)
// For these cells, we draw all the sources lying in the ancestor cell of depth 7 containing
@@ -209,19 +220,11 @@ impl Manager {
catalog.update(cells);
}
} else {
let cells = Vec::from_iter(
view.get_cells()
.map(|&cell| {
let d = cell.depth();
if d > 7 {
cell.ancestor(d - 7)
} else {
cell
}
})
// This removes the duplicates if there are any
.collect::<HashSet<_>>(),
);
let depth = camera.get_tile_depth().min(7);
let cells: Vec<_> = camera
.get_hpx_cells(depth, CooSystem::ICRS)
.cloned()
.collect();
for catalog in self.catalogs.values_mut() {
catalog.update(&cells);
@@ -248,39 +251,38 @@ impl Manager {
}
}
use super::index::SourceIndices;
use crate::healpix::index_vector::IdxVec;
use crate::LonLatT;
pub struct Catalog {
colormap: Colormap,
num_instances: i32,
indices: SourceIndices,
index_vec: IdxVec,
alpha: f32,
strength: f32,
current_sources: Vec<f32>,
sources: Box<[f32]>,
lonlat: Box<[LonLatT<f32>]>,
vertex_array_object_catalog: VertexArrayObject,
}
use crate::healpix::cell::HEALPixCell;
use crate::{camera::CameraViewPort, math::projection::Projection, utils};
use al_core::SliceData;
use cgmath::Vector2;
use std::collections::HashSet;
const MAX_SOURCES_PER_CATALOG: f32 = 50000.0;
use crate::survey::view::HEALPixCellsInView;
use crate::Abort;
impl Catalog {
fn new<P: Projection>(
gl: &WebGlContext,
colormap: Colormap,
sources: Box<[Source]>,
mut lonlat: Box<[LonLatT<f32>]>,
) -> Catalog {
let alpha = 1_f32;
let strength = 1_f32;
let indices = SourceIndices::new(&sources);
let num_instances = sources.len() as i32;
let index_vec = IdxVec::from_coo(&mut lonlat);
let num_instances = lonlat.len() as i32;
let sources = unsafe { utils::transmute_boxed_slice(sources) };
//let sources = unsafe { utils::transmute_boxed_slice(sources) };
let vertex_array_object_catalog = {
#[cfg(feature = "webgl2")]
@@ -320,7 +322,7 @@ impl Catalog {
&[3],
&[0],
WebGl2RenderingContext::DYNAMIC_DRAW,
SliceData(sources.as_ref()),
SliceData(&[]),
)
// Set the element buffer
.add_element_buffer(
@@ -335,7 +337,7 @@ impl Catalog {
3,
"center",
WebGl2RenderingContext::DYNAMIC_DRAW,
SliceData(sources.as_ref()),
SliceData(&[]),
)
// Store the UV and the offsets of the billboard in a VBO
.add_array_buffer(
@@ -361,15 +363,13 @@ impl Catalog {
vao
};
let current_sources = vec![];
Self {
alpha,
strength,
colormap,
num_instances,
indices,
current_sources,
sources,
index_vec,
lonlat,
vertex_array_object_catalog,
}
@@ -391,7 +391,7 @@ impl Catalog {
let mut total_sources = 0;
for cell in cells {
let sources_idx = self.indices.get_source_indices(cell);
let sources_idx = self.index_vec.get_item_indices_inside_hpx_cell(cell);
total_sources += (sources_idx.end - sources_idx.start) as usize;
}
@@ -402,40 +402,43 @@ impl Catalog {
fn update(&mut self, cells: &[HEALPixCell]) {
let num_sources_in_fov = self.get_total_num_sources_in_fov(cells) as f32;
// reset the sources in the frame
self.current_sources.clear();
let mut sources: Vec<_> = vec![];
// depth < 7
for cell in cells {
let delta_depth = (7_i8 - cell.depth() as i8).max(0);
for c in cell.get_children_cells(delta_depth as u8) {
// Define the total number of sources being in this kernel depth tile
let sources_in_cell = self.indices.get_source_indices(&c);
let sources_in_cell = self.index_vec.get_item_indices_inside_hpx_cell(&c);
let num_sources_in_kernel_cell =
(sources_in_cell.end - sources_in_cell.start) as usize;
if num_sources_in_kernel_cell > 0 {
let num_sources = ((num_sources_in_kernel_cell as f32) / num_sources_in_fov)
* MAX_SOURCES_PER_CATALOG;
let num_sources = (((num_sources_in_kernel_cell as f32) / num_sources_in_fov)
* MAX_SOURCES_PER_CATALOG) as usize;
let sources =
self.indices
.get_k_sources(&self.sources, &c, num_sources as usize, 0);
self.current_sources.extend(sources);
let mut idx = self.index_vec.get_item_indices_inside_hpx_cell(&c);
if num_sources < idx.end - idx.start {
// use a selection of num_sources items
idx = idx.start..(idx.start + num_sources);
}
sources.extend(&self.lonlat[idx]);
}
}
}
//self.current_sources.shrink_to_fit();
self.num_instances = sources.len() as i32;
let sources = unsafe { utils::transmute_vec::<LonLatT<f32>, f32>(sources).unwrap() };
// Update the vertex buffer
self.num_instances = (self.current_sources.len() / Source::num_f32()) as i32;
#[cfg(feature = "webgl1")]
self.vertex_array_object_catalog
.bind_for_update()
.update_instanced_array("center", VecData(&self.current_sources));
.update_instanced_array("center", VecData(&sources));
#[cfg(feature = "webgl2")]
self.vertex_array_object_catalog
.bind_for_update()
.update_instanced_array("center", VecData(&self.current_sources));
.update_instanced_array("center", VecData(&sources));
}
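// A minimal sketch of the per-cell budget computed above, assuming a global cap
// (MAX_SOURCES_PER_CATALOG in the source) and counts taken per kernel HEALPix cell:
// each cell gets a share of the cap proportional to its fraction of the sources
// currently in the field of view, truncated to what the cell actually holds.
fn sources_to_draw(num_in_cell: usize, num_in_fov: usize, max_per_catalog: f32) -> usize {
    let budget = ((num_in_cell as f32 / num_in_fov as f32) * max_per_catalog) as usize;
    budget.min(num_in_cell)
}
// e.g. a cell holding 2_000 of the 200_000 sources in view, with a cap of 50_000,
// contributes 500 of its sources to the draw call.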
fn draw(
@@ -458,14 +461,28 @@ impl Catalog {
gl.clear(WebGl2RenderingContext::COLOR_BUFFER_BIT);
let shader = match projection {
ProjectionType::Sin(_) => crate::shader::get_shader(gl, shaders, "CatalogOrtVS", "CatalogOrtFS"),
ProjectionType::Ait(_) => crate::shader::get_shader(gl, shaders, "CatalogAitVS", "CatalogFS"),
ProjectionType::Mer(_) => crate::shader::get_shader(gl, shaders, "CatalogMerVS", "CatalogFS"),
ProjectionType::Mol(_) => crate::shader::get_shader(gl, shaders, "CatalogMolVS", "CatalogFS"),
ProjectionType::Arc(_) => crate::shader::get_shader(gl, shaders, "CatalogArcVS", "CatalogFS"),
ProjectionType::Tan(_) => crate::shader::get_shader(gl, shaders, "CatalogTanVS", "CatalogFS"),
ProjectionType::Hpx(_) => crate::shader::get_shader(gl, shaders, "CatalogHpxVS", "CatalogFS"),
_ => todo!()
ProjectionType::Sin(_) => {
crate::shader::get_shader(gl, shaders, "CatalogOrtVS", "CatalogOrtFS")
}
ProjectionType::Ait(_) => {
crate::shader::get_shader(gl, shaders, "CatalogAitVS", "CatalogFS")
}
ProjectionType::Mer(_) => {
crate::shader::get_shader(gl, shaders, "CatalogMerVS", "CatalogFS")
}
ProjectionType::Mol(_) => {
crate::shader::get_shader(gl, shaders, "CatalogMolVS", "CatalogFS")
}
ProjectionType::Arc(_) => {
crate::shader::get_shader(gl, shaders, "CatalogArcVS", "CatalogFS")
}
ProjectionType::Tan(_) => {
crate::shader::get_shader(gl, shaders, "CatalogTanVS", "CatalogFS")
}
ProjectionType::Hpx(_) => {
crate::shader::get_shader(gl, shaders, "CatalogHpxVS", "CatalogFS")
}
_ => todo!(),
}?;
let shader_bound = shader.bind(gl);
@@ -493,7 +510,12 @@ impl Catalog {
let size = camera.get_screen_size();
gl.viewport(0, 0, size.x as i32, size.y as i32);
let shader = crate::shader::get_shader(gl, shaders, "ColormapCatalogVS", "ColormapCatalogFS")?;
let shader = crate::shader::get_shader(
gl,
shaders,
"ColormapCatalogVS",
"ColormapCatalogFS",
)?;
//self.colormap.get_shader(gl, shaders);
let shaderbound = shader.bind(gl);
shaderbound
@@ -515,4 +537,3 @@ impl Catalog {
Ok(())
}
}

View File

@@ -1,5 +1,2 @@
mod manager;
pub use manager::{Catalog, Manager};
mod source;
pub use source::Source;
mod index;

View File

@@ -1,64 +0,0 @@
#[repr(C, packed)]
pub struct Source {
pub x: f32,
pub y: f32,
pub z: f32,
}
impl Source {
pub const fn num_f32() -> usize {
std::mem::size_of::<Self>() / std::mem::size_of::<f32>()
}
}
impl PartialEq for Source {
fn eq(&self, other: &Self) -> bool {
self.x == other.x && self.y == other.y && self.z == other.z
}
}
impl Eq for Source {}
impl Clone for Source {
fn clone(&self) -> Self {
Source { x: self.x, y: self.y, z: self.z }
}
}
use cgmath::Vector3;
use crate::math::{self, angle::Angle, lonlat::LonLat};
impl Source {
pub fn new(lon: Angle<f32>, lat: Angle<f32> /*, mag: f32*/) -> Source {
let world_pos = math::lonlat::radec_to_xyz(lon, lat);
let x = world_pos.x;
let y = world_pos.y;
let z = world_pos.z;
Source {
x,
y,
z,
//lon,
//lat,
//mag
}
}
pub fn lonlat(&self) -> (f32, f32) {
let lonlat = Vector3::new(self.x, self.y, self.z).lonlat();
(lonlat.lon().to_radians(), lonlat.lat().to_radians())
}
}
use crate::math::angle::ArcDeg;
impl From<&[f32]> for Source {
fn from(data: &[f32]) -> Source {
let lon = ArcDeg(data[0]).into();
let lat = ArcDeg(data[1]).into();
//let mag = data[3];
Source::new(lon, lat /*, mag*/)
}
}

View File

@@ -0,0 +1,296 @@
use crate::healpix::coverage::HEALPixCoverage;
use moclib::elem::cell::Cell;
use moclib::moc::range::CellAndNeighs;
use moclib::moc::RangeMOCIntoIterator;
use moclib::moc::RangeMOCIterator;
use crate::renderable::coverage::HEALPixCell;
use healpix::compass_point::{MainWind, Ordinal};
#[derive(Debug)]
pub(super) struct EdgeNeigs {
// Indices of the neighbors in the stack
pub neig_idx: Vec<usize>,
// Deepest depth (i.e. smallest cell) among the neighbor cells along this edge
pub max_depth_neig: u8,
}
#[derive(Debug)]
pub(super) struct NodeEdgeNeigs {
pub cell: HEALPixCell,
pub edge_neigs: [Option<EdgeNeigs>; 4],
}
impl PartialEq for NodeEdgeNeigs {
fn eq(&self, other: &Self) -> bool {
self.cell == other.cell
}
}
impl NodeEdgeNeigs {
pub(super) fn add_neig(&mut self, org: Ordinal, neig_idx: usize, neig_cell_depth: u8) {
let org_idx = org as u8 as usize;
if let Some(neigs) = &mut self.edge_neigs[org_idx] {
neigs.neig_idx.push(neig_idx);
neigs.max_depth_neig = neigs.max_depth_neig.max(neig_cell_depth);
} else {
self.edge_neigs[org_idx] = Some(EdgeNeigs {
neig_idx: vec![neig_idx],
max_depth_neig: neig_cell_depth,
});
}
}
pub(super) fn compute_n_seg(&self, side: Ordinal) -> u32 {
let mut delta_depth =
if let Some(edge_neigs) = self.edge_neigs[side as u8 as usize].as_ref() {
edge_neigs.max_depth_neig.max(self.cell.depth()) - self.cell.depth()
} else {
0
};
if self.cell.depth() + delta_depth < 3 {
delta_depth = 3 - self.cell.depth();
}
if self.cell.depth() >= 6 {
delta_depth = 0
}
1 << delta_depth
}
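// A standalone sketch of the rule above, assuming `depth` is the cell's depth and
// `max_neig_depth` the deepest neighbor depth along that edge (None when the edge has
// no neig): the edge is split into 2^delta segments so its vertices line up with the
// finer neighbor, with a floor forcing depth-3 sampling for very large cells and no
// subdivision at all from depth 6 onwards.
fn n_segments(depth: u8, max_neig_depth: Option<u8>) -> u32 {
    let mut delta = max_neig_depth.map(|d| d.max(depth) - depth).unwrap_or(0);
    if depth + delta < 3 {
        delta = 3 - depth;
    }
    if depth >= 6 {
        delta = 0;
    }
    1 << delta
}
// e.g. a depth-2 cell next to a depth-5 cell gets 1 << 3 = 8 segments per edge,
// while any cell of depth >= 6 keeps a single straight segment.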
pub(super) fn compute_n_seg_with_neig_info(
&self,
neig: &Self,
side: Ordinal,
side_neig: Ordinal,
) -> u32 {
let mut delta_depth = if self.cell.depth() > 6 {
0
} else {
if let (Some(edge_neigs), Some(edge_self)) = (
self.edge_neigs[side as u8 as usize].as_ref(),
neig.edge_neigs[side_neig as u8 as usize].as_ref(),
) {
edge_neigs
.max_depth_neig
.max(edge_self.max_depth_neig)
.max(self.cell.depth())
- self.cell.depth()
} else {
0
}
};
if self.cell.depth() + delta_depth < 3 {
delta_depth = 3 - self.cell.depth();
}
1 << delta_depth
}
}
pub(super) struct G {
nodes: Vec<NodeEdgeNeigs>,
}
impl G {
pub(super) fn new(moc: &HEALPixCoverage) -> Self {
let mut nodes: Vec<_> = (&moc.0)
.into_range_moc_iter()
.cells()
.map(|cell| {
let cell = HEALPixCell(cell.depth, cell.idx);
NodeEdgeNeigs {
cell,
edge_neigs: [None, None, None, None],
}
})
.collect();
let find_cell_node_idx = |nodes: &[NodeEdgeNeigs], cell: &Cell<u64>| -> usize {
let hpx_cell = HEALPixCell(cell.depth, cell.idx);
let result = nodes.binary_search_by(|n| n.cell.cmp(&hpx_cell));
match result {
Ok(i) => i,
Err(_) => unreachable!(),
}
};
// 1. Build the MOC graph structure
for cell_and_neig in moc.0.all_cells_with_unidirectional_neigs() {
let CellAndNeighs { cell, neigs } = cell_and_neig;
// Cells are given in uniq order, so big cells come first and smaller cells after.
// Neighbor information is also given from small cells towards bigger cells.
// Thus we are sure the neighbor has already been processed, as it is either a bigger cell
// or a cell of equal order with a smaller idx
let small_node_idx = find_cell_node_idx(&nodes, &cell);
if let Some(&nw_neig_cell_idx) = neigs.get(Ordinal::NW) {
//al_core::log("nw neig");
let nw_neig_cell_d = nodes[nw_neig_cell_idx].cell.depth();
debug_assert!(nw_neig_cell_d <= cell.depth);
if let Some(dir) =
find_neig_dir(nodes[nw_neig_cell_idx].cell, nodes[small_node_idx].cell)
{
nodes[nw_neig_cell_idx].add_neig(dir, small_node_idx, cell.depth);
}
// Add the neig info from the big to the small node
//nodes[nw_neig_cell_idx].add_neig(Ordinal::SE, small_node_idx, cell.depth);
// Add the neig info from the small to the big node
nodes[small_node_idx].add_neig(Ordinal::NW, nw_neig_cell_idx, nw_neig_cell_d);
}
if let Some(&ne_neig_cell_idx) = neigs.get(Ordinal::NE) {
//al_core::log("ne neig");
let ne_neig_cell_d = nodes[ne_neig_cell_idx].cell.depth();
debug_assert!(ne_neig_cell_d <= cell.depth);
if let Some(dir) =
find_neig_dir(nodes[ne_neig_cell_idx].cell, nodes[small_node_idx].cell)
{
nodes[ne_neig_cell_idx].add_neig(dir, small_node_idx, cell.depth);
}
// Add the neig info from the big to the small node
//nodes[ne_neig_cell_idx].add_neig(Ordinal::SW, small_node_idx, cell.depth);
// Add the neig info from the small to the big node
nodes[small_node_idx].add_neig(Ordinal::NE, ne_neig_cell_idx, ne_neig_cell_d);
}
if let Some(&se_neig_cell_idx) = neigs.get(Ordinal::SE) {
//al_core::log("se neig");
let se_neig_cell_d = nodes[se_neig_cell_idx].cell.depth();
debug_assert!(se_neig_cell_d <= cell.depth);
if let Some(dir) =
find_neig_dir(nodes[se_neig_cell_idx].cell, nodes[small_node_idx].cell)
{
nodes[se_neig_cell_idx].add_neig(dir, small_node_idx, cell.depth);
}
// Add the neig info from the big to the small node
//nodes[se_neig_cell_idx].add_neig(Ordinal::NW, small_node_idx, cell.depth);
// Add the neig info from the small to the big node
nodes[small_node_idx].add_neig(Ordinal::SE, se_neig_cell_idx, se_neig_cell_d);
}
if let Some(&sw_neig_cell_idx) = neigs.get(Ordinal::SW) {
//al_core::log("sw neig");
let sw_neig_cell_d = nodes[sw_neig_cell_idx].cell.depth();
debug_assert!(sw_neig_cell_d <= cell.depth);
if let Some(dir) =
find_neig_dir(nodes[sw_neig_cell_idx].cell, nodes[small_node_idx].cell)
{
nodes[sw_neig_cell_idx].add_neig(dir, small_node_idx, cell.depth);
}
// Add the neig info from the big to the small node
//nodes[sw_neig_cell_idx].add_neig(Ordinal::NE, small_node_idx, cell.depth);
// Add the neig info from the small to the big node
nodes[small_node_idx].add_neig(Ordinal::SW, sw_neig_cell_idx, sw_neig_cell_d);
}
}
Self { nodes }
}
pub(super) fn get_neigs(
&self,
node: &NodeEdgeNeigs,
dir: Ordinal,
) -> Option<Vec<&NodeEdgeNeigs>> {
node.edge_neigs[dir as u8 as usize]
.as_ref()
.map(|edge| {
if !edge.neig_idx.is_empty() {
Some(
edge.neig_idx
.iter()
.map(|idx| &self.nodes[*idx])
.collect::<Vec<_>>(),
)
} else {
None
}
})
.flatten()
}
pub(super) fn get_neig_dir(
&self,
node: &NodeEdgeNeigs,
neig: &NodeEdgeNeigs,
) -> Option<Ordinal> {
if let Some(neigs) = self.get_neigs(node, Ordinal::NW) {
if let Some(_) = neigs.iter().find(|&&n| n == neig) {
return Some(Ordinal::NW);
}
}
if let Some(neigs) = self.get_neigs(node, Ordinal::SW) {
if let Some(_) = neigs.iter().find(|&&n| n == neig) {
return Some(Ordinal::SW);
}
}
if let Some(neigs) = self.get_neigs(node, Ordinal::SE) {
if let Some(_) = neigs.iter().find(|&&n| n == neig) {
return Some(Ordinal::SE);
}
}
if let Some(neigs) = self.get_neigs(node, Ordinal::NE) {
if let Some(_) = neigs.iter().find(|&&n| n == neig) {
return Some(Ordinal::NE);
}
}
None
}
pub(super) fn nodes_iter<'a>(&'a self) -> impl Iterator<Item = &'a NodeEdgeNeigs> {
self.nodes.iter()
}
pub(super) fn nodes(&self) -> &[NodeEdgeNeigs] {
&self.nodes[..]
}
}
fn find_neig_dir(mut cell: HEALPixCell, mut neig: HEALPixCell) -> Option<Ordinal> {
if cell.depth() > neig.depth() {
cell = cell.ancestor(cell.depth() - neig.depth());
} else if cell.depth() < neig.depth() {
neig = neig.ancestor(neig.depth() - cell.depth());
}
if let Some(nw) = cell.neighbor(MainWind::NW) {
if nw == neig {
return Some(Ordinal::NW);
}
}
if let Some(ne) = cell.neighbor(MainWind::NE) {
if ne == neig {
return Some(Ordinal::NE);
}
}
if let Some(sw) = cell.neighbor(MainWind::SW) {
if sw == neig {
return Some(Ordinal::SW);
}
}
if let Some(se) = cell.neighbor(MainWind::SE) {
if se == neig {
return Some(Ordinal::SE);
}
}
None
}

View File

@@ -0,0 +1,54 @@
use super::moc::MOC;
use crate::{camera::CameraViewPort, HEALPixCoverage};
use al_api::moc::MOC as Cfg;
pub struct MOCHierarchy {
full_res_depth: u8,
// MOCs at different resolutions
mocs: Vec<MOC>,
}
impl MOCHierarchy {
pub fn from_full_res_moc(full_res_moc: HEALPixCoverage, cfg: &Cfg) -> Self {
let full_res_depth = full_res_moc.depth();
let mut mocs: Vec<_> = (0..full_res_depth)
.map(|d| MOC::new(&HEALPixCoverage(full_res_moc.degraded(d)), cfg))
.collect();
mocs.push(MOC::new(&full_res_moc, cfg));
Self {
mocs,
full_res_depth,
}
}
pub fn select_moc_from_view(&mut self, camera: &mut CameraViewPort) -> &mut MOC {
const MAX_NUM_CELLS_TO_DRAW: usize = 1500;
let mut d = self.full_res_depth as usize;
while d > 5 {
self.mocs[d].cell_indices_in_view(camera);
let num_cells = self.mocs[d].num_cells_in_view(camera);
if num_cells < MAX_NUM_CELLS_TO_DRAW {
break;
}
d = d - 1;
}
self.mocs[d].cell_indices_in_view(camera);
&mut self.mocs[d]
}
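// A minimal sketch of the selection loop above, assuming one pre-degraded MOC per depth
// and a slice giving, for each depth, the number of cells currently in view: starting
// from the full-resolution MOC, coarser versions are tried until the count falls under
// the cap (MAX_NUM_CELLS_TO_DRAW in the source), never going below depth 5.
fn pick_depth(cells_in_view_per_depth: &[usize], full_res_depth: usize, max_cells: usize) -> usize {
    let mut d = full_res_depth;
    while d > 5 && cells_in_view_per_depth[d] >= max_cells {
        d -= 1;
    }
    d
}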
pub fn get_full_moc(&self) -> &MOC {
&self.mocs[self.full_res_depth as usize]
}
pub fn get_full_res_depth(&self) -> u8 {
self.full_res_depth
}
}

View File

@@ -0,0 +1,357 @@
use al_api::moc::MOC as Cfg;
use std::cmp::Ordering;
use std::ops::Range;
use std::vec;
use crate::camera::CameraViewPort;
use crate::healpix::cell::CellVertices;
use crate::healpix::coverage::HEALPixCoverage;
use crate::math::projection::ProjectionType;
use crate::renderable::coverage::mode::RenderMode;
use crate::renderable::coverage::Angle;
use crate::renderable::coverage::IdxVec;
use crate::renderable::line::PathVertices;
use crate::renderable::line::RasterizedLineRenderer;
use al_api::color::ColorRGBA;
use al_api::coo_system::CooSystem;
use super::mode::Node;
use cgmath::Vector2;
pub struct MOC([Option<MOCIntern>; 3]);
impl MOC {
pub(super) fn new(moc: &HEALPixCoverage, cfg: &Cfg) -> Self {
let mocs = [
if cfg.perimeter {
// draw only perimeter
Some(MOCIntern::new(
moc,
RenderModeType::Perimeter {
thickness: cfg.line_width,
color: cfg.color,
},
))
} else {
None
},
if cfg.filled {
// change color
let fill_color = cfg.fill_color;
// draw the edges
Some(MOCIntern::new(
moc,
RenderModeType::Filled { color: fill_color },
))
} else {
None
},
if cfg.edges {
Some(MOCIntern::new(
moc,
RenderModeType::Edge {
thickness: cfg.line_width,
color: cfg.color,
},
))
} else {
None
},
];
Self(mocs)
}
pub(super) fn cell_indices_in_view(&mut self, camera: &mut CameraViewPort) {
for render in &mut self.0 {
if let Some(render) = render.as_mut() {
render.cell_indices_in_view(camera);
}
}
}
pub(super) fn num_cells_in_view(&self, camera: &mut CameraViewPort) -> usize {
self.0
.iter()
.filter_map(|moc| moc.as_ref())
.map(|moc| moc.num_cells_in_view(camera))
.sum()
}
/*pub(super) fn num_vertices_in_view(&self, camera: &mut CameraViewPort) -> usize {
let mut num_vertices = 0;
for render in &self.0 {
if let Some(render) = render.as_ref() {
num_vertices += render.num_vertices_in_view(camera);
}
}
num_vertices
}*/
pub(super) fn draw(
&self,
camera: &mut CameraViewPort,
proj: &ProjectionType,
rasterizer: &mut RasterizedLineRenderer,
) {
for render in &self.0 {
if let Some(render) = render.as_ref() {
render.draw(camera, proj, rasterizer)
}
}
}
}
struct MOCIntern {
// HEALPix index vector
// Used for fast HEALPix cell retrieval
hpx_idx_vec: IdxVec,
// Node indices in view
indices: Vec<Range<usize>>,
nodes: Vec<Node>,
mode: RenderModeType,
}
#[derive(Clone)]
pub enum RenderModeType {
Perimeter { thickness: f32, color: ColorRGBA },
Edge { thickness: f32, color: ColorRGBA },
Filled { color: ColorRGBA },
}
impl MOCIntern {
fn new(moc: &HEALPixCoverage, mode: RenderModeType) -> Self {
let nodes = match mode {
RenderModeType::Edge { .. } => super::mode::edge::Edge::build(moc),
RenderModeType::Filled { .. } => super::mode::filled::Fill::build(moc),
RenderModeType::Perimeter { .. } => super::mode::perimeter::Perimeter::build(moc),
};
let hpx_idx_vec = IdxVec::from_hpx_cells(nodes.iter().map(|n| &n.cell));
Self {
nodes,
hpx_idx_vec,
indices: vec![],
mode,
}
}
fn cell_indices_in_view(&mut self, camera: &mut CameraViewPort) {
// Cache it for reuse several times during the same frame
let view_depth = camera.get_tile_depth();
let cells_iter = camera.get_hpx_cells(view_depth, CooSystem::ICRS);
if self.nodes.is_empty() {
self.indices = vec![0..0];
return;
}
let indices: Vec<_> = if view_depth > 7 {
// Binary-search version: this alternative is used for retrieving
// the MOC's cells to render for deep fields of view
let first_cell_rng = &self.nodes[0].cell.z_29_rng();
let last_cell_rng = &self.nodes[self.nodes.len() - 1].cell.z_29_rng();
cells_iter
.filter_map(|cell| {
let cell_rng = cell.z_29_rng();
// Quick rejection test
if cell_rng.end <= first_cell_rng.start || cell_rng.start >= last_cell_rng.end {
None
} else {
let contains_val = |hash_z29: u64| -> Result<usize, usize> {
self.nodes.binary_search_by(|node| {
let node_cell_rng = node.cell.z_29_rng();
if hash_z29 < node_cell_rng.start {
// the node cell range lies entirely after hash_z29
Ordering::Greater
} else if hash_z29 >= node_cell_rng.end {
Ordering::Less
} else {
Ordering::Equal
}
})
};
let start_idx = contains_val(cell_rng.start);
let end_idx = contains_val(cell_rng.end);
let cell_indices = match (start_idx, end_idx) {
(Ok(l), Ok(r)) => {
if l == r {
l..(r + 1)
} else {
l..r
}
}
(Err(l), Ok(r)) => l..r,
(Ok(l), Err(r)) => l..r,
(Err(l), Err(r)) => l..r,
};
Some(cell_indices)
}
})
.collect()
} else {
// Depth-7 index vector version
cells_iter
.map(|cell| self.hpx_idx_vec.get_item_indices_inside_hpx_cell(&cell))
.collect()
};
let indices = crate::utils::merge_overlapping_intervals(indices);
self.indices = indices;
}
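// A self-contained sketch of the deep-field lookup above, assuming each MOC node is keyed
// by a half-open range of depth-29 z-order indices and that nodes are sorted by that range:
// a binary search returns the node whose range contains a given index, or Err with the
// insertion point when no node covers it. The start and end of the view cell's own depth-29
// range are both looked up this way and combined into a node index range, as done in
// `cell_indices_in_view`.
fn find_covering_node(node_ranges: &[std::ops::Range<u64>], z29: u64) -> Result<usize, usize> {
    node_ranges.binary_search_by(|rng| {
        if z29 < rng.start {
            // this node lies entirely after the searched index
            std::cmp::Ordering::Greater
        } else if z29 >= rng.end {
            // this node lies entirely before the searched index
            std::cmp::Ordering::Less
        } else {
            std::cmp::Ordering::Equal
        }
    })
}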
/*fn num_vertices_in_view(&self, camera: &CameraViewPort) -> usize {
self.cells_in_view(camera)
.filter_map(|n| n.vertices.as_ref())
.map(|n_vertices| {
n_vertices
.vertices
.iter()
.map(|edge| edge.len())
.sum::<usize>()
})
.sum()
}*/
fn num_cells_in_view(&self, _camera: &CameraViewPort) -> usize {
self.indices
.iter()
.map(|range| range.end - range.start)
.sum()
}
fn cells_in_view<'a>(&'a self, _camera: &CameraViewPort) -> impl Iterator<Item = &'a Node> {
let nodes = &self.nodes;
self.indices
.iter()
.map(move |indices| nodes[indices.start..indices.end].iter())
.flatten()
}
fn vertices_in_view<'a>(
&'a self,
camera: &mut CameraViewPort,
_projection: &ProjectionType,
) -> impl Iterator<Item = &'a CellVertices> {
self.cells_in_view(camera)
.filter_map(move |node| node.vertices.as_ref())
}
fn draw(
&self,
camera: &mut CameraViewPort,
proj: &ProjectionType,
rasterizer: &mut RasterizedLineRenderer,
) {
// Determine if the view may lead to crossing edges/triangles
// This is dependent on the projection used
let crossing_edges_testing = if proj.is_allsky() {
let sky_percent_covered = camera.get_cov(CooSystem::ICRS).sky_fraction();
//al_core::info!("sky covered: ", sky_percent_covered);
sky_percent_covered > 0.80
} else {
// The projection is not allsky.
false
};
let camera_coosys = camera.get_coo_system();
let paths_iter = self
.vertices_in_view(camera, proj)
.filter_map(|cell_vertices| {
let vertices = &cell_vertices.vertices[..];
let mut ndc: Vec<[f32; 2]> = vec![];
for i in 0..vertices.len() {
let line_vertices = &vertices[i];
for k in 0..line_vertices.len() {
let (lon, lat) = line_vertices[k];
let xyzw = crate::math::lonlat::radec_to_xyzw(Angle(lon), Angle(lat));
let xyzw =
crate::coosys::apply_coo_system(CooSystem::ICRS, camera_coosys, &xyzw);
if let Some(p) = proj.model_to_normalized_device_space(&xyzw, camera) {
if ndc.len() > 0 && crossing_edges_testing {
let mag2 = crate::math::vector::dist2(
crate::math::projection::ndc_to_clip_space(&p, camera).as_ref(),
crate::math::projection::ndc_to_clip_space(
&Vector2::new(
ndc[ndc.len() - 1][0] as f64,
ndc[ndc.len() - 1][1] as f64,
),
camera,
)
.as_ref(),
);
//al_core::info!("mag", i, mag2);
if mag2 > 0.1 {
return None;
}
}
ndc.push([p.x as f32, p.y as f32]);
} else {
return None;
}
}
}
// Check the closing edge (last vertex back to the first)
if cell_vertices.closed && crossing_edges_testing {
let mag2 = crate::math::vector::dist2(
crate::math::projection::ndc_to_clip_space(
&Vector2::new(ndc[0][0] as f64, ndc[0][1] as f64),
camera,
)
.as_ref(),
crate::math::projection::ndc_to_clip_space(
&Vector2::new(
ndc[ndc.len() - 1][0] as f64,
ndc[ndc.len() - 1][1] as f64,
),
camera,
)
.as_ref(),
);
if mag2 > 0.1 {
return None;
}
}
Some(PathVertices {
vertices: ndc,
closed: cell_vertices.closed,
})
});
match self.mode {
RenderModeType::Perimeter { thickness, color }
| RenderModeType::Edge { thickness, color } => {
let thickness = (thickness + 0.5) * 2.0 / camera.get_width();
rasterizer.add_stroke_paths(
paths_iter,
thickness,
&color,
&super::line::Style::None,
);
}
RenderModeType::Filled { color } => rasterizer.add_fill_paths(paths_iter, &color),
}
}
}
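// A small sketch of the thickness conversion used in `draw` above, assuming the line width
// and the screen width are both expressed in pixels: normalized device coordinates span
// 2 units across the viewport, so a pixel measure is scaled by 2 / width (the +0.5
// half-pixel padding is kept from the source).
fn pixel_thickness_to_ndc(thickness_px: f32, screen_width_px: f32) -> f32 {
    (thickness_px + 0.5) * 2.0 / screen_width_px
}
// e.g. a 1.5 px line on a 1000 px wide canvas becomes 0.004 NDC units wide.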

View File

@@ -0,0 +1,314 @@
use crate::{
healpix::{cell::HEALPixCell, coverage::HEALPixCoverage, index_vector::IdxVec},
math::angle::Angle,
CameraViewPort, ShaderManager,
};
mod graph;
pub mod mode;
pub mod hierarchy;
pub mod moc;
use crate::renderable::line::RasterizedLineRenderer;
use wasm_bindgen::JsValue;
use hierarchy::MOCHierarchy;
use super::utils::Triangle;
use al_api::coo_system::CooSystem;
use al_api::moc::MOC as Cfg;
pub struct MOCRenderer {
mocs: Vec<MOCHierarchy>,
cfgs: Vec<Cfg>,
}
use cgmath::Vector2;
fn is_crossing_projection(
cell: &HEALPixCell,
camera: &CameraViewPort,
projection: &ProjectionType,
) -> bool {
let vertices = cell
.path_along_cell_edge(1)
.iter()
.filter_map(|(lon, lat)| {
let xyzw = crate::math::lonlat::radec_to_xyzw(Angle(*lon), Angle(*lat));
let xyzw =
crate::coosys::apply_coo_system(CooSystem::ICRS, camera.get_coo_system(), &xyzw);
projection
.model_to_normalized_device_space(&xyzw, camera)
.map(|v| [v.x as f32, v.y as f32])
})
.collect::<Vec<_>>();
let cell_inside = vertices.len() == 4;
if cell_inside {
let c0 = &vertices[0];
let c1 = &vertices[1];
let c2 = &vertices[2];
let c3 = &vertices[3];
let t0 = Triangle::new(c0, c1, c2);
let t2 = Triangle::new(c2, c3, c0);
t0.is_invalid(camera) || t2.is_invalid(camera)
} else {
true
}
}
use al_api::cell::HEALPixCellProjeted;
pub fn rasterize_hpx_cell(
cell: &HEALPixCell,
n_segment_by_side: usize,
camera: &CameraViewPort,
idx_off: &mut u32,
proj: &ProjectionType,
) -> Option<(Vec<f32>, Vec<u32>)> {
let n_vertices_per_segment = n_segment_by_side + 1;
let vertices = cell
.grid(n_segment_by_side as u32)
.iter()
.filter_map(|(lon, lat)| {
let xyzw = crate::math::lonlat::radec_to_xyzw(Angle(*lon), Angle(*lat));
let xyzw =
crate::coosys::apply_coo_system(CooSystem::ICRS, camera.get_coo_system(), &xyzw);
proj.model_to_normalized_device_space(&xyzw, camera)
.map(|v| [v.x as f32, v.y as f32])
})
.flatten()
.collect::<Vec<_>>();
let cell_inside = vertices.len() == 2 * (n_segment_by_side + 1) * (n_segment_by_side + 1);
if cell_inside {
// Generate the triangle indices covering the n_segment_by_side x n_segment_by_side grid of quads
let mut indices = Vec::with_capacity(n_segment_by_side * n_segment_by_side * 6);
let num_vertices = (n_segment_by_side + 1) * (n_segment_by_side + 1);
let longitude_reversed = camera.get_longitude_reversed();
let invalid_tri = |tri_ccw: bool, reversed_longitude: bool| -> bool {
(!reversed_longitude && !tri_ccw) || (reversed_longitude && tri_ccw)
};
for i in 0..n_segment_by_side {
for j in 0..n_segment_by_side {
let idx_0 = j + i * n_vertices_per_segment;
let idx_1 = j + 1 + i * n_vertices_per_segment;
let idx_2 = j + (i + 1) * n_vertices_per_segment;
let idx_3 = j + 1 + (i + 1) * n_vertices_per_segment;
let c0 = crate::math::projection::ndc_to_screen_space(
&Vector2::new(vertices[2 * idx_0] as f64, vertices[2 * idx_0 + 1] as f64),
camera,
);
let c1 = crate::math::projection::ndc_to_screen_space(
&Vector2::new(vertices[2 * idx_1] as f64, vertices[2 * idx_1 + 1] as f64),
camera,
);
let c2 = crate::math::projection::ndc_to_screen_space(
&Vector2::new(vertices[2 * idx_2] as f64, vertices[2 * idx_2 + 1] as f64),
camera,
);
let c3 = crate::math::projection::ndc_to_screen_space(
&Vector2::new(vertices[2 * idx_3] as f64, vertices[2 * idx_3 + 1] as f64),
camera,
);
let first_tri_ccw = !crate::math::vector::ccw_tri(&c0, &c1, &c2);
let second_tri_ccw = !crate::math::vector::ccw_tri(&c1, &c3, &c2);
if invalid_tri(first_tri_ccw, longitude_reversed)
|| invalid_tri(second_tri_ccw, longitude_reversed)
{
return None;
}
let vx = [c0.x, c1.x, c2.x, c3.x];
let vy = [c0.y, c1.y, c2.y, c3.y];
let projeted_cell = HEALPixCellProjeted {
ipix: cell.idx(),
vx,
vy,
};
crate::camera::view_hpx_cells::project(projeted_cell, camera, proj)?;
indices.push(*idx_off + idx_0 as u32);
indices.push(*idx_off + idx_1 as u32);
indices.push(*idx_off + idx_2 as u32);
indices.push(*idx_off + idx_1 as u32);
indices.push(*idx_off + idx_3 as u32);
indices.push(*idx_off + idx_2 as u32);
}
}
*idx_off += num_vertices as u32;
Some((vertices, indices))
} else {
None
}
}
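// A standalone sketch of the index layout generated above: an n x n grid of quads has
// (n + 1) x (n + 1) vertices stored row by row, and each quad is split into two triangles
// sharing its diagonal, following the same (idx_0, idx_1, idx_2) / (idx_1, idx_3, idx_2)
// pattern as `rasterize_hpx_cell` (the running `idx_off` offset is omitted here).
fn grid_triangle_indices(n_segments: u32) -> Vec<u32> {
    let row = n_segments + 1;
    let mut indices = Vec::with_capacity((n_segments * n_segments * 6) as usize);
    for i in 0..n_segments {
        for j in 0..n_segments {
            let idx_0 = j + i * row;
            let idx_1 = j + 1 + i * row;
            let idx_2 = j + (i + 1) * row;
            let idx_3 = j + 1 + (i + 1) * row;
            indices.extend_from_slice(&[idx_0, idx_1, idx_2, idx_1, idx_3, idx_2]);
        }
    }
    indices
}
// For n_segments = 1 this yields [0, 1, 2, 1, 3, 2]: the two triangles of a single quad.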
use crate::ProjectionType;
use super::line;
impl MOCRenderer {
pub fn new() -> Result<Self, JsValue> {
// layout (location = 0) in vec2 ndc_pos;
//let vertices = vec![0.0; MAX_NUM_FLOATS_TO_DRAW];
//let indices = vec![0_u16; MAX_NUM_INDICES_TO_DRAW];
//let vertices = vec![];
/*let position = vec![];
let indices = vec![];
#[cfg(feature = "webgl2")]
vao.bind_for_update()
.add_array_buffer_single(
2,
"ndc_pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<f32>(&position),
)
// Set the element buffer
.add_element_buffer(
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<u32>(&indices),
)
.unbind();
#[cfg(feature = "webgl1")]
vao.bind_for_update()
.add_array_buffer(
2,
"ndc_pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<f32>(&position),
)
// Set the element buffer
.add_element_buffer(
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<u32>(&indices),
)
.unbind();
*/
let mocs = Vec::new();
let cfgs = Vec::new();
Ok(Self { mocs, cfgs })
}
pub fn push_back(
&mut self,
moc: HEALPixCoverage,
cfg: Cfg,
camera: &mut CameraViewPort,
proj: &ProjectionType,
) {
self.mocs.push(MOCHierarchy::from_full_res_moc(moc, &cfg));
self.cfgs.push(cfg);
camera.register_view_frame(CooSystem::ICRS, proj);
//self.layers.push(key);
}
pub fn remove(
&mut self,
cfg: &Cfg,
camera: &mut CameraViewPort,
proj: &ProjectionType,
) -> Option<Cfg> {
let name = cfg.get_uuid();
if let Some(idx) = self.cfgs.iter().position(|cfg| cfg.get_uuid() == name) {
self.mocs.remove(idx);
camera.unregister_view_frame(CooSystem::ICRS, proj);
Some(self.cfgs.remove(idx))
} else {
None
}
}
pub fn set_cfg(
&mut self,
cfg: Cfg,
camera: &mut CameraViewPort,
projection: &ProjectionType,
line_renderer: &mut RasterizedLineRenderer,
) -> Option<Cfg> {
let name = cfg.get_uuid();
if let Some(idx) = self.cfgs.iter().position(|cfg| cfg.get_uuid() == name) {
let old_cfg = self.cfgs[idx].clone();
self.cfgs[idx] = cfg;
self.update(camera, projection, line_renderer);
Some(old_cfg)
} else {
// the cfg has not been found
None
}
}
/*pub fn get(&self, cfg: &Cfg) -> Option<&HEALPixCoverage> {
let key = cfg.get_uuid();
self.mocs.get(key).map(|coverage| coverage.get_full_moc())
}*/
fn update(
&mut self,
camera: &mut CameraViewPort,
proj: &ProjectionType,
line_renderer: &mut RasterizedLineRenderer,
) {
for (hmoc, cfg) in self.mocs.iter_mut().zip(self.cfgs.iter()) {
if cfg.show {
let moc = hmoc.select_moc_from_view(camera);
moc.draw(camera, proj, line_renderer);
}
}
/*self.vao.bind_for_update()
.update_array(
"ndc_pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData(&self.position),
)
.update_element_array(
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<u32>(&self.indices),
);*/
}
pub fn is_empty(&self) -> bool {
self.cfgs.is_empty()
}
pub fn draw(
&mut self,
_shaders: &mut ShaderManager,
camera: &mut CameraViewPort,
projection: &ProjectionType,
line_renderer: &mut RasterizedLineRenderer,
) {
if self.is_empty() {
return;
}
self.update(camera, projection, line_renderer);
}
}

View File

@@ -0,0 +1,200 @@
use super::super::graph;
use super::Node;
use super::RenderMode;
use crate::HEALPixCoverage;
use healpix::{
compass_point::{Ordinal, OrdinalMap},
};
pub struct Edge;
impl RenderMode for Edge {
fn build(moc: &HEALPixCoverage) -> Vec<Node> {
let g = graph::G::new(moc);
// 2. Precompute the vertices from the graph structure
g.nodes_iter()
.flat_map(|n| {
let mut edges = OrdinalMap::new();
let cell = n.cell;
if let Some(edge_neigs) = &n.edge_neigs[Ordinal::NW as u8 as usize] {
// depth of the smallest (i.e. deepest) neig along this edge
let _smallest_neig_depth = edge_neigs.max_depth_neig;
let first_neig_idx = edge_neigs.neig_idx[0];
let neig_cell = &g.nodes()[first_neig_idx].cell;
let draw_side =
// the current node has several (smaller) neig
edge_neigs.neig_idx.len() > 1
// or it has exactly one neig at the same depth and the node's idx is
// smaller than the neig's (so the shared side is drawn only once)
|| (edge_neigs.neig_idx.len() == 1
&& cell.depth() == neig_cell.depth()
&& neig_cell.idx() > cell.idx())
// or we draw the side if the neig is smaller than the node
|| (edge_neigs.neig_idx.len() == 1
&& cell.depth() < neig_cell.depth());
if draw_side {
debug_assert!(edge_neigs.max_depth_neig >= cell.depth());
// draw the NW edge
edges.put(Ordinal::NW, n.compute_n_seg(Ordinal::NW));
}
} else {
// draw the NW edge because there is no neig along that edge
edges.put(Ordinal::NW, n.compute_n_seg(Ordinal::NW));
}
if let Some(edge_neigs) = &n.edge_neigs[Ordinal::SW as u8 as usize] {
// depth of the smallest (i.e. deepest) neig along this edge
let _smallest_neig_depth = edge_neigs.max_depth_neig;
let first_neig_idx = edge_neigs.neig_idx[0];
let neig_cell = &g.nodes()[first_neig_idx].cell;
let draw_side =
// the current node has several (smaller) neig
edge_neigs.neig_idx.len() > 1
// or it has exactly one neig at the same depth and the node's idx is
// smaller than the neig's (so the shared side is drawn only once)
|| (edge_neigs.neig_idx.len() == 1
&& cell.depth() == neig_cell.depth()
&& neig_cell.idx() > cell.idx())
// or we draw the side if the neig is smaller than the node
|| (edge_neigs.neig_idx.len() == 1
&& cell.depth() < neig_cell.depth());
if draw_side {
debug_assert!(edge_neigs.max_depth_neig >= cell.depth());
// draw the SW edge
edges.put(Ordinal::SW, n.compute_n_seg(Ordinal::SW));
}
} else {
// draw the SW edge because there is no neig along that edge
edges.put(Ordinal::SW, n.compute_n_seg(Ordinal::SW));
}
if let Some(edge_neigs) = &n.edge_neigs[Ordinal::SE as u8 as usize] {
// depth of the smallest (i.e. deepest) neig along this edge
let _smallest_neig_depth = edge_neigs.max_depth_neig;
let first_neig_idx = edge_neigs.neig_idx[0];
let neig_cell = &g.nodes()[first_neig_idx].cell;
let draw_side =
// the current node has several (smaller) neig
edge_neigs.neig_idx.len() > 1
// or it has exactly one neig at the same depth and the node's idx is
// smaller than the neig's (so the shared side is drawn only once)
|| (edge_neigs.neig_idx.len() == 1
&& cell.depth() == neig_cell.depth()
&& neig_cell.idx() > cell.idx())
// or we draw the side if the neig is smaller than the node
|| (edge_neigs.neig_idx.len() == 1
&& cell.depth() < neig_cell.depth());
if draw_side {
debug_assert!(edge_neigs.max_depth_neig >= cell.depth());
edges.put(Ordinal::SE, n.compute_n_seg(Ordinal::SE));
}
} else {
// draw the SE edge because there is no neig along that edge
edges.put(Ordinal::SE, n.compute_n_seg(Ordinal::SE));
}
if let Some(edge_neigs) = &n.edge_neigs[Ordinal::NE as u8 as usize] {
// if the smallest neig for this edge is smaller than self
let _smallest_neig_depth = edge_neigs.max_depth_neig;
let first_neig_idx = edge_neigs.neig_idx[0];
let neig_cell = &g.nodes()[first_neig_idx].cell;
let draw_side =
// the current node has several (smaller) neig
edge_neigs.neig_idx.len() > 1
// or it has only one neig and if so
// we draw the side either if the node's idx is < to the neig's idx
|| (edge_neigs.neig_idx.len() == 1
&& cell.depth() == neig_cell.depth()
&& neig_cell.idx() > cell.idx())
// or we draw the side if the neig is smaller than the node
|| (edge_neigs.neig_idx.len() == 1
&& cell.depth() < neig_cell.depth());
if draw_side {
debug_assert!(edge_neigs.max_depth_neig >= cell.depth());
// draw the NE edge
edges.put(Ordinal::NE, n.compute_n_seg(Ordinal::NE));
}
} else {
// draw the NE edge because there is no neig along that edge
edges.put(Ordinal::NE, n.compute_n_seg(Ordinal::NE));
}
/*let delta_depth = (3 - (cell.depth() as usize)).max(0) as u8;
cell.get_children_cells(delta_depth).map(move |child_cell| {
let mut edges = OrdinalMap::new();
edges.put(Ordinal::NW, 1);
edges.put(Ordinal::SW, 1);
edges.put(Ordinal::SE, 1);
edges.put(Ordinal::NE, 1);
Node {
vertices: child_cell.path_along_sides(&edges),
cell: child_cell,
}
})*/
if cell.depth() < 2 {
/*let max_depth = crate::math::utils::log_2_unchecked(n_seg_nw)
.max(crate::math::utils::log_2_unchecked(n_seg_se))
.max(crate::math::utils::log_2_unchecked(n_seg_ne))
.max(crate::math::utils::log_2_unchecked(n_seg_sw))
+ cell.depth() as u32;
let n_seg = if max_depth > 3 {
1 << (max_depth - 3)
} else {
1
};*/
cell.get_children_cells(2 - cell.depth())
.map(|child_cell| {
let mut edges = OrdinalMap::new();
edges.put(Ordinal::NW, 1);
edges.put(Ordinal::NE, 1);
edges.put(Ordinal::SW, 1);
edges.put(Ordinal::SE, 1);
Node {
vertices: child_cell.path_along_sides(&edges),
cell: child_cell,
}
})
.collect()
} else {
vec![Node {
vertices: cell.path_along_sides(&edges),
cell,
}]
}
})
.collect()
}
}
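// A condensed, standalone sketch of the decision duplicated above for the four edges,
// taking the (depth, idx) pair of the current cell and of its first neig on that edge
// (the values named `cell` and `neig_cell` in the source): a shared edge is emitted
// exactly once, by the cell that faces several finer neigs, or has the smaller idx at
// equal depth, or is the coarser of the two.
fn draw_shared_edge(n_neigs_on_edge: usize, cell: (u8, u64), neig: (u8, u64)) -> bool {
    let (depth, idx) = cell;
    let (neig_depth, neig_idx) = neig;
    n_neigs_on_edge > 1
        || (n_neigs_on_edge == 1 && depth == neig_depth && neig_idx > idx)
        || (n_neigs_on_edge == 1 && depth < neig_depth)
}
// An edge with no neig at all (i.e. on the MOC border) is always drawn.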

View File

@@ -0,0 +1,191 @@
use super::super::graph::NodeEdgeNeigs;
use super::Node;
use super::RenderMode;
use crate::HEALPixCoverage;
use healpix::compass_point::{Ordinal, OrdinalMap};
use super::super::graph::G;
pub struct Fill;
impl RenderMode for Fill {
fn build(moc: &HEALPixCoverage) -> Vec<Node> {
let g = G::new(moc);
let n_seg_from_dir = |n: &NodeEdgeNeigs, dir: Ordinal| -> u32 {
if let Some(neigs) = g.get_neigs(n, dir) {
if let Some(neig_side) = g.get_neig_dir(neigs[0], n) {
n.compute_n_seg_with_neig_info(neigs[0], dir, neig_side)
} else {
1
}
} else {
1
}
};
g.nodes_iter()
.flat_map(|n| {
let cell = n.cell;
// Draw all of the node's edges
let n_seg_nw = n_seg_from_dir(n, Ordinal::NW);
let n_seg_ne = n_seg_from_dir(n, Ordinal::NE);
let n_seg_sw = n_seg_from_dir(n, Ordinal::SW);
let n_seg_se = n_seg_from_dir(n, Ordinal::SE);
match cell.depth() {
0 => {
let n_seg_sw = (n_seg_sw >> 2).max(1);
let n_seg_se = (n_seg_se >> 2).max(1);
let n_seg_nw = (n_seg_nw >> 2).max(1);
let n_seg_ne = (n_seg_ne >> 2).max(1);
cell.get_children_cells(2)
.map(|child_cell| {
let mut edges = OrdinalMap::new();
let off = child_cell.idx() - (cell.idx() << 4);
match off {
// S
0 => {
edges.put(Ordinal::NW, 1);
edges.put(Ordinal::NE, 1);
edges.put(Ordinal::SW, n_seg_sw);
edges.put(Ordinal::SE, n_seg_se);
}
// W
10 => {
edges.put(Ordinal::NW, n_seg_nw);
edges.put(Ordinal::NE, 1);
edges.put(Ordinal::SW, n_seg_sw);
edges.put(Ordinal::SE, 1);
}
// E
5 => {
edges.put(Ordinal::NW, 1);
edges.put(Ordinal::NE, n_seg_ne);
edges.put(Ordinal::SW, 1);
edges.put(Ordinal::SE, n_seg_se);
}
// N
15 => {
edges.put(Ordinal::NW, n_seg_nw);
edges.put(Ordinal::NE, n_seg_ne);
edges.put(Ordinal::SW, 1);
edges.put(Ordinal::SE, 1);
}
// SE
1 | 4 => {
edges.put(Ordinal::NW, 1);
edges.put(Ordinal::NE, 1);
edges.put(Ordinal::SW, 1);
edges.put(Ordinal::SE, n_seg_se);
}
// SW
2 | 8 => {
edges.put(Ordinal::NW, 1);
edges.put(Ordinal::NE, 1);
edges.put(Ordinal::SW, n_seg_sw);
edges.put(Ordinal::SE, 1);
}
// NW
11 | 14 => {
edges.put(Ordinal::NW, n_seg_nw);
edges.put(Ordinal::NE, 1);
edges.put(Ordinal::SW, 1);
edges.put(Ordinal::SE, 1);
}
// NE
7 | 13 => {
edges.put(Ordinal::NW, 1);
edges.put(Ordinal::NE, n_seg_ne);
edges.put(Ordinal::SW, 1);
edges.put(Ordinal::SE, 1);
}
_ => {
edges.put(Ordinal::NW, 1);
edges.put(Ordinal::NE, 1);
edges.put(Ordinal::SW, 1);
edges.put(Ordinal::SE, 1);
}
}
Node {
vertices: child_cell.path_along_sides(&edges),
cell: child_cell,
}
})
.collect()
}
1 => {
let n_seg_sw = (n_seg_sw >> 1).max(1);
let n_seg_se = (n_seg_se >> 1).max(1);
let n_seg_nw = (n_seg_nw >> 1).max(1);
let n_seg_ne = (n_seg_ne >> 1).max(1);
cell.get_children_cells(1)
.map(|child_cell| {
let mut edges = OrdinalMap::new();
let off = child_cell.idx() - (cell.idx() << 2);
match off {
// S
0 => {
edges.put(Ordinal::NW, 1);
edges.put(Ordinal::NE, 1);
edges.put(Ordinal::SW, n_seg_sw);
edges.put(Ordinal::SE, n_seg_se);
}
// W
2 => {
edges.put(Ordinal::NW, n_seg_nw);
edges.put(Ordinal::NE, 1);
edges.put(Ordinal::SW, n_seg_sw);
edges.put(Ordinal::SE, 1);
}
// E
1 => {
edges.put(Ordinal::NW, 1);
edges.put(Ordinal::NE, n_seg_ne);
edges.put(Ordinal::SW, 1);
edges.put(Ordinal::SE, n_seg_se);
}
// N
3 => {
edges.put(Ordinal::NW, n_seg_nw);
edges.put(Ordinal::NE, n_seg_ne);
edges.put(Ordinal::SW, 1);
edges.put(Ordinal::SE, 1);
}
_ => {
unimplemented!();
}
}
Node {
vertices: child_cell.path_along_sides(&edges),
cell: child_cell,
}
})
.collect()
}
_ => {
let mut edges = OrdinalMap::new();
edges.put(Ordinal::NW, n_seg_nw);
edges.put(Ordinal::NE, n_seg_ne);
edges.put(Ordinal::SW, n_seg_sw);
edges.put(Ordinal::SE, n_seg_se);
vec![Node {
vertices: cell.path_along_sides(&edges),
cell,
}]
}
}
})
.collect()
}
}

View File

@@ -0,0 +1,17 @@
use crate::healpix::cell::CellVertices;
use crate::renderable::coverage::HEALPixCell;
use crate::HEALPixCoverage;
pub mod edge;
pub mod filled;
pub mod perimeter;
pub(super) trait RenderMode {
fn build(moc: &HEALPixCoverage) -> Vec<Node>;
}
#[derive(Debug)]
pub struct Node {
pub cell: HEALPixCell,
pub vertices: Option<CellVertices>,
}

View File

@@ -0,0 +1,45 @@
use super::Node;
use super::RenderMode;
use crate::healpix::cell::HEALPixCell;
use healpix::{
compass_point::{Ordinal, OrdinalMap},
};
use moclib::elem::cell::Cell;
use crate::HEALPixCoverage;
use moclib::moc::range::CellAndEdges;
pub struct Perimeter;
impl RenderMode for Perimeter {
fn build(moc: &HEALPixCoverage) -> Vec<Node> {
moc.0
.border_elementary_edges()
.map(|CellAndEdges { uniq, edges }| {
let c = Cell::from_uniq_hpx(uniq);
let cell = HEALPixCell(c.depth, c.idx);
let mut map = OrdinalMap::new();
if edges.get(Ordinal::SE) {
map.put(Ordinal::SE, 1);
}
if edges.get(Ordinal::SW) {
map.put(Ordinal::SW, 1);
}
if edges.get(Ordinal::NE) {
map.put(Ordinal::NE, 1);
}
if edges.get(Ordinal::NW) {
map.put(Ordinal::NW, 1);
}
let vertices = cell.path_along_sides(&map);
Node { cell, vertices }
})
.collect()
}
}

View File

@@ -1,742 +0,0 @@
use web_sys::WebGl2RenderingContext;
use crate::math::angle;
use cgmath::Vector4;
use crate::camera::CameraViewPort;
use crate::ProjectionType;
use al_api::grid::GridCfg;
use al_core::VertexArrayObject;
use al_api::color::ColorRGB;
use crate::Abort;
pub struct ProjetedGrid {
// Properties
pub color: ColorRGB,
pub opacity: f32,
pub show_labels: bool,
pub enabled: bool,
pub label_scale: f32,
// The vertex array object of the screen in NDC
vao: VertexArrayObject,
labels: Vec<Option<Label>>,
sizes: Vec<usize>,
offsets: Vec<usize>,
num_vertices: usize,
gl: WebGlContext,
// Render Text Manager
text_renderer: TextRenderManager,
fmt: angle::SerializeFmt,
}
use crate::shader::ShaderManager;
use al_core::VecData;
use al_core::WebGlContext;
use wasm_bindgen::JsValue;
use super::labels::RenderManager;
use super::TextRenderManager;
use al_api::resources::Resources;
impl ProjetedGrid {
pub fn new(
gl: &WebGlContext,
camera: &CameraViewPort,
resources: &Resources,
projection: &ProjectionType
) -> Result<ProjetedGrid, JsValue> {
let vao = {
let mut vao = VertexArrayObject::new(gl);
let vertices = vec![];
// layout (location = 0) in vec2 ndc_pos;
#[cfg(feature = "webgl2")]
vao.bind_for_update().add_array_buffer(
"ndc_pos",
2 * std::mem::size_of::<f32>(),
&[2],
&[0],
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<f32>(&vertices),
);
#[cfg(feature = "webgl1")]
vao.bind_for_update().add_array_buffer(
2,
"ndc_pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<f32>(&vertices),
);
vao
};
let num_vertices = 0;
let labels = vec![];
let gl = gl.clone();
let sizes = vec![];
let offsets = vec![];
let text_renderer = TextRenderManager::new(gl.clone(), &resources)?;
let color = ColorRGB { r: 0.0, g: 1.0, b: 0.0 };
let opacity = 1.0;
let show_labels = true;
let enabled = false;
let label_scale = 1.0;
let fmt = angle::SerializeFmt::DMS;
let mut grid = ProjetedGrid {
color,
opacity,
show_labels,
enabled,
label_scale,
vao,
//vbo,
labels,
num_vertices,
sizes,
offsets,
gl,
text_renderer,
fmt,
};
// Initialize the vertices & labels
grid.force_update(camera, projection);
Ok(grid)
}
pub fn set_cfg(&mut self, new_cfg: GridCfg, camera: &CameraViewPort, projection: &ProjectionType) -> Result<(), JsValue> {
let GridCfg {
color,
opacity,
show_labels,
label_size,
enabled,
fmt,
} = new_cfg;
if let Some(color) = color {
self.color = color;
}
if let Some(opacity) = opacity {
self.opacity = opacity;
}
if let Some(show_labels) = show_labels {
self.show_labels = show_labels;
}
if let Some(fmt) = fmt {
self.fmt = fmt.into();
}
if let Some(enabled) = enabled {
self.enabled = enabled;
if enabled {
self.force_update(camera, projection);
}
}
if let Some(label_size) = label_size {
self.label_scale = label_size;
}
self.text_renderer.begin_frame();
for label in self.labels.iter().flatten() {
self.text_renderer.add_label(
&label.content,
&label.position.cast::<f32>().unwrap_abort(),
cgmath::Rad(label.rot as f32),
);
}
self.text_renderer.end_frame();
Ok(())
}
fn force_update(&mut self, camera: &CameraViewPort, projection: &ProjectionType) {
self.text_renderer.begin_frame();
//let text_height = text_renderer.text_size();
let lines = lines(camera, &self.text_renderer, projection, &self.fmt);
self.offsets.clear();
self.sizes.clear();
let (vertices, labels): (Vec<Vec<Vector2<f64>>>, Vec<Option<Label>>) = lines
.into_iter()
.map(|line| {
if self.sizes.is_empty() {
self.offsets.push(0);
} else {
let last_offset = *self.offsets.last().unwrap_abort();
self.offsets.push(last_offset + self.sizes.last().unwrap_abort());
}
self.sizes.push(line.vertices.len());
(line.vertices, line.label)
})
.unzip();
self.labels = labels;
for label in self.labels.iter().flatten() {
self.text_renderer.add_label(
&label.content,
&label.position.cast::<f32>().unwrap_abort(),
cgmath::Rad(label.rot as f32),
);
}
let vertices = vertices
.into_iter()
.flatten()
.flat_map(|v| [v.x as f32, v.y as f32])
.collect::<Vec<_>>();
//self.lines = lines;
self.num_vertices = vertices.len() >> 1;
/*let vertices = unsafe {
let len = vertices.len() << 1;
let cap = len;
Vec::from_raw_parts(vertices.as_mut_ptr() as *mut f32, len, cap)
};*/
/*let vertices = unsafe {
vertices.set_len(self.num_vertices << 1);
std::mem::transmute::<_, Vec<f32>>(vertices)
};*/
#[cfg(feature = "webgl2")]
self.vao.bind_for_update().update_array(
"ndc_pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData(&vertices),
);
#[cfg(feature = "webgl1")]
self.vao.bind_for_update().update_array(
"ndc_pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData(&vertices),
);
self.text_renderer.end_frame();
}
// Update the grid whenever the camera moved
pub fn update(&mut self, camera: &CameraViewPort, projection: &ProjectionType) {
if !self.enabled {
return;
}
self.force_update(camera, projection);
}
fn draw_lines_cpu(&self, camera: &CameraViewPort, shaders: &mut ShaderManager) {
self.gl.blend_func_separate(
WebGl2RenderingContext::SRC_ALPHA,
WebGl2RenderingContext::ONE_MINUS_SRC_ALPHA,
WebGl2RenderingContext::ONE,
WebGl2RenderingContext::ONE,
);
let shader = shaders
.get(
&self.gl,
&ShaderId(Cow::Borrowed("GridVS_CPU"), Cow::Borrowed("GridFS_CPU")),
)
.unwrap_abort();
let shader = shader.bind(&self.gl);
shader
.attach_uniforms_from(camera)
.attach_uniform("opacity", &self.opacity)
.attach_uniform("color", &self.color);
// The raster vao is bound at the lib.rs level
let drawer = shader.bind_vertex_array_object_ref(&self.vao);
for (offset, size) in self.offsets.iter().zip(self.sizes.iter()) {
if *size > 0 {
drawer.draw_arrays(WebGl2RenderingContext::LINES, *offset as i32, *size as i32);
}
}
}
pub fn draw(
&mut self,
camera: &CameraViewPort,
shaders: &mut ShaderManager,
) -> Result<(), JsValue> {
if self.enabled {
self.gl.enable(WebGl2RenderingContext::BLEND);
self.draw_lines_cpu(camera, shaders);
self.gl.disable(WebGl2RenderingContext::BLEND);
if self.show_labels {
self.text_renderer.draw(camera, &self.color, self.opacity, self.label_scale)?;
}
}
Ok(())
}
}
use crate::shader::ShaderId;
use std::borrow::Cow;
use crate::math::{
angle::Angle,
spherical::FieldOfViewType,
};
use cgmath::InnerSpace;
use cgmath::Vector2;
use core::ops::Range;
#[derive(Debug)]
struct Label {
position: Vector2<f64>,
content: String,
rot: f64,
}
impl Label {
fn meridian(
fov: &FieldOfViewType,
lon: f64,
m1: &Vector3<f64>,
camera: &CameraViewPort,
sp: Option<&Vector2<f64>>,
text_renderer: &TextRenderManager,
projection: &ProjectionType,
fmt: &angle::SerializeFmt
) -> Option<Self> {
let lat = camera.get_center().lonlat().lat();
// Do not plot meridian labels when the latitude of the
// field-of-view center exceeds 80 deg in absolute value
if fov.is_allsky() {
// In all-sky mode, check whether the view is too close to a pole;
// if it is, skip the meridian labels because they would overlap there.
if lat.abs() > ArcDeg(80.0) {
return None;
}
}
let d = if fov.contains_north_pole() {
Vector3::new(0.0, 1.0, 0.0)
} else if fov.contains_south_pole() {
Vector3::new(0.0, -1.0, 0.0)
} else {
Vector3::new(0.0, 1.0, 0.0)
};
let m2 = ((m1 + d * 1e-3).normalize()).extend(1.0);
//let s1 = projection.model_to_screen_space(&(system.to_icrs_j2000::<f64>() * m1), camera, reversed_longitude)?;
let s1 = projection.model_to_screen_space(&m1.extend(1.0), camera)?;
if !fov.is_allsky() && fov.contains_pole() {
// If a pole is contained in the view
// we will have its screen projected position
if let Some(sp) = sp {
// Distance factor between the label position
// and the nearest pole position
let dy = sp.y - s1.y;
let dx = sp.x - s1.x;
let dd2 = dx * dx + dy * dy;
let ss = camera.get_screen_size();
let ds2 = (ss.x * ss.x + ss.y * ss.y) as f64;
// The squared distance is divided by the squared screen
// diagonal so that the threshold is resolution independent
let fdd2 = dd2 / ds2;
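// A squared-distance threshold of 0.004 corresponds to roughly 6% of the
// screen diagonal (sqrt(0.004) ~= 0.063): labels that close to the projected
// pole are dropped so they do not pile up around it.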
if fdd2 < 0.004 {
return None;
}
} else {
return None;
}
}
let s2 = projection.model_to_screen_space(&m2, camera)?;
//let s2 = projection.model_to_screen_space(&(system.to_icrs_j2000::<f64>() * m2), camera, reversed_longitude)?;
let ds = (s2 - s1).normalize();
let content = fmt.to_string(Angle(lon));
let position = if !fov.is_allsky() {
//let dim = ctx2d.measure_text(&content).unwrap_abort();
let dim = text_renderer.get_width_pixel_size(&content);
let k = ds * (dim * 0.5 + 10.0);
s1 + k
} else {
s1
};
//position += dv * text_height * 0.5;
// rot is between -PI and +PI
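// acos(ds.x) lies in [0, PI] and the sign of ds.y picks the half-plane; the
// second fold below then brings the angle into [-PI/2, PI/2] so that,
// presumably, label text is never rendered upside-down.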
let rot = if ds.y > 0.0 {
ds.x.acos()
} else {
-ds.x.acos()
};
let rot = if ds.y > 0.0 {
if rot > HALF_PI {
-PI + rot
} else {
rot
}
} else if rot < -HALF_PI {
PI + rot
} else {
rot
};
Some(Label {
position,
content,
rot,
})
}
fn parallel(
fov: &FieldOfViewType,
lat: f64,
m1: &Vector3<f64>,
camera: &CameraViewPort,
// in pixels
text_renderer: &TextRenderManager,
projection: &ProjectionType,
) -> Option<Self> {
let mut d = Vector3::new(-m1.z, 0.0, m1.x).normalize();
let _system = camera.get_system();
let center = camera.get_center().truncate();
//let center = (system.to_gal::<f64>() * camera.get_center()).truncate();
if center.dot(d) < 0.0 {
d = -d;
}
let m2 = (m1 + d * 1e-3).normalize();
let s1 =
//projection.model_to_screen_space(&(system.to_icrs_j2000::<f64>() * m1.extend(1.0)), camera, reversed_longitude)?;
projection.model_to_screen_space(&m1.extend(1.0), camera)?;
let s2 =
//projection.model_to_screen_space(&(system.to_icrs_j2000::<f64>() * m2.extend(1.0)), camera, reversed_longitude)?;
projection.model_to_screen_space(&m2.extend(1.0), camera)?;
let ds = (s2 - s1).normalize();
let content = angle::SerializeFmt::DMS.to_string(Angle(lat));
let position = if !fov.is_allsky() && !fov.contains_pole() {
let dim = text_renderer.get_width_pixel_size(&content);
let k = ds * (dim * 0.5 + 10.0);
//let k = Vector2::new(0.0, 0.0);
s1 + k
} else {
s1
};
//position += dv * text_height * 0.5;
// rot is between -PI and +PI
let rot = if ds.y > 0.0 {
ds.x.acos()
} else {
-ds.x.acos()
};
let rot = if ds.y > 0.0 {
if rot > HALF_PI {
-PI + rot
} else {
rot
}
} else if rot < -HALF_PI {
PI + rot
} else {
rot
};
Some(Label {
position,
content,
rot,
})
}
/*fn size(camera: &CameraViewPort) -> f64 {
let ndc1 =
crate::projection::clip_to_ndc_space(&Vector2::new(-1.0, 0.0), camera);
let ndc2 =
crate::projection::clip_to_ndc_space(&Vector2::new(1.0, 0.0), camera);
let dx = ndc2.x - ndc1.x;
let allsky = dx < 2.0;
if allsky {
let dw = dx / 2.0; // [0..1]
dw.max(0.75)
} else {
1.0
}
}*/
}
#[derive(Debug)]
struct GridLine {
vertices: Vec<Vector2<f64>>,
label: Option<Label>,
}
use cgmath::{Rad, Vector3};
//use math::angle::SerializeToString;
const PI: f64 = std::f64::consts::PI;
const HALF_PI: f64 = 0.5 * PI;
use crate::math::{
angle::ArcDeg,
lonlat::LonLat,
};
impl GridLine {
fn meridian(
lon: f64,
lat: &Range<f64>,
sp: Option<&Vector2<f64>>,
camera: &CameraViewPort,
//text_height: f64,
text_renderer: &TextRenderManager,
projection: &ProjectionType,
fmt: &angle::SerializeFmt
) -> Option<Self> {
let fov = camera.get_field_of_view();
if let Some(p) = fov.intersect_meridian(Rad(lon), camera) {
let vertices = crate::line::project_along_longitudes_and_latitudes(
lon, lat.start,
lon, lat.end,
camera,
projection,
);
let label = Label::meridian(fov, lon, &p, camera, sp, text_renderer, projection, fmt);
Some(GridLine { vertices, label })
} else {
None
}
}
fn parallel(
lon: &Range<f64>,
lat: f64,
camera: &CameraViewPort,
text_renderer: &TextRenderManager,
projection: &ProjectionType,
) -> Option<Self> {
let fov = camera.get_field_of_view();
if let Some(p) = fov.intersect_parallel(Rad(lat), camera) {
let vertices = crate::line::project_along_longitudes_and_latitudes(
lon.start, lat,
lon.end, lat,
camera,
projection,
);
let label = Label::parallel(fov, lat, &p, camera, text_renderer, projection);
Some(GridLine { vertices, label })
} else {
None
}
}
}
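// Candidate angular steps for the grid, in radians, in increasing order: the
// smallest entries are on the order of a micro-arcsecond and the largest is
// PI/4 (45 deg). select_fixed_step() snaps the ideal step onto this table.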
const GRID_STEPS: &[f64] = &[
0.0000000000048481366,
0.000000000009696273,
0.000000000024240684,
0.000000000048481368,
0.000000000096962736,
0.00000000024240682,
0.00000000048481363,
0.0000000009696273,
0.0000000024240685,
0.000000004848137,
0.000000009696274,
0.000000024240684,
0.00000004848137,
0.00000009696274,
0.00000024240686,
0.0000004848137,
0.0000009696274,
0.0000024240685,
0.000004848137,
0.000009696274,
0.000024240684,
0.000048481368,
0.000072722054,
0.00014544411,
0.00029088822,
0.00058177643,
0.0014544411,
0.0029088822,
0.004363323,
0.008726646,
0.017453292,
0.034906585,
0.08726646,
0.17453292,
0.34906584,
std::f64::consts::FRAC_PI_4,
];
fn lines(
camera: &CameraViewPort,
//text_height: f64,
text_renderer: &TextRenderManager,
projection: &ProjectionType,
fmt: &angle::SerializeFmt,
) -> Vec<GridLine> {
// Get the screen position of the nearest pole
let _system = camera.get_system();
let fov = camera.get_field_of_view();
let sp = if fov.contains_pole() {
if fov.contains_north_pole() {
// Project the pole onto the screen: its position
// is needed for placing the labels
// screen north pole
projection.view_to_screen_space(
//&(system.to_icrs_j2000::<f64>() * Vector4::new(0.0, 1.0, 0.0, 1.0)),
&Vector4::new(0.0, 1.0, 0.0, 1.0),
camera,
)
} else {
// screen south pole
projection.view_to_screen_space(
//&(system.to_icrs_j2000::<f64>() * Vector4::new(0.0, -1.0, 0.0, 1.0)),
&Vector4::new(0.0, -1.0, 0.0, 1.0),
camera,
)
}
} else {
None
};
let bbox = camera.get_bounding_box();
/*let step_lon = select_grid_step(
bbox,
bbox.get_lon_size() as f64,
//(NUM_LINES_LATITUDES as f64 * (camera.get_aspect() as f64)) as usize,
//((NUM_LINES_LATITUDES as f64) * fs.0) as usize
NUM_LINES,
);*/
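// Aim for roughly one grid line every 20% of the largest screen dimension,
// convert that pixel step into angular steps along longitude and latitude,
// then snap each of them to the nearest entry of GRID_STEPS.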
let max_dim_px = camera.get_width().max(camera.get_height()) as f64;
let step_line_px = max_dim_px * 0.2;
let step_lon_precised = (bbox.get_lon_size() as f64) * step_line_px / (camera.get_width() as f64);
let step_lat_precised = (bbox.get_lat_size() as f64) * step_line_px / (camera.get_height() as f64);
// Select the closest predefined step with a binary search
let step_lon = select_fixed_step(step_lon_precised);
let step_lat = select_fixed_step(step_lat_precised);
let mut lines = vec![];
// Add meridians
let mut theta = bbox.lon_min() - (bbox.lon_min() % step_lon);
let mut stop_theta = bbox.lon_max();
if bbox.all_lon() {
stop_theta -= 1e-3;
}
while theta < stop_theta {
if let Some(line) =
GridLine::meridian(theta, &bbox.get_lat(), sp.as_ref(), camera, text_renderer, projection, fmt)
{
lines.push(line);
}
theta += step_lon;
}
// Add parallels
//let step_lat = select_grid_step(bbox, bbox.get_lat_size() as f64, NUM_LINES);
let mut alpha = bbox.lat_min() - (bbox.lat_min() % step_lat);
if alpha == -HALF_PI {
alpha += step_lat;
}
let stop_alpha = bbox.lat_max();
/*if stop_alpha == HALF_PI {
stop_alpha -= 1e-3;
}*/
while alpha < stop_alpha {
if let Some(line) = GridLine::parallel(&bbox.get_lon(), alpha, camera, text_renderer, projection) {
lines.push(line);
}
alpha += step_lat;
}
lines
}
/*fn select_grid_step(fov: f64, max_lines: usize) -> f64 {
// Select the best meridian grid step
let mut i = 0;
let mut step = GRID_STEPS[0];
while i < GRID_STEPS.len() {
if fov >= GRID_STEPS[i] {
let num_meridians_in_fov = (fov / GRID_STEPS[i]) as usize;
if num_meridians_in_fov >= max_lines - 1 {
//let idx_grid = if i == 0 { 0 } else { i - 1 };
//step = GRID_STEPS[idx_grid];
step = GRID_STEPS[i];
break;
}
}
step = GRID_STEPS[i];
i += 1;
}
step
}*/
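/// Snap a requested angular step onto the nearest entry of `GRID_STEPS`.
///
/// Illustrative example (not taken from the original source): a requested step
/// of 0.02 rad falls between 0.017453292 (~1 deg) and 0.034906585 (~2 deg);
/// being closer to the 1 deg entry, 0.017453292 is returned.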
fn select_fixed_step(fov: f64) -> f64 {
match GRID_STEPS.binary_search_by(|v| {
v.partial_cmp(&fov).expect("Couldn't compare values, maybe because the fov given is NaN")
}) {
Ok(idx) => GRID_STEPS[idx],
Err(idx) => {
if idx == 0 {
GRID_STEPS[0]
} else if idx == GRID_STEPS.len() {
GRID_STEPS[idx - 1]
} else {
let a = GRID_STEPS[idx];
let b = GRID_STEPS[idx - 1];
if a - fov > fov - b {
b
} else {
a
}
}
}
}
}

View File

@@ -1,35 +1,31 @@
pub mod raytracing;
mod triangulation;
pub mod uv;
pub mod raytracing;
use al_api::hips::ImageExt;
use al_api::hips::ImageMetadata;
use al_core::colormap::Colormap;
use al_core::VertexArrayObject;
use al_core::VecData;
use al_core::shader::Shader;
use al_core::WebGlContext;
use al_core::image::Image;
use al_core::image::format::ChannelType;
use al_core::colormap::Colormaps;
use al_core::image::format::ChannelType;
use al_core::image::Image;
use al_core::shader::Shader;
use al_core::webgl_ctx::GlWrapper;
use al_core::VecData;
use al_core::VertexArrayObject;
use al_core::WebGlContext;
use crate::math::{angle::Angle, vector::dist2};
use crate::ProjectionType;
use crate::math::{vector::dist2, angle::Angle};
use crate::camera::CameraViewPort;
use crate::{shader::ShaderManager, survey::config::HiPSConfig};
use crate::{
math::lonlat::LonLatT,
utils,
};
use crate::renderable::utils::BuildPatchIndicesIter;
use crate::{math::lonlat::LonLatT, utils};
use crate::{shader::ShaderManager, survey::config::HiPSConfig};
use crate::math::lonlat::LonLat;
use crate::downloader::request::allsky::Allsky;
use crate::healpix::{cell::HEALPixCell, coverage::HEALPixCoverage};
use crate::math::lonlat::LonLat;
use crate::time::Time;
// Recursively compute the number of subdivisions needed for a cell
@@ -37,14 +33,14 @@ use crate::time::Time;
use crate::survey::buffer::ImageSurveyTextures;
use crate::survey::texture::Texture;
use crate::survey::view::HEALPixCellsInView;
use raytracing::RayTracer;
use uv::{TileCorner, TileUVW};
use cgmath::{Matrix, Matrix4};
use web_sys::{WebGl2RenderingContext};
use std::fmt::Debug;
use wasm_bindgen::JsValue;
use web_sys::WebGl2RenderingContext;
// Identity matrix
const ID: &Matrix4<f64> = &Matrix4::new(
@@ -55,12 +51,13 @@ const ID_R: &Matrix4<f64> = &Matrix4::new(
-1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0,
);
const M: f64 = 280.0*280.0;
const N: f64 = 150.0*150.0;
const M: f64 = 280.0 * 280.0;
const N: f64 = 150.0 * 150.0;
const RAP: f64 = 0.7;
fn is_too_large(cell: &HEALPixCell, camera: &CameraViewPort, projection: &ProjectionType) -> bool {
let vertices = cell.vertices()
let vertices = cell
.vertices()
.iter()
.filter_map(|(lon, lat)| {
let vertex = crate::math::lonlat::radec_to_xyzw(Angle(*lon), Angle(*lat));
@@ -71,20 +68,16 @@ fn is_too_large(cell: &HEALPixCell, camera: &CameraViewPort, projection: &Projec
if vertices.len() < 4 {
false
} else {
let d1 = dist2(&vertices[0], &vertices[2]);
let d2 = dist2(&vertices[1], &vertices[3]);
let d1 = dist2(vertices[0].as_ref(), &vertices[2].as_ref());
let d2 = dist2(vertices[1].as_ref(), &vertices[3].as_ref());
if d1 > M || d2 > M {
true
} else if d1 < N && d2 < N {
false
} else {
let rap = if d2 > d1 {
d1 / d2
} else {
d2 / d1
};
rap<RAP
let rap = if d2 > d1 { d1 / d2 } else { d2 / d1 };
rap < RAP
}
}
}
@@ -98,15 +91,15 @@ fn num_subdivision(cell: &HEALPixCell, camera: &CameraViewPort, projection: &Pro
// Largest deformation cell among the cells of a specific depth
let largest_center_to_vertex_dist =
cdshealpix::largest_center_to_vertex_distance(d, 0.0, cdshealpix::TRANSITION_LATITUDE);
healpix::largest_center_to_vertex_distance(d, 0.0, healpix::TRANSITION_LATITUDE);
let smallest_center_to_vertex_dist =
cdshealpix::largest_center_to_vertex_distance(d, 0.0, cdshealpix::LAT_OF_SQUARE_CELL);
healpix::largest_center_to_vertex_distance(d, 0.0, healpix::LAT_OF_SQUARE_CELL);
let (lon, lat) = cell.center();
let center_to_vertex_dist = cdshealpix::largest_center_to_vertex_distance(d, lon, lat);
let center_to_vertex_dist = healpix::largest_center_to_vertex_distance(d, lon, lat);
let skewed_factor = (center_to_vertex_dist - smallest_center_to_vertex_dist)
/ (largest_center_to_vertex_dist - smallest_center_to_vertex_dist);
/ (largest_center_to_vertex_dist - smallest_center_to_vertex_dist);
if is_too_large(cell, camera, projection) || cell.is_on_pole() || skewed_factor > 0.25 {
num_sub += 1;
@@ -134,7 +127,7 @@ impl<'a, 'b> TextureToDraw<'a, 'b> {
}
}
}
/*
pub trait RecomputeRasterizer {
// Returns:
// * The UV of the starting tile in the global 4096x4096 texture
@@ -161,7 +154,7 @@ impl RecomputeRasterizer for Move {
view: &'b HEALPixCellsInView,
survey: &'a ImageSurveyTextures,
) -> Vec<TextureToDraw<'a, 'b>> {
let cells_to_draw = view.get_cells();
let cells_to_draw = view.get_cells();
let mut textures = Vec::with_capacity(view.num_of_cells());
for cell in cells_to_draw {
@@ -285,6 +278,8 @@ impl RecomputeRasterizer for UnZoom {
}
}
*/
pub fn get_raster_shader<'a>(
cmap: &Colormap,
gl: &WebGlContext,
@@ -297,11 +292,26 @@ pub fn get_raster_shader<'a>(
crate::shader::get_shader(gl, shaders, "RasterizerVS", "RasterizerColorFS")
} else {
if config.tex_storing_unsigned_int {
crate::shader::get_shader(gl, shaders, "RasterizerVS", "RasterizerGrayscale2ColormapUnsignedFS")
crate::shader::get_shader(
gl,
shaders,
"RasterizerVS",
"RasterizerGrayscale2ColormapUnsignedFS",
)
} else if config.tex_storing_integers {
crate::shader::get_shader(gl, shaders, "RasterizerVS", "RasterizerGrayscale2ColormapIntegerFS")
crate::shader::get_shader(
gl,
shaders,
"RasterizerVS",
"RasterizerGrayscale2ColormapIntegerFS",
)
} else {
crate::shader::get_shader(gl, shaders, "RasterizerVS", "RasterizerGrayscale2ColormapFS")
crate::shader::get_shader(
gl,
shaders,
"RasterizerVS",
"RasterizerGrayscale2ColormapFS",
)
}
}
}
@@ -317,9 +327,19 @@ pub fn get_raytracer_shader<'a>(
crate::shader::get_shader(gl, shaders, "RayTracerVS", "RayTracerColorFS")
} else {
if config.tex_storing_unsigned_int {
crate::shader::get_shader(gl, shaders, "RayTracerVS", "RayTracerGrayscale2ColormapUnsignedFS")
crate::shader::get_shader(
gl,
shaders,
"RayTracerVS",
"RayTracerGrayscale2ColormapUnsignedFS",
)
} else if config.tex_storing_integers {
crate::shader::get_shader(gl, shaders, "RayTracerVS", "RayTracerGrayscale2ColormapIntegerFS")
crate::shader::get_shader(
gl,
shaders,
"RayTracerVS",
"RayTracerGrayscale2ColormapIntegerFS",
)
} else {
crate::shader::get_shader(gl, shaders, "RayTracerVS", "RayTracerGrayscale2ColormapFS")
}
@@ -330,8 +350,6 @@ pub struct HiPS {
//color: Color,
// The image survey texture buffer
textures: ImageSurveyTextures,
// Keep track of the cells in the FOV
view: HEALPixCellsInView,
// The projected vertices data
// For WebGL2 wasm, the data are interleaved
@@ -364,8 +382,6 @@ pub struct HiPS {
gl: WebGlContext,
min_depth_tile: u8,
depth: u8,
depth_tile: u8,
footprint_moc: Option<HEALPixCoverage>,
}
@@ -490,20 +506,16 @@ impl HiPS {
let min_depth_tile = config.get_min_depth_tile();
let textures = ImageSurveyTextures::new(gl, config)?;
let view = HEALPixCellsInView::new();
let gl = gl.clone();
let depth = 0;
let depth_tile = 0;
let _depth = 0;
let _depth_tile = 0;
let footprint_moc = None;
// request the allsky texture
Ok(HiPS {
// The image survey texture buffer
textures,
// Keep track of the cells in the FOV
view,
num_idx,
vao,
@@ -519,16 +531,76 @@ impl HiPS {
idx_vertices,
min_depth_tile,
depth,
depth_tile,
footprint_moc,
})
}
pub fn update(&mut self, camera: &CameraViewPort, projection: &ProjectionType) {
let vertices_recomputation_needed = self.textures.reset_available_tiles() | camera.has_moved();
pub fn look_for_new_tiles<'a>(
&'a mut self,
camera: &'a mut CameraViewPort,
) -> Option<impl Iterator<Item = &'a HEALPixCell> + 'a> {
// do not add tiles if the view is already at depth 0
let depth_tile = camera
.get_tile_depth()
.min(self.get_config().get_max_tile_depth());
let min_depth_tile = self.get_min_depth_tile();
let delta_depth = self.get_config().delta_depth();
let min_bound_depth = min_depth_tile.max(delta_depth);
// Do not query tiles that:
// * do not exist because their depth is lower than min_depth_tile
// * belong to a base tile that is already handled, i.e. tiles with depth < delta_depth
if depth_tile >= min_bound_depth {
let survey_frame = self.get_config().get_frame();
let tile_cells_iter = camera
.get_hpx_cells(depth_tile, survey_frame)
//.flat_map(move |cell| {
// let texture_cell = cell.get_texture_cell(delta_depth);
// texture_cell.get_tile_cells(delta_depth)
//})
.filter(move |tile_cell| {
if let Some(moc) = self.footprint_moc.as_ref() {
if moc.intersects_cell(tile_cell) {
!self.update_priority_tile(tile_cell)
} else {
false
}
} else {
!self.update_priority_tile(tile_cell)
}
});
/*if depth_tile >= min_depth_tile + 3 {
// Retrieve the grand-grand parent cells but not if it is root ones as it may interfere with already done requests
let tile_cells_ancestor_iter =
(&tile_cells_iter).map(|tile_cell| tile_cell.ancestor(3));
tile_cells_iter.chain(tile_cells_ancestor_iter);
}*/
/*let tile_cells: HashSet<_> = if let Some(moc) = survey.get_moc() {
tile_cells_iter
.filter(|tile_cell| moc.intersects_cell(tile_cell))
.collect()
} else {
tile_cells_iter.collect()
};*/
Some(tile_cells_iter)
} else {
None
}
}
pub fn contains_tile(&self, cell: &HEALPixCell) -> bool {
self.textures.contains_tile(cell)
}
pub fn update(&mut self, camera: &mut CameraViewPort, projection: &ProjectionType) {
let vertices_recomputation_needed =
self.textures.reset_available_tiles() | camera.has_moved();
if vertices_recomputation_needed {
self.recompute_vertices(camera, projection);
}
@@ -552,7 +624,7 @@ impl HiPS {
self.textures
.start_time
.map(|start_time| {
let fading = (Time::now().0 - start_time.0) / crate::app::BLENDING_ANIM_DURATION;
let fading = (Time::now().0 - start_time.0) / crate::app::BLENDING_ANIM_DURATION.0;
fading.clamp(0.0, 1.0)
})
.unwrap_or(0.0)
@@ -560,12 +632,12 @@ impl HiPS {
pub fn is_allsky(&self) -> bool {
self.textures.config().is_allsky
}
pub fn reset_frame(&mut self) {
self.view.reset_frame();
}
/*pub fn reset_frame(&mut self) {
self.view.reset_frame();
}*/
// The given position is in camera space
pub fn read_pixel(
&self,
@@ -574,15 +646,16 @@ impl HiPS {
) -> Result<JsValue, JsValue> {
// 1. Convert it to the hips frame system
let cfg = self.textures.config();
let camera_frame = camera.get_system();
let hips_frame = &cfg.get_frame();
let camera_frame = camera.get_coo_system();
let hips_frame = cfg.get_frame();
let pos = crate::coosys::apply_coo_system(camera_frame, hips_frame, &pos.vector());
// Get the array of textures from that survey
let tile_depth = camera.get_tile_depth().min(cfg.get_max_depth());
let pos_tex = self
.textures
.get_pixel_position_in_texture(&pos.lonlat(), self.view.get_depth())?;
.get_pixel_position_in_texture(&pos.lonlat(), tile_depth)?;
let slice_idx = pos_tex.z as usize;
let texture_array = self.textures.get_texture_array();
@@ -602,7 +675,7 @@ impl HiPS {
}
}
pub fn recompute_vertices(&mut self, camera: &CameraViewPort, projection: &ProjectionType) {
pub fn recompute_vertices(&mut self, camera: &mut CameraViewPort, projection: &ProjectionType) {
self.position.clear();
self.uv_start.clear();
self.uv_end.clear();
@@ -613,7 +686,7 @@ impl HiPS {
let cfg = self.textures.config();
// Get the coo system transformation matrix
let selected_frame = camera.get_system();
let selected_frame = camera.get_coo_system();
let channel = cfg.get_format().get_channel();
let hips_frame = cfg.get_frame();
@@ -621,10 +694,12 @@ impl HiPS {
let mut off_indices = 0;
for cell in self.view.get_cells() {
let depth = camera.get_tile_depth().min(cfg.get_max_depth());
let view_cells: Vec<_> = camera.get_hpx_cells(depth, hips_frame).cloned().collect();
for cell in &view_cells {
// filter textures that are not in the moc
let cell = if let Some(moc) = self.footprint_moc.as_ref() {
if moc.contains(cell) {
if moc.intersects_cell(cell) {
Some(cell)
} else {
if channel == ChannelType::RGB8U {
@@ -677,7 +752,12 @@ impl HiPS {
}
};
if let Some(TextureToDraw {cell, starting_texture, ending_texture}) = texture_to_draw {
if let Some(TextureToDraw {
cell,
starting_texture,
ending_texture,
}) = texture_to_draw
{
let uv_0 = TileUVW::new(cell, starting_texture, cfg);
let uv_1 = TileUVW::new(cell, ending_texture, cfg);
let start_time = ending_texture.start_time().as_millis();
@@ -693,16 +773,20 @@ impl HiPS {
let n_vertices_per_segment = n_segments_by_side + 1;
let mut pos = vec![];
for (idx, lonlat) in crate::healpix::utils::grid_lonlat::<f64>(cell, n_segments_by_side as u16)
.iter()
.enumerate() {
for (idx, lonlat) in
crate::healpix::utils::grid_lonlat::<f64>(cell, n_segments_by_side as u16)
.iter()
.enumerate()
{
let lon = lonlat.lon();
let lat = lonlat.lat();
let xyzw = crate::math::lonlat::radec_to_xyzw(lon, lat);
let xyzw = crate::coosys::apply_coo_system(&hips_frame, &selected_frame, &xyzw);
let ndc = projection.model_to_normalized_device_space(&xyzw, camera)
let xyzw =
crate::coosys::apply_coo_system(hips_frame, selected_frame, &xyzw);
let ndc = projection
.model_to_normalized_device_space(&xyzw, camera)
.map(|v| [v.x as f32, v.y as f32]);
let i: usize = idx / n_vertices_per_segment;
@@ -710,22 +794,22 @@ impl HiPS {
let hj0 = (j as f32) / n_segments_by_side_f32;
let hi0 = (i as f32) / n_segments_by_side_f32;
let d01s = uv_0[TileCorner::BottomRight].x - uv_0[TileCorner::BottomLeft].x;
let d02s = uv_0[TileCorner::TopLeft].y - uv_0[TileCorner::BottomLeft].y;
let d01e = uv_1[TileCorner::BottomRight].x - uv_1[TileCorner::BottomLeft].x;
let d02e = uv_1[TileCorner::TopLeft].y - uv_1[TileCorner::BottomLeft].y;
let uv_start = [
uv_0[TileCorner::BottomLeft].x + hj0 * d01s,
uv_0[TileCorner::BottomLeft].y + hi0 * d02s,
uv_0[TileCorner::BottomLeft].z
uv_0[TileCorner::BottomLeft].z,
];
let uv_end = [
uv_1[TileCorner::BottomLeft].x + hj0 * d01e,
uv_1[TileCorner::BottomLeft].y + hi0 * d02e,
uv_1[TileCorner::BottomLeft].z
uv_1[TileCorner::BottomLeft].z,
];
self.uv_start.extend(uv_start);
@@ -737,37 +821,32 @@ impl HiPS {
pos.push(ndc);
}
let patch_indices = BuildPatchIndicesIter::new(
&(0..=n_segments_by_side),
&(0..=n_segments_by_side),
n_vertices_per_segment,
&pos,
camera
).flatten()
.map(|indices| [
let patch_indices_iter = BuildPatchIndicesIter::new(
&(0..=n_segments_by_side),
&(0..=n_segments_by_side),
n_vertices_per_segment,
&pos,
camera,
)
.flatten()
.map(|indices| {
[
indices.0 + off_indices,
indices.1 + off_indices,
indices.2 + off_indices
])
.flatten()
.collect::<Vec<_>>();
indices.2 + off_indices,
]
})
.flatten();
self.idx_vertices.extend(patch_indices_iter);
off_indices += pos.len() as u16;
// Replace options with an arbitrary vertex
let position = pos.into_iter()
.map(|ndc| {
if let Some(ndc) = ndc {
ndc
} else {
[0.0, 0.0]
}
})
.flatten()
.collect::<Vec<_>>();
self.position.extend(position);
self.idx_vertices.extend(patch_indices);
let position_iter = pos
.into_iter()
.map(|ndc| ndc.unwrap_or([0.0, 0.0]))
.flatten();
self.position.extend(position_iter);
}
}
}
@@ -811,25 +890,23 @@ impl HiPS {
);
}
pub fn refresh_view(&mut self, camera: &CameraViewPort) {
/*pub fn (&mut self, camera: &CameraViewPort, proj: &ProjectionType) {
let cfg = self.textures.config();
let max_tile_depth = cfg.get_max_tile_depth();
let delta_depth = cfg.delta_depth();
//let delta_depth = cfg.delta_depth();
let hips_frame = cfg.get_frame();
//let hips_frame = cfg.get_frame();
// Compute that depth
let camera_tile_depth = camera.get_tile_depth();
self.depth_tile = camera_tile_depth.min(max_tile_depth);
// Set the depth of the HiPS textures
self.depth = if self.depth_tile > delta_depth {
/*self.depth = if self.depth_tile > delta_depth {
self.depth_tile - delta_depth
} else {
0
};
self.view.refresh(self.depth_tile, hips_frame, camera);
}
};*/
}*/
// Return a boolean to signal if the tile is present or not in the survey
pub fn update_priority_tile(&mut self, cell: &HEALPixCell) -> bool {
@@ -851,10 +928,7 @@ impl HiPS {
self.textures.push(&cell, image, time_request)
}
pub fn add_allsky(
&mut self,
allsky: Allsky,
) -> Result<(), JsValue> {
pub fn add_allsky(&mut self, allsky: Allsky) -> Result<(), JsValue> {
self.textures.push_allsky(allsky)
}
@@ -869,21 +943,16 @@ impl HiPS {
self.textures.config_mut()
}
#[inline]
/*#[inline]
pub fn get_view(&self) -> &HEALPixCellsInView {
&self.view
}
}*/
#[inline]
pub fn get_min_depth_tile(&self) -> u8 {
self.min_depth_tile
}
#[inline]
pub fn get_depth(&self) -> u8 {
self.depth
}
#[inline]
pub fn is_ready(&self) -> bool {
self.textures.is_ready()
@@ -904,14 +973,18 @@ impl HiPS {
cfg: &ImageMetadata,
) -> Result<(), JsValue> {
// Get the coo system transformation matrix
let selected_frame = camera.get_system();
let selected_frame = camera.get_coo_system();
let hips_cfg = self.textures.config();
let hips_frame = hips_cfg.get_frame();
let c = selected_frame.to(&hips_frame);
let c = selected_frame.to(hips_frame);
// Get whether the camera mode is longitude reversed
//let longitude_reversed = hips_cfg.longitude_reversed;
let rl = if camera.get_longitude_reversed() { ID_R } else { ID };
let rl = if camera.get_longitude_reversed() {
ID_R
} else {
ID
};
// Retrieve the model and inverse model matrix
let w2v = c * (*camera.get_w2m()) * rl;
@@ -940,14 +1013,9 @@ impl HiPS {
if raytracing {
// Triangles are defined in CCW order
self.gl.cull_face(WebGl2RenderingContext::BACK);
let shader = get_raytracer_shader(
cmap,
&self.gl,
shaders,
&config,
)?;
let shader = get_raytracer_shader(cmap, &self.gl, shaders, &config)?;
let shader = shader.bind(&self.gl);
shader
.attach_uniforms_from(camera)
@@ -960,7 +1028,7 @@ impl HiPS {
.attach_uniform("current_time", &utils::get_current_time())
.attach_uniform("opacity", &opacity)
.attach_uniforms_from(colormaps);
raytracer.draw(&shader);
} else {
// Depending on whether the longitude is reversed, triangles are defined in either:
@@ -972,7 +1040,7 @@ impl HiPS {
} else {
self.gl.cull_face(WebGl2RenderingContext::BACK);
}
// The rasterizer has a buffer containing:
// - The vertices of the HEALPix cells for the most refined survey
// - The starting and ending uv for the blending animation
@@ -985,14 +1053,8 @@ impl HiPS {
// - The UVs are changed if:
// * new cells are added/removed (because new cells are added)
// * there are new available tiles for the GPU
let shader = get_raster_shader(
cmap,
&self.gl,
shaders,
&config,
)?
.bind(&self.gl);
let shader = get_raster_shader(cmap, &self.gl, shaders, &config)?.bind(&self.gl);
shader
.attach_uniforms_from(camera)
.attach_uniforms_from(&self.textures)
@@ -1012,7 +1074,7 @@ impl HiPS {
0,
);
}
// Depending on whether the longitude is reversed, triangles are defined in either:
// - CCW for longitude_reversed = false
// - CW for longitude_reversed = true
@@ -1030,4 +1092,4 @@ impl HiPS {
Ok(())
}
}
}

View File

@@ -1,5 +1,5 @@
use crate::{camera::CameraViewPort, math::projection::Projection};
use crate::domain::sdf::ProjDefType;
use crate::{camera::CameraViewPort, math::projection::Projection};
use al_core::VecData;
use al_core::{shader::ShaderBound, Texture2D, VertexArrayObject, WebGlContext};
@@ -38,13 +38,11 @@ use cgmath::{InnerSpace, Vector2};
const SIZE_POSITION_TEX: usize = 2048;
fn generate_xyz_position(projection: &ProjectionType) -> Vec<f32> {
let (w, h) = (SIZE_POSITION_TEX, SIZE_POSITION_TEX);
let mut data = Vec::with_capacity(SIZE_POSITION_TEX * SIZE_POSITION_TEX * 3);
unsafe { data.set_len(SIZE_POSITION_TEX * SIZE_POSITION_TEX * 3); }
let mut data = vec![0.0; SIZE_POSITION_TEX * SIZE_POSITION_TEX * 3];
let mut set_pixel = |r: f32, g: f32, b: f32, x: usize, y: usize| {
data[3*(y*w + x)] = r;
data[3*(y*w + x) + 1] = g;
data[3*(y*w + x) + 2] = b;
data[3 * (y * w + x)] = r;
data[3 * (y * w + x) + 1] = g;
data[3 * (y * w + x) + 2] = b;
};
let mut t1 = 1.0;
@@ -66,25 +64,25 @@ fn generate_xyz_position(projection: &ProjectionType) -> Vec<f32> {
d |= ((pos.x * 0.5 + 0.5) * (1024.0 as f64)) as u32;
data.push(d);*/
t1 = pos.x as f32;
t2 = pos.y as f32;
t3 = pos.z as f32;
set_pixel(t1, t2, t3, x, y);
if x > 0 {
set_pixel(t1, t2, t3, x-1, y);
set_pixel(t1, t2, t3, x - 1, y);
}
if y > 0 {
set_pixel(t1, t2, t3, x, y-1);
set_pixel(t1, t2, t3, x, y - 1);
}
if x < w - 1 {
set_pixel(t1, t2, t3, x+1, y);
set_pixel(t1, t2, t3, x + 1, y);
}
if y < h - 1 {
set_pixel(t1, t2, t3, x, y+1);
set_pixel(t1, t2, t3, x, y + 1);
}
} else {
set_pixel(t1, t2, t3, x, y);

View File

@@ -1,14 +1,22 @@
use wcs::ImgXY;
use std::ops::RangeInclusive;
use wcs::ImgXY;
use crate::camera::CameraViewPort;
use crate::math::projection::ProjectionType;
use wcs::WCS;
use al_api::coo_system::CooSystem;
use crate::math::angle::ToAngle;
use crate::math::projection::ProjectionType;
use crate::renderable::utils::BuildPatchIndicesIter;
use al_api::coo_system::CooSystem;
use wcs::WCS;
pub fn get_grid_params(xy_min: &(f64, f64), xy_max: &(f64, f64), max_tex_size: u64, num_tri_per_tex_patch: u64) -> (impl Iterator<Item=(u64, f32)> + Clone, impl Iterator<Item=(u64, f32)> + Clone) {
pub fn get_grid_params(
xy_min: &(f64, f64),
xy_max: &(f64, f64),
max_tex_size: u64,
num_tri_per_tex_patch: u64,
) -> (
impl Iterator<Item = (u64, f32)> + Clone,
impl Iterator<Item = (u64, f32)> + Clone,
) {
let x_range_len = (xy_max.0 - xy_min.0) as u64;
let y_range_len = (xy_max.1 - xy_min.1) as u64;
@@ -22,7 +30,10 @@ pub fn get_grid_params(xy_min: &(f64, f64), xy_max: &(f64, f64), max_tex_size: u
let step = (step_x.max(step_y)).max(1); // at least one pixel!
(get_coord_uv_it(xmin, xmax, step, max_tex_size), get_coord_uv_it(ymin, ymax, step, max_tex_size))
(
get_coord_uv_it(xmin, xmax, step, max_tex_size),
get_coord_uv_it(ymin, ymax, step, max_tex_size),
)
}
#[derive(Clone)]
@@ -43,7 +54,7 @@ impl StepCoordIterator {
start,
step,
end,
cur
cur,
}
}
}
@@ -52,11 +63,10 @@ impl Iterator for StepCoordIterator {
type Item = u64;
fn next(&mut self) -> Option<Self::Item> {
if self.cur == self.start {
// starting case
self.cur = self.start - (self.start % self.step) + self.step;
Some(self.start)
} else if self.cur < self.end {
// ongoing case
@@ -70,44 +80,44 @@ impl Iterator for StepCoordIterator {
}
}
fn get_coord_uv_it(xmin: u64, xmax: u64, step: usize, max_tex_size: u64) -> impl Iterator<Item=(u64, f32)> + Clone {
let get_uv_in_tex_chunk = move |x: u64| {
((x % max_tex_size) as f32) / (max_tex_size as f32)
};
fn get_coord_uv_it(
xmin: u64,
xmax: u64,
step: usize,
max_tex_size: u64,
) -> impl Iterator<Item = (u64, f32)> + Clone {
let get_uv_in_tex_chunk = move |x: u64| ((x % max_tex_size) as f32) / (max_tex_size as f32);
let tex_patch_x = StepCoordIterator::new(xmin, xmax, max_tex_size);
let x_it = std::iter::once((xmin, get_uv_in_tex_chunk(xmin)))
.chain(
tex_patch_x.clone().skip(1)
.map(|x1| {
vec![(x1, 1.0), (x1, 0.0)]
})
.flatten()
tex_patch_x
.clone()
.skip(1)
.map(|x1| vec![(x1, 1.0), (x1, 0.0)])
.flatten(),
)
.chain(
std::iter::once((
xmax,
if xmax % max_tex_size == 0 {
1.0
} else {
get_uv_in_tex_chunk(xmax)
}
))
);
.chain(std::iter::once((
xmax,
if xmax % max_tex_size == 0 {
1.0
} else {
get_uv_in_tex_chunk(xmax)
},
)));
let mut step_x = (xmin..xmax).step_by(step as usize);
let mut cur_step = step_x.next().unwrap();
x_it.clone().zip(x_it.clone().skip(1))
x_it.clone()
.zip(x_it.clone().skip(1))
.map(move |(x1, x2)| {
let mut xk = vec![x1];
while cur_step < x2.0 {
if cur_step > x1.0 {
xk.push(
(cur_step, get_uv_in_tex_chunk(cur_step))
);
xk.push((cur_step, get_uv_in_tex_chunk(cur_step)));
}
if let Some(step) = step_x.next() {
@@ -120,19 +130,17 @@ fn get_coord_uv_it(xmin: u64, xmax: u64, step: usize, max_tex_size: u64) -> impl
xk
})
.flatten()
.chain(
std::iter::once((
xmax,
if xmax % max_tex_size == 0 {
1.0
} else {
get_uv_in_tex_chunk(xmax)
}
))
)
}
.chain(std::iter::once((
xmax,
if xmax % max_tex_size == 0 {
1.0
} else {
get_uv_in_tex_chunk(xmax)
},
)))
}
fn build_range_indices(it: impl Iterator<Item=(u64, f32)> + Clone) -> Vec<RangeInclusive<usize>> {
fn build_range_indices(it: impl Iterator<Item = (u64, f32)> + Clone) -> Vec<RangeInclusive<usize>> {
let mut idx_ranges = vec![];
let mut idx_start = 0;
@@ -158,7 +166,16 @@ fn build_range_indices(it: impl Iterator<Item=(u64, f32)> + Clone) -> Vec<RangeI
}
#[allow(dead_code)]
pub fn get_grid_vertices(xy_min: &(f64, f64), xy_max: &(f64, f64), max_tex_size: u64, num_tri_per_tex_patch: u64, camera: &CameraViewPort, wcs: &WCS, image_coo_sys: &CooSystem, projection: &ProjectionType) -> (Vec<[f32; 2]>, Vec<[f32; 2]>, Vec<u16>, Vec<u32>) {
pub fn get_grid_vertices(
xy_min: &(f64, f64),
xy_max: &(f64, f64),
max_tex_size: u64,
num_tri_per_tex_patch: u64,
camera: &CameraViewPort,
wcs: &WCS,
image_coo_sys: CooSystem,
projection: &ProjectionType,
) -> (Vec<[f32; 2]>, Vec<[f32; 2]>, Vec<u16>, Vec<u32>) {
let (x_it, y_it) = get_grid_params(xy_min, xy_max, max_tex_size, num_tri_per_tex_patch);
let idx_x_ranges = build_range_indices(x_it.clone());
@@ -166,33 +183,42 @@ pub fn get_grid_vertices(xy_min: &(f64, f64), xy_max: &(f64, f64), max_tex_size:
let num_x_vertices = idx_x_ranges.last().unwrap().end() + 1;
let (pos, uv): (Vec<_>, Vec<_>) = y_it.map(move |(y, uvy)|
x_it.clone().map(move |(x, uvx)| {
let ndc = if let Some(lonlat) = wcs.unproj(&ImgXY::new(x as f64, y as f64)) {
let lon = lonlat.lon();
let lat = lonlat.lat();
let xyzw = crate::math::lonlat::radec_to_xyzw(lon.to_angle(), lat.to_angle());
let xyzw = crate::coosys::apply_coo_system(&image_coo_sys, camera.get_system(), &xyzw);
projection.model_to_normalized_device_space(&xyzw, camera)
.map(|v| [v.x as f32, v.y as f32])
} else {
None
};
let (pos, uv): (Vec<_>, Vec<_>) = y_it
.map(move |(y, uvy)| {
x_it.clone().map(move |(x, uvx)| {
let ndc = if let Some(lonlat) = wcs.unproj(&ImgXY::new(x as f64, y as f64)) {
let lon = lonlat.lon();
let lat = lonlat.lat();
(ndc, [uvx, uvy])
let xyzw = crate::math::lonlat::radec_to_xyzw(lon.to_angle(), lat.to_angle());
let xyzw = crate::coosys::apply_coo_system(
image_coo_sys,
camera.get_coo_system(),
&xyzw,
);
projection
.model_to_normalized_device_space(&xyzw, camera)
.map(|v| [v.x as f32, v.y as f32])
} else {
None
};
(ndc, [uvx, uvy])
})
})
).flatten()
.unzip();
.flatten()
.unzip();
let mut indices = vec![];
let mut num_indices = vec![];
for idx_x_range in &idx_x_ranges {
for idx_y_range in &idx_y_ranges {
let build_indices_iter = BuildPatchIndicesIter::new(idx_x_range, idx_y_range, num_x_vertices, &pos, camera);
let build_indices_iter =
BuildPatchIndicesIter::new(idx_x_range, idx_y_range, num_x_vertices, &pos, camera);
let patch_indices = build_indices_iter.flatten()
let patch_indices = build_indices_iter
.flatten()
.map(|indices| [indices.0, indices.1, indices.2])
.flatten()
.collect::<Vec<_>>();
@@ -202,14 +228,9 @@ pub fn get_grid_vertices(xy_min: &(f64, f64), xy_max: &(f64, f64), max_tex_size:
}
}
let pos = pos.into_iter()
.map(|ndc| {
if let Some(ndc) = ndc {
ndc
} else {
[0.0, 0.0]
}
})
let pos = pos
.into_iter()
.map(|ndc| if let Some(ndc) = ndc { ndc } else { [0.0, 0.0] })
.collect();
(pos, uv, indices, num_indices)
@@ -221,12 +242,7 @@ mod tests {
#[test]
fn test_grid_vertices() {
let (x, y) = super::get_grid_params(
&(0.0, 0.0),
&(40.0, 40.0),
20,
4
);
let (x, y) = super::get_grid_params(&(0.0, 0.0), &(40.0, 40.0), 20, 4);
let x = x.collect::<Vec<_>>();
let y = y.collect::<Vec<_>>();
@@ -234,12 +250,7 @@ mod tests {
assert_eq!(x.len(), 6);
assert_eq!(y.len(), 6);
let (x, y) = super::get_grid_params(
&(0.0, 0.0),
&(50.0, 40.0),
20,
5
);
let (x, y) = super::get_grid_params(&(0.0, 0.0), &(50.0, 40.0), 20, 5);
let x = x.collect::<Vec<_>>();
let y = y.collect::<Vec<_>>();
@@ -247,12 +258,7 @@ mod tests {
assert_eq!(x.len(), 8);
assert_eq!(y.len(), 6);
let (x, y) = super::get_grid_params(
&(0.0, 0.0),
&(7000.0, 7000.0),
4096,
2
);
let (x, y) = super::get_grid_params(&(0.0, 0.0), &(7000.0, 7000.0), 4096, 2);
let x = x.collect::<Vec<_>>();
let y = y.collect::<Vec<_>>();
@@ -260,26 +266,25 @@ mod tests {
assert_eq!(x.len(), 5);
assert_eq!(y.len(), 5);
let (x, y) = super::get_grid_params(
&(0.0, 0.0),
&(3000.0, 7000.0),
4096,
2
);
let (x, y) = super::get_grid_params(&(0.0, 0.0), &(3000.0, 7000.0), 4096, 2);
let x = x.collect::<Vec<_>>();
let y = y.collect::<Vec<_>>();
assert_eq!(x, &[(0, 0.0), (3000, 0.7324219)]);
assert_eq!(y, &[(0, 0.0), (3500, 0.8544922), (4096, 1.0), (4096, 0.0), (7000, 0.7089844)]);
let (x, y) = super::get_grid_params(
&(0.0, 0.0),
&(4096.0, 4096.0),
4096,
1
assert_eq!(
y,
&[
(0, 0.0),
(3500, 0.8544922),
(4096, 1.0),
(4096, 0.0),
(7000, 0.7089844)
]
);
let (x, y) = super::get_grid_params(&(0.0, 0.0), &(4096.0, 4096.0), 4096, 1);
let x_idx_rng = super::build_range_indices(x.clone());
let y_idx_rng = super::build_range_indices(y.clone());
@@ -292,26 +297,25 @@ mod tests {
assert_eq!(x_idx_rng, &[0..=1]);
assert_eq!(y_idx_rng, &[0..=1]);
let (x, y) = super::get_grid_params(
&(0.0, 0.0),
&(11000.0, 7000.0),
4096,
1
);
let (x, y) = super::get_grid_params(&(0.0, 0.0), &(11000.0, 7000.0), 4096, 1);
let x = x.collect::<Vec<_>>();
let y = y.collect::<Vec<_>>();
assert_eq!(x, &[(0, 0.0), (4096, 1.0), (4096, 0.0), (8192, 1.0), (8192, 0.0), (11000, 0.6855469)]);
assert_eq!(
x,
&[
(0, 0.0),
(4096, 1.0),
(4096, 0.0),
(8192, 1.0),
(8192, 0.0),
(11000, 0.6855469)
]
);
assert_eq!(y, &[(0, 0.0), (4096, 1.0), (4096, 0.0), (7000, 0.7089844)]);
let (x, y) = super::get_grid_params(
&(0.0, 0.0),
&(4096.0, 4096.0),
4096,
1
);
let (x, y) = super::get_grid_params(&(0.0, 0.0), &(4096.0, 4096.0), 4096, 1);
let x = x.collect::<Vec<_>>();
let y = y.collect::<Vec<_>>();
@@ -319,17 +323,20 @@ mod tests {
assert_eq!(x, &[(0, 0.0), (4096, 1.0)]);
assert_eq!(y, &[(0, 0.0), (4096, 1.0)]);
let (x, y) = super::get_grid_params(
&(3000.0, 4000.0),
&(4096.0, 7096.0),
4096,
1
);
let (x, y) = super::get_grid_params(&(3000.0, 4000.0), &(4096.0, 7096.0), 4096, 1);
let x = x.collect::<Vec<_>>();
let y = y.collect::<Vec<_>>();
assert_eq!(x, &[(3000, 0.7324219), (4096, 1.0)]);
assert_eq!(y, &[(4000, 0.9765625), (4096, 1.0), (4096, 0.0), (7096, 0.7324219)]);
assert_eq!(
y,
&[
(4000, 0.9765625),
(4096, 1.0),
(4096, 0.0),
(7096, 0.7324219)
]
);
}
}
}

View File

@@ -1,40 +1,37 @@
pub mod grid;
pub mod subdivide_texture;
use std::vec;
use std::marker::Unpin;
use std::cmp::Ordering;
use std::fmt::Debug;
use std::marker::Unpin;
use std::vec;
use al_api::coo_system::CooSystem;
use cgmath::Zero;
use futures::stream::{TryStreamExt};
use futures::stream::TryStreamExt;
use futures::AsyncRead;
use wasm_bindgen::JsValue;
use web_sys::WebGl2RenderingContext;
use fitsrs::{
hdu::{
data::stream,
}
};
use fitsrs::hdu::data::stream;
use wcs::{ImgXY, WCS};
use al_api::hips::ImageMetadata;
use al_api::fov::CenteredFoV;
use al_api::hips::ImageMetadata;
use al_core::{VertexArrayObject, Texture2D};
use al_core::WebGlContext;
use al_core::VecData;
use al_core::webgl_ctx::GlWrapper;
use al_core::image::format::*;
use al_core::webgl_ctx::GlWrapper;
use al_core::VecData;
use al_core::WebGlContext;
use al_core::{Texture2D, VertexArrayObject};
use crate::camera::CameraViewPort;
use crate::math::lonlat::LonLat;
use crate::Colormaps;
use crate::ProjectionType;
use crate::ShaderManager;
use crate::Colormaps;
use crate::math::lonlat::LonLat;
use std::ops::Range;
@@ -70,11 +67,15 @@ pub struct Image {
max_tex_size: usize,
}
use futures::io::BufReader;
use fitsrs::hdu::AsyncHDU;
use fitsrs::hdu::header::extension;
use fitsrs::hdu::AsyncHDU;
use futures::io::BufReader;
pub fn compute_automatic_cuts<T>(slice: &mut [T], first_percent: i32, second_percent: i32) -> Range<T>
pub fn compute_automatic_cuts<T>(
slice: &mut [T],
first_percent: i32,
second_percent: i32,
) -> Range<T>
where
T: PartialOrd + Copy,
{
@@ -82,8 +83,18 @@ where
let first_pct_idx = ((first_percent as f32) * 0.01 * (n as f32)) as usize;
let last_pct_idx = ((second_percent as f32) * 0.01 * (n as f32)) as usize;
let min_val = crate::utils::select_kth_smallest(slice, 0, n - 1, first_pct_idx);
let max_val = crate::utils::select_kth_smallest(slice, 0, n - 1, last_pct_idx);
let min_val = {
let (_, min_val, _) = slice.select_nth_unstable_by(first_pct_idx, |a, b| {
a.partial_cmp(b).unwrap_or(Ordering::Greater)
});
*min_val
};
let max_val = {
let (_, max_val, _) = slice.select_nth_unstable_by(last_pct_idx, |a, b| {
a.partial_cmp(b).unwrap_or(Ordering::Greater)
});
*max_val
};
min_val..max_val
}
@@ -95,7 +106,7 @@ impl Image {
//reader: &'a mut BufReader<R>,
) -> Result<Self, JsValue>
where
R: AsyncRead + Unpin + Debug + 'a
R: AsyncRead + Unpin + Debug + 'a,
{
// Load the fits file
let header = hdu.get_header();
@@ -105,8 +116,10 @@ impl Image {
if naxis == 0 {
return Err(JsValue::from_str("The fits is empty, NAXIS=0"));
}
let max_tex_size = WebGl2RenderingContext::get_parameter(gl, WebGl2RenderingContext::MAX_TEXTURE_SIZE)?
.as_f64().unwrap_or(4096.0) as usize;
let max_tex_size =
WebGl2RenderingContext::get_parameter(gl, WebGl2RenderingContext::MAX_TEXTURE_SIZE)?
.as_f64()
.unwrap_or(4096.0) as usize;
let scale = header
.get_parsed::<f64>(b"BSCALE ")
@@ -135,71 +148,62 @@ impl Image {
let height = h as f64;
let data = hdu.get_data_mut();
let (textures, channel, mut cuts) = match data {
stream::Data::U8(data) => {
let reader = data
.map_ok(|v| {
v[0].to_le_bytes()
})
.into_async_read();
let reader = data.map_ok(|v| v[0].to_le_bytes()).into_async_read();
let (textures, samples) = subdivide_texture::build::<R8UI, _>(gl, w, h, reader, max_tex_size).await?;
let (textures, samples) =
subdivide_texture::build::<R8UI, _>(gl, w, h, reader, max_tex_size).await?;
let mut samples = samples
.into_iter()
.filter_map(|v| if v == (blank as u8) {
None
} else {
Some(v)
})
.filter_map(|v| if v == (blank as u8) { None } else { Some(v) })
.collect::<Vec<_>>();
let cuts = compute_automatic_cuts(&mut samples, 1, 99);
(textures, ChannelType::R8UI, (cuts.start as f32)..(cuts.end as f32))
},
(
textures,
ChannelType::R8UI,
(cuts.start as f32)..(cuts.end as f32),
)
}
stream::Data::I16(data) => {
let reader = data
.map_ok(|v| {
v[0].to_le_bytes()
})
.into_async_read();
let reader = data.map_ok(|v| v[0].to_le_bytes()).into_async_read();
let (textures, samples) = subdivide_texture::build::<R16I, _>(gl, w, h, reader, max_tex_size).await?;
let (textures, samples) =
subdivide_texture::build::<R16I, _>(gl, w, h, reader, max_tex_size).await?;
let mut samples = samples
.into_iter()
.filter_map(|v| if v == (blank as i16) {
None
} else {
Some(v)
})
.filter_map(|v| if v == (blank as i16) { None } else { Some(v) })
.collect::<Vec<_>>();
let cuts = compute_automatic_cuts(&mut samples, 1, 99);
(textures, ChannelType::R16I, (cuts.start as f32)..(cuts.end as f32))
},
(
textures,
ChannelType::R16I,
(cuts.start as f32)..(cuts.end as f32),
)
}
stream::Data::I32(data) => {
let reader = data
.map_ok(|v| {
v[0].to_le_bytes()
})
.into_async_read();
let reader = data.map_ok(|v| v[0].to_le_bytes()).into_async_read();
let (textures, samples) = subdivide_texture::build::<R32I, _>(gl, w, h, reader, max_tex_size).await?;
let (textures, samples) =
subdivide_texture::build::<R32I, _>(gl, w, h, reader, max_tex_size).await?;
let mut samples = samples
.into_iter()
.filter_map(|v| if v == (blank as i32) {
None
} else {
Some(v)
})
.filter_map(|v| if v == (blank as i32) { None } else { Some(v) })
.collect::<Vec<_>>();
let cuts = compute_automatic_cuts(&mut samples, 1, 99);
(textures, ChannelType::R32I, (cuts.start as f32)..(cuts.end as f32))
},
(
textures,
ChannelType::R32I,
(cuts.start as f32)..(cuts.end as f32),
)
}
stream::Data::I64(data) => {
let reader = data
.map_ok(|v| {
@@ -208,40 +212,46 @@ impl Image {
})
.into_async_read();
let (textures, samples) = subdivide_texture::build::<R32I, _>(gl, w, h, reader, max_tex_size).await?;
let (textures, samples) =
subdivide_texture::build::<R32I, _>(gl, w, h, reader, max_tex_size).await?;
let mut samples = samples
.into_iter()
.filter_map(|v| if v == (blank as i32) {
None
} else {
Some(v as i32)
.filter_map(|v| {
if v == (blank as i32) {
None
} else {
Some(v as i32)
}
})
.collect::<Vec<_>>();
let cuts = compute_automatic_cuts(&mut samples, 1, 99);
(textures, ChannelType::R32I, (cuts.start as f32)..(cuts.end as f32))
},
(
textures,
ChannelType::R32I,
(cuts.start as f32)..(cuts.end as f32),
)
}
stream::Data::F32(data) => {
let reader = data
.map_ok(|v| {
v[0].to_le_bytes()
})
.into_async_read();
let (textures, samples) = subdivide_texture::build::<R32F, _>(gl, w, h, reader, max_tex_size).await?;
let reader = data.map_ok(|v| v[0].to_le_bytes()).into_async_read();
let (textures, samples) =
subdivide_texture::build::<R32F, _>(gl, w, h, reader, max_tex_size).await?;
let mut samples = samples
.into_iter()
.filter_map(|v| if v == blank || v.is_nan() || v.is_zero() {
None
} else {
Some(v)
.filter_map(|v| {
if v == blank || v.is_nan() || v.is_zero() {
None
} else {
Some(v)
}
})
.collect::<Vec<_>>();
let cuts = compute_automatic_cuts(&mut samples, 1, 99);
(textures, ChannelType::R32F, cuts)
},
}
stream::Data::F64(data) => {
let reader = data
.map_ok(|v| {
@@ -250,21 +260,24 @@ impl Image {
})
.into_async_read();
let (textures, samples) = subdivide_texture::build::<R32F, _>(gl, w, h, reader, max_tex_size).await?;
let (textures, samples) =
subdivide_texture::build::<R32F, _>(gl, w, h, reader, max_tex_size).await?;
let mut samples = samples
.into_iter()
.filter_map(|v| if v == blank || v.is_nan() || v.is_zero() {
None
} else {
Some(v)
.filter_map(|v| {
if v == blank || v.is_nan() || v.is_zero() {
None
} else {
Some(v)
}
})
.collect::<Vec<_>>();
let cuts = compute_automatic_cuts(&mut samples, 1, 99);
(textures, ChannelType::R32F, cuts)
},
}
};
let num_indices = vec![];
@@ -274,7 +287,7 @@ impl Image {
// Define the buffers
let vao = {
let mut vao = VertexArrayObject::new(gl);
#[cfg(feature = "webgl2")]
vao.bind_for_update()
// layout (location = 0) in vec2 ndc_pos;
@@ -327,30 +340,29 @@ impl Image {
let gl = gl.clone();
// Compute the fov
let center = wcs.unproj_lonlat(&ImgXY::new(width / 2.0, height / 2.0))
let center = wcs
.unproj_lonlat(&ImgXY::new(width / 2.0, height / 2.0))
.ok_or(JsValue::from_str("(w / 2, h / 2) px cannot be unprojected"))?;
let top_lonlat = wcs.unproj_lonlat(&ImgXY::new(width / 2.0, height))
let top_lonlat = wcs
.unproj_lonlat(&ImgXY::new(width / 2.0, height))
.ok_or(JsValue::from_str("(w / 2, h) px cannot be unprojected"))?;
let left_lonlat = wcs.unproj_lonlat(&ImgXY::new(0.0, height / 2.0))
let left_lonlat = wcs
.unproj_lonlat(&ImgXY::new(0.0, height / 2.0))
.ok_or(JsValue::from_str("(0, h / 2) px cannot be unprojected"))?;
let half_fov1 = crate::math::lonlat::ang_between_lonlat(
top_lonlat.into(),
center.clone().into()
);
let half_fov2 = crate::math::lonlat::ang_between_lonlat(
left_lonlat.into(),
center.clone().into()
);
let half_fov1 =
crate::math::lonlat::ang_between_lonlat(top_lonlat.into(), center.clone().into());
let half_fov2 =
crate::math::lonlat::ang_between_lonlat(left_lonlat.into(), center.clone().into());
let half_fov = half_fov1.max(half_fov2);
// ra and dec must be given in ICRS coo system
let center = {
use crate::LonLatT;
let center: LonLatT<_> = center.into();
let center = crate::coosys::apply_coo_system(&image_coo_sys, &CooSystem::ICRS, &center.vector());
let center =
crate::coosys::apply_coo_system(image_coo_sys, CooSystem::ICRS, &center.vector());
center.lonlat()
};
@@ -395,7 +407,11 @@ impl Image {
Ok(image)
}
pub fn update(&mut self, camera: &CameraViewPort, projection: &ProjectionType) -> Result<(), JsValue> {
pub fn update(
&mut self,
camera: &CameraViewPort,
projection: &ProjectionType,
) -> Result<(), JsValue> {
if camera.has_moved() {
self.recompute_vertices(camera, projection)?;
}
@@ -403,7 +419,11 @@ impl Image {
Ok(())
}
pub fn recompute_vertices(&mut self, camera: &CameraViewPort, projection: &ProjectionType) -> Result<(), JsValue> {
pub fn recompute_vertices(
&mut self,
camera: &CameraViewPort,
projection: &ProjectionType,
) -> Result<(), JsValue> {
let (width, height) = self.wcs.img_dimensions();
let width = width as f64;
let height = height as f64;
@@ -411,17 +431,26 @@ impl Image {
// Determine the x and y pixel ranges that must be drawn to the screen
let (x_mesh_range, y_mesh_range) = if let Some(vertices) = camera.get_vertices() {
// The field of view is defined, so we can compute its projection into the wcs
let (mut x_fov_proj_range, mut y_fov_proj_range) = (std::f64::INFINITY..std::f64::NEG_INFINITY, std::f64::INFINITY..std::f64::NEG_INFINITY);
let (mut x_fov_proj_range, mut y_fov_proj_range) = (
std::f64::INFINITY..std::f64::NEG_INFINITY,
std::f64::INFINITY..std::f64::NEG_INFINITY,
);
for vertex in vertices.iter() {
let xyzw = crate::coosys::apply_coo_system(camera.get_system(), &self.image_coo_sys, vertex);
let xyzw = crate::coosys::apply_coo_system(
camera.get_coo_system(),
self.image_coo_sys,
vertex,
);
let lonlat = xyzw.lonlat();
let lon = lonlat.lon();
let lat = lonlat.lat();
let img_vert = self.wcs.proj(&wcs::LonLat::new(lon.to_radians(), lat.to_radians()));
let img_vert = self
.wcs
.proj(&wcs::LonLat::new(lon.to_radians(), lat.to_radians()));
if let Some(img_vert) = img_vert {
x_fov_proj_range.start = x_fov_proj_range.start.min(img_vert.x());
@@ -438,7 +467,8 @@ impl Image {
x.start <= y.end && y.start <= x.end
};
let fov_image_overlapping = is_ranges_overlapping(&x_fov_proj_range, &(0.0..width)) && is_ranges_overlapping(&y_fov_proj_range, &(0.0..height));
let fov_image_overlapping = is_ranges_overlapping(&x_fov_proj_range, &(0.0..width))
&& is_ranges_overlapping(&y_fov_proj_range, &(0.0..height));
if fov_image_overlapping {
if camera.get_field_of_view().contains_pole() {
@@ -447,8 +477,10 @@ impl Image {
} else {
// The fov is overlapping the image, we must render it!
// clamp the texture
let x_mesh_range = x_fov_proj_range.start.max(0.0)..x_fov_proj_range.end.min(width);
let y_mesh_range = y_fov_proj_range.start.max(0.0)..y_fov_proj_range.end.min(height);
let x_mesh_range =
x_fov_proj_range.start.max(0.0)..x_fov_proj_range.end.min(width);
let y_mesh_range =
y_fov_proj_range.start.max(0.0)..y_fov_proj_range.end.min(height);
// Select the textures overlapping the fov
let id_min_tx = (x_mesh_range.start as u64) / (self.max_tex_size as u64);
@@ -461,7 +493,8 @@ impl Image {
self.idx_tex = (id_min_tx..=id_max_tx)
.flat_map(|id_tx| {
(id_min_ty..=id_max_ty).map(move |id_ty| (id_ty + id_tx*num_texture_y) as usize)
(id_min_ty..=id_max_ty)
.map(move |id_ty| (id_ty + id_tx * num_texture_y) as usize)
})
.collect::<Vec<_>>();
@@ -481,7 +514,8 @@ impl Image {
};
const MAX_NUM_TRI_PER_SIDE_IMAGE: usize = 25;
let num_vertices = ((self.centered_fov.fov / 360.0) * (MAX_NUM_TRI_PER_SIDE_IMAGE as f64)).ceil() as u64;
let num_vertices =
((self.centered_fov.fov / 360.0) * (MAX_NUM_TRI_PER_SIDE_IMAGE as f64)).ceil() as u64;
let (pos, uv, indices, num_indices) = grid::get_grid_vertices(
&(x_mesh_range.start, y_mesh_range.start),
@@ -490,8 +524,8 @@ impl Image {
num_vertices,
camera,
&self.wcs,
&self.image_coo_sys,
projection
self.image_coo_sys,
projection,
);
self.pos = unsafe { crate::utils::transmute_vec(pos).map_err(|s| JsValue::from_str(s))? };
self.uv = unsafe { crate::utils::transmute_vec(uv).map_err(|s| JsValue::from_str(s))? };
@@ -501,7 +535,8 @@ impl Image {
self.num_indices = num_indices;
// vertices contains ndc positions and texture UVs
self.vao.bind_for_update()
self.vao
.bind_for_update()
.update_array(
"ndc_pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
@@ -521,7 +556,12 @@ impl Image {
}
// Draw the image
pub fn draw(&self, shaders: &mut ShaderManager, colormaps: &Colormaps, cfg: &ImageMetadata) -> Result<(), JsValue> {
pub fn draw(
&self,
shaders: &mut ShaderManager,
colormaps: &Colormaps,
cfg: &ImageMetadata,
) -> Result<(), JsValue> {
self.gl.enable(WebGl2RenderingContext::BLEND);
let ImageMetadata {
@@ -534,12 +574,18 @@ impl Image {
let shader = match self.channel {
ChannelType::R32F => crate::shader::get_shader(&self.gl, shaders, "FitsVS", "FitsFS")?,
#[cfg(feature = "webgl2")]
ChannelType::R32I => crate::shader::get_shader(&self.gl, shaders, "FitsVS", "FitsFSInteger")?,
ChannelType::R32I => {
crate::shader::get_shader(&self.gl, shaders, "FitsVS", "FitsFSInteger")?
}
#[cfg(feature = "webgl2")]
ChannelType::R16I => crate::shader::get_shader(&self.gl, shaders, "FitsVS", "FitsFSInteger")?,
ChannelType::R16I => {
crate::shader::get_shader(&self.gl, shaders, "FitsVS", "FitsFSInteger")?
}
#[cfg(feature = "webgl2")]
ChannelType::R8UI => crate::shader::get_shader(&self.gl, shaders, "FitsVS", "FitsFSUnsigned")?,
_ => return Err(JsValue::from_str("Image format type not supported"))
ChannelType::R8UI => {
crate::shader::get_shader(&self.gl, shaders, "FitsVS", "FitsFSUnsigned")?
}
_ => return Err(JsValue::from_str("Image format type not supported")),
};
self.gl.disable(WebGl2RenderingContext::CULL_FACE);
@@ -551,7 +597,8 @@ impl Image {
let texture = &self.textures[idx_tex];
let num_indices = self.num_indices[idx] as i32;
shader.bind(&self.gl)
shader
.bind(&self.gl)
.attach_uniforms_from(colormaps)
.attach_uniforms_with_params_from(color, colormaps)
.attach_uniform("opacity", opacity)
@@ -567,7 +614,7 @@ impl Image {
((off_indices as usize) * std::mem::size_of::<u16>()) as i32,
);
off_indices += self.num_indices[idx];
off_indices += self.num_indices[idx];
}
Ok(())

View File

@@ -1,339 +0,0 @@
use al_core::shader::Shader;
use al_core::text::LetterTexPosition;
use al_core::texture::Texture2D;
use al_core::webgl_ctx::WebGlContext;
use al_core::VertexArrayObject;
use std::collections::HashMap;
pub trait RenderManager {
fn begin_frame(&mut self);
fn end_frame(&mut self);
fn draw(&mut self, camera: &CameraViewPort, color: &ColorRGB, opacity: f32, scale: f32) -> Result<(), JsValue>;
}
pub struct TextRenderManager {
gl: WebGlContext,
shader: Shader,
vao: VertexArrayObject,
font_texture: Texture2D,
letters: HashMap<char, LetterTexPosition>,
#[cfg(feature = "webgl2")]
vertices: Vec<f32>,
#[cfg(feature = "webgl1")]
pos: Vec<f32>,
#[cfg(feature = "webgl1")]
tx: Vec<f32>,
indices: Vec<u16>,
}
use al_core::VecData;
use cgmath::{Rad, Vector2};
use wasm_bindgen::JsValue;
use crate::camera::CameraViewPort;
use al_api::color::ColorRGB;
use web_sys::WebGl2RenderingContext;
use al_api::resources::Resources;
impl TextRenderManager {
/// Init the buffers, VAO and shader
pub fn new(gl: WebGlContext, resources: &Resources) -> Result<Self, JsValue> {
// Create the VAO for the screen
#[cfg(feature = "webgl1")]
let shader = Shader::new(
&gl,
include_str!("../../../glsl/webgl1/text/text_vertex.glsl"),
include_str!("../../../glsl/webgl1/text/text_frag.glsl"),
)?;
#[cfg(feature = "webgl2")]
let shader = Shader::new(
&gl,
include_str!("../../../glsl/webgl2/text/text_vertex.glsl"),
include_str!("../../../glsl/webgl2/text/text_frag.glsl"),
)?;
let mut vao = VertexArrayObject::new(&gl);
#[cfg(feature = "webgl2")]
let vertices = vec![];
#[cfg(feature = "webgl1")]
let pos = vec![];
#[cfg(feature = "webgl1")]
let tx = vec![];
let indices = vec![];
#[cfg(feature = "webgl2")]
vao.bind_for_update()
.add_array_buffer(
"vertices",
7 * std::mem::size_of::<f32>(),
&[2, 2, 2, 1],
&[0, 2 * std::mem::size_of::<f32>(), 4 * std::mem::size_of::<f32>(), 6 * std::mem::size_of::<f32>()],
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<f32>(&vertices),
)
// Set the element buffer
.add_element_buffer(
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<u16>(&indices),
);
#[cfg(feature = "webgl1")]
vao.bind_for_update()
.add_array_buffer(
2,
"pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<f32>(&pos),
)
.add_array_buffer(
2,
"tx",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<f32>(&tx),
)
// Set the element buffer
.add_element_buffer(
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<u16>(&indices),
);
/*let al_core::text::Font {
bitmap,
letters,
..
} = al_core::text::rasterize_font(text_size);*/
let letters_filename = resources.get_filename("letters").ok_or(JsValue::from_str("letters loading failed"))?;
let letters_content = resources.get_filename("letters_metadata").ok_or(JsValue::from_str("letters metadata loading failed"))?;
let letters = serde_json::from_str(&letters_content).map_err(|_| JsValue::from_str("serde json failed"))?;
let font_texture = Texture2D::create_from_path::<_, al_core::image::format::RGBA8U>(
&gl,
"letters",
letters_filename,
&[
(
WebGl2RenderingContext::TEXTURE_MIN_FILTER,
WebGl2RenderingContext::LINEAR,
),
(
WebGl2RenderingContext::TEXTURE_MAG_FILTER,
WebGl2RenderingContext::LINEAR,
),
// Prevents s-coordinate wrapping (repeating)
(
WebGl2RenderingContext::TEXTURE_WRAP_S,
WebGl2RenderingContext::CLAMP_TO_EDGE,
),
// Prevents t-coordinate wrapping (repeating)
(
WebGl2RenderingContext::TEXTURE_WRAP_T,
WebGl2RenderingContext::CLAMP_TO_EDGE,
),
],
)?;
Ok(Self {
gl,
shader,
vao,
letters,
font_texture,
#[cfg(feature = "webgl2")]
vertices: vec![],
#[cfg(feature = "webgl1")]
pos: vec![],
#[cfg(feature = "webgl1")]
tx: vec![],
indices: vec![],
})
}
pub fn add_label<A: Into<Rad<f32>>>(
&mut self,
text: &str,
screen_pos: &Vector2<f32>,
angle_rot: A,
) {
// 1. Loop over the text chars to compute the size of the text to plot
let (mut w, mut h) = (0, 0);
for c in text.chars() {
if let Some(l) = self.letters.get(&c) {
w += l.x_advance;
h = std::cmp::max(h, l.h);
}
}
let x_pos = -(w as f32) * 0.5;
let y_pos = -(h as f32) * 0.5;
let f_tex_size = &self.font_texture.get_size();
let mut x_offset = 0.0;
let rot: Rad<_> = angle_rot.into();
for c in text.chars() {
if let Some(l) = self.letters.get(&c) {
let u1 = (l.x_min as f32) / (f_tex_size.0 as f32);
let v1 = (l.y_min as f32) / (f_tex_size.1 as f32);
let u2 = (l.x_min as f32 + l.w as f32) / (f_tex_size.0 as f32);
let v2 = (l.y_min as f32) / (f_tex_size.1 as f32);
let u3 = (l.x_min as f32 + l.w as f32) / (f_tex_size.0 as f32);
let v3 = (l.y_min as f32 + l.h as f32) / (f_tex_size.1 as f32);
let u4 = (l.x_min as f32) / (f_tex_size.0 as f32);
let v4 = (l.y_min as f32 + l.h as f32) / (f_tex_size.1 as f32);
#[cfg(feature = "webgl2")]
let num_vertices = (self.vertices.len() / 7) as u16;
#[cfg(feature = "webgl1")]
let num_vertices = (self.pos.len() / 2) as u16;
let xmin = l.bound_xmin;
let ymin = l.bound_ymin + (l.h as f32);
#[cfg(feature = "webgl2")]
self.vertices.extend([
x_pos + x_offset + xmin,
y_pos - ymin,
u1,
v1,
screen_pos.x,
screen_pos.y,
rot.0,
x_pos + x_offset + (l.w as f32) + xmin,
y_pos - ymin,
u2,
v2,
screen_pos.x,
screen_pos.y,
rot.0,
x_pos + x_offset + (l.w as f32) + xmin,
y_pos + (l.h as f32) - ymin,
u3,
v3,
screen_pos.x,
screen_pos.y,
rot.0,
x_pos + x_offset + xmin,
y_pos + (l.h as f32) - ymin,
u4,
v4,
screen_pos.x,
screen_pos.y,
rot.0,
]);
#[cfg(feature = "webgl1")]
self.pos.extend([
x_pos + x_offset + xmin,
y_pos - ymin,
x_pos + x_offset + (l.w as f32) + xmin,
y_pos - ymin,
x_pos + x_offset + (l.w as f32) + xmin,
y_pos + (l.h as f32) - ymin,
x_pos + x_offset + xmin,
y_pos + (l.h as f32) - ymin,
]);
#[cfg(feature = "webgl1")]
self.tx.extend([u1, v1, u2, v2, u3, v3, u4, v4]);
self.indices.extend([
num_vertices,
num_vertices + 2,
num_vertices + 1,
num_vertices,
num_vertices + 3,
num_vertices + 2,
]);
x_offset += l.x_advance as f32;
}
}
}
pub fn get_width_pixel_size(&self, content: &str) -> f64 {
let mut w = 0;
for c in content.chars() {
if let Some(l) = self.letters.get(&c) {
w += l.x_advance;
}
}
w as f64
}
}
impl RenderManager for TextRenderManager {
fn begin_frame(&mut self) {
#[cfg(feature = "webgl2")]
self.vertices.clear();
#[cfg(feature = "webgl1")]
self.pos.clear();
#[cfg(feature = "webgl1")]
self.tx.clear();
self.indices.clear();
}
fn end_frame(&mut self) {
// update to the GPU
#[cfg(feature = "webgl2")]
self.vao
.bind_for_update()
.update_array(
"vertices",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData(&self.vertices),
)
.update_element_array(WebGl2RenderingContext::DYNAMIC_DRAW, VecData(&self.indices));
#[cfg(feature = "webgl1")]
self.vao
.bind_for_update()
.update_array(
"pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData(&self.pos),
)
.update_array(
"tx",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData(&self.tx),
)
.update_element_array(WebGl2RenderingContext::DYNAMIC_DRAW, VecData(&self.indices));
}
fn draw(&mut self, camera: &CameraViewPort, color: &ColorRGB, opacity: f32, scale: f32) -> Result<(), JsValue> {
self.gl.enable(WebGl2RenderingContext::BLEND);
self.gl.blend_func_separate(
WebGl2RenderingContext::SRC_ALPHA,
WebGl2RenderingContext::ONE_MINUS_SRC_ALPHA,
WebGl2RenderingContext::ONE,
WebGl2RenderingContext::ONE,
); // standard (non-premultiplied) alpha blending
self.gl.disable(WebGl2RenderingContext::CULL_FACE);
{
let shader = self.shader.bind(&self.gl);
shader.attach_uniform("u_sampler_font", &self.font_texture) // Font letters texture
.attach_uniform("u_screen_size", &camera.get_screen_size())
.attach_uniform("u_dpi", &camera.get_dpi())
.attach_uniform("u_color", &color)
.attach_uniform("u_opacity", &opacity)
.attach_uniform("u_scale", &scale)
.bind_vertex_array_object_ref(&self.vao)
.draw_elements_with_i32(
WebGl2RenderingContext::TRIANGLES,
Some(self.indices.len() as i32),
WebGl2RenderingContext::UNSIGNED_SHORT,
0,
);
}
self.gl.enable(WebGl2RenderingContext::CULL_FACE);
self.gl.disable(WebGl2RenderingContext::BLEND);
Ok(())
}
}

View File

@@ -0,0 +1,216 @@
use cgmath::Vector3;
use crate::ProjectionType;
use crate::CameraViewPort;
use cgmath::InnerSpace;
use crate::math::angle::ToAngle;
use crate::coo_space::XYNDC;
use crate::coo_space::XYZModel;
use crate::LonLatT;
const MAX_ITERATION: usize = 5;
// Requirement:
// * Latitudes between [-0.5*pi; 0.5*pi]
// * Longitudes between [0; 2\pi[
// * (lon1 - lon2).abs() < PI so that the arc can only cross either the primary meridian or the opposite primary meridian
//   (the latter case is handled thanks to the longitude intervals)
pub fn project(lon1: f64, lat1: f64, lon2: f64, lat2: f64, camera: &CameraViewPort, projection: &ProjectionType) -> Vec<XYNDC> {
let mut vertices = vec![];
let lonlat1 = LonLatT::new(lon1.to_angle(), lat1.to_angle());
let lonlat2 = LonLatT::new(lon2.to_angle(), lat2.to_angle());
let v1: Vector3<_> = lonlat1.vector();
let v2: Vector3<_> = lonlat2.vector();
let p1 = projection.model_to_normalized_device_space(&v1.extend(1.0), camera);
let p2 = projection.model_to_normalized_device_space(&v2.extend(1.0), camera);
match (p1, p2) {
(Some(_), Some(_)) => {
project_line(&mut vertices, &v1, &v2, camera, projection, 0);
},
(None, Some(_)) => {
let (v1, v2) = sub_valid_domain(v2, v1, projection, camera);
project_line(&mut vertices, &v1, &v2, camera, projection, 0);
},
(Some(_), None) => {
let (v1, v2) = sub_valid_domain(v1, v2, projection, camera);
project_line(&mut vertices, &v1, &v2, camera, projection, 0);
},
(None, None) => {
}
}
vertices
}
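// Usage sketch (a minimal, illustrative example; it assumes a `camera: &CameraViewPort`
// and a `projection: &ProjectionType` are already in scope):
//
//     let ndc = project(0.1, 0.0, 0.5, 0.2, camera, projection);
//     // `ndc` holds segment endpoints in consecutive pairs:
//     // (ndc[0], ndc[1]), (ndc[2], ndc[3]), ... ready for a LINES-style draw call.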
// Precondition:
// * angular distance between valid_lon and invalid_lon is < PI
// * valid_lon and invalid_lon are well defined, i.e. they can lie in [-PI; PI] or [0, 2PI] depending
//   on whether or not they cross the zero meridian
fn sub_valid_domain(valid_v: XYZModel, invalid_v: XYZModel, projection: &ProjectionType, camera: &CameraViewPort) -> (XYZModel, XYZModel) {
let d_alpha = camera.get_aperture().to_radians() * 0.02;
let mut vv = valid_v;
let mut vi = invalid_v;
while crate::math::vector::angle3(&vv, &vi).to_radians() > d_alpha {
let vm = (vv + vi).normalize();
// check whether it is defined or not
if let Some(_) = projection.model_to_normalized_device_space(&vm.extend(1.0), camera) {
vv = vm;
} else {
vi = vm;
}
}
// Return the valid interval found by dichotomy
(vv, valid_v)
}
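// Worked example of the threshold above (the 180 deg aperture is an assumption for
// illustration): d_alpha = PI * 0.02 ~ 0.063 rad (~3.6 deg), so an initial gap of PI/2
// between the valid and invalid vectors is halved about 5 times before the loop stops.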
fn project_line(
vertices: &mut Vec<XYNDC>,
v1: &XYZModel,
v2: &XYZModel,
camera: &CameraViewPort,
projection: &ProjectionType,
iter: usize,
) -> bool {
let p1 = projection.model_to_normalized_device_space(&v1.extend(1.0), camera);
let p2 = projection.model_to_normalized_device_space(&v2.extend(1.0), camera);
if iter < MAX_ITERATION {
// Project them. We are always facing the camera
let vm = (v1 + v2).normalize();
let pm = projection.model_to_normalized_device_space(&vm.extend(1.0), camera);
match (p1, pm, p2) {
(Some(p1), Some(pm), Some(p2)) => {
let d12 = crate::math::vector::angle3(v1, v2).to_radians();
// Keep subdividing while the angular distance is > 30 degrees
if d12 > 30.0_f64.to_radians() {
subdivide(
vertices,
v1,
v2,
&vm,
p1,
p2,
pm,
camera,
projection,
iter
);
} else {
// enough to stop the recursion
let ab = pm - p1;
let bc = p2 - pm;
let ab_u = ab.normalize();
let bc_u = bc.normalize();
let dot_abbc = crate::math::vector::dot(&ab_u, &bc_u);
let theta_abbc = dot_abbc.acos();
if theta_abbc.abs() < 5.0_f64.to_radians() {
let det_abbc = crate::math::vector::det(&ab_u, &bc_u);
if det_abbc.abs() < 1e-2 {
vertices.push(p1);
vertices.push(p2);
} else {
// not collinear, but the angle is small enough to stop
vertices.push(p1);
vertices.push(pm);
vertices.push(pm);
vertices.push(p2);
}
} else {
let ab_l = ab.magnitude2();
let bc_l = bc.magnitude2();
let r = (ab_l - bc_l).abs() / (ab_l + bc_l);
if r > 0.8 {
if ab_l < bc_l {
vertices.push(p1);
vertices.push(pm);
} else {
vertices.push(pm);
vertices.push(p2);
}
} else {
// Subdivide a->b and b->c
subdivide(
vertices,
v1,
v2,
&vm,
p1,
p2,
pm,
camera,
projection,
iter
);
}
}
}
true
},
_ => false
}
} else {
false
}
}
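// Worked example of the flatness test above, with p1 = (0.0, 0.0), pm = (0.1, 0.001)
// and p2 = (0.2, 0.0): ab_u ~ (0.99995, 0.01), bc_u ~ (0.99995, -0.01), so
// theta_abbc = acos(ab_u . bc_u) ~ 1.15 deg < 5 deg, but |det(ab_u, bc_u)| ~ 0.02 > 1e-2:
// the points are not collinear enough, so both p1->pm and pm->p2 are emitted.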
fn subdivide(
vertices: &mut Vec<XYNDC>,
v1: &XYZModel,
v2: &XYZModel,
vm: &XYZModel,
p1: XYNDC,
p2: XYNDC,
pm: XYNDC,
camera: &CameraViewPort,
projection: &ProjectionType,
iter: usize
) {
// Subdivide a->b and b->c
if !project_line(
vertices,
v1,
vm,
camera,
projection,
iter + 1
) {
vertices.push(p1);
vertices.push(pm);
}
if !project_line(
vertices,
vm,
v2,
camera,
projection,
iter + 1
) {
vertices.push(pm);
vertices.push(p2);
}
}

View File

@@ -0,0 +1,333 @@
/// This module handles the lines rendering code
pub mod great_circle_arc;
pub mod parallel_arc;
use crate::Abort;
use al_core::shader::Shader;
use al_core::VertexArrayObject;
use al_core::WebGlContext;
use super::Renderer;
use al_api::color::ColorRGBA;
use al_core::SliceData;
use lyon::algorithms::{
math::point,
measure::{PathMeasurements, SampleType},
path::Path,
};
struct Meta {
color: ColorRGBA,
off_indices: usize,
num_indices: usize,
}
#[derive(Clone)]
pub enum Style {
None,
Dashed,
Dotted,
}
pub struct RasterizedLineRenderer {
gl: WebGlContext,
shader: Shader,
vao: VertexArrayObject,
vertices: Vec<f32>,
indices: Vec<u32>,
meta: Vec<Meta>,
}
use wasm_bindgen::JsValue;
use al_core::VecData;
use web_sys::WebGl2RenderingContext;
use crate::camera::CameraViewPort;
use lyon::tessellation::*;
pub struct PathVertices<T>
where
T: AsRef<[[f32; 2]]>,
{
pub vertices: T,
pub closed: bool,
}
impl RasterizedLineRenderer {
/// Init the buffers, VAO and shader
pub fn new(gl: &WebGlContext) -> Result<Self, JsValue> {
let vertices = vec![];
let indices = vec![];
// Create the VAO for the screen
let shader = Shader::new(
&gl,
include_str!("../../../../glsl/webgl2/line/line_vertex.glsl"),
include_str!("../../../../glsl/webgl2/line/line_frag.glsl"),
)?;
let mut vao = VertexArrayObject::new(&gl);
vao.bind_for_update()
.add_array_buffer(
"ndc_pos",
2 * std::mem::size_of::<f32>(),
&[2],
&[0],
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<f32>(&vertices),
)
// Set the element buffer
.add_element_buffer(
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<u32>(&indices),
)
.unbind();
let meta = vec![];
let gl = gl.clone();
Ok(Self {
gl,
shader,
vao,
meta,
vertices,
indices,
})
}
pub fn add_fill_paths<T>(
&mut self,
paths: impl Iterator<Item = PathVertices<T>>,
color: &ColorRGBA,
) where
T: AsRef<[[f32; 2]]>,
{
let mut num_indices = 0;
let off_indices = self.indices.len();
let mut geometry: VertexBuffers<[f32; 2], u32> = VertexBuffers::new();
let mut tessellator = FillTessellator::new();
//let mut num_vertices = 0;
for path in paths {
let mut path_builder = Path::builder();
let PathVertices { vertices, closed } = path;
let line: &[[f32; 2]] = vertices.as_ref();
if !line.is_empty() {
let v = &line[0];
path_builder.begin(point(v[0], v[1]));
for v in line.iter().skip(1) {
//let v = clamp_ndc_vertex(v);
path_builder.line_to(point(v[0], v[1]));
}
path_builder.end(closed);
}
// Create the destination vertex and index buffers.
let p = path_builder.build();
// Let's use our own custom vertex type instead of the default one.
// Will contain the result of the tessellation.
let num_vertices = (self.vertices.len() / 2) as u32;
// Compute the tessellation.
tessellator
.tessellate_with_ids(
p.id_iter(),
&p,
Some(&p),
&FillOptions::default()
.with_intersections(false)
.with_fill_rule(FillRule::NonZero)
.with_tolerance(5e-3),
&mut BuffersBuilder::new(&mut geometry, |vertex: FillVertex| {
vertex.position().to_array()
})
.with_vertex_offset(num_vertices),
)
.unwrap_abort();
}
let VertexBuffers { vertices, indices } = geometry;
num_indices += indices.len();
self.vertices.extend(vertices.iter().flatten());
self.indices.extend(indices.iter());
//al_core::info!("num vertices fill", nv);
self.meta.push(Meta {
off_indices,
num_indices,
color: color.clone(),
});
}
pub fn add_stroke_paths<T>(
&mut self,
paths: impl Iterator<Item = PathVertices<T>>,
thickness: f32,
color: &ColorRGBA,
style: &Style,
) where
T: AsRef<[[f32; 2]]>,
{
let num_vertices = (self.vertices.len() / 2) as u32;
let mut path_builder = Path::builder();
match &style {
Style::None => {
for path in paths {
let PathVertices { vertices, closed } = path;
let line: &[[f32; 2]] = vertices.as_ref();
if !line.is_empty() {
//let v = clamp_ndc_vertex(&line[0]);
let v = &line[0];
path_builder.begin(point(v[0], v[1]));
for v in line.iter().skip(1) {
//let v = clamp_ndc_vertex(v);
path_builder.line_to(point(v[0], v[1]));
}
path_builder.end(closed);
}
}
//al_core::info!("num vertices", nv);
}
Style::Dashed => {
for path in paths {
let PathVertices { vertices, closed } = path;
let line: &[[f32; 2]] = vertices.as_ref();
if !line.is_empty() {
let mut line_path_builder = Path::builder();
//let v = clamp_ndc_vertex(&line[0]);
let v = &line[0];
line_path_builder.begin(point(v[0], v[1]));
for v in line.iter().skip(1) {
//let v = clamp_ndc_vertex(v);
line_path_builder.line_to(point(v[0], v[1]));
}
line_path_builder.end(closed);
let path = line_path_builder.build();
// Build the acceleration structure.
let measurements = PathMeasurements::from_path(&path, 1e-2);
let mut sampler =
measurements.create_sampler(&path, SampleType::Normalized);
let path_len = sampler.length();
let step = 1e-2 / path_len;
for i in (0..((1.0 / step) as usize)).step_by(2) {
let start = (i as f32) * step;
let end = (i as f32 + 1.0) * step;
sampler.split_range(start..end, &mut path_builder);
}
}
}
}
Style::Dotted => {}
}
let p = path_builder.build();
// Let's use our own custom vertex type instead of the default one.
// Will contain the result of the tessellation.
let mut geometry: VertexBuffers<[f32; 2], u32> = VertexBuffers::new();
{
let mut tessellator = StrokeTessellator::new();
// Compute the tessellation.
tessellator
.tessellate(
&p,
&StrokeOptions::default().with_line_width(thickness),
&mut BuffersBuilder::new(&mut geometry, |vertex: StrokeVertex| {
vertex.position().to_array()
})
.with_vertex_offset(num_vertices),
)
.unwrap_abort();
}
let VertexBuffers { vertices, indices } = geometry;
let num_indices = indices.len();
let off_indices = self.indices.len();
self.vertices.extend(vertices.iter().flatten());
self.indices.extend(indices.iter());
self.meta.push(Meta {
off_indices,
num_indices,
color: color.clone(),
});
}
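// Worked example of the dashed sampling above: for a path of NDC length 0.5,
// step = 1e-2 / 0.5 = 0.02 in the sampler's normalized [0; 1] range, so the loop
// visits 50 ranges and keeps every other one: 25 dashes, each ~0.01 NDC units long.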
pub fn draw(&mut self, _camera: &CameraViewPort) -> Result<(), JsValue> {
self.gl.enable(WebGl2RenderingContext::BLEND);
self.gl.blend_func_separate(
WebGl2RenderingContext::SRC_ALPHA,
WebGl2RenderingContext::ONE_MINUS_SRC_ALPHA,
WebGl2RenderingContext::ONE,
WebGl2RenderingContext::ONE,
);
self.gl.disable(WebGl2RenderingContext::CULL_FACE);
let shader = self.shader.bind(&self.gl);
for meta in self.meta.iter() {
shader
.attach_uniform("u_color", &meta.color) // Strengh of the kernel
.bind_vertex_array_object_ref(&self.vao)
.draw_elements_with_i32(
WebGl2RenderingContext::TRIANGLES,
Some(meta.num_indices as i32),
WebGl2RenderingContext::UNSIGNED_INT,
((meta.off_indices as usize) * std::mem::size_of::<u32>()) as i32,
);
}
self.gl.enable(WebGl2RenderingContext::CULL_FACE);
self.gl.disable(WebGl2RenderingContext::BLEND);
Ok(())
}
}
impl Renderer for RasterizedLineRenderer {
fn begin(&mut self) {
self.vertices.clear();
self.indices.clear();
self.meta.clear();
}
fn end(&mut self) {
// update to the GPU
self.vao
.bind_for_update()
.update_array(
"ndc_pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
SliceData(self.vertices.as_slice()),
)
.update_element_array(
WebGl2RenderingContext::DYNAMIC_DRAW,
SliceData(self.indices.as_slice()),
);
}
}
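// Frame-cycle sketch (illustrative only; it assumes the `Renderer` trait is in scope,
// plus a mutable `renderer`, a `camera: &CameraViewPort` and a `color: ColorRGBA`;
// the vertex and thickness values are placeholders):
//
//     renderer.begin();
//     renderer.add_stroke_paths(
//         std::iter::once(PathVertices { vertices: vec![[0.0_f32, 0.0], [0.5, 0.5]], closed: false }),
//         0.01,
//         &color,
//         &Style::None,
//     );
//     renderer.end();
//     renderer.draw(camera)?;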

View File

@@ -0,0 +1,200 @@
use crate::ProjectionType;
use crate::CameraViewPort;
use cgmath::InnerSpace;
use crate::math::angle::ToAngle;
use crate::math::{TWICE_PI};
use crate::LonLatT;
const MAX_ITERATION: usize = 4;
// * Remark
//
// - Parallel latitude between [-0.5*pi; 0.5*pi]
// - First longitude between [0; 2\pi[
// - Second longitude between [0; 2\pi[
// - (lon1 - lon2).abs() < PI
//
// * Returns
// A list of line vertices
pub fn project(lat: f64, mut lon1: f64, lon2: f64, camera: &CameraViewPort, projection: &ProjectionType) -> Vec<[f32; 2]> {
let mut vertices = vec![];
let lon_len = crate::math::sph_geom::distance_from_two_lon(lon1, lon2);
let mut lon2 = lon1 + lon_len;
// The arc can only cross the zero meridian, not the 180 deg one
if lon2 > TWICE_PI {
// it crosses the zero meridian
lon2 -= TWICE_PI;
// lon1 is > PI because the lon len is <= PI
lon1 -= TWICE_PI;
}
// We know (lon1, lat) can be projected as it is a requirement of that method
let v1 = crate::math::lonlat::proj(&LonLatT::new(lon1.to_angle(), lat.to_angle()), projection, camera);
let v2 = crate::math::lonlat::proj(&LonLatT::new(lon2.to_angle(), lat.to_angle()), projection, camera);
match (v1, v2) {
(Some(_v1), Some(_v2)) => {
subdivide_multi(&mut vertices, lat, lon1, lon2, camera, projection);
},
(None, Some(_v2)) => {
let (lon1, lon2) = sub_valid_domain(lat, lon2, lon1, projection, camera);
subdivide_multi(&mut vertices, lat, lon1, lon2, camera, projection);
},
(Some(_v1), None) => {
let (lon1, lon2) = sub_valid_domain(lat, lon1, lon2, projection, camera);
subdivide_multi(&mut vertices, lat, lon1, lon2, camera, projection);
},
(None, None) => {}
}
vertices
}
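// Worked example of the wrapping above (assuming `distance_from_two_lon` returns the
// 20 deg separation here): for lon1 = 350 deg and lon2 = 10 deg, lon2 becomes
// 350 + 20 = 370 deg > 360 deg; both bounds are then shifted by -360 deg, giving the
// interval [-10 deg; +10 deg], which no longer wraps around the zero meridian.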
// Precondition:
// * angular distance between valid_lon and invalid_lon is < PI
// * valid_lon and invalid_lon are well defined, i.e. they can lie in [-PI; PI] or [0, 2PI] depending
//   on whether or not they cross the zero meridian
fn sub_valid_domain(lat: f64, valid_lon: f64, invalid_lon: f64, projection: &ProjectionType, camera: &CameraViewPort) -> (f64, f64) {
let d_alpha = camera.get_aperture().to_radians() * 0.02;
let mut l_valid = valid_lon;
let mut l_invalid = invalid_lon;
while (l_valid - l_invalid).abs() > d_alpha {
let lm = (l_valid + l_invalid)*0.5;
// check whether it is defined or not
let mid_lonlat = LonLatT::new(lm.to_angle(), lat.to_angle());
if let Some(_) = crate::math::lonlat::proj(&mid_lonlat, projection, camera) {
l_valid = lm;
} else {
l_invalid = lm;
}
}
// Return the valid interval, ordered by increasing longitude
if valid_lon > invalid_lon {
(l_valid, valid_lon)
} else {
(valid_lon, l_valid)
}
}
fn subdivide_multi(
vertices: &mut Vec<[f32; 2]>,
lat: f64,
lon_s: f64,
lon_e: f64,
camera: &CameraViewPort,
projection: &ProjectionType,
) {
let num_vertices = 5;
let dlon = (lon_e - lon_s) / (num_vertices as f64);
for i in 0..num_vertices {
let lon1 = lon_s + (i as f64) * dlon;
let lon2 = lon1 + dlon;
subdivide(vertices, lat, lon1, lon2, camera, projection, 0);
}
}
fn subdivide(
vertices: &mut Vec<[f32; 2]>,
lat: f64,
lon1: f64,
lon2: f64,
camera: &CameraViewPort,
projection: &ProjectionType,
iter: usize,
) -> bool {
let p1 = crate::math::lonlat::proj(&LonLatT::new(lon1.to_angle(), lat.to_angle()), projection, camera);
let p2 = crate::math::lonlat::proj(&LonLatT::new(lon2.to_angle(), lat.to_angle()), projection, camera);
if iter < MAX_ITERATION {
// Project them. We are always facing the camera
let lon0 = (lon1 + lon2)*0.5;
let pm = crate::math::lonlat::proj(&LonLatT::new(lon0.to_angle(), lat.to_angle()), projection, camera);
match (p1, pm, p2) {
(Some(p1), Some(pm), Some(p2)) => {
let ab = pm - p1;
let bc = p2 - pm;
let ab_u = ab.normalize();
let bc_u = bc.normalize();
let dot_abbc = crate::math::vector::dot(&ab_u, &bc_u);
let theta_abbc = dot_abbc.acos();
if theta_abbc.abs() < 5.0_f64.to_radians() {
let det_abbc = crate::math::vector::det(&ab_u, &bc_u);
if det_abbc.abs() < 1e-2 {
vertices.push([p1.x as f32, p1.y as f32]);
vertices.push([p2.x as f32, p2.y as f32]);
} else {
// not collinear, but the angle is small enough to stop
vertices.push([p1.x as f32, p1.y as f32]);
vertices.push([pm.x as f32, pm.y as f32]);
vertices.push([pm.x as f32, pm.y as f32]);
vertices.push([p2.x as f32, p2.y as f32]);
}
} else {
let ab_l = ab.magnitude2();
let bc_l = bc.magnitude2();
let r = (ab_l - bc_l).abs() / (ab_l + bc_l);
if r > 0.8 {
if ab_l < bc_l {
vertices.push([p1.x as f32, p1.y as f32]);
vertices.push([pm.x as f32, pm.y as f32]);
} else {
vertices.push([pm.x as f32, pm.y as f32]);
vertices.push([p2.x as f32, p2.y as f32]);
}
} else {
// Subdivide a->b and b->c
if !subdivide(
vertices,
lat,
lon1,
lon0,
camera,
projection,
iter + 1
) {
vertices.push([p1.x as f32, p1.y as f32]);
vertices.push([pm.x as f32, pm.y as f32]);
}
if !subdivide(
vertices,
lat,
lon0,
lon2,
camera,
projection,
iter + 1
) {
vertices.push([pm.x as f32, pm.y as f32]);
vertices.push([p2.x as f32, p2.y as f32]);
}
}
}
true
},
_ => false
}
} else {
false
}
}

View File

@@ -1,173 +0,0 @@
use al_core::webgl_ctx::WebGl2Context;
use al_core::VertexArrayObject;
use al_core::shader::Shader;
use super::RenderManager;
struct LineMeta {
color: Color,
thickness: f32,
off_idx: usize,
num_idx: usize,
}
pub struct RasterizedLinesRenderManager {
gl: WebGl2Context,
shader: Shader,
vao: VertexArrayObject,
vertices: Vec<f32>,
indices: Vec<u16>,
meta: Vec<LineMeta>,
}
use wasm_bindgen::JsValue;
use cgmath::Vector2;
use al_core::VecData;
use web_sys::WebGl2RenderingContext;
use crate::Color;
use crate::camera::CameraViewPort;
use lyon::math::point;
use lyon::path::Path;
use lyon::tessellation::*;
impl RasterizedLinesRenderManager {
/// Init the buffers, VAO and shader
pub fn new(gl: WebGl2Context, camera: &CameraViewPort) -> Result<Self, JsValue> {
// Create the VAO for the screen
#[cfg(feature = "webgl1")]
let shader = Shader::new(
&gl,
include_str!("../shaders/webgl1/line/line_vertex.glsl"),
include_str!("../shaders/webgl1/line/line_frag.glsl")
)?;
#[cfg(feature = "webgl2")]
let shader = Shader::new(
&gl,
include_str!("../shaders/webgl2/line/line_vertex.glsl"),
include_str!("../shaders/webgl2/line/line_frag.glsl")
)?;
let mut vao = VertexArrayObject::new(&gl);
shader
.bind(&gl)
.bind_vertex_array_object(&mut vao)
.add_array_buffer(
2 * std::mem::size_of::<f32>(),
&[2],
&[0],
WebGl2RenderingContext::STREAM_DRAW,
VecData::<f32>(&vec![]),
)
// Set the element buffer
.add_element_buffer(
WebGl2RenderingContext::STREAM_DRAW,
VecData::<u16>(&vec![]),
)
// Unbind the buffer
.unbind();
let meta = vec![];
Ok(
Self {
gl,
shader,
vao,
meta,
vertices: vec![],
indices: vec![],
}
)
}
pub fn add_path(&mut self, path: &[Vector2<f32>], thickness: f32, color: &Color) {
let mut builder = Path::builder();
if path.is_empty() {
return;
}
builder.begin(point(path[0].x, path[0].y));
for p in path.iter().skip(1) {
builder.line_to(point(p.x, p.y));
}
builder.end(true);
let path = builder.build();
// Let's use our own custom vertex type instead of the default one.
// Will contain the result of the tessellation.
let mut geometry: VertexBuffers<[f32; 2], u16> = VertexBuffers::new();
let mut tessellator = FillTessellator::new();
{
// Compute the tessellation.
tessellator.tessellate_path(
&path,
&FillOptions::default(),
&mut BuffersBuilder::new(&mut geometry, |vertex: FillVertex| {
vertex.position().to_array()
}),
).unwrap_abort();
}
let num_vertices = (self.vertices.len() / 2) as u16;
self.vertices.extend(geometry.vertices.iter().flatten());
for i in geometry.indices.iter_mut() {
*i += num_vertices;
}
let num_idx = geometry.indices.len();
let off_idx = self.indices.len();
self.indices.extend(geometry.indices);
self.meta.push(
LineMeta {
off_idx,
num_idx,
thickness,
color: color.clone(),
}
);
}
}
impl RenderManager for RasterizedLinesRenderManager {
fn begin_frame(&mut self) {
self.vertices.clear();
self.indices.clear();
self.meta.clear();
}
fn end_frame(&mut self) {
// update to the GPU
self.vao.bind_for_update()
.update_array(0, WebGl2RenderingContext::STREAM_DRAW, VecData(&self.vertices))
.update_element_array(WebGl2RenderingContext::STREAM_DRAW, VecData(&self.indices));
}
fn draw(&mut self, window_size: &Vector2<f32>) -> Result<(), JsValue> {
self.gl.enable(WebGl2RenderingContext::BLEND);
self.gl.blend_func(WebGl2RenderingContext::ONE, WebGl2RenderingContext::ONE_MINUS_SRC_ALPHA); // premultiplied alpha
let shader = self.shader.bind(&self.gl);
self.vao.bind(&shader);
for meta in self.meta.iter() {
shader
.attach_uniform("u_color", &meta.color) // Strengh of the kernel
.attach_uniform("u_screen_size", window_size);
self.gl.draw_elements_with_i32(
WebGl2RenderingContext::TRIANGLES,
meta.num_idx as i32,
WebGl2RenderingContext::UNSIGNED_SHORT,
(meta.off_idx as i32) * (std::mem::size_of::<u16>() as i32)
);
}
self.gl.disable(WebGl2RenderingContext::BLEND);
Ok(())
}
}

View File

@@ -1,562 +0,0 @@
use crate::{healpix::{
coverage::HEALPixCoverage,
cell::HEALPixCell
}, shader::ShaderId, math::angle::Angle, CameraViewPort, ShaderManager};
use al_core::{WebGlContext, VertexArrayObject, VecData};
use moclib::{moc::{RangeMOCIterator, RangeMOCIntoIterator}, elem::cell::Cell};
use std::{borrow::Cow, collections::HashMap};
use web_sys::WebGl2RenderingContext;
use al_api::coo_system::CooSystem;
type MOCIdx = String;
use crate::Abort;
pub struct MOC {
vao: VertexArrayObject,
num_indices: Vec<usize>,
first_idx: Vec<usize>,
position: Vec<f32>,
indices: Vec<u32>,
mocs: HashMap<MOCIdx, HierarchicalHpxCoverage>,
adaptative_mocs: HashMap<MOCIdx, Option<HEALPixCoverage>>,
params: HashMap<MOCIdx, al_api::moc::MOC>,
layers: Vec<MOCIdx>,
view: HEALPixCellsInView,
gl: WebGlContext,
}
use crate::survey::view::HEALPixCellsInView;
use cgmath::Vector2;
fn path_along_edge(cell: &HEALPixCell, n_segment_by_side: usize, camera: &CameraViewPort, idx_off: &mut u32, projection: &ProjectionType) -> Option<(Vec<f32>, Vec<u32>)> {
let vertices = cell
.path_along_cell_edge(n_segment_by_side as u32)
.iter()
.filter_map(|(lon, lat)| {
let xyzw = crate::math::lonlat::radec_to_xyzw(Angle(*lon), Angle(*lat));
let xyzw = crate::coosys::apply_coo_system(&CooSystem::ICRS, camera.get_system(), &xyzw);
projection.model_to_normalized_device_space(&xyzw, camera)
.map(|v| [v.x as f32, v.y as f32])
})
.flatten()
.collect::<Vec<_>>();
let cell_inside = vertices.len() == 2*4*n_segment_by_side;
let invalid_tri = |tri_ccw: bool, reversed_longitude: bool| -> bool {
(!reversed_longitude && !tri_ccw) || (reversed_longitude && tri_ccw)
};
let reversed_longitude = camera.get_longitude_reversed();
if cell_inside {
let c0 = crate::math::projection::ndc_to_screen_space(&Vector2::new(vertices[0] as f64, vertices[1] as f64), camera);
let c1 = crate::math::projection::ndc_to_screen_space(&Vector2::new(vertices[2*n_segment_by_side] as f64, vertices[2*n_segment_by_side + 1] as f64), camera);
let c2 = crate::math::projection::ndc_to_screen_space(&Vector2::new(vertices[2*2*n_segment_by_side] as f64, vertices[2*2*n_segment_by_side + 1] as f64), camera);
let c3 = crate::math::projection::ndc_to_screen_space(&Vector2::new(vertices[3*2*n_segment_by_side] as f64, vertices[3*2*n_segment_by_side + 1] as f64), camera);
let first_tri_ccw = crate::math::vector::ccw_tri(&c0, &c1, &c2);
let second_tri_ccw = crate::math::vector::ccw_tri(&c1, &c2, &c3);
let third_tri_ccw = crate::math::vector::ccw_tri(&c2, &c3, &c0);
let fourth_tri_ccw = crate::math::vector::ccw_tri(&c3, &c0, &c1);
let invalid_cell = invalid_tri(first_tri_ccw, reversed_longitude) || invalid_tri(second_tri_ccw, reversed_longitude) || invalid_tri(third_tri_ccw, reversed_longitude) || invalid_tri(fourth_tri_ccw, reversed_longitude);
if !invalid_cell {
let vx = [c0.x, c1.x, c2.x, c3.x];
let vy = [c0.y, c1.y, c2.y, c3.y];
let projeted_cell = HEALPixCellProjeted {
ipix: cell.idx(),
vx,
vy
};
if crate::survey::view::project(projeted_cell, camera, projection).is_none() {
None
} else {
// Generate the LINES indices: idx_off, idx_off + 1, idx_off + 1, .., idx_off + 4*n_segment - 1, idx_off + 4*n_segment - 1, idx_off
let num_vertices = 4 * n_segment_by_side as u32;
let indices = std::iter::once(*idx_off as u32)
.chain((2..2*num_vertices).map(|idx| idx / 2 + *idx_off))
.chain(std::iter::once(*idx_off as u32))
.collect();
*idx_off += num_vertices;
Some((vertices, indices))
}
} else {
None
}
} else {
None
}
}
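// Worked example of the index generation above, for n_segment_by_side = 1
// (so num_vertices = 4) and idx_off = 0: the chained iterator yields
// [0, 1, 1, 2, 2, 3, 3, 0], i.e. the four LINES segments closing the cell outline.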
use al_api::cell::HEALPixCellProjeted;
pub fn rasterize_hpx_cell(cell: &HEALPixCell, n_segment_by_side: usize, camera: &CameraViewPort, idx_off: &mut u32, projection: &ProjectionType) -> Option<(Vec<f32>, Vec<u32>)> {
let n_vertices_per_segment = n_segment_by_side + 1;
let vertices = cell
.grid(n_segment_by_side as u32)
.iter()
.filter_map(|(lon, lat)| {
let xyzw = crate::math::lonlat::radec_to_xyzw(Angle(*lon), Angle(*lat));
let xyzw = crate::coosys::apply_coo_system(&CooSystem::ICRS, camera.get_system(), &xyzw);
projection.model_to_normalized_device_space(&xyzw, camera)
.map(|v| {
[v.x as f32, v.y as f32]
})
})
.flatten()
.collect::<Vec<_>>();
let cell_inside = vertices.len() == 2*(n_segment_by_side+1)*(n_segment_by_side+1);
if cell_inside {
// Generate the TRIANGLES indices for the n_segment_by_side x n_segment_by_side grid of quads
let mut indices = Vec::with_capacity(n_segment_by_side * n_segment_by_side * 6);
let num_vertices = (n_segment_by_side+1)*(n_segment_by_side+1);
let longitude_reversed = camera.get_longitude_reversed();
let invalid_tri = |tri_ccw: bool, reversed_longitude: bool| -> bool {
(!reversed_longitude && !tri_ccw) || (reversed_longitude && tri_ccw)
};
for i in 0..n_segment_by_side {
for j in 0..n_segment_by_side {
let idx_0 = j + i * n_vertices_per_segment;
let idx_1 = j + 1 + i * n_vertices_per_segment;
let idx_2 = j + (i + 1) * n_vertices_per_segment;
let idx_3 = j + 1 + (i + 1) * n_vertices_per_segment;
let c0 = crate::math::projection::ndc_to_screen_space(&Vector2::new(vertices[2*idx_0] as f64, vertices[2*idx_0 + 1] as f64), camera);
let c1 = crate::math::projection::ndc_to_screen_space(&Vector2::new(vertices[2*idx_1] as f64, vertices[2*idx_1 + 1] as f64), camera);
let c2 = crate::math::projection::ndc_to_screen_space(&Vector2::new(vertices[2*idx_2] as f64, vertices[2*idx_2 + 1] as f64), camera);
let c3 = crate::math::projection::ndc_to_screen_space(&Vector2::new(vertices[2*idx_3] as f64, vertices[2*idx_3 + 1] as f64), camera);
let first_tri_ccw = !crate::math::vector::ccw_tri(&c0, &c1, &c2);
let second_tri_ccw = !crate::math::vector::ccw_tri(&c1, &c3, &c2);
if invalid_tri(first_tri_ccw, longitude_reversed) || invalid_tri(second_tri_ccw, longitude_reversed) {
return None;
}
let vx = [c0.x, c1.x, c2.x, c3.x];
let vy = [c0.y, c1.y, c2.y, c3.y];
let projeted_cell = HEALPixCellProjeted {
ipix: cell.idx(),
vx,
vy
};
crate::survey::view::project(projeted_cell, camera, projection)?;
indices.push(*idx_off + idx_0 as u32);
indices.push(*idx_off + idx_1 as u32);
indices.push(*idx_off + idx_2 as u32);
indices.push(*idx_off + idx_1 as u32);
indices.push(*idx_off + idx_3 as u32);
indices.push(*idx_off + idx_2 as u32);
}
}
*idx_off += num_vertices as u32;
Some((vertices, indices))
} else {
None
}
}
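// Worked example of the triangulation above, for n_segment_by_side = 1 and idx_off = 0:
// the single quad (vertices 0..=3) produces the indices [0, 1, 2, 1, 3, 2],
// i.e. two triangles covering the cell.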
struct HierarchicalHpxCoverage {
full_moc: HEALPixCoverage,
partially_degraded_moc: HEALPixCoverage,
}
impl HierarchicalHpxCoverage {
fn new(full_moc: HEALPixCoverage) -> Self {
let partially_degraded_moc = HEALPixCoverage(full_moc.degraded(full_moc.depth_max() >> 1));
Self {
full_moc,
partially_degraded_moc
}
}
fn get(&self, depth: u8) -> &HEALPixCoverage {
if depth <= self.partially_degraded_moc.depth_max() {
&self.partially_degraded_moc
} else {
&self.full_moc
}
}
fn get_full_moc(&self) -> &HEALPixCoverage {
&self.full_moc
}
}
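// Illustrative sketch of the depth selection above: for a full MOC with depth_max = 10,
// the degraded copy is built at depth 10 >> 1 = 5 (assuming it then reports that depth
// as its maximum), so `get(4)` returns the degraded coverage while `get(7)` falls back
// to the full one.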
use crate::ProjectionType;
impl MOC {
pub fn new(gl: &WebGlContext) -> Self {
let mut vao = VertexArrayObject::new(gl);
// layout (location = 0) in vec2 ndc_pos;
//let vertices = vec![0.0; MAX_NUM_FLOATS_TO_DRAW];
//let indices = vec![0_u16; MAX_NUM_INDICES_TO_DRAW];
//let vertices = vec![];
let position = vec![];
let indices = vec![];
#[cfg(feature = "webgl2")]
vao.bind_for_update()
.add_array_buffer_single(
2,
"ndc_pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<f32>(&position),
)
// Set the element buffer
.add_element_buffer(
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<u32>(&indices),
)
.unbind();
#[cfg(feature = "webgl1")]
vao.bind_for_update()
.add_array_buffer(
2,
"ndc_pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<f32>(&position),
)
// Set the element buffer
.add_element_buffer(
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<u32>(&indices),
)
.unbind();
let num_indices = vec![0];
let first_idx = vec![0];
let gl = gl.clone();
let mocs = HashMap::new();
let adaptative_mocs = HashMap::new();
let layers = vec![];
let params = HashMap::new();
let view = HEALPixCellsInView::new();
Self {
position,
indices,
mocs,
adaptative_mocs,
params,
layers,
num_indices,
first_idx,
vao,
gl,
view,
}
}
pub fn reset_frame(&mut self) {
self.view.reset_frame();
}
fn recompute_draw_mocs(&mut self, camera: &CameraViewPort) {
let view_depth = self.view.get_depth();
let depth = view_depth + 6;
let fov_moc = crate::survey::view::compute_view_coverage(camera, view_depth, &CooSystem::ICRS);
self.adaptative_mocs = self.layers.iter()
.map(|layer| {
let params = self.params.get(layer).unwrap_abort();
let coverage = self.mocs.get(layer).unwrap_abort();
let moc = if !params.is_showing() {
None
} else {
let moc = if params.is_adaptative_display() {
let partially_degraded_moc = coverage.get(depth);
fov_moc.intersection(partially_degraded_moc).degraded(depth)
} else {
let full_moc = coverage.get_full_moc();
fov_moc.intersection(full_moc)
};
Some(HEALPixCoverage(moc))
};
(layer.clone(), moc)
}).collect();
}
pub fn insert(&mut self, moc: HEALPixCoverage, params: al_api::moc::MOC, camera: &CameraViewPort, projection: &ProjectionType) {
let key = params.get_uuid().clone();
self.mocs.insert(key.clone(), HierarchicalHpxCoverage::new(moc));
self.params.insert(key.clone(), params);
self.layers.push(key);
self.recompute_draw_mocs(camera);
self.update_buffers(camera, projection);
// Compute or retrieve the mocs to render
}
pub fn remove(&mut self, params: &al_api::moc::MOC, camera: &CameraViewPort) -> Option<al_api::moc::MOC> {
let key = params.get_uuid();
self.mocs.remove(key);
let moc = self.params.remove(key);
if let Some(index) = self.layers.iter().position(|x| x == key) {
self.layers.remove(index);
self.num_indices.remove(index);
self.first_idx.remove(index);
self.recompute_draw_mocs(camera);
moc
} else {
None
}
}
pub fn set_params(&mut self, params: al_api::moc::MOC, camera: &CameraViewPort, projection: &ProjectionType) -> Option<al_api::moc::MOC> {
let key = params.get_uuid().clone();
let old_params = self.params.insert(key, params);
self.recompute_draw_mocs(camera);
self.update_buffers(camera, projection);
old_params
}
pub fn get(&self, params: &al_api::moc::MOC) -> Option<&HEALPixCoverage> {
let key = params.get_uuid();
self.mocs.get(key).map(|coverage| coverage.get_full_moc())
}
fn update_buffers(&mut self, camera: &CameraViewPort, projection: &ProjectionType) {
self.indices.clear();
self.position.clear();
self.num_indices.clear();
self.first_idx.clear();
let mut idx_off = 0;
for layer in self.layers.iter() {
let moc = self.adaptative_mocs.get(layer).unwrap_abort();
let params = self.params.get(layer).unwrap_abort();
if let Some(moc) = moc {
let depth_max = moc.depth();
let mut indices_moc = vec![];
if params.get_opacity() == 1.0 {
let positions_moc = (&(moc.0)).into_range_moc_iter()
.cells()
.filter_map(|Cell { depth, idx, .. }| {
let delta_depth = depth_max - depth;
let n_segment_by_side = (1 << delta_depth) as usize;
let cell = HEALPixCell(depth, idx);
if let Some((vertices_cell, indices_cell)) = path_along_edge(
&cell,
n_segment_by_side,
camera,
&mut idx_off,
projection
) {
// Accumulate this cell's edge indices
indices_moc.extend(indices_cell);
Some(vertices_cell)
} else if depth < 3 {
let mut vertices = vec![];
let depth_sub_cell = 3;
let delta_depth_sub_cell = depth_max - depth_sub_cell;
let n_segment_by_side_sub_cell = (1 << delta_depth_sub_cell) as usize;
for sub_cell in cell.get_children_cells(3 - depth) {
if let Some((vertices_sub_cell, indices_sub_cell)) = path_along_edge(
&sub_cell,
n_segment_by_side_sub_cell,
camera,
&mut idx_off,
projection
) {
indices_moc.extend(indices_sub_cell);
vertices.extend(vertices_sub_cell);
}
}
Some(vertices)
} else {
None
}
})
.flatten()
.collect::<Vec<_>>();
self.first_idx.push(self.indices.len());
self.num_indices.push(indices_moc.len());
self.position.extend(&positions_moc);
self.indices.extend(&indices_moc);
} else {
let positions_moc = (&(moc.0)).into_range_moc_iter()
.cells()
.filter_map(|Cell { depth, idx, .. }| {
let delta_depth = (depth_max as i32 - depth as i32).max(0);
let n_segment_by_side = (1 << delta_depth) as usize;
let cell = HEALPixCell(depth, idx);
if depth < 3 {
let mut vertices = vec![];
let depth_sub_cell = 3;
let delta_depth_sub_cell = depth_max - depth_sub_cell;
let n_segment_by_side_sub_cell = (1 << delta_depth_sub_cell) as usize;
for sub_cell in cell.get_children_cells(3 - depth) {
if let Some((vertices_sub_cell, indices_sub_cell)) = rasterize_hpx_cell(
&sub_cell,
n_segment_by_side_sub_cell,
camera,
&mut idx_off,
projection
) {
indices_moc.extend(indices_sub_cell);
vertices.extend(vertices_sub_cell);
}
}
Some(vertices)
} else if let Some((vertices_cell, indices_cell)) = rasterize_hpx_cell(
&cell,
n_segment_by_side,
camera,
&mut idx_off,
projection
) {
// Accumulate this cell's triangle indices
indices_moc.extend(indices_cell);
Some(vertices_cell)
} else {
None
}
})
.flatten()
.collect::<Vec<_>>();
self.first_idx.push(self.indices.len());
self.num_indices.push(indices_moc.len());
self.position.extend(&positions_moc);
self.indices.extend(&indices_moc);
}
} else {
self.first_idx.push(self.indices.len());
self.num_indices.push(0);
}
}
self.vao.bind_for_update()
.update_array(
"ndc_pos",
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData(&self.position),
)
.update_element_array(
WebGl2RenderingContext::DYNAMIC_DRAW,
VecData::<u32>(&self.indices),
);
}
pub fn update(&mut self, camera: &CameraViewPort, projection: &ProjectionType) {
if self.is_empty() {
return;
}
// Compute or retrieve the mocs to render
self.view.refresh(camera.get_tile_depth(), CooSystem::ICRS, camera);
if self.view.has_view_changed() {
self.recompute_draw_mocs(camera);
}
self.update_buffers(camera, projection);
}
pub fn is_empty(&self) -> bool {
self.layers.is_empty()
}
pub fn draw(
&self,
shaders: &mut ShaderManager,
camera: &CameraViewPort,
) {
if self.is_empty() {
return;
}
self.gl.blend_func_separate(
WebGl2RenderingContext::SRC_ALPHA,
WebGl2RenderingContext::ONE_MINUS_SRC_ALPHA,
WebGl2RenderingContext::ONE,
WebGl2RenderingContext::ONE,
);
self.gl.enable(WebGl2RenderingContext::BLEND);
let shader = shaders
.get(
&self.gl,
&ShaderId(Cow::Borrowed("GridVS_CPU"), Cow::Borrowed("GridFS_CPU")),
)
.unwrap_abort();
let shaderbound = shader.bind(&self.gl);
for (idx, layer) in self.layers.iter().enumerate() {
let moc = self.params.get(layer).unwrap_abort();
//if moc.is_showing() {
let mode = if moc.get_opacity() == 1.0 {
WebGl2RenderingContext::LINES
} else {
WebGl2RenderingContext::TRIANGLES
};
let color = moc.get_color();
shaderbound
.attach_uniforms_from(camera)
.attach_uniform("color", color)
.attach_uniform("opacity", &moc.get_opacity())
.bind_vertex_array_object_ref(&self.vao)
.draw_elements_with_i32(
mode,
Some(self.num_indices[idx] as i32),
WebGl2RenderingContext::UNSIGNED_INT,
(self.first_idx[idx] * std::mem::size_of::<u32>()) as i32
);
//}
}
self.gl.disable(WebGl2RenderingContext::BLEND);
}
}

View File

@@ -1,10 +1,10 @@
pub mod catalog;
pub mod coverage;
pub mod final_pass;
pub mod grid;
pub mod labels;
pub mod moc;
pub mod image;
pub mod hips;
pub mod image;
pub mod line;
pub mod text;
pub mod utils;
use crate::renderable::image::Image;
@@ -12,25 +12,23 @@ use crate::renderable::image::Image;
use al_core::image::format::ChannelType;
pub use hips::HiPS;
pub use labels::TextRenderManager;
pub use catalog::Manager;
pub use grid::ProjetedGrid;
use al_api::hips::ImageMetadata;
use al_api::color::ColorRGB;
use al_api::hips::HiPSCfg;
use al_api::hips::ImageMetadata;
use al_api::image::ImageParams;
use al_core::VertexArrayObject;
use al_core::SliceData;
use al_core::shader::Shader;
use al_core::WebGlContext;
use al_core::colormap::Colormaps;
use al_core::shader::Shader;
use al_core::SliceData;
use al_core::VertexArrayObject;
use al_core::WebGlContext;
use crate::Abort;
use crate::ProjectionType;
use crate::camera::CameraViewPort;
use crate::shader::ShaderId;
use crate::Abort;
use crate::ProjectionType;
use crate::{shader::ShaderManager, survey::config::HiPSConfig};
// Recursively compute the number of subdivisions needed for a cell
@@ -38,10 +36,15 @@ use crate::{shader::ShaderManager, survey::config::HiPSConfig};
use hips::raytracing::RayTracer;
use web_sys::{WebGl2RenderingContext};
use wasm_bindgen::JsValue;
use std::borrow::Cow;
use std::collections::HashMap;
use wasm_bindgen::JsValue;
use web_sys::WebGl2RenderingContext;
pub trait Renderer {
fn begin(&mut self);
fn end(&mut self);
}
pub(crate) type Url = String;
type LayerId = String;
@@ -65,17 +68,22 @@ pub struct Layers {
gl: WebGlContext,
}
const DEFAULT_BACKGROUND_COLOR: ColorRGB = ColorRGB { r: 0.05, g: 0.05, b: 0.05 };
const DEFAULT_BACKGROUND_COLOR: ColorRGB = ColorRGB {
r: 0.05,
g: 0.05,
b: 0.05,
};
fn get_backgroundcolor_shader<'a>(gl: &WebGlContext, shaders: &'a mut ShaderManager) -> &'a Shader {
shaders.get(
gl,
&ShaderId(
Cow::Borrowed("RayTracerFontVS"),
Cow::Borrowed("RayTracerFontFS"),
),
)
.unwrap_abort()
shaders
.get(
gl,
&ShaderId(
Cow::Borrowed("RayTracerFontVS"),
Cow::Borrowed("RayTracerFontFS"),
),
)
.unwrap_abort()
}
pub struct ImageCfg {
@@ -100,10 +108,7 @@ impl ImageCfg {
}
impl Layers {
pub fn new(
gl: &WebGlContext,
projection: &ProjectionType
) -> Result<Self, JsValue> {
pub fn new(gl: &WebGlContext, projection: &ProjectionType) -> Result<Self, JsValue> {
let surveys = HashMap::new();
let images = HashMap::new();
let meta = HashMap::new();
@@ -120,38 +125,36 @@ impl Layers {
let mut screen_vao = VertexArrayObject::new(&gl);
#[cfg(feature = "webgl2")]
screen_vao.bind_for_update()
screen_vao
.bind_for_update()
.add_array_buffer_single(
2,
"pos_clip_space",
WebGl2RenderingContext::STATIC_DRAW,
SliceData::<f32>(&[
-1.0, -1.0,
1.0, -1.0,
1.0, 1.0,
-1.0, 1.0,
]),
SliceData::<f32>(&[-1.0, -1.0, 1.0, -1.0, 1.0, 1.0, -1.0, 1.0]),
)
// Set the element buffer
.add_element_buffer(WebGl2RenderingContext::STATIC_DRAW, SliceData::<u16>(&[0, 1, 2, 0, 2, 3]))
.add_element_buffer(
WebGl2RenderingContext::STATIC_DRAW,
SliceData::<u16>(&[0, 1, 2, 0, 2, 3]),
)
// Unbind the buffer
.unbind();
#[cfg(feature = "webgl1")]
screen_vao.bind_for_update()
screen_vao
.bind_for_update()
.add_array_buffer(
2,
"pos_clip_space",
WebGl2RenderingContext::STATIC_DRAW,
SliceData::<f32>(&[
-1.0, -1.0,
1.0, -1.0,
1.0, 1.0,
-1.0, 1.0,
]),
SliceData::<f32>(&[-1.0, -1.0, 1.0, -1.0, 1.0, 1.0, -1.0, 1.0]),
)
// Set the element buffer
.add_element_buffer(WebGl2RenderingContext::STATIC_DRAW, SliceData::<u16>(&[0, 1, 2, 0, 2, 3]))
.add_element_buffer(
WebGl2RenderingContext::STATIC_DRAW,
SliceData::<u16>(&[0, 1, 2, 0, 2, 3]),
)
// Unbind the buffer
.unbind();
@@ -176,15 +179,14 @@ impl Layers {
pub fn set_survey_url(&mut self, past_url: String, new_url: String) -> Result<(), JsValue> {
if let Some(mut survey) = self.surveys.remove(&past_url) {
// update the root_url
survey.get_config_mut()
.set_root_url(new_url.clone());
survey.get_config_mut().set_root_url(new_url.clone());
self.surveys.insert(new_url.clone(), survey);
// update all the layer urls
for url in self.urls.values_mut() {
if *url == past_url {
*url = new_url.clone();
*url = new_url.clone();
}
}
@@ -194,11 +196,11 @@ impl Layers {
}
}
pub fn reset_frame(&mut self) {
/*pub fn reset_frame(&mut self) {
for survey in self.surveys.values_mut() {
survey.reset_frame();
}
}
}*/
pub fn set_projection(&mut self, projection: &ProjectionType) -> Result<(), JsValue> {
// Recompute the raytracer
@@ -212,29 +214,29 @@ impl Layers {
pub fn draw(
&mut self,
camera: &CameraViewPort,
camera: &mut CameraViewPort,
shaders: &mut ShaderManager,
colormaps: &Colormaps,
projection: &ProjectionType
projection: &ProjectionType,
) -> Result<(), JsValue> {
let raytracer = &self.raytracer;
let raytracing = raytracer.is_rendering(camera/* , depth_texture*/);
let raytracing = raytracer.is_rendering(camera /* , depth_texture*/);
// Check whether one of the surveys to plot covers the whole sky
// If none does, we draw a background color
// If one does, there is no need to draw it
let render_background_color = !self.layers.iter()
.any(|layer| {
let meta = self.meta.get(layer).unwrap_abort();
let url = self.urls.get(layer).unwrap_abort();
if let Some(survey) = self.surveys.get(url) {
let hips_cfg = survey.get_config();
(survey.is_allsky() || hips_cfg.get_format().get_channel() == ChannelType::RGB8U) && meta.opacity == 1.0
} else {
// image fits case
false
}
});
let render_background_color = !self.layers.iter().any(|layer| {
let meta = self.meta.get(layer).unwrap_abort();
let url = self.urls.get(layer).unwrap_abort();
if let Some(survey) = self.surveys.get(url) {
let hips_cfg = survey.get_config();
(survey.is_allsky() || hips_cfg.get_format().get_channel() == ChannelType::RGB8U)
&& meta.opacity == 1.0
} else {
// image fits case
false
}
});
// Need to render the background color
if render_background_color {
@@ -247,18 +249,19 @@ impl Layers {
&self.screen_vao
};
get_backgroundcolor_shader(&self.gl, shaders).bind(&self.gl).attach_uniforms_from(camera)
get_backgroundcolor_shader(&self.gl, shaders)
.bind(&self.gl)
.attach_uniforms_from(camera)
.attach_uniform("color", &background_color)
.bind_vertex_array_object_ref(vao)
.draw_elements_with_i32(
WebGl2RenderingContext::TRIANGLES,
None,
WebGl2RenderingContext::UNSIGNED_SHORT,
0,
);
.draw_elements_with_i32(
WebGl2RenderingContext::TRIANGLES,
None,
WebGl2RenderingContext::UNSIGNED_SHORT,
0,
);
}
// The first layer must be painted independently of its alpha channel
self.gl.enable(WebGl2RenderingContext::BLEND);
// Pre-loop over the layers to see if a HiPS entirely covers those behind it
@@ -271,7 +274,9 @@ impl Layers {
if let Some(survey) = self.surveys.get_mut(url) {
let hips_cfg = survey.get_config();
let fully_covering_survey = (survey.is_allsky() || hips_cfg.get_format().get_channel() == ChannelType::RGB8U) && meta.opacity == 1.0;
let fully_covering_survey = (survey.is_allsky()
|| hips_cfg.get_format().get_channel() == ChannelType::RGB8U)
&& meta.opacity == 1.0;
if fully_covering_survey {
idx_start_layer = idx_layer;
}
@@ -288,22 +293,12 @@ impl Layers {
survey.update(camera, projection);
// 2. Draw it if its opacity is not null
survey.draw(
shaders,
colormaps,
camera,
raytracer,
draw_opt
)?;
survey.draw(shaders, colormaps, camera, raytracer, draw_opt)?;
} else if let Some(image) = self.images.get_mut(url) {
image.update(camera, projection)?;
// 2. Draw it if its opacity is not null
image.draw(
shaders,
colormaps,
draw_opt,
)?;
image.draw(shaders, colormaps, draw_opt)?;
}
}
}
@@ -319,26 +314,32 @@ impl Layers {
Ok(())
}
pub fn remove_layer(&mut self, layer: &str, camera: &mut CameraViewPort) -> Result<usize, JsValue> {
let err_layer_not_found = JsValue::from_str(&format!("Layer {:?} not found, so cannot be removed.", layer));
pub fn remove_layer(
&mut self,
layer: &str,
camera: &mut CameraViewPort,
proj: &ProjectionType,
) -> Result<usize, JsValue> {
let err_layer_not_found = JsValue::from_str(&format!(
"Layer {:?} not found, so cannot be removed.",
layer
));
// Color configs, and urls are indexed by layer
self.meta.remove(layer)
.ok_or(err_layer_not_found.clone())?;
self.meta.remove(layer).ok_or(err_layer_not_found.clone())?;
let url = self.urls.remove(layer).ok_or(err_layer_not_found.clone())?;
// the layer also needs to be removed from the layers list
let id_layer = self.layers.iter()
let id_layer = self
.layers
.iter()
.position(|l| layer == l)
.ok_or(err_layer_not_found)?;
self.layers.remove(id_layer);
// Loop over all the metadata to check the longitude-reversed property
// and set it on the camera if at least one layer requires it
let longitude_reversed = self.meta.values()
.any(|meta| {
meta.longitude_reversed
});
let longitude_reversed = self.meta.values().any(|meta| meta.longitude_reversed);
camera.set_longitude_reversed(longitude_reversed);
camera.set_longitude_reversed(longitude_reversed, proj);
// Check if the url is still used
let url_still_used = self.urls.values().any(|rem_url| rem_url == &url);
@@ -347,34 +348,41 @@ impl Layers {
Ok(id_layer)
} else {
// Resource not needed anymore
if let Some(_) = self.surveys.remove(&url) {
if let Some(s) = self.surveys.remove(&url) {
// A HiPS has been found and removed
let hips_frame = s.get_config().get_frame();
// remove the frame
camera.unregister_view_frame(hips_frame, proj);
Ok(id_layer)
} else if let Some(_) = self.images.remove(&url) {
// A FITS image has been found and removed
Ok(id_layer)
} else {
Err(JsValue::from_str(&format!("Url found {:?} is associated to no surveys.", url)))
Err(JsValue::from_str(&format!(
"Url found {:?} is associated to no surveys.",
url
)))
}
}
}
pub fn rename_layer(
&mut self,
layer: &str,
new_layer: &str,
) -> Result<(), JsValue> {
let err_layer_not_found = JsValue::from_str(&format!("Layer {:?} not found, so cannot be removed.", layer));
pub fn rename_layer(&mut self, layer: &str, new_layer: &str) -> Result<(), JsValue> {
let err_layer_not_found = JsValue::from_str(&format!(
"Layer {:?} not found, so cannot be removed.",
layer
));
// the layer also needs to be removed from the layers list
let id_layer = self.layers.iter()
let id_layer = self
.layers
.iter()
.position(|l| layer == l)
.ok_or(err_layer_not_found.clone())?;
self.layers[id_layer] = new_layer.to_string();
let meta = self.meta.remove(layer)
.ok_or(err_layer_not_found.clone())?;
let meta = self.meta.remove(layer).ok_or(err_layer_not_found.clone())?;
let url = self.urls.remove(layer).ok_or(err_layer_not_found)?;
// Add the new
@@ -384,17 +392,23 @@ impl Layers {
Ok(())
}
pub fn swap_layers(
&mut self,
first_layer: &str,
second_layer: &str,
) -> Result<(), JsValue> {
let id_first_layer = self.layers.iter()
.position(|l| l == first_layer)
.ok_or(JsValue::from_str(&format!("Layer {:?} not found, so cannot be removed.", first_layer)))?;
let id_second_layer = self.layers.iter()
.position(|l| l == second_layer)
.ok_or(JsValue::from_str(&format!("Layer {:?} not found, so cannot be removed.", second_layer)))?;
pub fn swap_layers(&mut self, first_layer: &str, second_layer: &str) -> Result<(), JsValue> {
let id_first_layer =
self.layers
.iter()
.position(|l| l == first_layer)
.ok_or(JsValue::from_str(&format!(
"Layer {:?} not found, so cannot be removed.",
first_layer
)))?;
let id_second_layer =
self.layers
.iter()
.position(|l| l == second_layer)
.ok_or(JsValue::from_str(&format!(
"Layer {:?} not found, so cannot be removed.",
second_layer
)))?;
self.layers.swap(id_first_layer, id_second_layer);
@@ -406,6 +420,7 @@ impl Layers {
gl: &WebGlContext,
hips: HiPSCfg,
camera: &mut CameraViewPort,
proj: &ProjectionType,
) -> Result<&HiPS, JsValue> {
let HiPSCfg {
layer,
@@ -414,13 +429,10 @@ impl Layers {
} = hips;
// 1. Add the layer name
let layer_already_found = self.layers.iter()
.any(|l| {
l == &layer
});
let layer_already_found = self.layers.iter().any(|l| l == &layer);
let idx = if layer_already_found {
let idx = self.remove_layer(&layer, camera)?;
let idx = self.remove_layer(&layer, camera, proj)?;
idx
} else {
self.layers.len()
@@ -433,10 +445,7 @@ impl Layers {
// The layer does not already exist
// Let's check that no other HiPS points to the
// same url as `hips`
let url_already_found = self.surveys.keys()
.any(|hips_url| {
hips_url == &url
});
let url_already_found = self.surveys.keys().any(|hips_url| hips_url == &url);
if !url_already_found {
// The url is not processed yet
@@ -451,8 +460,11 @@ impl Layers {
if let Some(initial_fov) = properties.get_initial_fov() {
camera.set_aperture::<P>(Angle((initial_fov).to_radians()));
}*/
camera.register_view_frame(cfg.get_frame(), proj);
let hips = HiPS::new(cfg, gl, camera)?;
// add the frame to the camera
self.surveys.insert(url.clone(), hips);
}
@@ -462,18 +474,14 @@ impl Layers {
self.meta.insert(layer.clone(), meta);
// Loop over all the metadata to check the longitude-reversed property
// and set it on the camera if at least one layer requires it
let longitude_reversed = self.meta.values()
.any(|meta| {
meta.longitude_reversed
});
let longitude_reversed = self.meta.values().any(|meta| meta.longitude_reversed);
camera.set_longitude_reversed(longitude_reversed);
camera.set_longitude_reversed(longitude_reversed, proj);
// Refresh the views of all the surveys
// this is necessary to compute the max depth between the surveys
self.refresh_views(camera);
let hips = self.surveys.get(&url).ok_or(JsValue::from_str("HiPS not found"))?;
let hips = self
.surveys
.get(&url)
.ok_or(JsValue::from_str("HiPS not found"))?;
Ok(hips)
}
@@ -481,6 +489,7 @@ impl Layers {
&mut self,
image: ImageCfg,
camera: &mut CameraViewPort,
proj: &ProjectionType,
) -> Result<&Image, JsValue> {
let ImageCfg {
layer,
@@ -490,13 +499,10 @@ impl Layers {
} = image;
// 1. Add the layer name
let layer_already_found = self.layers.iter()
.any(|s| {
s == &layer
});
let layer_already_found = self.layers.iter().any(|s| s == &layer);
let idx = if layer_already_found {
let idx = self.remove_layer(&layer, camera)?;
let idx = self.remove_layer(&layer, camera, proj)?;
idx
} else {
self.layers.len()
@@ -508,21 +514,15 @@ impl Layers {
self.meta.insert(layer.clone(), meta);
// Loop over all the metadata to check the longitude-reversed property
// and set it on the camera if at least one layer requires it
let longitude_reversed = self.meta.values()
.any(|meta| {
meta.longitude_reversed
});
let longitude_reversed = self.meta.values().any(|meta| meta.longitude_reversed);
camera.set_longitude_reversed(longitude_reversed);
camera.set_longitude_reversed(longitude_reversed, proj);
// 3. Add the fits image
// The layer does not already exist
// Let's check that no other image points to the
// same url as this one
let fits_already_found = self.images.keys()
.any(|image_url| {
image_url == &url
});
let fits_already_found = self.images.keys().any(|image_url| image_url == &url);
if !fits_already_found {
// The fits has not been loaded yet
@@ -541,7 +541,10 @@ impl Layers {
self.urls.insert(layer.clone(), url.clone());
let fits = self.images.get(&url).ok_or(JsValue::from_str("Fits image not found"))?;
let fits = self
.images
.get(&url)
.ok_or(JsValue::from_str("Fits image not found"))?;
Ok(fits)
}
@@ -556,7 +559,7 @@ impl Layers {
&mut self,
layer: String,
meta: ImageMetadata,
camera: &CameraViewPort,
camera: &mut CameraViewPort,
projection: &ProjectionType,
) -> Result<(), JsValue> {
let layer_ref = layer.as_str();
@@ -573,7 +576,10 @@ impl Layers {
} else if meta_old.visible() && !meta.visible() {
// There is an important point here, if we hide a specific layer
// then we must recompute the vertices of the layers underneath
let layer_idx = self.layers.iter().position(|l| l == layer_ref)
let layer_idx = self
.layers
.iter()
.position(|l| l == layer_ref)
.ok_or(JsValue::from_str("Expect the layer to be found!"))?;
for idx in 0..layer_idx {
@@ -606,16 +612,17 @@ impl Layers {
ready
}
pub fn refresh_views(&mut self, camera: &mut CameraViewPort) {
pub fn update(&mut self, camera: &mut CameraViewPort, proj: &ProjectionType) {
for survey in self.surveys.values_mut() {
survey.refresh_view(camera);
survey.update(camera, proj);
}
}
// Accessors
// HiPSes getters
pub fn get_hips_from_layer(&self, layer: &str) -> Option<&HiPS> {
self.urls.get(layer)
self.urls
.get(layer)
.map(|url| self.surveys.get(url))
.flatten()
}
@@ -654,10 +661,9 @@ impl Layers {
}
pub fn get_image_from_layer(&self, layer: &str) -> Option<&Image> {
self.urls.get(layer)
.map(|url| {
self.images.get(url)
}).flatten()
self.urls
.get(layer)
.map(|url| self.images.get(url))
.flatten()
}
}
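The two `any()` one-liners above carry the bookkeeping of `Layers`: a HiPS is only inserted if no existing entry already points to its URL, and the camera switches to longitude-reversed mode as soon as at least one layer's metadata asks for it. A minimal, self-contained sketch of those two patterns, with hypothetical stand-in types instead of the real `Layers` and `ImageMetadata`:

use std::collections::HashMap;

// Hypothetical stand-in for the per-layer metadata.
struct Meta {
    longitude_reversed: bool,
}

fn main() {
    // Deduplicate by URL: only register the survey if no entry points to it yet.
    let mut surveys: HashMap<String, &str> = HashMap::new();
    let url = String::from("https://example.org/P/DSS2/color");
    let url_already_found = surveys.keys().any(|hips_url| hips_url == &url);
    if !url_already_found {
        surveys.insert(url.clone(), "tile buffer for this survey");
    }

    // Aggregate the longitude-reversed flag over all the layer metadata.
    let mut meta: HashMap<String, Meta> = HashMap::new();
    meta.insert("base".into(), Meta { longitude_reversed: false });
    meta.insert("overlay".into(), Meta { longitude_reversed: true });
    let longitude_reversed = meta.values().any(|m| m.longitude_reversed);
    assert!(longitude_reversed);
}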

View File

@@ -0,0 +1,103 @@
use web_sys::CanvasRenderingContext2d;
use super::Renderer;
pub struct TextRenderManager {
// The text canvas
canvas: HtmlCanvasElement,
ctx: CanvasRenderingContext2d,
color: JsValue,
font_size: u32,
}
use cgmath::{Rad, Vector2};
use wasm_bindgen::JsValue;
use crate::camera::CameraViewPort;
use al_api::color::{ColorRGBA, ColorRGB};
use web_sys::{HtmlCanvasElement};
use crate::Abort;
use wasm_bindgen::JsCast;
impl TextRenderManager {
/// Get the text canvas and init its 2D rendering context
pub fn new() -> Result<Self, JsValue> {
let document = web_sys::window().unwrap_abort().document().unwrap_abort();
let canvas = document
// Inside it, retrieve the canvas
.get_elements_by_class_name("aladin-gridCanvas")
.get_with_index(0)
.unwrap_abort()
.dyn_into::<web_sys::HtmlCanvasElement>()?;
let ctx = canvas
.get_context("2d")
.unwrap_abort()
.unwrap_abort()
.dyn_into::<web_sys::CanvasRenderingContext2d>().unwrap_abort();
let color = JsValue::from_str("#00ff00");
let font_size = 30;
Ok(Self {
font_size,
color,
canvas,
ctx,
})
}
pub fn set_color(&mut self, color: &ColorRGB) {
let hex = al_api::color::Color::rgbToHex((color.r * 255.0) as u8, (color.g * 255.0) as u8, (color.b * 255.0) as u8);
self.color = JsValue::from_str(&hex);
}
pub fn set_font_size(&mut self, size: u32) {
self.font_size = size;
}
pub fn add_label<A: Into<Rad<f32>>>(
&mut self,
text: &str,
screen_pos: &Vector2<f32>,
angle: A,
) -> Result<(), JsValue>{
self.ctx.save();
self.ctx.translate(screen_pos.x as f64, screen_pos.y as f64)?;
let rot: Rad<f32> = angle.into();
self.ctx.rotate(rot.0 as f64)?;
self.ctx.set_text_align("center");
self.ctx.fill_text(text, 0.0, 0.0)?;
self.ctx.restore();
Ok(())
}
pub fn draw(&mut self, _camera: &CameraViewPort, _color: &ColorRGBA, _scale: f32) -> Result<(), JsValue> {
Ok(())
}
pub fn clear_text_canvas(&mut self) {
self.ctx.clear_rect(0_f64, 0_f64, self.canvas.width() as f64, self.canvas.height() as f64);
}
}
impl Renderer for TextRenderManager {
fn begin(&mut self) {
self.ctx = self.canvas
.get_context("2d")
.unwrap_abort()
.unwrap_abort()
.dyn_into::<web_sys::CanvasRenderingContext2d>().unwrap_abort();
self.clear_text_canvas();
// reset the font and color
self.ctx.set_font(&format!("{}px verdana, sans-serif", self.font_size));
self.ctx.set_fill_style(&self.color);
}
fn end(&mut self) {}
}
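For context, a hedged sketch of how this manager is meant to be driven each frame. The method names come from the code above; the label text, the coordinates, and the assumption that `ColorRGB` exposes public `r`/`g`/`b` fields are illustrative only:

// Assumes the `Renderer` trait above is in scope so that `begin`/`end` resolve.
fn draw_grid_labels() -> Result<(), JsValue> {
    let mut text = TextRenderManager::new()?; // grabs the "aladin-gridCanvas" 2D context
    text.set_font_size(13);
    text.set_color(&ColorRGB { r: 0.0, g: 1.0, b: 0.0 });

    text.begin(); // clears the canvas and resets the font and fill style
    text.add_label("12h30m", &Vector2::new(320.0_f32, 240.0_f32), Rad(0.25_f32))?;
    text.end();

    Ok(())
}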

View File

@@ -1,4 +1,6 @@
use std::ops::RangeInclusive;
use cgmath::BaseFloat;
use crate::CameraViewPort;
// This iterator constructs indices from a set of vertices defining
@@ -69,7 +71,7 @@ impl<'a> Iterator for BuildPatchIndicesIter<'a> {
let t1 = Triangle::new(&ndc_tl, &ndc_tr, &ndc_bl);
let t2 = Triangle::new(&ndc_tr, &ndc_br, &ndc_bl);
if !t1.is_valid(&self.camera) || !t2.is_valid(&self.camera) {
if !t1.is_invalid(&self.camera) || !t2.is_invalid(&self.camera) {
self.next() // crossing projection tri
} else {
Some([
@@ -83,18 +85,24 @@ impl<'a> Iterator for BuildPatchIndicesIter<'a> {
}
}
struct Triangle<'a> {
v1: &'a [f32; 2],
v2: &'a [f32; 2],
v3: &'a [f32; 2],
pub struct Triangle<'a, S>
where
S: BaseFloat
{
v1: &'a [S; 2],
v2: &'a [S; 2],
v3: &'a [S; 2],
}
impl<'a> Triangle<'a> {
pub fn new(v1: &'a [f32; 2], v2: &'a [f32; 2], v3: &'a [f32; 2]) -> Self {
impl<'a, S> Triangle<'a, S>
where
S: BaseFloat
{
pub fn new(v1: &'a [S; 2], v2: &'a [S; 2], v3: &'a [S; 2]) -> Self {
Self { v1, v2, v3 }
}
pub fn is_valid(&self, camera: &CameraViewPort) -> bool {
pub fn is_invalid(&self, camera: &CameraViewPort) -> bool {
let tri_ccw = self.is_ccw();
let reversed_longitude = camera.get_longitude_reversed();
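The validity test boils down to the winding of the projected triangle: when its orientation disagrees with the one the camera expects (taking longitude reversal into account), the triangle is treated as crossing a projection discontinuity and its indices are skipped. A minimal sketch of the underlying counter-clockwise test, not the crate's actual helper, based on the sign of the 2D cross product:

fn is_ccw(v1: &[f64; 2], v2: &[f64; 2], v3: &[f64; 2]) -> bool {
    // Sign of (v2 - v1) x (v3 - v1): positive means counter-clockwise
    // (with the usual mathematical y-up convention; a y-down screen frame flips it).
    let (ax, ay) = (v2[0] - v1[0], v2[1] - v1[1]);
    let (bx, by) = (v3[0] - v1[0], v3[1] - v1[1]);
    ax * by - ay * bx > 0.0
}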

View File

@@ -1,36 +1,33 @@
use std::cmp::Ordering;
use std::rc::Rc;
use std::collections::BinaryHeap;
use std::collections::HashMap;
use std::rc::Rc;
use al_core::image::format::ChannelType;
use cgmath::Vector3;
use al_api::hips::ImageExt;
use al_core::shader::{SendUniforms, ShaderBound};
use al_core::Texture2DArray;
use al_core::WebGlContext;
use al_core::image::format::ImageFormat;
use al_core::image::Image;
use al_core::image::format::{R32F, R64F, RGB8U, RGBA8U};
#[cfg(feature = "webgl2")]
use al_core::image::format::{R16I, R32I, R8UI};
use al_core::texture::{
TEX_PARAMS
};
use al_core::image::format::{R32F, R64F, RGB8U, RGBA8U};
use al_core::image::Image;
use al_core::shader::{SendUniforms, ShaderBound};
use al_core::texture::TEX_PARAMS;
use al_core::Texture2DArray;
use al_core::WebGlContext;
use crate::healpix::cell::HEALPixCell;
use super::texture::TextureUniforms;
use super::texture::Texture;
use super::config::HiPSConfig;
use crate::time::Time;
use crate::math::lonlat::LonLatT;
use crate::JsValue;
use crate::healpix::cell::NUM_HPX_TILES_DEPTH_ZERO;
use super::texture::Texture;
use super::texture::TextureUniforms;
use crate::downloader::request::allsky::Allsky;
use crate::healpix::cell::HEALPixCell;
use crate::healpix::cell::NUM_HPX_TILES_DEPTH_ZERO;
use crate::math::lonlat::LonLatT;
use crate::time::Time;
use crate::Abort;
use crate::JsValue;
#[derive(Clone, Debug)]
pub struct TextureCellItem {
@@ -39,8 +36,8 @@ pub struct TextureCellItem {
}
impl TextureCellItem {
fn is_root(&self) -> bool {
self.cell.is_root()
fn is_root(&self, delta_depth: u8) -> bool {
self.cell.is_root(delta_depth)
}
}
@@ -195,7 +192,7 @@ impl ImageSurveyTextures {
Texture::new(&HEALPixCell(0, 8), 8, now),
Texture::new(&HEALPixCell(0, 9), 9, now),
Texture::new(&HEALPixCell(0, 10), 10, now),
Texture::new(&HEALPixCell(0, 11), 11, now)
Texture::new(&HEALPixCell(0, 11), 11, now),
];
let channel = config.get_format().get_channel();
@@ -289,23 +286,19 @@ impl ImageSurveyTextures {
Ok(())
}
pub fn push_allsky(
&mut self,
allsky: Allsky,
) -> Result<(), JsValue> {
pub fn push_allsky(&mut self, allsky: Allsky) -> Result<(), JsValue> {
let Allsky {
image, time_req, depth_tile, ..
image,
time_req,
depth_tile,
..
} = allsky;
{
let mutex_locked = image.lock().unwrap_abort();
let images = mutex_locked.as_ref().unwrap_abort();
for (idx, image) in images.iter().enumerate() {
self.push(
&HEALPixCell(depth_tile, idx as u64),
Some(image),
time_req,
)?;
self.push(&HEALPixCell(depth_tile, idx as u64), Some(image), time_req)?;
}
}
@@ -331,11 +324,11 @@ impl ImageSurveyTextures {
) -> Result<(), JsValue> {
if !self.contains_tile(cell) {
// Get the texture cell in which the tile has to be
let tex_cell = cell.get_texture_cell(&self.config);
let tex_cell = cell.get_texture_cell(self.config.delta_depth());
if !self.textures.contains_key(&tex_cell) {
let HEALPixCell(_, idx) = tex_cell;
let texture = if tex_cell.is_root() {
let texture = if tex_cell.is_root(self.config.delta_depth()) {
Texture::new(&tex_cell, idx as i32, time_request)
} else {
// The texture is not among the essential ones
@@ -344,17 +337,14 @@ impl ImageSurveyTextures {
// Pop the oldest requested texture
let oldest_texture = self.heap.pop().unwrap_abort();
// Ensure this is not a base texture
debug_assert!(!oldest_texture.is_root());
debug_assert!(!oldest_texture.is_root(self.config.delta_depth()));
// Remove it from the textures HashMap
let mut texture = self.textures.remove(&oldest_texture.cell).expect(
"Texture (oldest one) has not been found in the buffer of textures",
);
// Clear and assign it to tex_cell
texture.replace(
&tex_cell,
time_request,
);
texture.replace(&tex_cell, time_request);
texture
} else {
@@ -415,7 +405,7 @@ impl ImageSurveyTextures {
};
self.available_tiles_during_frame = true;
if tex_cell.is_root() && texture.is_available() {
if tex_cell.is_root(self.config.delta_depth()) && texture.is_available() {
self.num_root_textures_available += 1;
debug_assert!(self.num_root_textures_available <= NUM_HPX_TILES_DEPTH_ZERO);
@@ -468,7 +458,7 @@ impl ImageSurveyTextures {
// For that purpose, we first need to verify that its
// texture ancestor exists and then, if it contains the tile
pub fn contains_tile(&self, cell: &HEALPixCell) -> bool {
let texture_cell = cell.get_texture_cell(&self.config);
let texture_cell = cell.get_texture_cell(self.config.delta_depth());
if let Some(texture) = self.textures.get(&texture_cell) {
// The texture is present in the buffer
// We must check whether it contains the tile
@@ -485,8 +475,8 @@ impl ImageSurveyTextures {
debug_assert!(self.contains_tile(cell));
// Get the texture cell in which the tile has to be
let texture_cell = cell.get_texture_cell(&self.config);
if texture_cell.is_root() {
let texture_cell = cell.get_texture_cell(self.config.delta_depth());
if texture_cell.is_root(self.config.delta_depth()) {
return;
}
@@ -573,13 +563,14 @@ impl ImageSurveyTextures {
// Get the nearest parent tile found in the CPU buffer
pub fn get_nearest_parent(&self, cell: &HEALPixCell) -> HEALPixCell {
if cell.is_root() {
let dd = self.config.delta_depth();
if cell.is_root(dd) {
// Root cells are in the buffer by definition
*cell
} else {
let mut parent_cell = cell.parent();
while !self.contains(&parent_cell) && !parent_cell.is_root() {
while !self.contains(&parent_cell) && !parent_cell.is_root(dd) {
parent_cell = parent_cell.parent();
}
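Both the root test and the parent walk above depend on `delta_depth`, the number of HEALPix levels separating a tile cell from the texture cell that stores it. As an assumption about what `get_texture_cell(delta_depth)` computes, based on the NESTED scheme where a parent index is the child index shifted right by two bits, the mapping looks like this sketch:

// (depth, idx) of the texture cell containing a tile cell, `dd` levels above it.
// Assumes depth >= dd; root cells are handled separately via `is_root(dd)`.
fn texture_cell(depth: u8, idx: u64, dd: u8) -> (u8, u64) {
    (depth - dd, idx >> (2 * dd))
}

// Example: with delta_depth = 1, the tile cell (3, 42) lives in texture cell (2, 10),
// since 42 >> 2 == 10.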
@@ -649,7 +640,8 @@ fn send_to_gpu<I: Image>(
let idx_row_in_slice = idx_in_slice % num_textures_by_side_slice;
// Row and column indexes of the tile in its texture
let (idx_col_in_tex, idx_row_in_tex) = cell.get_offset_in_texture_cell(cfg);
let delta_depth = cfg.delta_depth();
let (idx_col_in_tex, idx_row_in_tex) = cell.get_offset_in_texture_cell(delta_depth);
// The size of the global texture containing the tiles
let texture_size = cfg.get_texture_size();

View File

@@ -1,6 +1,5 @@
use al_core::{image::format::ImageFormat, image::raw::ImageBuffer};
use al_api::hips::ImageExt;
use al_core::{image::format::ImageFormat, image::raw::ImageBuffer};
#[derive(Debug)]
pub struct EmptyTileImage {
@@ -101,12 +100,10 @@ impl Image for EmptyTileImage {
}
}
use al_core::image::format::{ImageFormatType, RGB8U, RGBA8U, ChannelType};
use al_core::image::format::{ChannelType, ImageFormatType, RGB8U, RGBA8U};
//use super::TileArrayBuffer;
/*use super::{ArrayF32, ArrayF64, ArrayI16, ArrayI32, ArrayU8};
fn create_black_tile(format: FormatImageType, width: i32, value: f32) -> TileArrayBufferImage {
let _num_channels = format.get_num_channels() as i32;
@@ -151,9 +148,6 @@ pub struct HiPSConfig {
// Max depth of the current HiPS tiles
max_depth_texture: u8,
max_depth_tile: u8,
num_textures_by_side_slice: i32,
num_textures_by_slice: i32,
num_slices: i32,
num_textures: usize,
pub is_allsky: bool,
@@ -182,6 +176,10 @@ use crate::HiPSProperties;
use al_api::coo_system::CooSystem;
use wasm_bindgen::JsValue;
const NUM_TEXTURES_BY_SIDE_SLICE: i32 = 8;
const NUM_TEXTURES_BY_SLICE: i32 = NUM_TEXTURES_BY_SIDE_SLICE * NUM_TEXTURES_BY_SIDE_SLICE;
const NUM_SLICES: i32 = 1;
impl HiPSConfig {
/// Define a HiPS configuration
///
@@ -189,17 +187,11 @@ impl HiPSConfig {
///
/// * `properties` - A description of the HiPS, its metadata, available formats etc...
/// * `img_format` - Image format wanted by the user
pub fn new(
properties: &HiPSProperties,
img_ext: ImageExt,
) -> Result<HiPSConfig, JsValue> {
pub fn new(properties: &HiPSProperties, img_ext: ImageExt) -> Result<HiPSConfig, JsValue> {
let root_url = properties.get_url();
// Define the size of the 2d texture array depending on the
// characteristics of the client
let num_textures_by_side_slice = 8;
let num_textures_by_slice = num_textures_by_side_slice * num_textures_by_side_slice;
let num_slices = 2;
let num_textures = (num_textures_by_slice * num_slices) as usize;
let num_textures = (NUM_TEXTURES_BY_SLICE * NUM_SLICES) as usize;
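// With the constants above this is 8 * 8 * 1 = 64 textures in the 2D texture array;
// the previous code computed the same product locally with num_slices = 2, i.e. 128.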
let max_depth_tile = properties.get_max_order();
let tile_size = properties.get_tile_size();
@@ -267,8 +259,14 @@ impl HiPSConfig {
))
}
}
ImageExt::Png | ImageExt::Webp => Ok(ImageFormatType { ext: img_ext, channel: ChannelType::RGBA8U }),
ImageExt::Jpeg => Ok(ImageFormatType { ext: img_ext, channel: ChannelType::RGB8U }),
ImageExt::Png | ImageExt::Webp => Ok(ImageFormatType {
ext: img_ext,
channel: ChannelType::RGBA8U,
}),
ImageExt::Jpeg => Ok(ImageFormatType {
ext: img_ext,
channel: ChannelType::RGB8U,
}),
}?;
let dataproduct_subtype = properties.get_dataproduct_subtype().clone();
@@ -317,9 +315,6 @@ impl HiPSConfig {
max_depth_texture,
max_depth_tile,
min_depth_tile,
num_textures_by_side_slice,
num_textures_by_slice,
num_slices,
num_textures,
is_allsky,
@@ -338,7 +333,7 @@ impl HiPSConfig {
format,
tile_size,
dataproduct_subtype,
colored
colored,
};
Ok(hips_config)
@@ -384,10 +379,7 @@ impl HiPSConfig {
)),
})?;
Ok(ImageFormatType {
ext,
channel,
})
Ok(ImageFormatType { ext, channel })
} else {
Err(JsValue::from_str(
"Fits tiles exists but the BITPIX is not found",
@@ -402,7 +394,7 @@ impl HiPSConfig {
ext,
channel: ChannelType::RGBA8U,
})
},
}
ImageExt::Jpeg => {
self.tex_storing_fits = false;
self.tex_storing_unsigned_int = false;
@@ -433,99 +425,94 @@ impl HiPSConfig {
Ok(())
}
#[inline]
pub fn get_root_url(&self) -> &String {
#[inline(always)]
pub fn get_root_url(&self) -> &str {
&self.root_url
}
#[inline]
#[inline(always)]
pub fn set_root_url(&mut self, root_url: String) {
self.root_url = root_url;
}
#[inline]
#[inline(always)]
pub fn set_fits_metadata(&mut self, bscale: f32, bzero: f32, blank: f32) {
self.scale = bscale;
self.offset = bzero;
self.blank = blank;
}
#[inline]
#[inline(always)]
pub fn delta_depth(&self) -> u8 {
self.delta_depth
}
#[inline]
#[inline(always)]
pub fn num_tiles_per_texture(&self) -> usize {
self.num_tiles_per_texture
}
#[inline]
#[inline(always)]
pub fn get_texture_size(&self) -> i32 {
self.texture_size
}
#[inline]
#[inline(always)]
pub fn get_min_depth_tile(&self) -> u8 {
self.min_depth_tile
}
#[inline]
#[inline(always)]
pub fn get_tile_size(&self) -> i32 {
self.tile_size
}
/*
#[inline]
pub fn get_black_tile(&self) -> Rc<TileArrayBufferImage> {
self.tile_config.get_black_tile()
}
*/
#[inline]
#[inline(always)]
pub fn get_max_depth(&self) -> u8 {
self.max_depth_texture
}
#[inline]
#[inline(always)]
pub fn get_frame(&self) -> CooSystem {
self.frame
}
#[inline]
#[inline(always)]
pub fn get_max_tile_depth(&self) -> u8 {
self.max_depth_tile
}
#[inline]
#[inline(always)]
pub fn num_textures(&self) -> usize {
self.num_textures
}
#[inline]
#[inline(always)]
pub fn num_textures_by_side_slice(&self) -> i32 {
self.num_textures_by_side_slice
NUM_TEXTURES_BY_SIDE_SLICE
}
#[inline]
#[inline(always)]
pub fn num_textures_by_slice(&self) -> i32 {
self.num_textures_by_slice
NUM_TEXTURES_BY_SLICE
}
#[inline]
#[inline(always)]
pub fn num_slices(&self) -> i32 {
self.num_slices
NUM_SLICES
}
#[inline]
#[inline(always)]
pub fn get_format(&self) -> ImageFormatType {
self.format
}
#[inline]
#[inline(always)]
pub fn is_colored(&self) -> bool {
self.colored
}
#[inline]
#[inline(always)]
pub fn get_default_image(&self) -> &EmptyTileImage {
&self.empty_image
}

View File

@@ -1,4 +1,3 @@
pub mod buffer;
pub mod config;
pub mod texture;
pub mod view;

View File

@@ -1,4 +1,5 @@
use crate::{healpix::cell::HEALPixCell, time::Time};
use std::collections::HashSet;
pub struct Texture {
@@ -37,11 +38,7 @@ pub struct Texture {
use super::config::HiPSConfig;
impl Texture {
pub fn new(
texture_cell: &HEALPixCell,
idx: i32,
time_request: Time,
) -> Texture {
pub fn new(texture_cell: &HEALPixCell, idx: i32, time_request: Time) -> Texture {
let tiles = HashSet::new();
let start_time = None;
@@ -68,23 +65,26 @@ impl Texture {
// Panic if cell is not contained in the texture
// Do nothing if the texture is full
// Return true if the tile is newly added
pub fn append(&mut self, cell: &HEALPixCell, config: &HiPSConfig, missing: bool) {
let texture_cell = cell.get_texture_cell(config);
pub fn append(&mut self, cell: &HEALPixCell, cfg: &HiPSConfig, missing: bool) {
let texture_cell = cell.get_texture_cell(cfg.delta_depth());
debug_assert!(texture_cell == self.texture_cell);
debug_assert!(!self.full);
self.missing &= missing;
//self.start_time = Some(Time::now());
//self.full = true;
let num_tiles_per_texture = config.num_tiles_per_texture();
if *cell == texture_cell {
let num_tiles_per_texture = cfg.num_tiles_per_texture();
let c = *cell;
if c == texture_cell {
self.num_tiles_written = num_tiles_per_texture;
self.full = true;
self.start_time = Some(Time::now());
} else {
// Sub-tile appending. This code is called when the tile size is < 512
// The cell has the right ancestor for this texture
let new_tile = self.tiles.insert(*cell);
let new_tile = self.tiles.insert(c);
// Ensures the tile was not already present in the buffer
// This is the case because already contained cells do not
// lead to new requests
@@ -139,11 +139,7 @@ impl Texture {
}
// Setter
pub fn replace(
&mut self,
texture_cell: &HEALPixCell,
time_request: Time,
) {
pub fn replace(&mut self, texture_cell: &HEALPixCell, time_request: Time) {
// Cancel the tasks copying the tiles contained in the texture
// which have not yet been completed.
//self.clear_tasks_in_progress(config, exec);

View File

@@ -1,255 +0,0 @@
use crate::{coosys, healpix::cell::HEALPixCell};
use std::collections::HashMap;
use crate::math::angle::Angle;
use crate::math::projection::*;
use cgmath::Vector2;
pub fn vertices(cell: &HEALPixCell, camera: &CameraViewPort, projection: &ProjectionType) -> Result<[Vector2<f64>; 4], &'static str> {
let project_vertex = |(lon, lat): (f64, f64)| -> Result<Vector2<f64>, &'static str> {
let vertex = crate::math::lonlat::radec_to_xyzw(Angle(lon), Angle(lat));
projection.view_to_screen_space(&vertex, camera).ok_or("Cannot project")
};
let vertices = cell.vertices();
let reversed_longitude = camera.get_longitude_reversed();
let invalid_tri = |tri_ccw: bool, reversed_longitude: bool| -> bool {
(!reversed_longitude && !tri_ccw) || (reversed_longitude && tri_ccw)
};
let c0 = project_vertex(vertices[0])?;
let c1 = project_vertex(vertices[1])?;
let c2 = project_vertex(vertices[2])?;
let c3 = project_vertex(vertices[3])?;
let first_tri_ccw = crate::math::vector::ccw_tri(&c0, &c1, &c2);
let second_tri_ccw = crate::math::vector::ccw_tri(&c2, &c3, &c0);
//let third_tri_ccw = crate::math::vector::ccw_tri(&c2, &c3, &c0);
//let fourth_tri_ccw = crate::math::vector::ccw_tri(&c3, &c0, &c1);
let invalid_cell = invalid_tri(first_tri_ccw, reversed_longitude) || invalid_tri(second_tri_ccw, reversed_longitude);
if invalid_cell {
Err("Cell out of the view")
} else {
Ok([c0, c1, c2, c3])
}
}
use al_api::cell::HEALPixCellProjeted;
pub fn project(cell: HEALPixCellProjeted, camera: &CameraViewPort, projection: &ProjectionType) -> Option<HEALPixCellProjeted> {
match projection {
ProjectionType::Hpx(_) => {
let tri_idx_in_collignon_zone = |x: f64, y: f64| -> u8 {
let zoom_factor = camera.get_clip_zoom_factor() as f32;
let x = (((x as f32) / camera.get_width()) - 0.5) * zoom_factor;
let y = (((y as f32) / camera.get_height()) - 0.5) * zoom_factor;
let x_zone = ((x + 0.5) * 4.0).floor() as u8;
x_zone + 4 * ((y > 0.0) as u8)
};
let is_in_collignon = |_x: f64, y: f64| -> bool {
let y = (((y as f32) / camera.get_height()) - 0.5) * (camera.get_clip_zoom_factor() as f32);
!(-0.25..=0.25).contains(&y)
};
if is_in_collignon(cell.vx[0], cell.vy[0]) && is_in_collignon(cell.vx[1], cell.vy[1]) && is_in_collignon(cell.vx[2], cell.vy[2]) && is_in_collignon(cell.vx[3], cell.vy[3]) {
let all_vertices_in_same_collignon_region = tri_idx_in_collignon_zone(cell.vx[0], cell.vy[0]) == tri_idx_in_collignon_zone(cell.vx[1], cell.vy[1]) && (tri_idx_in_collignon_zone(cell.vx[0], cell.vy[0]) == tri_idx_in_collignon_zone(cell.vx[2], cell.vy[2])) && (tri_idx_in_collignon_zone(cell.vx[0], cell.vy[0]) == tri_idx_in_collignon_zone(cell.vx[3], cell.vy[3]));
if !all_vertices_in_same_collignon_region {
None
} else {
Some(cell)
}
} else {
Some(cell)
}
},
_ => Some(cell)
}
}
use healpix::coverage::HEALPixCoverage;
pub fn compute_view_coverage(camera: &CameraViewPort, depth: u8, dst_frame: &CooSystem) -> HEALPixCoverage {
if depth <= 1 {
HEALPixCoverage::allsky(depth)
} else {
if let Some(vertices) = camera.get_vertices() {
// The vertices coming from the camera are in a specific coo sys
// but cdshealpix expects them to be given in the ICRS coo sys
let camera_frame = camera.get_system();
let vertices = vertices
.iter()
.map(|v| coosys::apply_coo_system(camera_frame, dst_frame, v))
.collect::<Vec<_>>();
// Check if the polygon is too small with respect to the angular size
// of a cell at the requested depth
let fov_bbox = camera.get_bounding_box();
let d_lon = fov_bbox.get_lon_size();
let d_lat = fov_bbox.get_lat_size();
let size_hpx_cell = crate::healpix::utils::MEAN_HPX_CELL_RES[depth as usize];
if d_lon < size_hpx_cell && d_lat < size_hpx_cell {
// The polygon is small and this may result in a MOC having only a few cells
// In that case one can build the MOC from a list of cells
// This particular case avoids falling into a panic in cdshealpix
// See https://github.com/cds-astro/cds-moc-rust/issues/3
let hpx_idxs_iter = vertices
.iter()
.map(|v| {
let (lon, lat) = crate::math::lonlat::xyzw_to_radec(&v);
cdshealpix::nested::hash(depth, lon.0, lat.0)
});
HEALPixCoverage::from_hpx_cells(depth, hpx_idxs_iter, Some(vertices.len()))
} else {
// The polygon is not too small for the requested depth
let inside_vertex = camera.get_center();
let inside_vertex = coosys::apply_coo_system(camera_frame, dst_frame, inside_vertex);
// Prefer to query from_polygon with depth >= 2
let moc = HEALPixCoverage::new(
depth,
&vertices[..],
&inside_vertex.truncate(),
);
moc
}
} else {
HEALPixCoverage::allsky(depth)
}
}
}
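The size comparison above needs the mean angular extent of a HEALPix cell at the requested depth. The crate takes it from the precomputed MEAN_HPX_CELL_RES table; as a back-of-the-envelope sketch of the quantity behind that table (an approximation, not the actual implementation):

// A HEALPix tessellation at depth d has 12 * 4^d cells, so one cell covers
// 4*pi / (12 * 4^d) steradians; its typical angular size is the square root of that.
fn mean_cell_size_rad(depth: u8) -> f64 {
    let n_cells = 12.0 * 4f64.powi(depth as i32);
    (4.0 * std::f64::consts::PI / n_cells).sqrt()
}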
use crate::healpix;
// Contains the cells being in the FOV for a specific
pub struct HEALPixCellsInView {
// The set of cells being in the current view for a
// specific image survey
pub depth: u8,
prev_depth: u8,
view_unchanged: bool,
frame: CooSystem,
// flags associating `true` with cells that
// are new in the FOV
cells: HashMap<HEALPixCell, bool>,
// A flag telling whether new cells have been
// added since the last frame
is_new_cells_added: bool,
coverage: HEALPixCoverage,
}
impl Default for HEALPixCellsInView {
fn default() -> Self {
Self::new()
}
}
use al_api::coo_system::CooSystem;
use crate::camera::CameraViewPort;
impl HEALPixCellsInView {
pub fn new() -> Self {
let cells = HashMap::new();
let coverage = HEALPixCoverage::allsky(0);
let view_unchanged = false;
let frame = CooSystem::ICRS;
Self {
cells,
prev_depth: 0,
depth: 0,
is_new_cells_added: false,
view_unchanged,
frame,
coverage,
}
}
pub fn reset_frame(&mut self) {
self.is_new_cells_added = false;
self.view_unchanged = false;
self.prev_depth = self.get_depth();
}
// This method is called whenever the user does an action
// that moves the camera.
// Every time the user moves or zooms, the views must be updated
// The new cells obtained are used for sending new requests
pub fn refresh(&mut self, tile_depth: u8, hips_frame: CooSystem, camera: &CameraViewPort) {
self.depth = tile_depth;
self.frame = hips_frame;
// Get the cells of that depth in the current field of view
let coverage = compute_view_coverage(camera, tile_depth, &self.frame);
let new_cells = coverage.flatten_to_fixed_depth_cells()
.map(|idx| {
let cell = HEALPixCell(tile_depth, idx);
let new = !self.cells.contains_key(&cell);
self.is_new_cells_added |= new;
(cell, new)
})
.collect::<HashMap<_, _>>();
self.coverage = coverage;
// The view is unchanged if no new cells have been added and the cell count is the same
self.view_unchanged = !self.is_new_cells_added && new_cells.len() == self.cells.len();
self.cells = new_cells;
}
// Accessors
#[inline]
pub fn get_cells(&self) -> impl Iterator<Item = &HEALPixCell> {
self.cells.keys()
}
#[inline]
pub fn num_of_cells(&self) -> usize {
self.cells.len()
}
#[inline]
pub fn get_depth(&self) -> u8 {
self.depth
}
#[inline]
pub fn get_frame(&self) -> &CooSystem {
&self.frame
}
#[inline]
pub fn is_new(&self, cell: &HEALPixCell) -> bool {
if let Some(&is_cell_new) = self.cells.get(cell) {
is_cell_new
} else {
false
}
}
#[inline]
pub fn get_coverage(&self) -> &HEALPixCoverage {
&self.coverage
}
#[inline]
pub fn is_there_new_cells_added(&self) -> bool {
//self.new_cells.is_there_new_cells_added()
self.is_new_cells_added
}
#[inline]
pub fn has_view_changed(&self) -> bool {
//self.new_cells.is_there_new_cells_added()
!self.view_unchanged
}
}

View File

@@ -1,14 +1,17 @@
use crate::downloader::{query, Downloader};
use crate::renderable::HiPS;
use crate::Abort;
use std::collections::VecDeque;
use std::collections::{VecDeque};
const MAX_NUM_TILE_FETCHING: isize = 8;
const MAX_QUERY_QUEUE_LENGTH: usize = 100;
pub struct TileFetcherQueue {
num_tiles_fetched: isize,
// A stack of queries to fetch
queries: VecDeque<query::Tile>,
base_tile_queries: Vec<query::Tile>,
@@ -19,7 +22,6 @@ impl TileFetcherQueue {
let queries = VecDeque::new();
let base_tile_queries = Vec::new();
Self {
num_tiles_fetched: 0,
queries,
base_tile_queries,
}
@@ -27,46 +29,46 @@ impl TileFetcherQueue {
pub fn clear(&mut self) {
self.queries.clear();
//self.query_set.clear();
}
pub fn append(&mut self, query: query::Tile, downloader: &mut Downloader) {
pub fn append(&mut self, query: query::Tile, _downloader: &mut Downloader) {
// Check if the query has already been done
//if !self.query_set.contains(&query) {
// discard tile queries that are too old
// this may not be the best thing to do, but it keeps the queue bounded
if self.queries.len() > MAX_QUERY_QUEUE_LENGTH {
self.queries.pop_front();
self.queries.pop_front();
}
self.queries.push_back(query);
self.fetch(downloader);
self.queries.push_back(query.clone());
}
pub fn append_base_tile(&mut self, query: query::Tile, downloader: &mut Downloader) {
// fetch the base tile
pub fn append_base_tile(&mut self, query: query::Tile, _downloader: &mut Downloader) {
self.base_tile_queries.push(query);
self.fetch(downloader);
}
pub fn notify(&mut self, num_tiles_completed: usize, downloader: &mut Downloader) {
self.num_tiles_fetched -= num_tiles_completed as isize;
pub fn notify(&mut self, downloader: &mut Downloader) {
self.fetch(downloader);
}
fn fetch(&mut self, downloader: &mut Downloader) {
// Fetch the base tiles with higher priority
while self.num_tiles_fetched < MAX_NUM_TILE_FETCHING && !self.base_tile_queries.is_empty() {
let query = self.base_tile_queries.pop().unwrap_abort();
if downloader.fetch(query) {
// The fetch has succeeded
self.num_tiles_fetched += 1;
}
while let Some(query) = self.base_tile_queries.pop() {
//if downloader.fetch(query) {
// The fetch has succeeded
//self.num_tiles_fetched += 1;
//}
downloader.fetch(query);
}
while self.num_tiles_fetched < MAX_NUM_TILE_FETCHING && !self.queries.is_empty() {
let mut num_fetched_tile = 0;
while num_fetched_tile < MAX_NUM_TILE_FETCHING && !self.queries.is_empty() {
let query = self.queries.pop_back().unwrap_abort();
if downloader.fetch(query) {
// The fetch has succeeded
self.num_tiles_fetched += 1;
num_fetched_tile += 1;
}
}
}
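The rewritten queue flushes all pending base-tile queries, then drains at most MAX_NUM_TILE_FETCHING ordinary tile queries per `notify()`, popping the most recently requested ones first. A minimal standalone sketch of that bounded-drain pattern, with a dummy fetch callback standing in for the real `Downloader`:

use std::collections::VecDeque;

const MAX_NUM_TILE_FETCHING: isize = 8;

fn drain<F: FnMut(u32) -> bool>(queries: &mut VecDeque<u32>, mut fetch: F) {
    let mut num_fetched_tile = 0;
    // Pop from the back: the most recently requested tiles are fetched first.
    while num_fetched_tile < MAX_NUM_TILE_FETCHING && !queries.is_empty() {
        let query = queries.pop_back().unwrap();
        if fetch(query) {
            num_fetched_tile += 1;
        }
    }
}

fn main() {
    let mut queries: VecDeque<u32> = (0..20).collect();
    drain(&mut queries, |tile| {
        println!("fetching tile {tile}");
        true
    });
    assert_eq!(queries.len(), 12); // 8 fetched, 12 left in the queue
}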
@@ -77,7 +79,10 @@ impl TileFetcherQueue {
// The allsky is not necessarily present in a HiPS service, but it is better to try fetching it first
downloader.fetch(query::PixelMetadata::new(cfg));
// Try to fetch the MOC
downloader.fetch(query::Moc::new(format!("{}/Moc.fits", cfg.get_root_url()), al_api::moc::MOC::default()));
downloader.fetch(query::Moc::new(
format!("{}/Moc.fits", cfg.get_root_url()),
al_api::moc::MOC::default(),
));
let tile_size = cfg.get_tile_size();
// Request the allsky for small tile sizes or if the base tiles are not available
@@ -86,8 +91,10 @@ impl TileFetcherQueue {
downloader.fetch(query::Allsky::new(cfg));
} else {
for texture_cell in crate::healpix::cell::ALLSKY_HPX_CELLS_D0 {
for cell in texture_cell.get_tile_cells(cfg) {
let query = query::Tile::new(&cell, cfg);
for cell in texture_cell.get_tile_cells(cfg.delta_depth()) {
let hips_url = cfg.get_root_url();
let format = cfg.get_format();
let query = query::Tile::new(&cell, hips_url.to_string(), format);
self.append_base_tile(query, downloader);
}
}

View File

@@ -4,12 +4,15 @@ pub struct Time(pub f32);
use crate::utils;
use wasm_bindgen::JsValue;
impl Time {
pub fn measure_perf<T>(f: impl FnOnce() -> Result<T, JsValue>) -> Result<T, JsValue> {
pub fn measure_perf<T>(
label: &str,
f: impl FnOnce() -> Result<T, JsValue>,
) -> Result<T, JsValue> {
let start_time = Time::now();
let r = f()?;
let duration = Time::now() - start_time;
// print the duration in the console
al_core::log(&format!("duration: {:?}", duration));
al_core::log(&format!("{:?} time: {:?}", label, duration));
Ok(r)
}
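With the new `label` parameter a call site reads, for instance (a hypothetical caller; the label, the closure body and the returned value are made up):

fn upload_tiles() -> Result<(), JsValue> {
    let num_uploaded = Time::measure_perf("tile upload", || {
        // ... do the actual work here and return its result ...
        Ok(42)
    })?;
    al_core::log(&format!("uploaded {} tiles", num_uploaded));
    Ok(())
}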
@@ -21,6 +24,10 @@ impl Time {
pub fn as_millis(&self) -> f32 {
self.0
}
pub fn as_secs(&self) -> f32 {
self.as_millis() / 1000.0
}
}
impl From<f32> for DeltaTime {
@@ -43,7 +50,7 @@ impl Sub for Time {
pub struct DeltaTime(pub f32);
impl DeltaTime {
pub fn from_millis(millis: f32) -> Self {
pub const fn from_millis(millis: f32) -> Self {
DeltaTime(millis)
}
@@ -54,6 +61,10 @@ impl DeltaTime {
pub fn as_millis(&self) -> f32 {
self.0
}
pub fn as_secs(&self) -> f32 {
self.as_millis() / 1000.0
}
}
use std::ops::{Add, Mul};

View File

@@ -1,3 +1,6 @@
use std::cmp::Ordering;
use std::ops::Range;
#[allow(unused_macros)]
macro_rules! assert_delta {
($x:expr, $y:expr, $d:expr) => {
@@ -35,6 +38,8 @@ pub fn unmortonize(mut x: u64) -> (u32, u32) {
(x as u32, y as u32)
}
// Transmute utility functions
#[allow(dead_code)]
pub unsafe fn transmute_boxed_slice<I, O>(s: Box<[I]>) -> Box<[O]> {
let len = s.len();
let in_slice_ptr = Box::into_raw(s);
@@ -60,66 +65,32 @@ pub unsafe fn transmute_vec<I, O>(mut s: Vec<I>) -> Result<Vec<O>, &'static str>
}
}
/// Select the kth smallest element in a slice
///
/// This is a basic implementation of the quickselect algorithm: https://fr.wikipedia.org/wiki/Quickselect
/// Some features:
/// * The pivot is chosen randomly between l and r
/// * This does a partial sort of `v`
/// * It runs in O(n) expected time
///
/// # Params
/// * `v` - the slice of values from which the kth smallest element will be found
/// * `l` - the first index of the slice for which the algorithm is applied
/// * `r` - the last index of the slice (inclusive) for which the algorithm is applied
/// * `k` - the index number to find
use rand::Rng;
#[allow(dead_code)]
pub fn select_kth_smallest<T: PartialOrd + Copy>(v: &mut [T], mut l: usize, mut r: usize, k: usize) -> T {
let mut rng = rand::thread_rng();
while l < r {
let pivot = rng.gen_range(l..=r);
let pivot = partition(v, l, r, pivot);
if k == pivot {
return v[k];
} else if k < pivot {
r = pivot - 1;
pub(super) fn merge_overlapping_intervals(mut intervals: Vec<Range<usize>>) -> Vec<Range<usize>> {
intervals.sort_unstable_by(|a, b| {
let cmp = a.start.cmp(&b.start);
if let Ordering::Equal = cmp {
a.end.cmp(&b.end)
} else {
l = pivot + 1;
cmp
}
}
});
v[l]
}
// Merge overlapping intervals in place
let mut j = 0;
#[allow(dead_code)]
fn partition<T: PartialOrd + Copy>(v: &mut [T], l: usize, r: usize, pivot: usize) -> usize {
v.swap(pivot, r);
let pivot = v[r];
let mut j = l;
for i in l..r {
if v[i] < pivot {
v.swap(i, j);
for i in 1..intervals.len() {
// If this is not the first interval and it overlaps
// with the previous one
if intervals[j].end >= intervals[i].start {
// Merge previous and current Intervals
intervals[j].end = intervals[j].end.max(intervals[i].end);
} else {
j += 1;
intervals[j] = intervals[i].clone();
}
}
// truncate to keep only the merged intervals
intervals.truncate(j + 1);
// swap pivot value to values[j]
v.swap(r, j);
j
intervals
}
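As a quick illustration of the expected behaviour, a sketch of a test (assuming `merge_overlapping_intervals` is in scope; it is not part of the original test module):

#[test]
fn test_merge_overlapping_intervals() {
    let merged = merge_overlapping_intervals(vec![3..7, 0..5, 10..12, 6..8]);
    // After sorting by (start, end): [0..5, 3..7, 6..8, 10..12].
    // 0..5 and 3..7 merge into 0..7, which then absorbs 6..8.
    assert_eq!(merged, vec![0..8, 10..12]);
}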
mod tests {
#[test]
fn test_select_kth_smallest() {
assert_eq!(super::select_kth_smallest(&mut [2, 4, 5, 9, -1, 5], 0, 5, 2), 4);
assert_eq!(super::select_kth_smallest(&mut [2], 0, 0, 0), 2);
assert_eq!(super::select_kth_smallest(&mut [2, 4, 5, 9, -1, 5], 0, 5, 3), 5);
assert_eq!(super::select_kth_smallest(&mut [2, 4, 5, 9, -1, 5], 0, 5, 4), 5);
assert_eq!(super::select_kth_smallest(&mut [2, 4, 5, 9, -1, 5], 0, 5, 5), 9);
assert_eq!(super::select_kth_smallest(&mut [0, 1, 2, 9, 11, 12], 0, 5, 5), 12);
}
}

View File

@@ -15,6 +15,13 @@
top: 0;
}
.aladin-gridCanvas {
position: absolute;
z-index: 1;
left: 0;
top: 0;
}
.aladin-catalogCanvas {
position: absolute;
z-index: 2;
@@ -807,6 +814,7 @@ canvas {
.aladin-context-menu {
position: fixed;
background: #fff;
color: #000;
z-index: 9999999;
width: 150px;
margin: 0;
@@ -815,7 +823,6 @@ canvas {
box-shadow: 0 0 6px rgba(0,0,0,0.2);
font-size: 12px;
font-family: Verdana, Geneva, Tahoma, sans-serif;
;
}
.aladin-context-menu .aladin-context-menu-item {

View File

@@ -1,8 +1,10 @@
#version 300 es
precision highp float;
precision lowp float;
in vec4 v_rgba;
out vec4 color;
void main() {
// Multiply vertex color with texture color (in linear space).
// Linear color is written and blended in Framebuffer and converted to sRGB later

View File

@@ -1,15 +1,14 @@
#version 300 es
precision highp float;
layout (location = 0) in vec2 pos;
precision lowp float;
layout (location = 0) in vec2 ndc_pos;
uniform vec2 u_screen_size;
uniform vec4 u_color;
out vec4 v_rgba;
void main() {
gl_Position = vec4(
pos,
ndc_pos,
0.0,
1.0
);

View File

@@ -1,20 +0,0 @@
#version 300 es
precision highp float;
in vec2 v_tc;
out vec4 color;
uniform sampler2D u_sampler_font;
uniform float u_opacity;
uniform vec3 u_color;
void main() {
// The texture is set up with `SRGB8_ALPHA8`, so no need to decode here!
float alpha = texture(u_sampler_font, v_tc).r;
alpha = smoothstep(0.1, 0.9, alpha);
// Multiply vertex color with texture color (in linear space).
// Linear color is written and blended in Framebuffer and converted to sRGB later
color = vec4(u_color, u_opacity * alpha);
//color.a = color.a * alpha;
}

Some files were not shown because too many files have changed in this diff.