Commit: tests
tests/integration/experimental/dash3d/.gitignore (new file, +3)
@@ -0,0 +1,3 @@
cypress/test_output
cypress/videos
cypress/screenshots

tests/integration/experimental/dash3d/README.md (new file, +101)
@@ -0,0 +1,101 @@
# Dash3D Integration Testing (beta)

Because Dash3D is still at an experimental stage, we opt for only
high-level integration testing at this point.

## Dependencies

To run tests, install [Node.js](https://nodejs.org/):
```
conda install -c conda-forge nodejs
```

Make sure to install the system dependencies required by
[Cypress](https://docs.cypress.io/guides/getting-started/installing-cypress.html#System-requirements).

Most front-end dependencies are managed by [npm](https://www.npmjs.com/),
which is installed automatically with Node.js. To install front-end
dependencies, run the following from the **root of kaolin**:
```
npm install
```

## How to run all tests

All integration tests are wrapped in Python. To run all tests
from the root of kaolin:
```
pytest --capture=tee-sys tests/integration/experimental/dash3d/
```

## Under the Hood

Currently, the only JavaScript tests are integration tests, and we call
the JavaScript tests from a Python test file. In the future, this
may change.

#### Mocha Tests

JavaScript tests that can run *outside of the browser* are
implemented using [Mocha](https://mochajs.org/). Note that
currently, **all js tests are called from python**. To run
Mocha tests manually, run:
```
npx mocha "./tests/integration/experimental/dash3d/*.js"
```
*Note:* this will fail on its own, because it expects
data generated by the Python wrapper tests.
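To generate that data first, run the Python wrapper and then Mocha (a sketch;
note that the wrapper's `out_dir` fixture removes its `_out` directory on
teardown, so comment out its `shutil.rmtree(out_dir)` line if you want the
`.bin` files to survive for a manual Mocha run):
```
pytest --capture=tee-sys tests/integration/experimental/dash3d/
npx mocha "./tests/integration/experimental/dash3d/*.js"
```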

Failing Mocha tests can be debugged in Chrome by running:
```
./node_modules/mocha/bin/mocha --inspect --inspect-brk path/to/test.js
```
Then, in Chrome, navigate to [chrome://inspect/](chrome://inspect/) and
click "Open dedicated DevTools for Node". You may need to manually add
the test and `static` subdirectories under sources.

#### Cypress Tests

End-to-end tests that *require a browser* are implemented
using [Cypress](https://www.cypress.io/). Note that currently, **all cypress
tests are called from python**, but the tests themselves are written in
JavaScript and located in
`tests/integration/experimental/dash3d/cypress/integration/`.
It is essential to be able to run them manually for debugging.

First, run a python test that spins up a dash3d instance in the
background (note that repeated invocations of this may require you
to append `--skip_start_dash3d` to the command in case
dash3d is already running):
```
python -m tests.integration.experimental.dash3d.run_e2e_test
```
This test also runs the cypress tests, but when they fail it is useful
to invoke cypress manually.

To open the cypress UI:
```
npx cypress open --config-file tests/integration/experimental/dash3d/cypress.json
```

Alternatively, run the cypress tests headlessly (this is what `run_e2e_test` calls):
```
npx cypress run --config-file tests/integration/experimental/dash3d/cypress.json
```

#### Debugging Cypress Tests

Cypress writes a lot of diagnostic information during testing. Opening the
browser's debug console during test execution is helpful. Also, check
out the following directories:
* screenshots: `tests/integration/experimental/dash3d/cypress/screenshots/`
* videos: `tests/integration/experimental/dash3d/cypress/videos/`
* renderings from tests: `tests/integration/experimental/dash3d/cypress/test_output/`

Most of the tests perform visual regression to ensure that the right
geometry is passed from the server to the client. As a consequence,
changes to rendering properties will break the tests and require
changes to the golden files. The `test_output` directory will contain
the updated golden files.
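For example, to promote one updated rendering to a golden file (a sketch; note
that fixture aliases do not always match fixture file names, e.g. the
`mesh_output_id0` alias is backed by `mesh_output_id0_final.png`, so copy and
rename each file individually rather than bulk-copying):
```
cp tests/integration/experimental/dash3d/cypress/test_output/test_set_category_and_id/actual/mesh_output_id0.png \
   tests/integration/experimental/dash3d/cypress/fixtures/images/mesh_output_id0_final.png
```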

tests/integration/experimental/dash3d/cypress.json (new file, +11)
@@ -0,0 +1,11 @@
{
  "fileServerFolder": "tests/integration/experimental/dash3d/cypress",
  "componentFolder": "tests/integration/experimental/dash3d/cypress/component",
  "downloadsFolder": "tests/integration/experimental/dash3d/cypress/downloads",
  "fixturesFolder": "tests/integration/experimental/dash3d/cypress/fixtures",
  "integrationFolder": "tests/integration/experimental/dash3d/cypress/integration",
  "pluginsFile": "tests/integration/experimental/dash3d/cypress/plugins/index.js",
  "screenshotsFolder": "tests/integration/experimental/dash3d/cypress/screenshots",
  "supportFile": "tests/integration/experimental/dash3d/cypress/support/index.js",
  "videosFolder": "tests/integration/experimental/dash3d/cypress/videos"
}
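All paths in this config are relative, so cypress must be invoked from the
kaolin root; this is also why `run_cypress()` in `run_e2e_test.py` below does
`os.chdir(KAOLIN_ROOT)` before shelling out. For example:
```
cd /path/to/kaolin
npx cypress run --config-file tests/integration/experimental/dash3d/cypress.json
```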

[12 binary image files added: golden screenshot fixtures for the Cypress tests, 12-30 KiB each]

(next file, +129: the Cypress visual-regression spec; file path not shown in this view)
@@ -0,0 +1,129 @@
// Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.

// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at

//     http://www.apache.org/licenses/LICENSE-2.0

// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

const assert = require('assert');

const TYPES_TO_TEST = ['mesh', 'pointcloud'];
const NVIEWS = 2;

// This tests the renderings in the viewports against ground truth images.
describe('Visual Regression', () => {
  beforeEach(function() {
    // To update these fixtures, use one of 2 ways:
    // 1. Look at test output saved in DEBUG_FOLDER
    // 2. Load dash3d at localhost:8008 and run commands like this in the console:
    //    nvidia.util.downloadURL('mesh0.png', $("#mesh-view0 canvas")[0].toDataURL())

    // Initial renderings
    cy.fixture('images/mesh_gt_id0.png').as('mesh0_data');  // fixture data names
    cy.fixture('images/mesh_output_id0_final.png').as('mesh1_data');
    cy.fixture('images/pointcloud_input_id0.png').as('pointcloud0_data');
    cy.fixture('images/pointcloud_output_id0_final.png').as('pointcloud1_data');

    // Specific renderings (caused by user input)
    cy.fixture('images/mesh_gt_id0.png').as('mesh_ground_truth_id0');
    cy.fixture('images/mesh_gt_id1.png').as('mesh_ground_truth_id1');
    cy.fixture('images/mesh_output_id0_final.png').as('mesh_output_id0');  // last iteration
    cy.fixture('images/mesh_output_id0_it50.png').as('mesh_output_id0_it50');
    cy.fixture('images/mesh_output_id1_final.png').as('mesh_output_id1');  // last iteration
    cy.fixture('images/mesh_output_id1_it50.png').as('mesh_output_id1_it50');
    cy.fixture('images/pointcloud_input_id0.png').as('pointcloud_input_id0');
    cy.fixture('images/pointcloud_input_id1.png').as('pointcloud_input_id1');
    cy.fixture('images/pointcloud_output_id0_final.png').as('pointcloud_output_id0');  // last iteration
    cy.fixture('images/pointcloud_output_id0_it50.png').as('pointcloud_output_id0_it50');
    cy.fixture('images/pointcloud_output_id1_final.png').as('pointcloud_output_id1');  // last iteration
    cy.fixture('images/pointcloud_output_id1_it50.png').as('pointcloud_output_id1_it50');
  })

  it('Initial Page Rendering', () => {
    cy.visit('http://localhost:8008/');

    // Note: this part depends on the initial rendering, which may change
    cy.wait(2000).then(() => {
      cy.wrap(TYPES_TO_TEST).each((tname) => {
        cy.wrap([0, 1]).each((v) => {
          // e.g. '#mesh-view0 canvas'
          var view_selector = '#' + tname + '-view' + v + ' canvas';
          var data_name = '@' + tname + v + '_data';  // fixture data name
          cy.checkCanvasRendering(view_selector, data_name, 'test_initial_render');
        });
      });
    });
  });

  it('Setting Category and ID', () => {
    cy.visit('http://localhost:8008/');

    // Select the right id and category and test that we can load
    // requested geometry in every viewport
    var cats_per_type = { 'mesh': ['ground_truth', 'output'],
                          'pointcloud': ['input', 'output'] };
    cy.wait(2000).then(() => {
      cy.wrap(TYPES_TO_TEST).each((tname) => {
        cy.wrap([0, 1]).each((view_id) => {
          cy.wrap(cats_per_type[tname]).each((cat_name) => {
            cy.wrap([0, 1]).each((mesh_id) => {
              // e.g. '#mesh-view0 canvas'
              var view_selector = '#' + tname + '-view' + view_id + ' canvas';
              var category_selector = '#' + tname + '-header' + view_id + ' select.cat';
              var id_selector = '#' + tname + '-header' + view_id + ' select.id';
              var data_name = '@' + tname + '_' + cat_name + '_id' + mesh_id;
              // Set category and id in the viewport
              cy.get(id_selector).select('id ' + mesh_id).then(() => {
                cy.get(category_selector).select(cat_name).wait(1000).then(() => {
                  console.log('Set category ' + cat_name + ' and id ' + mesh_id);
                  // Check rendering
                  cy.checkCanvasRendering(view_selector, data_name, 'test_set_category_and_id');
                });
              });
            });
          });
        });
      });
    });
  });

  it('Setting Global Iteration Number', () => {
    cy.visit('http://localhost:8008/');

    cy.get('#mesh-header0 select.cat').select('output').then(() => {
      cy.get('#mesh-header0 select.id').select('id 0').then(() => {
        cy.get('#mesh-header1 select.cat').select('ground_truth').then(() => {
          cy.get('#mesh-header1 select.id').select('id 0').then(() => {
            cy.get('#pointcloud-header0 select.cat').select('output').then(() => {
              cy.get('#pointcloud-header0 select.id').select('id 0').then(() => {
                cy.get('#pointcloud-header1 select.cat').select('input').then(() => {
                  cy.get('#pointcloud-header1 select.id').select('id 0').then(() => {
                    cy.get('#timeslider').invoke('val', 50).trigger('change').wait(1000).then(() => {
                      let test_subfolder = 'test_set_its';
                      cy.checkCanvasRendering(
                          '#mesh-view0 canvas', '@mesh_output_id0_it50', test_subfolder);
                      cy.checkCanvasRendering(
                          '#mesh-view1 canvas', '@mesh_ground_truth_id0', test_subfolder);
                      cy.checkCanvasRendering(
                          '#pointcloud-view0 canvas', '@pointcloud_output_id0_it50', test_subfolder);
                      cy.checkCanvasRendering(
                          '#pointcloud-view1 canvas', '@pointcloud_input_id0', test_subfolder);
                    });
                  });
                });
              });
            });
          });
        });
      });
    });
  });
})

(next file, +21: per cypress.json's pluginsFile, this is cypress/plugins/index.js)
@@ -0,0 +1,21 @@
/// <reference types="cypress" />
// ***********************************************************
// This example plugins/index.js can be used to load plugins
//
// You can change the location of this file or turn off loading
// the plugins file with the 'pluginsFile' configuration option.
//
// You can read more here:
// https://on.cypress.io/plugins-guide
// ***********************************************************

// This function is called when a project is opened or re-opened (e.g. due to
// the project's config changing)

/**
 * @type {Cypress.PluginConfig}
 */
module.exports = (on, config) => {
  // `on` is used to hook into various events Cypress emits
  // `config` is the resolved Cypress config
}

(next file, +56: given cypress.json's supportFile and the `import './commands'` below, this is likely cypress/support/commands.js)
@@ -0,0 +1,56 @@
// Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.

// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at

//     http://www.apache.org/licenses/LICENSE-2.0

// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

const IMG_WIDTH = 300;
const MAX_DIFFERING_PIXELS = Math.floor(IMG_WIDTH * IMG_WIDTH * 0.02);
const DEBUG_FOLDER = 'tests/integration/experimental/dash3d/cypress/test_output/';

Cypress.Commands.add('checkCanvasRendering', (view_selector, data_name, testsubfolder) => {
  cy.window().then((win) => {
    expect(cy.get(view_selector));
    cy.get(view_selector)
      .then(($el) => {
        return win.nvidia.test.convertDataUrl($el.get(0).toDataURL(), IMG_WIDTH);
      })
      .then((actual) => {
        cy.get(data_name)
          .then((img_data) => {
            return win.nvidia.test.convertDataUrl('data:image/png;base64,' + img_data, IMG_WIDTH);
          })
          .then((expected) => {
            console.log('Actual: ');
            console.log(actual);
            console.log('Expected: ');
            console.log(expected);
            let compare = win.nvidia.test.getImageDiff(expected[0], actual[0]);
            console.log(compare);
            let fprefix = DEBUG_FOLDER + testsubfolder;
            cy.writeFile(fprefix + '/expected/' + data_name.slice(1) + '_expected.png',
                         win.nvidia.test.stripBase64Marker(expected[1]), 'base64')
              .then(() => {
                cy.writeFile(fprefix + '/actual/' + data_name.slice(1) + '.png',
                             win.nvidia.test.stripBase64Marker(actual[1]), 'base64');
              })
              .then(() => {
                cy.writeFile(fprefix + '/expected/' + data_name.slice(1) + '_diff.png',
                             win.nvidia.test.stripBase64Marker(
                                 win.nvidia.test.imageDataToDataUrl(compare[0])), 'base64');
              })
              .then(() => {
                expect(compare[1]).to.be.lessThan(MAX_DIFFERING_PIXELS);
              });
          });
      });
  });
});
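With `IMG_WIDTH = 300`, `MAX_DIFFERING_PIXELS = Math.floor(300 * 300 * 0.02) = 1800`:
a rendering may differ from its golden image in up to 2% of the 90,000 pixels of
the downsampled comparison images before the final assertion fails.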

(next file, +20: this matches cypress.json's supportFile, cypress/support/index.js)
@@ -0,0 +1,20 @@
// ***********************************************************
// This example support/index.js is processed and
// loaded automatically before your test files.
//
// This is a great place to put global configuration and
// behavior that modifies Cypress.
//
// You can change the location of this file or turn off
// automatically serving support files with the
// 'supportFile' configuration option.
//
// You can read more here:
// https://on.cypress.io/configuration
// ***********************************************************

// Import commands.js using ES2015 syntax:
import './commands'

// Alternatively you can use CommonJS syntax:
// require('./commands')

tests/integration/experimental/dash3d/run_e2e_test.py (new file, +132)
@@ -0,0 +1,132 @@
#!/usr/bin/env python3

# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import logging
import os
import shutil
import subprocess
import sys

THIS_DIR = os.path.dirname(os.path.realpath(__file__))
KAOLIN_ROOT = os.path.realpath(
    os.path.join(THIS_DIR, os.pardir, os.pardir, os.pardir, os.pardir))

logger = logging.getLogger(__name__)


def obj_paths():
    samples_dir = os.path.join(KAOLIN_ROOT, 'tests', 'samples')
    return [os.path.join(samples_dir, 'rocket.obj'),
            os.path.join(samples_dir, 'model.obj')]


def timelapse_path():
    return os.path.realpath(
        os.path.join(KAOLIN_ROOT, 'tests', 'samples', 'timelapse', 'notexture'))


def golden_screenshots_path():
    return os.path.join(THIS_DIR, 'cypress', 'fixtures')


def cypress_config_path():
    # Important: must be relative
    return os.path.join('tests', 'integration', 'experimental', 'dash3d', 'cypress.json')


def port():
    return 8008


def generate_timelapse_input():
    objs = ','.join(obj_paths())
    out_dir = timelapse_path()
    script = os.path.realpath(
        os.path.join(KAOLIN_ROOT, 'examples', 'tutorial', 'visualize_main.py'))

    args = f'--skip_normalization --test_objs={objs} --output_dir={out_dir}'
    command = f'python {script} {args}'
    logger.info(f'Re-generating timelapse input here: {out_dir}\n by running {command}')
    if os.path.exists(out_dir):
        shutil.rmtree(out_dir)
    os.makedirs(out_dir, exist_ok=True)
    ret = os.system(command)
    if ret != 0:
        raise RuntimeError('Creation of timelapse failed')


def start_dash3d():
    script = os.path.realpath(os.path.join(THIS_DIR, 'start_dash3d.sh'))
    logdir = timelapse_path()
    _port = port()

    command = f'{script} {logdir} {_port}'
    logger.info(f'Starting dash3d server in the background by running {command}')
    ret = os.system(command)

    if ret != 0:
        raise RuntimeError('Failed to start Dash3D')


def run_cypress():
    command = 'npx cypress run --config-file {}'.format(cypress_config_path())
    logger.info(f'Starting cypress by running {command}')
    os.chdir(KAOLIN_ROOT)
    ret = os.system(command)
    if ret != 0:
        raise RuntimeError('Failed cypress integration test')


def run_end_to_end_integration_tests():
    print('END 2 END INTEGRATION TEST FOR DASH 3D-------------------------------')
    print('Timelapse input: {}'.format(timelapse_path()))
    print('Server: http://localhost:{}'.format(port()))
    print('Golden screenshot files: {}'.format(golden_screenshots_path()))
    print('Visual comparison results: ')


def run_main(regenerate_timelapse_input,
             skip_start_dash3d):
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s|%(levelname)8s|%(name)15s| %(message)s',
                        handlers=[logging.StreamHandler(sys.stdout)])

    if regenerate_timelapse_input:
        generate_timelapse_input()

    if not skip_start_dash3d:
        start_dash3d()

    run_cypress()


class TestBinaryEncoding:
    def test_server_client_binary_compatibility(self):
        run_main(regenerate_timelapse_input=False,
                 skip_start_dash3d=False)


if __name__ == "__main__":
    aparser = argparse.ArgumentParser()
    aparser.add_argument('--regenerate_timelapse_input', action='store_true',
                         help='If set, will regenerate timelapse input under tests/samples/timelapse/notexture.')
    aparser.add_argument('--skip_start_dash3d', action='store_true',
                         help='If set, will skip starting dash3d, which may already be running.')
    args = aparser.parse_args()

    run_main(regenerate_timelapse_input=args.regenerate_timelapse_input,
             skip_start_dash3d=args.skip_start_dash3d)
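Two ways to invoke the above (pytest collects this file because its name matches
the default `*_test.py` pattern):
```
# Via pytest, which also starts dash3d:
pytest --capture=tee-sys tests/integration/experimental/dash3d/

# Or directly, e.g. when a dash3d instance is already up:
python -m tests.integration.experimental.dash3d.run_e2e_test --skip_start_dash3d
```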

tests/integration/experimental/dash3d/start_dash3d.sh (new executable file, +46)
@@ -0,0 +1,46 @@
#!/bin/bash -e
set -o nounset

# Note: when run as a subprocess something is setting this
# variable, which causes issues; printing for debug information
# and unsetting
if [[ -v MKL_THREADING_LAYER ]];
then
    echo "Unsetting MKL_THREADING_LAYER=$MKL_THREADING_LAYER"
    unset MKL_THREADING_LAYER
fi

# Get the directory where the current script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
KAOLIN_ROOT=$SCRIPT_DIR/../../../..

DASH3D=kaolin-dash3d

USAGE="$0 [log_directory] (optional: port)

Runs dash3d in the background using script:
$DASH3D
"
if [ $# -lt 1 ]; then
    echo "$USAGE"
    exit
fi

FLAGS="--logdir=$1 --log_level=10"  # DEBUG
if [ $# -gt 1 ]; then
    FLAGS="$FLAGS --port=$2"
fi

echo "Running Dash3D in the background using command: "
echo "$DASH3D $FLAGS"

$DASH3D $FLAGS &
PID=$!

sleep 2
set +e
kill -0 $PID  # Check that it is still running
if [ "$?" -ne "0" ]; then
    echo "Failed to start dash3d"
    exit 1
fi
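Example invocation from the kaolin root, matching the arguments that
`run_e2e_test.py` passes (log directory, then port):
```
tests/integration/experimental/dash3d/start_dash3d.sh tests/samples/timelapse/notexture 8008
```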

tests/integration/experimental/dash3d/test_binary_parse.js (new file, +140)
@@ -0,0 +1,140 @@
// Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.

// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at

//     http://www.apache.org/licenses/LICENSE-2.0

// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

var assert = require('assert');
var THREE = require('three');
var fs = require('fs');
var path = require('path');

var geometry = require('../../../../kaolin/experimental/dash3d/src/geometry.js');
var util = require('../../../../kaolin/experimental/dash3d/src/util.js');

var binaries = [];
before(function(done) {
  util.set_global_log_level('INFO');  // Switch to 'DEBUG' if needed

  var paths = ['meshes0_1.bin', 'meshes2.bin', 'clouds0_1.bin', 'clouds2.bin'];
  for (var i = 0; i < paths.length; ++i) {
    var p = path.join(__dirname, '_out', paths[i]);
    util.timed_log('Parsing binary file at path ' + p);
    var res = fs.readFileSync(p);
    var res_buffer = new Uint8Array(res).buffer;
    binaries.push(res_buffer);
  }
  done();
});

describe("Binary Mesh Parsing", function() {
  describe("Reading and checking two meshes from _out/meshes0_1.bin", function() {
    let geos = null;
    it('two meshes should be parsed', function() {
      geos = geometry.BufferedGeometriesFromBinary(binaries[0], 0);
      assert.equal(geos.length, 2);
    });
    it('two meshes should have correct number of vertices and faces', function() {
      assert.equal(geos[0].getAttribute('position').count, 4);
      assert.equal(geos[0].getIndex().count, 2 * 3);

      assert.equal(geos[1].getAttribute('position').count, 100);
      assert.equal(geos[1].getIndex().count, 100 * 3);
    });
    it('first mesh should have correct geometry values', function() {
      let expected_face_idx = [0, 1, 2, 2, 1, 3];
      for (let i = 0; i < expected_face_idx.length; ++i) {
        assert.equal(geos[0].getIndex().array[i], expected_face_idx[i],
                     'unexpected face index at ' + i);
      }
      let expected_positions = [1.0, 2.0, 3.0,
                                10.0, 20.0, 30.0,
                                2.0, 4.0, 6.0,
                                15.0, 25.0, 35.0];
      for (let i = 0; i < expected_positions.length; ++i) {
        assert.equal(geos[0].getAttribute('position').array[i],
                     expected_positions[i],
                     'unexpected position at ' + i);
      }
    });
    it('correct bounding box should be computed for both meshes', function() {
      let bbox = geometry.GetBoundingBox(geos);
      assert.equal(bbox.min.x, 0);
      assert.equal(bbox.min.y, 1);
      assert.equal(bbox.min.z, 2);
      assert.equal(bbox.max.x, 297);
      assert.equal(bbox.max.y, 298);
      assert.equal(bbox.max.z, 299);
    });
  });
  describe("Reading and checking one mesh from _out/meshes2.bin", function() {
    let geos = null;
    it('one mesh should be parsed', function() {
      geos = geometry.BufferedGeometriesFromBinary(binaries[1], 0);
      assert.equal(geos.length, 1);
    });
    it('one mesh should have correct number of vertices and faces', function() {
      assert.equal(geos[0].getAttribute('position').count, 3000);
      assert.equal(geos[0].getIndex().count, 6000 * 3);
    });
  });
});

describe("Binary Pointcloud Parsing", function() {
  describe("Reading and checking two point clouds from _out/clouds0_1.bin", function() {
    let geos = null;
    it('two point clouds should be parsed', function() {
      geos = geometry.PtCloudsFromBinary(binaries[2], 0);
      assert.equal(geos.length, 2);
    });
    it('two point clouds should have correct number of points', function() {
      assert.equal(geos[0].instanceCount, 4);
      assert.equal(geos[1].instanceCount, 100);
    });
    it('first point cloud should have correct geometry values', function() {
      let expected_positions = [1.0, 2.0, 3.0,
                                10.0, 20.0, 30.0,
                                2.0, 4.0, 6.0,
                                15.0, 25.0, 35.0];
      for (let i = 0; i < expected_positions.length; ++i) {
        assert.equal(geos[0].getAttribute('instanceTranslation').array[i],
                     expected_positions[i],
                     'unexpected position at ' + i);
      }
    });
    it('second point cloud should have correct geometry values', function() {
      for (let i = 0; i < 300; ++i) {
        assert.equal(geos[1].getAttribute('instanceTranslation').array[i],
                     i + 0.0,
                     'unexpected position at ' + i);
      }
    });
    it('correct bounding box should be computed for both point clouds', function() {
      let bbox = geometry.GetBoundingBox(geos);
      assert.equal(Math.round(bbox.min.x * 1000), 0);
      assert.equal(Math.round(bbox.min.y * 1000), 1 * 1000);
      assert.equal(Math.round(bbox.min.z * 1000), 2 * 1000);
      assert.equal(Math.round(bbox.max.x * 1000), 297 * 1000);
      assert.equal(Math.round(bbox.max.y * 1000), 298 * 1000);
      assert.equal(Math.round(bbox.max.z * 1000), 299 * 1000);
    });
  });
  describe("Reading and checking one point cloud from _out/clouds2.bin", function() {
    let geos = null;
    it('one point cloud should be parsed', function() {
      geos = geometry.PtCloudsFromBinary(binaries[3], 0);
      assert.equal(geos.length, 1);
    });
    it('one point cloud should have correct number of points', function() {
      assert.equal(geos[0].instanceCount, 3000);
    });
  });
})

(next file, +88: the Python wrapper test that writes the binary fixtures and invokes test_binary_parse.js; file path not shown in this view)
@@ -0,0 +1,88 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import numpy as np
import os
import pytest
import shutil
import torch

import kaolin

from kaolin.utils.testing import tensor_info

from kaolin.experimental.dash3d.util import meshes_to_binary
from kaolin.experimental.dash3d.util import point_clouds_to_binary


@pytest.fixture(scope='module')
def out_dir():
    # Create temporary output directory
    out_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '_out')
    os.makedirs(out_dir, exist_ok=True)
    yield out_dir
    shutil.rmtree(out_dir)  # Note: comment out to keep the output directory


@pytest.fixture(scope='module')
def meshes():
    vertices0 = np.array([[1.0, 2.0, 3.0],
                          [10.0, 20.0, 30.0],
                          [2.0, 4.0, 6.0],
                          [15.0, 25.0, 35.0]], dtype=np.float32)
    faces0 = np.array([[0, 1, 2],
                       [2, 1, 3]], dtype=np.int32)
    vertices1 = np.arange(0, 300).reshape((-1, 3))
    faces1 = np.stack([np.arange(0, 100),
                       np.mod(np.arange(0, 100) + 1, 100),
                       np.mod(np.arange(0, 100) + 2, 100)]).astype(np.int32).reshape((-1, 3))
    vertices2 = np.random.random((1, 9000)).reshape((-1, 3))
    faces2 = np.stack([np.mod(np.arange(0, 6000), 1000),
                       np.ones((6000,)),
                       np.random.randint(0, 2999 + 1, (6000,))]).astype(np.int32).reshape((-1, 3))
    return {"faces": [faces0, faces1, faces2],
            "vertices": [vertices0, vertices1, vertices2]}


@pytest.fixture(scope='module')
def pointclouds():
    pts0 = np.array([[1.0, 2.0, 3.0],
                     [10.0, 20.0, 30.0],
                     [2.0, 4.0, 6.0],
                     [15.0, 25.0, 35.0]], dtype=np.float32)
    pts1 = np.arange(0, 300).astype(np.float32).reshape((-1, 3))
    pts2 = np.random.random((1, 9000)).astype(np.float32).reshape((-1, 3))
    return {"positions": [pts0, pts1, pts2]}


class TestBinaryEncoding:
    def test_server_client_binary_compatibility(self, meshes, pointclouds, out_dir):
        # Encode and write mesh0+mesh1 and mesh2 to binary files
        binstr = meshes_to_binary(meshes['vertices'][0:2], meshes['faces'][0:2])
        with open(os.path.join(out_dir, 'meshes0_1.bin'), 'wb') as f:
            f.write(binstr)
        binstr = meshes_to_binary([meshes['vertices'][2]], [meshes['faces'][2]])
        with open(os.path.join(out_dir, 'meshes2.bin'), 'wb') as f:
            f.write(binstr)

        # Encode and write ptcloud0+ptcloud1 and ptcloud2 to binary files
        binstr = point_clouds_to_binary(pointclouds['positions'][0:2])
        with open(os.path.join(out_dir, 'clouds0_1.bin'), 'wb') as f:
            f.write(binstr)
        binstr = point_clouds_to_binary([pointclouds['positions'][2]])
        with open(os.path.join(out_dir, 'clouds2.bin'), 'wb') as f:
            f.write(binstr)

        # Execute the javascript test that checks that these are parsed correctly;
        # fail this test if mocha reports failures
        js_test = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'test_binary_parse.js')
        assert os.system('npx mocha {}'.format(js_test)) == 0  # TODO: will npx work for everyone?

tests/python/examples/tutorial/test_usd_kitchenset.py (new file, +54)
@@ -0,0 +1,54 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import shutil
import zipfile
import urllib.request

import pytest


@pytest.fixture(scope='module')
def kitchen_set_dir():
    # Create temporary data directory
    data_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '_data')
    os.makedirs(data_dir, exist_ok=True)
    kitchen_set_path = os.path.join(data_dir, 'kitchenset.zip')

    kitchen_set_url = 'http://graphics.pixar.com/usd/files/Kitchen_set.zip'
    urllib.request.urlretrieve(kitchen_set_url, kitchen_set_path)
    with zipfile.ZipFile(kitchen_set_path, 'r') as zip_ref:
        zip_ref.extractall(data_dir)

    yield os.path.join(data_dir, 'Kitchen_set')
    shutil.rmtree(data_dir)


@pytest.fixture(scope='module')
def out_dir():
    # Create temporary output directory
    out_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '_out')
    os.makedirs(out_dir, exist_ok=True)
    yield out_dir
    shutil.rmtree(out_dir)


class TestUsdKitchenSet:
    def test_runs(self, kitchen_set_dir, out_dir):
        args = f'--kitchen_set_dir={kitchen_set_dir} --output_dir={out_dir}'
        os.system(f'python examples/tutorial/usd_kitchenset.py {args}')

        # Confirm that there are 426 meshes exported
        assert len(os.listdir(out_dir)) == 426

tests/python/examples/tutorial/test_visualize_main.py (new file, +61)
@@ -0,0 +1,61 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import pytest
import shutil
import torch

import kaolin

from kaolin.utils.testing import tensor_info


@pytest.fixture(scope='module')
def out_dir():
    # Create temporary output directory
    out_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '_viz_out')
    os.makedirs(out_dir, exist_ok=True)
    yield out_dir
    shutil.rmtree(out_dir)  # Note: comment out to keep the output directory


@pytest.fixture(scope='module')
def obj_paths():
    cur_dir = os.path.dirname(os.path.realpath(__file__))
    samples_dir = os.path.join(cur_dir, os.pardir, os.pardir, os.pardir, 'samples')
    return [os.path.join(samples_dir, 'rocket.obj'),
            os.path.join(samples_dir, 'model.obj')]


class TestVisualizeMain:
    def test_runs(self, obj_paths, out_dir):
        objs = ','.join(obj_paths)

        # Check that the main function runs
        # Note: to run and capture output do:
        #   pytest --capture=tee-sys tests/python/examples/
        args = '--skip_normalization --test_objs={} --output_dir={}'.format(objs, out_dir)
        os.system('python examples/tutorial/visualize_main.py {}'.format(args))

        # Spot-check the outputs
        for i in range(len(obj_paths)):
            expected = kaolin.io.obj.import_mesh(obj_paths[i])
            expected_usd = os.path.join(out_dir, 'output', 'mesh_%d.usd' % i)
            assert os.path.exists(expected_usd)
            actual_start = kaolin.io.usd.import_mesh(expected_usd, time=0)
            actual_end = kaolin.io.usd.import_mesh(expected_usd, time=1000)

            assert torch.allclose(expected.vertices, actual_end.vertices, rtol=1e-03)
            assert not torch.allclose(expected.vertices, actual_start.vertices)

tests/python/kaolin/io/test_dataset.py (new file, +1083; content not shown in this view)

tests/python/kaolin/io/test_gltf.py (new file, +99)
@@ -0,0 +1,99 @@
# Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import pytest
from PIL import Image
import numpy as np

import torch

import kaolin
from kaolin.io import utils, gltf
from kaolin.utils.testing import print_namedtuple_attributes, print_dict_attributes, \
    check_tensor_attribute_shapes, contained_torch_equal, check_allclose

ROOT_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir, os.pardir, os.pardir, 'samples')
SAMPLE_DIR = os.path.join(ROOT_DIR, 'gltf/')

# TODO(cfujitsang): Add sanity test over a dataset like ShapeNet
class TestGLTF:
    @pytest.fixture(autouse=True)
    def expected_vertices(self):
        return torch.load(os.path.join(SAMPLE_DIR, 'gt_vertices.pt'))

    @pytest.fixture(autouse=True)
    def expected_faces(self):
        return torch.load(os.path.join(SAMPLE_DIR, 'gt_faces.pt'))

    @pytest.fixture(autouse=True)
    def expected_uvs(self):
        return torch.load(os.path.join(SAMPLE_DIR, 'gt_uvs.pt'))

    @pytest.fixture(autouse=True)
    def expected_face_uvs_idx(self, expected_faces):
        return expected_faces

    @pytest.fixture(autouse=True)
    def expected_diffuse_texture(self):
        img = Image.open(os.path.join(SAMPLE_DIR, 'Avocado_baseColor.png'))
        return torch.as_tensor(np.array(img)).float() * (1. / 255.)

    @pytest.fixture(autouse=True)
    def expected_metallic_roughness_texture(self):
        img = Image.open(os.path.join(SAMPLE_DIR, 'Avocado_roughnessMetallic.png'))
        return torch.as_tensor(np.array(img)).float() * (1. / 255.)

    @pytest.fixture(autouse=True)
    def expected_roughness_texture(self, expected_metallic_roughness_texture):
        return expected_metallic_roughness_texture[..., 1:2]

    @pytest.fixture(autouse=True)
    def expected_metallic_texture(self, expected_metallic_roughness_texture):
        return expected_metallic_roughness_texture[..., 2:3]

    @pytest.fixture(autouse=True)
    def expected_normals_texture(self):
        img = Image.open(os.path.join(SAMPLE_DIR, 'Avocado_normal.png'))
        return (torch.as_tensor(np.array(img)).float() * (2. / 255.) - 1.)

    @pytest.fixture(autouse=True)
    def expected_material_assignments(self, expected_faces):
        return torch.zeros((expected_faces.shape[0],), dtype=torch.short)

    def test_import_mesh(
        self, expected_vertices, expected_faces,
        expected_uvs, expected_face_uvs_idx,
        expected_diffuse_texture, expected_roughness_texture,
        expected_metallic_texture, expected_normals_texture,
        expected_material_assignments
    ):
        mesh = gltf.import_mesh(os.path.join(
            SAMPLE_DIR,
            'Avocado.gltf'
        ))
        assert torch.equal(mesh.vertices, expected_vertices)
        assert torch.equal(mesh.faces, expected_faces)
        assert torch.equal(mesh.uvs, expected_uvs)
        assert torch.equal(mesh.face_uvs_idx, expected_face_uvs_idx)
        assert torch.equal(mesh.materials[0].diffuse_texture,
                           expected_diffuse_texture)
        assert torch.equal(mesh.materials[0].roughness_texture,
                           expected_roughness_texture)
        assert torch.equal(mesh.materials[0].metallic_texture,
                           expected_metallic_texture)
        assert torch.equal(mesh.materials[0].normals_texture,
                           expected_normals_texture)
        assert torch.equal(mesh.material_assignments,
                           expected_material_assignments)

tests/python/kaolin/io/test_materials.py (new file, +373)
@@ -0,0 +1,373 @@
# Copyright (c) 2019-2023 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import os
import shutil

import torch
import pytest

from kaolin.io import materials as kal_materials
from kaolin.io import usd, obj
from kaolin.utils.testing import contained_torch_equal

_misc_attributes = [
    'diffuse_colorspace',
    'roughness_colorspace',
    'metallic_colorspace',
    'clearcoat_colorspace',
    'clearcoat_roughness_colorspace',
    'opacity_colorspace',
    'ior_colorspace',
    'specular_colorspace',
    'normals_colorspace',
    'displacement_colorspace',
    'is_specular_workflow'
]

_value_attributes = [
    'diffuse_color',
    'roughness_value',
    'metallic_value',
    'clearcoat_value',
    'clearcoat_roughness_value',
    'opacity_value',
    'opacity_threshold',
    'ior_value',
    'specular_color',
    'displacement_value',
]

_texture_attributes = [
    'diffuse_texture',
    'roughness_texture',
    'metallic_texture',
    'clearcoat_texture',
    'clearcoat_roughness_texture',
    'opacity_texture',
    'ior_texture',
    'specular_texture',
    'displacement_texture'
]

# Seed for texture sampling
# TODO(cfujitsang): This might fix the seed for the whole pytest run.
torch.random.manual_seed(0)


@pytest.fixture(scope='class')
def out_dir():
    # Create temporary output directory
    out_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '_out')
    os.makedirs(out_dir, exist_ok=True)
    yield out_dir
    shutil.rmtree(out_dir)


@pytest.fixture(scope='module')
def material_values():
    params = {
        'diffuse_color': (0., 1., 0.),
        'roughness_value': 0.1,
        'metallic_value': 1.,
        'specular_color': (1., 0., 0.),
        'is_specular_workflow': True,
    }
    yield params


@pytest.fixture(scope='module')
def material_textures():
    params = {
        'diffuse_texture': torch.rand((3, 256, 256)),
        'roughness_texture': torch.rand((1, 256, 256)),
        'metallic_texture': torch.rand((1, 256, 256)),
        'normals_texture': torch.rand((1, 256, 256)),
        'specular_texture': torch.rand((3, 256, 256)),
    }
    yield params


@pytest.fixture(scope='module')
def mesh():
    cur_dir = os.path.dirname(os.path.realpath(__file__))
    obj_mesh = obj.import_mesh(os.path.join(cur_dir, os.pardir, os.pardir,
                                            os.pardir, 'samples/rocket.obj'),
                               with_normals=True, with_materials=True,
                               error_handler=obj.skip_error_handler)
    return obj_mesh


class TestPBRMaterial:
    def test_separate_texture_path(self, out_dir, material_values):
        file_path = os.path.join(out_dir, 'pbr_test.usda')
        mat = kal_materials.PBRMaterial(**material_values)
        mat.write_to_usd(file_path, '/World/Looks/pbr', texture_dir='texture')

        material_in = kal_materials.PBRMaterial().read_from_usd(
            file_path, '/World/Looks/pbr', texture_path='texture')

        assert mat.diffuse_color == pytest.approx(material_in.diffuse_color, 0.1)
        assert mat.roughness_value == pytest.approx(material_in.roughness_value, 0.1)
        assert mat.metallic_value == pytest.approx(material_in.metallic_value, 0.1)
        assert mat.specular_color == pytest.approx(material_in.specular_color, 0.1)
        assert mat.is_specular_workflow == material_in.is_specular_workflow

    def test_cycle_values(self, out_dir, material_values):
        file_path = os.path.join(out_dir, 'pbr_test.usda')
        mat = kal_materials.PBRMaterial(**material_values)
        mat.write_to_usd(file_path, '/World/Looks/pbr')

        material_in = kal_materials.PBRMaterial().read_from_usd(file_path, '/World/Looks/pbr')

        assert mat.diffuse_color == pytest.approx(material_in.diffuse_color, 0.1)
        assert mat.roughness_value == pytest.approx(material_in.roughness_value, 0.1)
        assert mat.metallic_value == pytest.approx(material_in.metallic_value, 0.1)
        assert mat.specular_color == pytest.approx(material_in.specular_color, 0.1)
        assert mat.is_specular_workflow == material_in.is_specular_workflow

    def test_cycle_textures(self, out_dir, material_textures):
        """Cycle test for textures. This conversion is lossy!"""
        file_path = os.path.join(out_dir, 'pbr_tex_test.usda')
        mat = kal_materials.PBRMaterial(**material_textures)
        mat.write_to_usd(file_path, '/World/Looks/pbr')

        material_in = kal_materials.PBRMaterial().read_from_usd(file_path, '/World/Looks/pbr')
        assert torch.allclose(mat.diffuse_texture, material_in.diffuse_texture, atol=1e-2)
        assert torch.allclose(mat.roughness_texture, material_in.roughness_texture, atol=1e-2)
        assert torch.allclose(mat.metallic_texture, material_in.metallic_texture, atol=1e-2)
        assert torch.allclose(mat.normals_texture, material_in.normals_texture, atol=1e-2)
        assert torch.allclose(mat.specular_texture, material_in.specular_texture, atol=1e-2)
        assert mat.is_specular_workflow == material_in.is_specular_workflow

    def test_material_values(self, out_dir):
        out_path = os.path.join(out_dir, 'pbr_material_values.usda')
        stage = usd.create_stage(out_path)

        tests = {
            'Default': {},
            'Diffuse': {'diffuse_color': (0., 1., 0.)},
            'SpecularRoughness': {
                'diffuse_color': (1., 0., 0.),
                'roughness_value': 0.1,
                'specular_color': (0., 0., 1.),
                'is_specular_workflow': True
            },
            'Metallic': {
                'diffuse_color': (0., 1., 0.),
                'metallic_value': 1.,
                'is_specular_workflow': False
            },
            'Clearcoat': {'clearcoat_value': 1.},
            'ClearcoatRougness': {'clearcoat_roughness_value': 1.},
            'Opacity': {'opacity_value': 0.5},
            'OpacityThreshold': {'opacity_threshold': 0.5},
            'Ior': {'ior_value': 1.},
            'Displacement': {'displacement_value': 0.1},
        }
        for test_name, params in tests.items():
            prim = stage.DefinePrim(f'/World/{test_name}', 'Sphere')
            mat = kal_materials.PBRMaterial(**params)
            mat.write_to_usd(out_path, f'/World/Looks/{test_name}', bound_prims=[prim])
        stage.Save()

        # Confirm exported USD matches golden file
        # TODO(jlafleche) Render the two meshes for visual comparison
        golden = os.path.join(out_dir, os.pardir, os.pardir, os.pardir,
                              os.pardir, 'samples/golden/pbr_material_values.usda')
        assert open(golden).read() == open(out_path).read()

    def test_material_textures(self, out_dir, mesh):
        def _create_checkerboard(val1, val2):
            channels = len(val1)
            checkerboard = torch.ones((channels, 2, 2)) * torch.tensor(val1)[:, None, None]
            checkerboard[:, 0, 0] = torch.tensor(val2)
            checkerboard[:, 1, 1] = torch.tensor(val2)
            checkerboard = torch.nn.functional.interpolate(checkerboard[None, ...], scale_factor=128)[0]
            return checkerboard

        out_path = os.path.join(out_dir, 'pbr_material_textures.usda')
        stage = usd.create_stage(out_path)

        tests = {
            'Default': {},
            'Diffuse': {'diffuse_texture': _create_checkerboard((0., 1., 0.), (0., 0., 1.)),
                        'diffuse_colorspace': 'sRGB'},
            'Roughness': {'roughness_texture': _create_checkerboard((0.1,), (0.9,)),
                          'roughness_colorspace': 'raw'},
            'Metallic': {'metallic_texture': _create_checkerboard((0.1,), (0.9,)),
                         'metallic_colorspace': 'raw'},
            'Clearcoat': {'clearcoat_texture': _create_checkerboard((0.01,), (0.9,)),
                          'metallic_colorspace': 'raw'},
            'ClearcoatRoughness': {'clearcoat_roughness_texture': _create_checkerboard((0.1,), (0.9,)),
                                   'metallic_colorspace': 'raw'},
            'Opacity': {'opacity_texture': _create_checkerboard((0.1,), (0.9,)),
                        'opacity_threshold': 0.5, 'opacity_colorspace': 'raw'},
            'Ior': {'ior_texture': _create_checkerboard((0.1,), (0.9,)), 'ior_colorspace': 'raw'},
            'Normal': {'normals_texture': _create_checkerboard((0., 0., 1.,), (0., 0.5, 0.5)),
                       'normals_colorspace': 'raw'},
            'Specular': {'specular_texture': _create_checkerboard((1., 0., 0.), (0., 0., 1.)),
                         'is_specular_workflow': True, 'specular_colorspace': 'raw'},
            'Displacement': {'displacement_texture': _create_checkerboard((0.1,), (0.9,)),
                             'displacement_colorspace': 'raw'},
        }

        for test_name, params in tests.items():
            mat = kal_materials.PBRMaterial(**params)
            prim = usd.add_mesh(stage, f'/World/{test_name}', mesh.vertices, mesh.faces,
                                uvs=mesh.uvs,
                                face_uvs_idx=mesh.face_uvs_idx,
                                face_normals=mesh.normals[mesh.face_normals_idx].view(-1, 3))
            mat.write_to_usd(out_path, f'/World/Looks/{test_name}', bound_prims=[prim])
        stage.Save()

        # Confirm exported USD matches golden file
        # TODO(jlafleche) Render the two meshes for visual comparison
        golden = os.path.join(out_dir, os.pardir, os.pardir, os.pardir,
                              os.pardir, 'samples/golden/pbr_material_textures.usda')
        assert open(golden).read() == open(out_path).read()

    def test_colorspace(self, out_dir, mesh):
        out_path = os.path.join(out_dir, 'colorspace_auto.usda')
        stage = usd.create_stage(out_path)

        def _create_checkerboard(val1, val2):
            channels = len(val1)
            checkerboard = torch.ones((channels, 2, 2)) * torch.tensor(val1)[:, None, None]
            checkerboard[:, 0, 0] = torch.tensor(val2)
            checkerboard[:, 1, 1] = torch.tensor(val2)
            checkerboard = torch.nn.functional.interpolate(checkerboard[None, ...], scale_factor=128)[0]
            return checkerboard

        single_channel_texture = _create_checkerboard((0.2,), (0.8,))
        rgb_texture = _create_checkerboard((0., 0.4, 0.), (0., 0., 0.4))

        texture = {'metallic_texture': single_channel_texture, 'metallic_colorspace': 'auto',
                   'roughness_texture': single_channel_texture, 'roughness_colorspace': 'raw',
                   'diffuse_texture': rgb_texture, 'diffuse_colorspace': 'sRGB'}
        material = kal_materials.PBRMaterial(**texture)

        prim = usd.add_mesh(stage, '/World/colorspace_test', mesh.vertices, mesh.faces,
                            uvs=mesh.uvs,
                            face_uvs_idx=mesh.face_uvs_idx,
                            face_normals=mesh.normals[mesh.face_normals_idx].view(-1, 3))
        material.write_to_usd(out_path, '/World/Looks/colorspace_test', bound_prims=[prim])

        material_in = kal_materials.PBRMaterial().read_from_usd(out_path, '/World/Looks/colorspace_test')

        assert material_in.diffuse_colorspace == 'sRGB'
        assert material_in.metallic_colorspace == 'auto'
        assert material_in.roughness_colorspace == 'raw'

    @pytest.mark.parametrize('device', [None, 'cuda:0'])
    @pytest.mark.parametrize('non_blocking', [False, True])
    def test_cuda(self, material_values, material_textures, device, non_blocking):
        mat = kal_materials.PBRMaterial(**material_values, **material_textures)
        cuda_mat = mat.cuda(device=device, non_blocking=non_blocking)
        for param_name in _value_attributes + _texture_attributes:
            val = getattr(mat, param_name)
            cuda_val = getattr(cuda_mat, param_name)
            if val is None:
                assert cuda_val is None
            else:
                assert torch.equal(cuda_val, val.cuda())
                assert not val.is_cuda

        for param_name in _misc_attributes:
            assert getattr(mat, param_name) == getattr(cuda_mat, param_name)

    def test_cpu(self, material_values, material_textures):
        mat = kal_materials.PBRMaterial(**material_values, **material_textures)
        # see test_cuda for guarantee that this is reliable
        cuda_mat = mat.cuda()
        cpu_mat = cuda_mat.cpu()
        for param_name in _value_attributes + _texture_attributes:
            cpu_val = getattr(cpu_mat, param_name)
            cuda_val = getattr(cuda_mat, param_name)
            if cuda_val is None:
                assert cpu_val is None
            else:
                assert torch.equal(cpu_val, cuda_val.cpu())
                assert cuda_val.is_cuda

        for param_name in _misc_attributes:
            assert getattr(cpu_mat, param_name) == getattr(cuda_mat, param_name)

    def test_contiguous(self, material_values, material_textures):
        strided_material_textures = {
            k: torch.as_strided(v, (v.shape[0], int(v.shape[1] / 2), int(v.shape[2])), (1, 2, 2))
            for k, v in material_textures.items()
        }
        mat = kal_materials.PBRMaterial(**material_values, **strided_material_textures)
        contiguous_mat = mat.contiguous()
        for param_name in _texture_attributes:
            val = getattr(mat, param_name)
            contiguous_val = getattr(contiguous_mat, param_name)
            if val is None:
                assert contiguous_val is None
            else:
                assert torch.equal(contiguous_val, val.contiguous())
                assert not val.is_contiguous()

        for param_name in _value_attributes:
            val = getattr(mat, param_name)
            contiguous_val = getattr(contiguous_mat, param_name)
            if val is None:
                assert contiguous_val is None
            else:
                assert torch.equal(val, contiguous_val)

        for param_name in _misc_attributes:
            assert getattr(mat, param_name) == getattr(contiguous_mat, param_name)


class TestUtilities:
    @pytest.mark.parametrize('any_error_handler', [obj.skip_error_handler, obj.ignore_error_handler,
                                                   obj.create_missing_materials_error_handler,
                                                   obj.default_error_handler])
    @pytest.mark.parametrize('material_assignments_shape', [1, 2])  # face indices, or start,end ranges
    def test_process_materials_and_assignments(self, any_error_handler, material_assignments_shape):
        materials_dict = {
            'bricks': {'Ka': torch.rand((3,)), 'Kd': torch.rand((3,)), 'material_name': 'bricks'},
            'grass': {'Ka': torch.rand((3,)), 'Kd': torch.rand((3,)), 'material_name': 'grass'}}
        if material_assignments_shape == 2:
            material_assignments_dict = {  # Using start,end ranges
                'bricks': torch.LongTensor([[0, 10], [15, 20]]),
                'grass': torch.LongTensor([[10, 15], [21, 22], [25, 30]])}
        else:
            material_assignments_dict = {  # Equivalent to above, but using full list of faces
                'bricks': torch.LongTensor(list(range(0, 10)) + list(range(15, 20))),
                'grass': torch.LongTensor(list(range(10, 15)) + list(range(21, 22)) + list(range(25, 30)))}
        path = 'path'
        num_faces = 30
        expected_materials = [materials_dict['bricks'], materials_dict['grass']]
        expected_assignments = torch.ShortTensor(
            [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, -1, 1, -1, -1, -1, 1, 1, 1, 1, 1])

        # This should succeed with any error handler
        materials, material_assignments = kal_materials.process_materials_and_assignments(
            materials_dict, material_assignments_dict, any_error_handler, num_faces)
        assert contained_torch_equal(materials, expected_materials)
        assert torch.equal(material_assignments, expected_assignments)

        # Now let's add an assignment to a non-existent material
        material_assignments_dict['kitties'] = torch.LongTensor([[22, 25]])
        if any_error_handler == obj.default_error_handler:
            with pytest.raises(obj.MaterialNotFoundError):
                materials, material_assignments = kal_materials.process_materials_and_assignments(
                    materials_dict, material_assignments_dict, any_error_handler, num_faces,
                    error_context_str=path)
        elif any_error_handler in [obj.skip_error_handler, obj.ignore_error_handler]:
            # Ignore extra assignment
            materials, material_assignments = kal_materials.process_materials_and_assignments(
                materials_dict, material_assignments_dict, any_error_handler, num_faces,
                error_context_str=path)
            assert contained_torch_equal(materials, expected_materials)
            assert torch.equal(material_assignments, expected_assignments)
        elif any_error_handler == obj.create_missing_materials_error_handler:
            expected_assignments[22:25] = 2
            materials, material_assignments = kal_materials.process_materials_and_assignments(
                materials_dict, material_assignments_dict, any_error_handler, num_faces)
            assert [m['material_name'] for m in materials] == ['bricks', 'grass', 'kitties']
            assert contained_torch_equal(materials[:2], expected_materials)
110
tests/python/kaolin/io/test_modelnet.py
Normal file
@@ -0,0 +1,110 @@
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pathlib import Path

import os
import copy

import pytest
import torch

from kaolin.io.dataset import KaolinDatasetItem
from kaolin.io.off import return_type
from kaolin.io.modelnet import ModelNet

MODELNET_PATH = os.getenv('KAOLIN_TEST_MODELNET_PATH')
MODELNET_TEST_CATEGORY_LABELS = ['bathtub']
MODELNET_TEST_CATEGORY_LABELS_2 = ['desk']
MODELNET_TEST_CATEGORY_LABELS_MULTI = ['bathtub', 'desk']

ALL_CATEGORIES = [
    MODELNET_TEST_CATEGORY_LABELS,
    MODELNET_TEST_CATEGORY_LABELS_2,
    MODELNET_TEST_CATEGORY_LABELS_MULTI,
]


# Skip test in a CI environment
@pytest.mark.skipif(MODELNET_PATH is None,
                    reason="'KAOLIN_TEST_MODELNET_PATH' environment variable is not set.")
@pytest.mark.parametrize('categories', ALL_CATEGORIES)
@pytest.mark.parametrize('split', ['train', 'test'])
@pytest.mark.parametrize('index', [0, -1])
@pytest.mark.parametrize('use_transform', [True, False])
@pytest.mark.parametrize('output_dict', [True, False])
class TestModelNet(object):

    @pytest.fixture(autouse=True)
    def transform(self, output_dict, use_transform):
        if use_transform:
            if output_dict:
                def transform(inputs):
                    outputs = copy.copy(inputs)
                    outputs['mesh'] = return_type(
                        vertices=outputs['mesh'].vertices + 1.,
                        faces=outputs['mesh'].faces,
                        face_colors=outputs['mesh'].face_colors
                    )
                    return outputs
                return transform
            else:
                def transform(inputs):
                    outputs = KaolinDatasetItem(
                        data=return_type(
                            vertices=inputs.data.vertices + 1.,
                            faces=inputs.data.faces,
                            face_colors=inputs.data.face_colors
                        ),
                        attributes=inputs.attributes)
                    return outputs
                return transform
        else:
            return None

    @pytest.fixture(autouse=True)
    def modelnet_dataset(self, categories, split, transform, output_dict):
        return ModelNet(root=MODELNET_PATH,
                        categories=categories,
                        split=split,
                        transform=transform,
                        output_dict=output_dict)

    def test_basic_getitem(self, modelnet_dataset, index, output_dict):
        assert len(modelnet_dataset) > 0

        if index == -1:
            index = len(modelnet_dataset) - 1

        item = modelnet_dataset[index]
        if output_dict:
            data = item['mesh']
            attributes = item
        else:
            data = item.data
            attributes = item.attributes
        assert isinstance(data, return_type)
        assert isinstance(attributes, dict)

        assert isinstance(data.vertices, torch.Tensor)
        assert len(data.vertices.shape) == 2
        assert data.vertices.shape[1] == 3
        assert isinstance(data.faces, torch.Tensor)
        assert len(data.faces.shape) == 2

        assert isinstance(attributes['name'], str)
        assert isinstance(attributes['path'], Path)
        assert isinstance(attributes['label'], str)

        assert isinstance(data.face_colors, torch.Tensor)

412
tests/python/kaolin/io/test_obj.py
Normal file
@@ -0,0 +1,412 @@
# Copyright (c) 2019,20-22, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import pytest

import torch

from kaolin.io import utils
from kaolin.io import obj
from kaolin.utils.testing import print_namedtuple_attributes, print_dict_attributes, \
    check_tensor_attribute_shapes, contained_torch_equal, check_allclose

ROOT_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir, os.pardir, os.pardir, 'samples')
SIMPLE_DIR = os.path.join(ROOT_DIR, 'simple_obj/')


def io_data_path(fname):
    """Return path relative to tests/samples/io."""
    return os.path.join(ROOT_DIR, 'io', fname)


# TODO(cfujitsang): Add sanity test over a dataset like ShapeNet

def _get_mtl_names(mtl_path):
    names = []
    with open(mtl_path, 'r') as f:
        for line in f.readlines():
            data = line.split()
            if len(data) == 0:
                continue
            if data[0] == 'newmtl':
                names.append(data[1])
    names.sort()
    return names


class TestLoadObj:
    @pytest.fixture(autouse=True)
    def expected_vertices(self):
        return torch.FloatTensor([
            [-0.1, -0.1, -0.1],
            [0.1, -0.1, -0.1],
            [-0.1, 0.1, -0.1],
            [0.1, 0.1, -0.1],
            [-0.1, -0.1, 0.1],
            [0.1, -0.1, 0.1]
        ])

    @pytest.fixture(autouse=True)
    def expected_faces(self):
        return torch.LongTensor([
            [0, 1, 3, 2],
            [1, 0, 4, 5]
        ])

    @pytest.fixture(autouse=True)
    def expected_faces_triangulated(self):
        return torch.LongTensor([
            [0, 1, 3], [0, 3, 2],
            [1, 0, 4], [1, 4, 5]
        ])

    @pytest.fixture(autouse=True)
    def expected_faces_heterogeneous(self):
        return torch.LongTensor([
            [0, 1, 3],
            [0, 3, 2],
            [1, 0, 4]
        ])

    @pytest.fixture(autouse=True)
    def expected_uvs(self):
        return torch.Tensor([
            [0.0, 0.0],
            [0.0, 1.0],
            [1.0, 0.0],
            [1.0, 1.0]
        ])

    @pytest.fixture(autouse=True)
    def expected_face_uvs_idx(self):
        return torch.LongTensor([
            [0, 1, 3, 2],
            [3, 1, 0, 2]
        ])

    @pytest.fixture(autouse=True)
    def expected_face_uvs_idx_triangulated(self):
        return torch.LongTensor([
            [0, 1, 3], [0, 3, 2],
            [3, 1, 0], [3, 0, 2]
        ])

    @pytest.fixture(autouse=True)
    def expected_face_uvs_idx_heterogeneous(self):
        return torch.LongTensor([
            [0, 1, 3],
            [0, 3, 2],
            [3, 1, 0]
        ])

    @pytest.fixture(autouse=True)
    def expected_normals(self):
        return torch.FloatTensor([
            [0., 0., -1.],
            [0., -1., 0.],
            [-0.333334, -0.333333, -0.333333],
            [0.333334, -0.333333, -0.333333]
        ])

    @pytest.fixture(autouse=True)
    def expected_face_normals_idx(self):
        return torch.LongTensor([
            [2, 3, 0, 0],
            [3, 2, 1, 1]
        ])

    @pytest.fixture(autouse=True)
    def expected_face_normals_idx_triangulated(self):
        return torch.LongTensor([
            [2, 3, 0], [2, 0, 0],
            [3, 2, 1], [3, 1, 1]
        ])

    @pytest.fixture(autouse=True)
    def expected_face_normals_idx_heterogeneous(self):
        return torch.LongTensor([
            [2, 3, 0],
            [2, 0, 0],
            [3, 2, 1]
        ])

    @pytest.fixture(autouse=True)
    def expected_materials(self):
        return [
            {'material_name': 'Material.001',
             'Ka': torch.tensor([0.5, 0.6, 0.7]),
             'Kd': torch.tensor([0.4, 0.3, 0.2]),
             'Ks': torch.tensor([0.1, 0.3, 0.5]),
             'map_Ka': torch.ByteTensor([[[102, 127, 178], [153, 178, 178]],
                                         [[127, 127, 153], [127, 178, 204]]]),
             'map_Kd': torch.ByteTensor([[[102, 102, 76], [102, 51, 25]],
                                         [[76, 76, 51], [127, 76, 51]]])
             },
            {'material_name': 'Material.002',
             'Ka': torch.tensor([0.7, 0.7, 0.7]),
             'Kd': torch.tensor([0.2, 0.2, 0.2]),
             'Ks': torch.tensor([0.1, 0.1, 0.1]),
             'map_Ka': torch.ByteTensor([[[178, 178, 153], [178, 178, 102]],
                                         [[178, 178, 204], [178, 178, 229]]]),
             'map_Kd': torch.ByteTensor([[[51, 51, 51], [51, 51, 51]],
                                         [[51, 51, 51], [51, 51, 51]]]),
             'map_Ks': torch.ByteTensor([[[0, 0, 25], [0, 0, 25]],
                                         [[51, 51, 25], [51, 51, 25]]])
             },
            {'material_name': 'Material.003',
             'Ka': torch.tensor([0., 0., 0.]),
             'Kd': torch.tensor([0., 0., 0.]),
             'Ks': torch.tensor([0., 0., 0.])
             }
        ]

    @pytest.fixture(autouse=True)
    def expected_material_assignments_heterogeneous(self):
        return torch.ShortTensor([0, 0, 1])

    @pytest.fixture(autouse=True)
    def expected_material_assignments(self):
        return torch.ShortTensor([0, 1])

    @pytest.fixture(autouse=True)
    def expected_material_assignments_triangulated(self):
        return torch.ShortTensor([0, 0, 1, 1])

    @pytest.mark.parametrize('with_normals', [False, True])
    @pytest.mark.parametrize('with_materials', [False, True])
    def test_import_mesh(self, with_normals, with_materials,
                         expected_vertices, expected_faces, expected_uvs, expected_face_uvs_idx, expected_normals,
                         expected_face_normals_idx, expected_materials, expected_material_assignments):
        outputs = obj.import_mesh(os.path.join(SIMPLE_DIR, 'model.obj'),
                                  with_materials=with_materials, with_normals=with_normals,
                                  error_handler=obj.skip_error_handler)
        assert torch.equal(outputs.vertices, expected_vertices)
        assert torch.equal(outputs.faces, expected_faces)
        if with_materials:
            assert torch.allclose(outputs.uvs, expected_uvs)
            assert torch.equal(outputs.face_uvs_idx, expected_face_uvs_idx)
            assert contained_torch_equal(outputs.materials, expected_materials, approximate=True)
            assert contained_torch_equal(outputs.material_assignments, expected_material_assignments)
        else:
            assert outputs.materials is None
            assert outputs.material_assignments is None
        if with_normals:
            assert torch.equal(outputs.normals, expected_normals)
            assert torch.equal(outputs.face_normals_idx, expected_face_normals_idx)
        else:
            assert outputs.normals is None
            assert outputs.face_normals_idx is None

    @pytest.mark.parametrize('with_normals', [False, True])
    @pytest.mark.parametrize('with_materials', [False, True])
    def test_import_mesh_triangulate(self, with_normals, with_materials,
                                     expected_vertices, expected_faces_triangulated,
                                     expected_uvs, expected_face_uvs_idx_triangulated,
                                     expected_normals, expected_face_normals_idx_triangulated, expected_materials,
                                     expected_material_assignments_triangulated):
        outputs = obj.import_mesh(os.path.join(SIMPLE_DIR, 'model.obj'),
                                  with_materials=with_materials, with_normals=with_normals,
                                  error_handler=obj.skip_error_handler,
                                  triangulate=True)
        # TODO: might want to write a function for this if/else testing block; it's repeated everywhere
        assert torch.equal(outputs.vertices, expected_vertices)
        assert torch.equal(outputs.faces, expected_faces_triangulated)
        if with_materials:
            assert torch.allclose(outputs.uvs, expected_uvs)
            assert torch.equal(outputs.face_uvs_idx, expected_face_uvs_idx_triangulated)
            assert contained_torch_equal(outputs.materials, expected_materials, approximate=True)
            assert contained_torch_equal(outputs.material_assignments, expected_material_assignments_triangulated)
        else:
            assert outputs.materials is None
            assert outputs.material_assignments is None

        if with_normals:
            assert torch.equal(outputs.normals, expected_normals)
            assert torch.equal(outputs.face_normals_idx, expected_face_normals_idx_triangulated)
        else:
            assert outputs.normals is None
            assert outputs.face_normals_idx is None

    @pytest.mark.parametrize('with_normals', [False, True])
    def test_error_import_mesh(self, with_normals):
        with pytest.raises(obj.MaterialLoadError):
            outputs = obj.import_mesh(os.path.join(SIMPLE_DIR, 'model.obj'),
                                      with_materials=True, with_normals=with_normals,
                                      error_handler=obj.default_error_handler)

    @pytest.mark.parametrize('with_normals', [False, True])
    def test_warn_import_mesh(self, with_normals):
        with pytest.warns(UserWarning):
            outputs = obj.import_mesh(os.path.join(SIMPLE_DIR, "model.obj"),
                                      with_materials=True, with_normals=with_normals,
                                      error_handler=obj.skip_error_handler)

    @pytest.mark.parametrize('use_triangulate_shortcut', [True, False])
    @pytest.mark.parametrize('with_normals', [False, True])
    @pytest.mark.parametrize('with_materials', [False, True])
    def test_import_mesh_heterogeneous(self, with_normals, with_materials, expected_vertices,
                                       expected_faces_heterogeneous, expected_face_uvs_idx_heterogeneous,
                                       expected_uvs, expected_materials, expected_material_assignments_heterogeneous,
                                       expected_normals, expected_face_normals_idx_heterogeneous,
                                       use_triangulate_shortcut):
        if use_triangulate_shortcut:
            kwargs = {'triangulate': True}
        else:
            kwargs = {'heterogeneous_mesh_handler': utils.mesh_handler_naive_triangulate}
        outputs = obj.import_mesh(os.path.join(SIMPLE_DIR, 'model_heterogeneous.obj'),
                                  with_materials=with_materials, with_normals=with_normals,
                                  error_handler=obj.skip_error_handler, **kwargs)
        assert torch.equal(outputs.vertices, expected_vertices)
        assert torch.equal(outputs.faces, expected_faces_heterogeneous)

        if with_materials:
            assert torch.allclose(outputs.uvs, expected_uvs)
            assert torch.equal(outputs.face_uvs_idx, expected_face_uvs_idx_heterogeneous)
            assert contained_torch_equal(outputs.materials, expected_materials, approximate=True)
            assert contained_torch_equal(outputs.material_assignments, expected_material_assignments_heterogeneous)
        else:
            assert outputs.materials is None
            assert outputs.material_assignments is None

        if with_normals:
            assert torch.equal(outputs.normals, expected_normals)
            assert torch.equal(outputs.face_normals_idx, expected_face_normals_idx_heterogeneous)
        else:
            assert outputs.normals is None
            assert outputs.face_normals_idx is None

    @pytest.mark.parametrize('triangulate', [False, True])
    def test_import_mesh_heterogeneous_skip(self, triangulate):
        outputs = obj.import_mesh(os.path.join(SIMPLE_DIR, 'model_heterogeneous.obj'),
                                  with_materials=True, with_normals=True,
                                  error_handler=obj.skip_error_handler,
                                  triangulate=triangulate,
                                  heterogeneous_mesh_handler=utils.heterogeneous_mesh_handler_skip)
        assert outputs is None

    @pytest.mark.parametrize('triangulate', [False, True])
    def test_import_mesh_heterogeneous_fail(self, triangulate):
        """Test that import fails when importing heterogeneous mesh without handler"""
        if not triangulate:
            with pytest.raises(utils.NonHomogeneousMeshError):
                obj.import_mesh(os.path.join(SIMPLE_DIR, 'model_heterogeneous.obj'),
                                with_materials=True, with_normals=True,
                                error_handler=obj.skip_error_handler,
                                triangulate=triangulate)
        else:
            obj.import_mesh(os.path.join(SIMPLE_DIR, 'model_heterogeneous.obj'),
                            with_materials=True, with_normals=True,
                            error_handler=obj.skip_error_handler,
                            triangulate=triangulate)

    @pytest.fixture(autouse=True)
    def expected_large_values(self):
        num_vertices = 0
        num_faces = 0
        num_uvs = 0
        num_normals = 0
        num_face_groups = 0
        material_names = []

        # Process core attributes and materials
        with open(os.path.join(ROOT_DIR, "model.obj")) as f:
            for line in f.readlines():
                data = line.split()
                if len(data) == 0:
                    continue
                if data[0] == 'f':
                    num_faces += 1
                elif data[0] == 'v':
                    num_vertices += 1
                elif data[0] == 'vt':
                    num_uvs += 1
                elif data[0] == 'vn':
                    num_normals += 1
                elif data[0] == 'usemtl':
                    num_face_groups += 1
                elif data[0] == 'mtllib':
                    material_names.extend(_get_mtl_names(os.path.join(ROOT_DIR, data[1])))

        material_names.sort()
        num_materials = len(material_names)

        # Process material assignments in alphabetical order
        active_mtl = None
        face_idx = 0
        material_assignments = torch.zeros((num_faces,)).short() - 1
        with open(os.path.join(ROOT_DIR, "model.obj")) as f:
            for line in f.readlines():
                data = line.split()
                if len(data) == 0:
                    continue
                if data[0] == 'f':
                    if active_mtl is not None:
                        material_assignments[face_idx] = active_mtl
                    face_idx += 1
                elif data[0] == 'usemtl':
                    active_mtl = material_names.index(data[1])

        return {'num_vertices': num_vertices,
                'num_faces': num_faces,
                'num_uvs': num_uvs,
                'num_normals': num_normals,
                'num_materials': num_materials,
                'num_face_groups': num_face_groups,
                'material_assignments': material_assignments}

    @pytest.mark.parametrize('with_normals', [False, True])
    @pytest.mark.parametrize('with_materials', [False, True])
    def test_large_obj(self, with_materials, with_normals, expected_large_values):
        outputs = obj.import_mesh(os.path.join(ROOT_DIR, "model.obj"),
                                  with_materials=with_materials, with_normals=with_normals)
        assert outputs.vertices.shape == (expected_large_values['num_vertices'], 3)
        assert outputs.faces.shape == (expected_large_values['num_faces'], 3)
        if with_materials:
            assert outputs.uvs.shape == (expected_large_values['num_uvs'], 2)
            assert outputs.face_uvs_idx.shape == (expected_large_values['num_faces'], 3)
            assert len(outputs.materials) == expected_large_values['num_materials']
            assert contained_torch_equal(outputs.material_assignments, expected_large_values['material_assignments'])
        else:
            assert outputs.uvs is None
            assert outputs.face_uvs_idx is None
            assert outputs.materials is None
            assert outputs.material_assignments is None
        if with_normals:
            assert outputs.normals.shape == (expected_large_values['num_normals'], 3)
            assert outputs.face_normals_idx.shape == (expected_large_values['num_faces'], 3)
        else:
            assert outputs.normals is None
            assert outputs.face_normals_idx is None


class TestDiverseInputs:
    @pytest.fixture(scope='class')
    def expected_sizes(self):
        # TODO: compare actual face UVs and normals once consistent between OBJ and USD
        return {'ico_smooth': {'vertices': [42, 3], 'faces': [80, 3], 'normals': [42, 3], 'uvs': [63, 2]},
                'ico_flat': {'vertices': [42, 3], 'faces': [80, 3], 'normals': [80, 3], 'uvs': [63, 2]},
                'fox': {'vertices': [5002, 3], 'faces': [10000, 3], 'normals': [5002, 3], 'uvs': [5505, 2]}}

    @pytest.mark.parametrize('bname', ['ico_smooth', 'ico_flat', 'fox'])
    def test_read_write_read(self, bname, expected_sizes):
        # TODO: also test materials
        fname = io_data_path(f'{bname}.obj')
        read_mesh = obj.import_mesh(fname, with_normals=True, with_materials=True)

        # DEBUG INFORMATION (uncomment when debugging)
        # print_namedtuple_attributes(read_mesh, f'Read OBJ mesh {bname}')
        assert check_tensor_attribute_shapes(read_mesh, **expected_sizes[bname])
61
tests/python/kaolin/io/test_off.py
Normal file
@@ -0,0 +1,61 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import pytest

import torch

from kaolin.io import off

ROOT_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), '../../../samples/')
SIMPLE_DIR = os.path.join(ROOT_DIR, 'simple_off/')


class TestLoadOff:
    @pytest.fixture(autouse=True)
    def expected_vertices(self):
        return torch.FloatTensor([
            [-0.1, -0.1, -0.1],
            [ 0.1, -0.1, -0.1],
            [-0.1,  0.1, -0.1],
            [ 0.1,  0.1, -0.1],
            [-0.1, -0.1,  0.1],
            [ 0.1, -0.1,  0.1]
        ])

    @pytest.fixture(autouse=True)
    def expected_faces(self):
        return torch.LongTensor([
            [1, 2, 4, 3],
            [2, 1, 5, 6]
        ])

    @pytest.fixture(autouse=True)
    def expected_face_colors(self):
        return torch.LongTensor([
            [128, 128, 128],
            [0, 0, 255],
        ])

    @pytest.mark.parametrize('with_face_colors', [False, True])
    def test_import_mesh(self, with_face_colors, expected_vertices, expected_faces,
                         expected_face_colors):
        outputs = off.import_mesh(os.path.join(SIMPLE_DIR, 'model.off'),
                                  with_face_colors=with_face_colors)
        assert torch.equal(outputs.vertices, expected_vertices)
        assert torch.equal(outputs.faces, expected_faces)
        if with_face_colors:
            assert torch.equal(outputs.face_colors, expected_face_colors)
        else:
            assert outputs.face_colors is None
157
tests/python/kaolin/io/test_render.py
Normal file
@@ -0,0 +1,157 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import math
import os
import pytest
import random

import numpy as np
import torch
from PIL import Image

from kaolin.io import render
from kaolin.render.camera import generate_perspective_projection
from kaolin.utils.testing import FLOAT_TYPES

ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
SAMPLE_DIR = os.path.join(ROOT_DIR, os.pardir, os.pardir, os.pardir, 'samples', 'synthetic')


class TestImportView:
    @pytest.fixture(autouse=True)
    def expected_rgb(self):
        path = os.path.join(SAMPLE_DIR, '0_rgb.png')
        return torch.from_numpy(
            np.array(Image.open(path))
        )[:, :, :3].float() / 255.

    @pytest.fixture(autouse=True)
    def expected_depth_linear(self):
        path = os.path.join(SAMPLE_DIR, '0_depth_linear.npy')
        return torch.from_numpy(np.load(path))

    @pytest.fixture(autouse=True)
    def expected_semantic(self):
        path = os.path.join(SAMPLE_DIR, '0_semantic.npy')
        return torch.from_numpy(np.load(path))

    @pytest.fixture(autouse=True)
    def expected_instance(self):
        path = os.path.join(SAMPLE_DIR, '0_instance.npy')
        return torch.from_numpy(np.load(path))

    @pytest.fixture(autouse=True)
    def expected_normals(self):
        path = os.path.join(SAMPLE_DIR, '0_normals.png')
        return torch.from_numpy(
            np.array(Image.open(path))
        )[:, :, :3].float() / 255.

    @pytest.fixture(autouse=True)
    def expected_json(self):
        path = os.path.join(SAMPLE_DIR, '0_metadata.json')
        with open(path, 'r') as f:
            fjson = json.load(f)
        return fjson

    @pytest.fixture(autouse=True)
    def expected_metadata(self, expected_json):
        asset_transforms = torch.FloatTensor(expected_json['asset_transforms'][0][1])
        cam_transform = torch.FloatTensor(expected_json['camera_properties']['tf_mat'])
        aspect_ratio = (expected_json['camera_properties']['resolution']['width'] /
                        expected_json['camera_properties']['resolution']['height'])
        focal_length = expected_json['camera_properties']['focal_length']
        horizontal_aperture = expected_json['camera_properties']['horizontal_aperture']
        fov = 2 * math.atan(horizontal_aperture / (2 * focal_length))
        return {
            'cam_transform': cam_transform[:, :3],
            'asset_transforms': asset_transforms,
            'cam_proj': generate_perspective_projection(fov, aspect_ratio),
            'clipping_range': expected_json['camera_properties']['clipping_range']
        }

    @pytest.fixture(autouse=True)
    def expected_bbox_2d_tight(self, expected_json):
        return expected_json['bbox_2d_tight']

    @pytest.fixture(autouse=True)
    def expected_bbox_2d_loose(self, expected_json):
        return expected_json['bbox_2d_loose']

    @pytest.mark.parametrize('with_rgb', [True, False])
    @pytest.mark.parametrize('with_depth_linear', [True, False])
    @pytest.mark.parametrize('with_semantic', [True, False])
    @pytest.mark.parametrize('with_instance', [True, False])
    @pytest.mark.parametrize('with_normals', [True, False])
    @pytest.mark.parametrize('with_bbox_2d_tight', [True, False])
    @pytest.mark.parametrize('with_bbox_2d_loose', [True, False])
    def test_import_synthetic_view(self, expected_rgb, expected_depth_linear,
                                   expected_semantic, expected_instance,
                                   expected_normals, expected_bbox_2d_tight,
                                   expected_bbox_2d_loose, expected_metadata,
                                   with_rgb, with_depth_linear, with_semantic,
                                   with_instance, with_normals, with_bbox_2d_tight,
                                   with_bbox_2d_loose):
        output = render.import_synthetic_view(SAMPLE_DIR, 0,
                                              rgb=with_rgb,
                                              depth_linear=with_depth_linear,
                                              semantic=with_semantic,
                                              instance=with_instance,
                                              normals=with_normals,
                                              bbox_2d_tight=with_bbox_2d_tight,
                                              bbox_2d_loose=with_bbox_2d_loose)

        if with_rgb:
            assert torch.equal(output['rgb'], expected_rgb)
        else:
            assert 'rgb' not in output

        if with_depth_linear:
            assert torch.equal(output['depth_linear'], expected_depth_linear)
        else:
            assert 'depth_linear' not in output

        if with_semantic:
            assert torch.equal(output['semantic'], expected_semantic)
        else:
            assert 'semantic' not in output

        if with_instance:
            assert torch.equal(output['instance'], expected_instance)
        else:
            assert 'instance' not in output

        if with_normals:
            assert torch.equal(output['normals'], expected_normals)
        else:
            assert 'normals' not in output

        if with_bbox_2d_tight:
            assert output['bbox_2d_tight'] == expected_bbox_2d_tight
        else:
            assert 'bbox_2d_tight' not in output

        if with_bbox_2d_loose:
            assert output['bbox_2d_loose'] == expected_bbox_2d_loose
        else:
            assert 'bbox_2d_loose' not in output

        assert expected_metadata.keys() == output['metadata'].keys()
        assert torch.equal(expected_metadata['cam_transform'],
                           output['metadata']['cam_transform'])
        assert torch.equal(expected_metadata['cam_proj'],
                           output['metadata']['cam_proj'])
        assert (expected_metadata['clipping_range'] ==
                output['metadata']['clipping_range'])
207
tests/python/kaolin/io/test_shapenet.py
Normal file
@@ -0,0 +1,207 @@
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pathlib import Path
import os
import copy

import pytest
import torch
import random

from kaolin.rep import SurfaceMesh
from kaolin.io.dataset import KaolinDatasetItem
from kaolin.io import shapenet
from kaolin.utils.testing import contained_torch_equal

SHAPENETV1_PATH = os.getenv('KAOLIN_TEST_SHAPENETV1_PATH')
SHAPENETV2_PATH = os.getenv('KAOLIN_TEST_SHAPENETV2_PATH')
SHAPENET_TEST_CATEGORY_SYNSETS = ['02933112']
SHAPENET_TEST_CATEGORY_LABELS = ['dishwasher']
SHAPENET_TEST_CATEGORY_MULTI = ['mailbox', '04379243']

ALL_CATEGORIES = [
    SHAPENET_TEST_CATEGORY_SYNSETS,
    SHAPENET_TEST_CATEGORY_LABELS,
    SHAPENET_TEST_CATEGORY_MULTI
]


@pytest.mark.parametrize('version', [
    pytest.param('v1', marks=pytest.mark.skipif(
        SHAPENETV1_PATH is None,
        reason="'KAOLIN_TEST_SHAPENETV1_PATH' environment variable is not set."
    )),
    pytest.param('v2', marks=pytest.mark.skipif(
        SHAPENETV2_PATH is None,
        reason="'KAOLIN_TEST_SHAPENETV2_PATH' environment variable is not set."
    ))
])
@pytest.mark.parametrize('categories', ALL_CATEGORIES)
@pytest.mark.parametrize('with_materials', [True, False])
@pytest.mark.parametrize('output_dict', [True, False])
@pytest.mark.parametrize('use_transform', [True, False])
class TestShapeNet(object):

    @pytest.fixture(autouse=True)
    def transform(self, output_dict, use_transform):
        if use_transform:
            if output_dict:
                def transform(inputs):
                    outputs = copy.copy(inputs)
                    outputs['mesh'] = SurfaceMesh(
                        vertices=outputs['mesh'].vertices + 1.,
                        faces=outputs['mesh'].faces,
                        uvs=outputs['mesh'].uvs,
                        face_uvs_idx=outputs['mesh'].face_uvs_idx,
                        materials=outputs['mesh'].materials,
                        material_assignments=outputs['mesh'].material_assignments,
                        normals=outputs['mesh'].normals,
                        face_normals_idx=outputs['mesh'].face_normals_idx
                    )
                    return outputs
                return transform
            else:
                def transform(inputs):
                    outputs = KaolinDatasetItem(
                        data=SurfaceMesh(
                            vertices=inputs.data.vertices + 1.,
                            faces=inputs.data.faces,
                            uvs=inputs.data.uvs,
                            face_uvs_idx=inputs.data.face_uvs_idx,
                            materials=inputs.data.materials,
                            material_assignments=inputs.data.material_assignments,
                            normals=inputs.data.normals,
                            face_normals_idx=inputs.data.face_normals_idx
                        ),
                        attributes=inputs.attributes)
                    return outputs
                return transform
        else:
            return None

    @pytest.fixture(autouse=True)
    def shapenet_dataset(self, version, categories, with_materials, transform, output_dict):
        if version == 'v1':
            ds = shapenet.ShapeNetV1(root=SHAPENETV1_PATH,
                                     categories=categories,
                                     train=True,
                                     split=0.7,
                                     with_materials=with_materials,
                                     transform=transform,
                                     output_dict=output_dict)
        elif version == 'v2':
            ds = shapenet.ShapeNetV2(root=SHAPENETV2_PATH,
                                     categories=categories,
                                     train=True,
                                     split=0.7,
                                     with_materials=with_materials,
                                     transform=transform,
                                     output_dict=output_dict)
        else:
            raise ValueError(f"version {version} not recognized")
        return ds

    @pytest.mark.parametrize('index', [0, -1, None, None])
    def test_basic_getitem(self, shapenet_dataset, index, with_materials, output_dict):
        assert len(shapenet_dataset) > 0
        if index is None:
            index = random.randint(0, len(shapenet_dataset) - 1)

        item = shapenet_dataset[index]
        if output_dict:
            data = item['mesh']
            attributes = item
        else:
            data = item.data
            attributes = item.attributes
        assert isinstance(data, SurfaceMesh)
        assert isinstance(attributes, dict)

        assert isinstance(data.vertices, torch.Tensor)
        assert len(data.vertices.shape) == 2
        assert data.vertices.shape[0] > 0
        assert data.vertices.shape[1] == 3

        assert isinstance(data.faces, torch.LongTensor)
        assert len(data.faces.shape) == 2
        assert data.faces.shape[0] > 0
        assert data.faces.shape[1] == 3

        if with_materials:
            assert isinstance(data.uvs, torch.Tensor)
            assert len(data.uvs.shape) == 2
            assert data.uvs.shape[1] == 2

            assert isinstance(data.face_uvs_idx, torch.LongTensor)
            assert data.face_uvs_idx.shape == data.faces.shape
            assert isinstance(data.materials, list)
            assert len(data.materials) > 0
            assert isinstance(data.material_assignments, torch.ShortTensor)
            assert list(data.material_assignments.shape) == [data.faces.shape[0]]
        else:
            assert data.uvs is None
            assert data.face_uvs_idx is None
            assert data.materials is None
            assert data.material_assignments is None

        assert isinstance(attributes['name'], str)
        assert isinstance(attributes['path'], Path)
        assert isinstance(attributes['synset'], str)
        assert isinstance(attributes['labels'], list)

    @pytest.mark.parametrize('index', [-1, -2])
    def test_neg_index(self, shapenet_dataset, index, output_dict):
        assert len(shapenet_dataset) > 0

        gt_item = shapenet_dataset[len(shapenet_dataset) + index]
        if output_dict:
            gt_data = gt_item['mesh']
        else:
            gt_data = gt_item.data

        item = shapenet_dataset[index]
        if output_dict:
            data = item['mesh']
        else:
            data = item.data

        # contained_torch_equal returns a bool; assert on it so the check actually runs
        assert contained_torch_equal(item, gt_item)

    def test_test_split(self, shapenet_dataset, with_materials, output_dict,
                        version, categories):
        if version == 'v1':
            test_dataset = shapenet.ShapeNetV1(root=SHAPENETV1_PATH,
                                               categories=categories,
                                               train=False,
                                               split=0.7,
                                               with_materials=with_materials,
                                               output_dict=output_dict)
        else:
            test_dataset = shapenet.ShapeNetV2(root=SHAPENETV2_PATH,
                                               categories=categories,
                                               train=False,
                                               split=0.7,
                                               with_materials=with_materials,
                                               output_dict=output_dict)
        train_item = shapenet_dataset[0]
        test_item = test_dataset[0]
        if output_dict:
            train_attributes = train_item
            test_attributes = test_item
        else:
            train_attributes = train_item.attributes
            test_attributes = test_item.attributes
        assert train_attributes['name'] != test_attributes['name']
        assert test_attributes['name'] not in shapenet_dataset.names
155
tests/python/kaolin/io/test_shrec.py
Normal file
@@ -0,0 +1,155 @@
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pathlib import Path

import os
import copy

import pytest
import torch

from kaolin.rep import SurfaceMesh
from kaolin.io.dataset import KaolinDatasetItem
from kaolin.io.shrec import SHREC16

SHREC16_PATH = os.getenv('KAOLIN_TEST_SHREC16_PATH')
SHREC16_TEST_CATEGORY_SYNSETS = ['02691156']
SHREC16_TEST_CATEGORY_LABELS = ['airplane']
SHREC16_TEST_CATEGORY_SYNSETS_2 = ['02958343']
SHREC16_TEST_CATEGORY_LABELS_2 = ['car']
SHREC16_TEST_CATEGORY_SYNSETS_MULTI = ['02691156', '02958343']
SHREC16_TEST_CATEGORY_LABELS_MULTI = ['airplane', 'car']

ALL_CATEGORIES = [
    SHREC16_TEST_CATEGORY_SYNSETS,
    SHREC16_TEST_CATEGORY_LABELS,
    SHREC16_TEST_CATEGORY_SYNSETS_2,
    SHREC16_TEST_CATEGORY_LABELS_2,
    SHREC16_TEST_CATEGORY_SYNSETS_MULTI,
    SHREC16_TEST_CATEGORY_LABELS_MULTI,
]


# Skip test in a CI environment
@pytest.mark.skipif(SHREC16_PATH is None,
                    reason="'KAOLIN_TEST_SHREC16_PATH' environment variable is not set.")
@pytest.mark.parametrize('categories', ALL_CATEGORIES)
@pytest.mark.parametrize('split', ['train', 'val', 'test'])
@pytest.mark.parametrize('use_transform', [True, False])
@pytest.mark.parametrize('output_dict', [True, False])
class TestSHREC16(object):

    @pytest.fixture(autouse=True)
    def transform(self, output_dict, use_transform):
        if use_transform:
            if output_dict:
                def transform(inputs):
                    outputs = copy.copy(inputs)
                    outputs['mesh'] = SurfaceMesh(
                        vertices=outputs['mesh'].vertices + 1.,
                        faces=outputs['mesh'].faces,
                        uvs=outputs['mesh'].uvs,
                        face_uvs_idx=outputs['mesh'].face_uvs_idx,
                        materials=outputs['mesh'].materials,
                        material_assignments=outputs['mesh'].material_assignments,
                        normals=outputs['mesh'].normals,
                        face_normals_idx=outputs['mesh'].face_normals_idx
                    )
                    return outputs
                return transform
            else:
                def transform(inputs):
                    outputs = KaolinDatasetItem(
                        data=SurfaceMesh(
                            vertices=inputs.data.vertices + 1.,
                            faces=inputs.data.faces,
                            uvs=inputs.data.uvs,
                            face_uvs_idx=inputs.data.face_uvs_idx,
                            materials=inputs.data.materials,
                            material_assignments=inputs.data.material_assignments,
                            normals=inputs.data.normals,
                            face_normals_idx=inputs.data.face_normals_idx
                        ),
                        attributes=inputs.attributes)
                    return outputs
                return transform
        else:
            return None

    @pytest.fixture(autouse=True)
    def shrec16_dataset(self, categories, split, transform, output_dict):
        return SHREC16(root=SHREC16_PATH,
                       categories=categories,
                       split=split,
                       transform=transform,
                       output_dict=output_dict)

    @pytest.mark.parametrize('index', [0, -1])
    def test_basic_getitem(self, shrec16_dataset, index, split, output_dict):
        assert len(shrec16_dataset) > 0

        if index == -1:
            index = len(shrec16_dataset) - 1

        item = shrec16_dataset[index]
        if output_dict:
            data = item['mesh']
            attributes = item
        else:
            data = item.data
            attributes = item.attributes
        assert isinstance(data, SurfaceMesh)
        assert isinstance(attributes, dict)

        assert isinstance(data.vertices, torch.Tensor)
        assert len(data.vertices.shape) == 2
        assert data.vertices.shape[1] == 3
        assert isinstance(data.faces, torch.Tensor)
        assert len(data.faces.shape) == 2

        assert isinstance(attributes['name'], str)
        assert isinstance(attributes['path'], Path)

        if split == "test":
            assert attributes['synset'] is None
            assert attributes['labels'] is None
        else:
            assert isinstance(attributes['synset'], str)
            assert isinstance(attributes['labels'], list)

    @pytest.mark.parametrize('index', [-1, -2])
    def test_neg_index(self, shrec16_dataset, index, output_dict):
        assert len(shrec16_dataset) > 0

        gt_item = shrec16_dataset[len(shrec16_dataset) + index]
        item = shrec16_dataset[index]
        if output_dict:
            data = item['mesh']
            attributes = item
            gt_data = gt_item['mesh']
            gt_attributes = gt_item
        else:
            data = item.data
            attributes = item.attributes
            gt_data = gt_item.data
            gt_attributes = gt_item.attributes

        assert torch.equal(data.vertices, gt_data.vertices)
        assert torch.equal(data.faces, gt_data.faces)

        assert attributes['name'] == gt_attributes['name']
        assert attributes['path'] == gt_attributes['path']
        assert attributes['synset'] == gt_attributes['synset']
95
tests/python/kaolin/io/test_utils.py
Normal file
@@ -0,0 +1,95 @@
# Copyright (c) 2019,20-22, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch

from kaolin.io import utils
from kaolin.utils.testing import contained_torch_equal


class TestUtils:
    @pytest.mark.parametrize(
        'handler', [utils.heterogeneous_mesh_handler_naive_homogenize, utils.mesh_handler_naive_triangulate])
    @pytest.mark.parametrize(
        'face_assignment_mode', [0, 1, 2])
    def test_mesh_handler_naive_triangulate(self, handler, face_assignment_mode):
        N = 15
        vertices = torch.rand((N, 3), dtype=torch.float32)
        face_vertex_counts = torch.LongTensor([3, 4, 5, 3, 6])
        faces = torch.LongTensor(
            [0, 1, 2,                # Face 0 -> 1 face idx [0]
             2, 1, 3, 4,             # Face 1 -> 2 faces idx [1, 2]
             4, 5, 6, 7, 8,          # Face 2 -> 3 faces idx [3, 4, 5]
             3, 4, 6,                # Face 3 -> 1 face idx [6]
             8, 9, 10, 11, 12, 13])  # Face 4 -> 4 faces idx [7, 8, 9, 10]
        expected_faces = torch.LongTensor(
            [[0, 1, 2],
             [2, 1, 3], [2, 3, 4],
             [4, 5, 6], [4, 6, 7], [4, 7, 8],
             [3, 4, 6],
             [8, 9, 10], [8, 10, 11], [8, 11, 12], [8, 12, 13]])
        expected_num_faces = 11
        expected_face_vertex_counts = torch.LongTensor([3 for _ in range(expected_num_faces)])
        face_uvs_idx = torch.LongTensor(
            [0, 1, 2,                  # UVs for face 0
             10, 11, 12, 13,           # UVs for face 1
             20, 21, 22, 23, 24,       # UVs for face 2
             30, 31, 32,               # UVs for face 3
             40, 41, 42, 43, 44, 45])  # UVs for face 4
        expected_face_uvs_idx = torch.LongTensor(
            [[0, 1, 2],
             [10, 11, 12], [10, 12, 13],
             [20, 21, 22], [20, 22, 23], [20, 23, 24],
             [30, 31, 32],
             [40, 41, 42], [40, 42, 43], [40, 43, 44], [40, 44, 45]])

        # assignments to faces
        face_assignments = None
        expected_face_assignments = None
        with_assignments = face_assignment_mode > 0
        if with_assignments:
            if face_assignment_mode == 1:  # 1D tensors of face assignments, replaced with new face indices
                face_assignments = {
                    '1': torch.LongTensor([0, 2]),
                    '2': torch.LongTensor([1, 3, 4])}
                expected_face_assignments = {
                    '1': torch.LongTensor([0, 3, 4, 5]),
                    '2': torch.LongTensor([1, 2, 6, 7, 8, 9, 10])}
            else:  # 2D tensors of start and end face_idx, replaced with new start and end face_idx
                face_assignments = {
                    'cat': torch.LongTensor([[0, 2], [3, 4], [2, 5]]),
                    'dog': torch.LongTensor([[1, 3]])}
                expected_face_assignments = {
                    'cat': torch.LongTensor([[0, 3], [6, 7], [3, 11]]),
                    'dog': torch.LongTensor([[1, 6]])}

        res = handler(
            vertices, face_vertex_counts, faces, face_uvs_idx, face_assignments=face_assignments)
        assert len(res) == (5 if with_assignments else 4)
        new_vertices = res[0]
        new_face_vertex_counts = res[1]
        new_faces = res[2]
        new_face_uvs_idx = res[3]

        assert torch.allclose(new_vertices, vertices)
        assert torch.equal(new_face_vertex_counts, expected_face_vertex_counts)
        assert torch.equal(new_faces, expected_faces)
        assert torch.equal(new_face_uvs_idx, expected_face_uvs_idx)

        if with_assignments:
            new_face_assignments = res[4]
            assert contained_torch_equal(new_face_assignments, expected_face_assignments)
0
tests/python/kaolin/io/usd/__init__.py
Normal file
634
tests/python/kaolin/io/usd/test_mesh.py
Normal file
@@ -0,0 +1,634 @@
# Copyright (c) 2019-2023 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import os
import shutil

import torch
import pytest
from pxr import Usd, UsdGeom

from kaolin.io import usd, obj
from kaolin.io import utils
from kaolin.utils.testing import print_namedtuple_attributes, print_dict_attributes, \
    check_tensor_attribute_shapes, contained_torch_equal, check_allclose

__test_dir = os.path.dirname(os.path.realpath(__file__))
__samples_path = os.path.join(__test_dir, os.pardir, os.pardir, os.pardir, os.pardir, 'samples')


def io_data_path(fname):
    """Return path relative to tests/samples/io."""
    return os.path.join(__samples_path, 'io', fname)


def samples_data_path(*args):
    return os.path.join(__samples_path, *args)


def read_raw_usd_attributes(fname_or_stage):
    if isinstance(fname_or_stage, str):
        stage = Usd.Stage.Open(fname_or_stage)
    else:
        stage = fname_or_stage

    paths = usd.utils.get_scene_paths(stage, prim_types=["Mesh"])
    return [usd.get_raw_mesh_prim_geometry(stage.GetPrimAtPath(p), with_normals=True, with_uvs=True, time=0)
            for p in paths]


@pytest.fixture(scope='class')
def out_dir():
    # Create temporary output directory
    out_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '_out')
    os.makedirs(out_dir, exist_ok=True)
    yield out_dir
    shutil.rmtree(out_dir)


@pytest.fixture(scope='module')
def mesh():
    obj_mesh = obj.import_mesh(os.path.join(__samples_path, 'rocket.obj'), with_normals=True,
                               with_materials=True, error_handler=obj.skip_error_handler)
    return obj_mesh


@pytest.fixture(scope='module')
def mesh_alt():
    obj_mesh = obj.import_mesh(os.path.join(__samples_path, 'model.obj'), with_normals=True,
                               with_materials=True, error_handler=obj.skip_error_handler)
    return obj_mesh


@pytest.fixture(scope='module')
def mesh_path():
    return os.path.join(__samples_path, 'golden', 'mesh.usda')  # rocket  # TODO: rename


@pytest.fixture(scope='module')
def homogenized_golden_path():
    return os.path.join(__samples_path, 'golden', 'rocket_homogenized.usda')


@pytest.fixture(scope='module')
def homo_mesh_path():
    return os.path.join(__samples_path, 'rocket_model_hom.usda')


@pytest.fixture(scope='module')
def mixed_mesh_path():
    return os.path.join(__samples_path, 'mixed.usdc')


@pytest.fixture(scope='module')
def hetero_mesh_path():
    return os.path.join(__samples_path, 'rocket_hetero.usd')


@pytest.fixture(scope='module')
def hetero_subsets_materials_mesh_path():
    return os.path.join(__samples_path, 'rocket_hetero_subsets_materials.usd')


class TestMeshes:
    @pytest.fixture(scope='class')
    def scene_paths(self):
        num_meshes = 2
        return [f'/World/mesh_{i}' for i in range(num_meshes)]

    def test_input_as_expected(self, mesh, mesh_alt):
        # DEBUG INFORMATION (uncomment when debugging)
        # print_namedtuple_attributes(mesh, 'test input read from rocket.obj')
        # print_namedtuple_attributes(mesh_alt, 'test input read from model.obj')

        # Note: these are not per-vertex normals; they are face-varying normals whose
        # ordering does not correspond to the order of the vertices tensor.
assert check_tensor_attribute_shapes(
|
||||
mesh, throw=True,
|
||||
vertices=[426, 3], faces=[832, 3], uvs=[493, 2],
|
||||
face_uvs_idx=[832, 3],
|
||||
normals=[430, 3],
|
||||
face_normals_idx=[832, 3])
|
||||
|
||||
        assert check_tensor_attribute_shapes(
            mesh_alt, throw=True,
            vertices=[482, 3], faces=[960, 3], uvs=[610, 2],
            face_uvs_idx=[960, 3],
            normals=[584, 3],
            face_normals_idx=[960, 3])

    def test_export_single(self, scene_paths, out_dir, mesh, mesh_path):
        out_path = os.path.join(out_dir, 'single_mesh.usda')

        # Export a mesh
        stage = usd.export_mesh(out_path, scene_paths[0], mesh.vertices, mesh.faces)

        # Check against the golden USD file
        assert open(mesh_path).read() == open(out_path).read()

    @pytest.mark.parametrize('input_stage', [False, True, 'generated'])
    @pytest.mark.parametrize('with_paths', [False, True])
    def test_export_import_multiple(self, scene_paths, out_dir, mesh, mesh_alt, input_stage, with_paths):
        out_path = os.path.join(out_dir, 'partial_meshes.usda')

        # Export some meshes
        meshes = [mesh, mesh_alt]
        vertices_list = [mesh.vertices, mesh_alt.vertices]
        faces_list = [mesh.faces, mesh_alt.faces]
        actual_face_normals_list = [mesh.normals[mesh.face_normals_idx],
                                    mesh_alt.normals[mesh_alt.face_normals_idx]]
        # Try exporting just vertices and faces
        stage = usd.export_meshes(out_path, scene_paths, vertices_list, faces_list)

        # Now export all the attributes
        out_path = os.path.join(out_dir, 'meshes.usda')
        # TODO: properly export with materials once OBJ materials can be converted to USD
        stage = usd.export_meshes(out_path, scene_paths, vertices_list, faces_list,
                                  uvs=[mesh.uvs, mesh_alt.uvs],
                                  face_uvs_idx=[mesh.face_uvs_idx, mesh_alt.face_uvs_idx],
                                  face_normals=actual_face_normals_list)

        # Test that we can read both meshes correctly with/without paths
        args = {}
        if with_paths:
            args = {'scene_paths': scene_paths}

        if input_stage == 'generated':
            path_or_stage = stage  # Also test the stage returned by export_meshes
        else:
            path_or_stage = Usd.Stage.Open(out_path) if input_stage else out_path

        # TODO: once the above is fixed, use with_materials=True here
        meshes_in = usd.import_meshes(path_or_stage, with_normals=True, **args)
        assert len(meshes_in) == len(meshes)
        for i, orig_mesh in enumerate(meshes):
            in_mesh = meshes_in[i]
            # DEBUG INFORMATION (uncomment when debugging)
            # print_namedtuple_attributes(orig_mesh, f'Orig mesh [{i}]')
            # print_namedtuple_attributes(in_mesh, f'Imported mesh [{i}]')

            # Check key attributes
            assert contained_torch_equal(
                {'vertices': orig_mesh.vertices, 'faces': orig_mesh.faces, 'uvs': orig_mesh.uvs,
                 'face_uvs_idx': orig_mesh.face_uvs_idx, 'face_normals': actual_face_normals_list[i]},
                {'vertices': in_mesh.vertices, 'faces': in_mesh.faces, 'uvs': in_mesh.uvs,
                 'face_uvs_idx': in_mesh.face_uvs_idx, 'face_normals': in_mesh.face_normals},
                approximate=True, rtol=1e-5, atol=1e-8)

        # Test that we can also read the flattened mesh, with indices correctly shifted
        mesh_in = usd.import_mesh(path_or_stage, with_normals=True)
        assert len(mesh_in.vertices) == (len(mesh.vertices) + len(mesh_alt.vertices))
        assert len(mesh_in.faces) == (len(mesh.faces) + len(mesh_alt.faces))
        assert contained_torch_equal(
            {'vertices': torch.cat([mesh.vertices, mesh_alt.vertices], dim=0),
             'faces': torch.cat([mesh.faces, mesh_alt.faces + mesh.vertices.shape[0]], dim=0),
             'uvs': torch.cat([mesh.uvs, mesh_alt.uvs], dim=0),
             'face_uvs_idx': torch.cat([mesh.face_uvs_idx, mesh_alt.face_uvs_idx + mesh.uvs.shape[0]], dim=0),
             'face_normals': torch.cat(actual_face_normals_list, dim=0)
             },
            {'vertices': mesh_in.vertices,
             'faces': mesh_in.faces,
             'uvs': mesh_in.uvs,
             'face_uvs_idx': mesh_in.face_uvs_idx,
             'face_normals': mesh_in.face_normals
             },
            approximate=True, rtol=1e-5, atol=1e-8)

    def test_import_bad_prim(self, scene_paths, mesh_path):
        """Test that import fails when reaching invalid prims"""
        with pytest.raises(ValueError):
            usd.import_meshes(mesh_path, ['/foo'] + scene_paths)

    @pytest.mark.parametrize('input_stage', [False, True])
    @pytest.mark.parametrize('triangulate', [False, True])
    def test_import_hetero_fail(self, hetero_mesh_path, input_stage, triangulate):
        """Test that import fails when importing a heterogeneous mesh without a handler"""
        path_or_stage = Usd.Stage.Open(hetero_mesh_path) if input_stage else hetero_mesh_path

        if not triangulate:
            with pytest.raises(utils.NonHomogeneousMeshError):
                usd.import_meshes(file_path_or_stage=path_or_stage, scene_paths=['/Root'], triangulate=triangulate)

            with pytest.raises(utils.NonHomogeneousMeshError):
                usd.import_mesh(file_path_or_stage=path_or_stage, scene_path='/Root', triangulate=triangulate)

        else:
            usd.import_meshes(file_path_or_stage=path_or_stage, scene_paths=['/Root'], triangulate=triangulate)
            usd.import_mesh(file_path_or_stage=path_or_stage, scene_path='/Root', triangulate=triangulate)

    @pytest.mark.parametrize('input_stage', [False, True])
    @pytest.mark.parametrize('triangulate', [False, True])
    def test_import_hetero_skip(self, scene_paths, hetero_mesh_path, homo_mesh_path, mixed_mesh_path, input_stage, triangulate):
        """Test that import skips a heterogeneous mesh when using the skip handler"""
        path_or_stage = Usd.Stage.Open(hetero_mesh_path) if input_stage else hetero_mesh_path
        meshes = usd.import_meshes(path_or_stage, ['/Root'],
                                   heterogeneous_mesh_handler=utils.heterogeneous_mesh_handler_skip,
                                   triangulate=triangulate)
        assert len(meshes) == 0

        path_or_stage = Usd.Stage.Open(homo_mesh_path) if input_stage else homo_mesh_path
        meshes = usd.import_meshes(path_or_stage,
                                   heterogeneous_mesh_handler=utils.heterogeneous_mesh_handler_skip)
        assert len(meshes) == 2

        # Test skip on a batch of mixed homogeneous and heterogeneous meshes
        # (heterogeneous meshes cannot be exported, so the previous cases could not cover this)
        path_or_stage = Usd.Stage.Open(mixed_mesh_path) if input_stage else mixed_mesh_path
        meshes = usd.import_meshes(path_or_stage,
                                   heterogeneous_mesh_handler=utils.heterogeneous_mesh_handler_skip)
        assert len(meshes) == 1

    @pytest.mark.parametrize('use_triangulate_shortcut', [True, False])
    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_hetero_homogenize_import_meshes(self, out_dir, hetero_mesh_path, homogenized_golden_path,
                                                    input_stage, use_triangulate_shortcut):
        """Test that a heterogeneous mesh is homogenized on import with the naive homogenize handler"""
        # TODO(jlafleche) Render meshes before/after homogenize operation
        path_or_stage = Usd.Stage.Open(hetero_mesh_path) if input_stage else hetero_mesh_path
        if use_triangulate_shortcut:
            kwargs = {'triangulate': True}
        else:
            kwargs = {'heterogeneous_mesh_handler': utils.mesh_handler_naive_triangulate}
        mesh = usd.import_meshes(path_or_stage, ['/Root'], **kwargs)
        # Confirm we now have a triangle mesh
        assert mesh[0].faces.size(1) == 3

        out_path = os.path.join(out_dir, 'homogenized.usda')
        usd.export_mesh(out_path, '/World/Rocket', vertices=mesh[0].vertices, faces=mesh[0].faces)

        # Confirm exported USD matches golden file
        assert open(homogenized_golden_path).read() == open(out_path).read()

    @pytest.mark.parametrize('use_triangulate_shortcut', [True, False])
    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_hetero_homogenize_import_mesh(self, out_dir, hetero_mesh_path, homogenized_golden_path,
                                                  input_stage, use_triangulate_shortcut):
        """Test that a heterogeneous mesh is homogenized on import with the naive homogenize handler"""
        # TODO(jlafleche) Render meshes before/after homogenize operation
        path_or_stage = Usd.Stage.Open(hetero_mesh_path) if input_stage else hetero_mesh_path
        if use_triangulate_shortcut:
            kwargs = {'triangulate': True}
        else:
            kwargs = {'heterogeneous_mesh_handler': utils.mesh_handler_naive_triangulate}
        mesh = usd.import_mesh(path_or_stage, scene_path='/Root', **kwargs)

        # Confirm we now have a triangle mesh
        assert mesh.faces.size(1) == 3

        out_path = os.path.join(out_dir, 'homogenized.usda')
        usd.export_mesh(out_path, '/World/Rocket', vertices=mesh.vertices, faces=mesh.faces)

        # Confirm exported USD matches golden file
        assert open(homogenized_golden_path).read() == open(out_path).read()

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_with_transform(self, scene_paths, out_dir, hetero_mesh_path, input_stage):
        """Test that mesh transforms are correctly applied during import"""
        path_or_stage = Usd.Stage.Open(hetero_mesh_path) if input_stage else hetero_mesh_path
        mesh = usd.import_mesh(path_or_stage, '/Root',
                               heterogeneous_mesh_handler=utils.mesh_handler_naive_triangulate)
        out_path = os.path.join(out_dir, 'transformed.usda')
        stage = usd.create_stage(out_path)
        prim = usd.add_mesh(stage, '/World/Rocket', vertices=mesh.vertices, faces=mesh.faces)
        UsdGeom.Xformable(prim).AddTranslateOp().Set((10, 10, 10))
        stage.Save()

        mesh_import = usd.import_mesh(out_path)
        assert torch.allclose(mesh_import.vertices, mesh.vertices + 10.)

    @pytest.mark.parametrize('function_variant', ['export_mesh', 'export_meshes'])
    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_material_subsets(self, scene_paths, out_dir, hetero_subsets_materials_mesh_path,
                                     input_stage, function_variant):
        """Test that materials are imported from a mesh with subsets"""
        if input_stage:
            path_or_stage = Usd.Stage.Open(hetero_subsets_materials_mesh_path)
        else:
            path_or_stage = hetero_subsets_materials_mesh_path

        # Read and homogenize the mesh
        mesh = usd.import_mesh(path_or_stage, scene_path='/Root',
                               heterogeneous_mesh_handler=utils.mesh_handler_naive_triangulate,
                               with_normals=True, with_materials=True)
        # Avoid any automatic computation of normals
        mesh.unset_attributes_return_none = True

        # Confirm we now have a triangulated mesh
        assert mesh.faces.size(1) == 3

        # Check material assignments
        expected_material_assignments = torch.zeros((mesh.faces.size(0),), dtype=torch.short)
        expected_material_assignments[:18 * 2] = 1  # the first 18 quads (36 triangles) use the 2nd material
        expected_material_assignments[788 + 18:] = 2  # the last faces (offset by the extra triangles created from quads) use the 3rd material
        assert torch.equal(mesh.material_assignments, expected_material_assignments)

        # Also read in the golden mesh
        golden_path = samples_data_path('golden', 'rocket_homogenized_materials.usda')
        golden_mesh = usd.import_mesh(golden_path,
                                      heterogeneous_mesh_handler=utils.mesh_handler_naive_triangulate,
                                      with_normals=True, with_materials=True)
        golden_mesh.unset_attributes_return_none = True

        # Spot check against raw USD attributes
        raw_attributes = read_raw_usd_attributes(path_or_stage)[0]
        assert torch.sum(raw_attributes['face_sizes'] - 2) == 832, "Bug in read_raw_usd_attributes"
        assert mesh.faces.size(0) == torch.sum(raw_attributes['face_sizes'] - 2)  # Expected for fan triangulation
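        # (fan triangulation splits an n-sided polygon into n - 2 triangles, so the
        # triangle count equals the sum of face_sizes - 2 over all input faces)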

        # Spot check against raw mesh attributes for a few initial quads
        assert torch.equal(raw_attributes['face_sizes'][:5], torch.tensor([4, 4, 4, 4, 4]))  # make sure the first few face sizes are as expected
        for quad_idx in range(5):
            tri_idx = quad_idx * 2  # Only true for the first few quads

            # Check the processed mesh
            expected_vertices = raw_attributes['vertices'][raw_attributes['faces'][quad_idx * 4: quad_idx * 4 + 3], :]
            expected_normals = raw_attributes['normals'][quad_idx * 4: quad_idx * 4 + 3, :]
            expected_uvs = raw_attributes['uvs'][quad_idx * 4: quad_idx * 4 + 3, :]
            assert torch.allclose(mesh.vertices[mesh.faces[tri_idx, :], :], expected_vertices)
            assert torch.allclose(mesh.face_normals[tri_idx, ...], expected_normals)
            assert torch.allclose(mesh.uvs[mesh.face_uvs_idx[tri_idx, :], :], expected_uvs)

            # Also sanity check the golden mesh (to catch human error)
            assert torch.allclose(golden_mesh.vertices[mesh.faces[tri_idx, :], :], expected_vertices)
            assert torch.allclose(golden_mesh.face_normals[tri_idx, ...], expected_normals)
            assert torch.allclose(golden_mesh.uvs[mesh.face_uvs_idx[tri_idx, :], :], expected_uvs)
        # Write the homogenized mesh to file
        out_path = os.path.join(out_dir, 'rocket_homogenized_materials.usda')
        if function_variant == 'export_mesh':
            usd.export_mesh(out_path, '/World/Rocket', vertices=mesh.vertices, faces=mesh.faces,
                            face_uvs_idx=mesh.face_uvs_idx, face_normals=mesh.face_normals, uvs=mesh.uvs,
                            material_assignments=mesh.material_assignments, materials=mesh.materials)
        else:
            usd.export_meshes(out_path, ['/World/Rocket'], vertices=[mesh.vertices], faces=[mesh.faces],
                              face_uvs_idx=[mesh.face_uvs_idx], face_normals=[mesh.face_normals], uvs=[mesh.uvs],
                              material_assignments=[mesh.material_assignments],
                              materials=[mesh.materials])

        # Confirm exported USD matches golden file
        assert open(golden_path).read() == open(out_path).read()

        # Confirm we read an identical mesh after writing
        reimported_mesh = usd.import_mesh(out_path, scene_path='/World/Rocket', with_materials=True, with_normals=True)
        reimported_mesh.unset_attributes_return_none = True

        # Since deep comparison of materials is not implemented, we only compare material counts here
        assert len(mesh.materials) == len(reimported_mesh.materials)
        assert contained_torch_equal(mesh, reimported_mesh, print_error_context='')

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_with_material(self, scene_paths, out_dir, hetero_subsets_materials_mesh_path, input_stage):
        """Test that materials are imported from a mesh with subsets"""
        if input_stage:
            path_or_stage = Usd.Stage.Open(hetero_subsets_materials_mesh_path)
        else:
            path_or_stage = hetero_subsets_materials_mesh_path

        mesh = usd.import_mesh(path_or_stage, scene_path='/Root',
                               heterogeneous_mesh_handler=utils.mesh_handler_naive_triangulate,
                               with_materials=False)
        assert mesh.materials is None
        assert mesh.material_assignments is None

        mesh = usd.import_mesh(path_or_stage, scene_path='/Root',
                               heterogeneous_mesh_handler=utils.mesh_handler_naive_triangulate,
                               with_materials=True)
        assert mesh.materials is not None
        assert mesh.material_assignments is not None

    def test_export_only_vertices(self, out_dir, mesh):
        out_path = os.path.join(out_dir, 'only_vert.usda')
        usd.export_mesh(out_path, vertices=mesh.vertices)
        mesh_in = usd.import_mesh(out_path)
        assert torch.allclose(mesh_in.vertices, mesh.vertices)

    def test_export_only_faces(self, out_dir, mesh):
        out_path = os.path.join(out_dir, 'only_faces.usda')
        usd.export_mesh(out_path, faces=mesh.faces)
        mesh_in = usd.import_mesh(out_path)
        assert torch.allclose(mesh_in.faces, mesh.faces)

    def test_export_only_face_uvs(self, out_dir, mesh):
        out_path = os.path.join(out_dir, 'only_uvs.usda')
        usd.export_mesh(out_path, vertices=mesh.vertices, faces=mesh.faces, uvs=mesh.uvs)
        mesh_in = usd.import_mesh(out_path)
        assert torch.allclose(mesh_in.uvs.view(-1, 2), mesh.uvs.view(-1, 2))

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_st_indices_facevarying(self, out_dir, mesh, input_stage):
        out_path = os.path.join(out_dir, 'st_indices.usda')
        uvs = torch.rand((mesh.faces.view(-1).size(0), 2))
        scene_path = '/World/mesh_0'
        face_uvs_idx = (torch.rand(mesh.faces.shape[:2]) * 99).long()
        usd.export_mesh(out_path, scene_path=scene_path, vertices=mesh.vertices,
                        faces=mesh.faces, uvs=uvs, face_uvs_idx=face_uvs_idx)

        # check that interpolation was set correctly to 'faceVarying'
        stage = Usd.Stage.Open(out_path)
        pv = UsdGeom.PrimvarsAPI(stage.GetPrimAtPath(scene_path)).GetPrimvar('st')
        assert pv.GetInterpolation() == 'faceVarying'

        if input_stage:
            path_or_stage = Usd.Stage.Open(out_path)
        else:
            path_or_stage = out_path
        mesh_in = usd.import_mesh(path_or_stage)
        assert torch.allclose(mesh_in.uvs, uvs)

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_st_no_indices_vertex(self, out_dir, mesh, input_stage):
        out_path = os.path.join(out_dir, 'st_no_indices_vertex.usda')
        uvs = torch.rand((mesh.vertices.size(0), 2))
        scene_path = '/World/mesh_0'
        usd.export_mesh(out_path, scene_path=scene_path, vertices=mesh.vertices,
                        faces=mesh.faces, uvs=uvs)

        # check that interpolation was set correctly to 'vertex'
        stage = Usd.Stage.Open(out_path)
        pv = UsdGeom.PrimvarsAPI(stage.GetPrimAtPath(scene_path)).GetPrimvar('st')
        assert pv.GetInterpolation() == 'vertex'

        if input_stage:
            path_or_stage = Usd.Stage.Open(out_path)
        else:
            path_or_stage = out_path
        mesh_in = usd.import_mesh(path_or_stage)
        assert torch.allclose(mesh_in.uvs, uvs)

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_st_no_indices_facevarying(self, out_dir, mesh, input_stage):
        out_path = os.path.join(out_dir, 'st_no_indices_face_varying.usda')
        uvs = torch.rand((mesh.faces.size(0) * mesh.faces.size(1), 2))
        scene_path = '/World/mesh_0'
        usd.export_mesh(out_path, scene_path=scene_path, vertices=mesh.vertices,
                        faces=mesh.faces, uvs=uvs)

        # check that interpolation was set correctly to 'faceVarying'
        stage = Usd.Stage.Open(out_path)
        pv = UsdGeom.PrimvarsAPI(stage.GetPrimAtPath(scene_path)).GetPrimvar('st')
        assert pv.GetInterpolation() == 'faceVarying'

        if input_stage:
            path_or_stage = Usd.Stage.Open(out_path)
        else:
            path_or_stage = out_path
        mesh_in = usd.import_mesh(path_or_stage)
        assert torch.allclose(mesh_in.uvs, uvs)

    def test_import_st_no_indices_uniform(self, out_dir, mesh):
        out_path = os.path.join(out_dir, 'st_no_indices_face_uniform.usda')
        uvs = torch.rand((mesh.faces.size(0), 2))
        scene_path = '/World/mesh_0'
        usd.export_mesh(out_path, scene_path=scene_path, vertices=mesh.vertices,
                        faces=mesh.faces, uvs=uvs)

        # check that interpolation was set correctly to 'uniform'
        stage = Usd.Stage.Open(out_path)
        pv = UsdGeom.PrimvarsAPI(stage.GetPrimAtPath(scene_path)).GetPrimvar('st')

        assert pv.GetInterpolation() == 'uniform'

        # TODO(jlafleche) add support for `uniform` interpolation
        # mesh_in = usd.import_mesh(out_path)
        # assert torch.allclose(mesh_in.uvs, uvs)

    def test_export_only_face_normals(self, out_dir, mesh):
        out_path = os.path.join(out_dir, 'only_normals.usda')
        usd.export_mesh(out_path, face_normals=mesh.normals[mesh.face_normals_idx])
        mesh_in = usd.import_mesh(out_path, with_normals=True)
        assert torch.allclose(mesh_in.face_normals.view(-1, 3), mesh.normals[mesh.face_normals_idx].view(-1, 3))
        # TODO: support and test normals for various interpolations

    @pytest.mark.parametrize('with_normals', [False, True])
    @pytest.mark.parametrize('with_materials', [False, True])
    @pytest.mark.parametrize('flatten', [True, False])
    def test_import_triangulate(self, with_normals, with_materials, flatten):
        input_path = io_data_path('amsterdam.usd')  # Multiple quad meshes
        if flatten:
            # Import as one mesh
            orig = [usd.import_mesh(input_path, with_materials=with_materials, with_normals=with_normals)]
            triangulated = [usd.import_mesh(input_path, with_materials=with_materials, with_normals=with_normals,
                                            triangulate=True)]
            assert len(orig) == 1
            assert len(triangulated) == 1
            expected_num_vertices = [1974]
            expected_num_quads = [1932]
        else:
            # Import as multiple meshes
            orig = usd.import_meshes(input_path, with_materials=with_materials, with_normals=with_normals)
            triangulated = usd.import_meshes(input_path, with_materials=with_materials, with_normals=with_normals,
                                             triangulate=True)
            assert len(orig) == 18
            assert len(triangulated) == 18
            expected_num_vertices = [4, 98, 98, 98, 386, 386, 98, 8, 98, 98, 98, 4, 4, 4, 386, 98, 4, 4]
            expected_num_quads = [1, 96, 96, 96, 384, 384, 96, 6, 96, 96, 96, 1, 1, 1, 384, 96, 1, 1]

        for i in range(len(orig)):
            qmesh = orig[i]  # quad mesh
            tmesh = triangulated[i]  # triangle mesh

            # disallow automatic computation of properties (specifically, face_normals can be auto-computed)
            qmesh.allow_auto_compute = False
            tmesh.allow_auto_compute = False

            check_tensor_attribute_shapes(
                qmesh, vertices=[expected_num_vertices[i], 3], faces=[expected_num_quads[i], 4])
            check_tensor_attribute_shapes(
                tmesh, vertices=[expected_num_vertices[i], 3], faces=[expected_num_quads[i] * 2, 3])
            assert torch.allclose(qmesh.vertices, tmesh.vertices)
            if with_materials:
                assert tmesh.materials is not None
                assert len(tmesh.materials) > 0
                assert contained_torch_equal([mat.diffuse_color for mat in qmesh.materials],
                                             [mat.diffuse_color for mat in tmesh.materials], approximate=True)
            else:
                assert tmesh.materials is None
                assert tmesh.material_assignments is None

            # Spot check all values for a given quad
            qidx = expected_num_quads[i] // 2  # quad index
            tidx = qidx * 2  # triangle index
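            # (each quad splits into two triangles, so quad q maps to triangles
            # 2 * q and 2 * q + 1 in the triangulated mesh)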
            assert torch.allclose(qmesh.vertices[qmesh.faces[qidx, :3], :], tmesh.vertices[tmesh.faces[tidx, :]])
            assert torch.allclose(qmesh.uvs[qmesh.face_uvs_idx[qidx, :3]], tmesh.uvs[tmesh.face_uvs_idx[tidx, :]])

            if with_normals:
                assert torch.allclose(qmesh.face_normals[qidx, :3, :], tmesh.face_normals[tidx, ...])
            else:
                assert tmesh.face_normals is None

            if with_materials:
                assert torch.equal(qmesh.material_assignments[qidx], tmesh.material_assignments[tidx])


class TestDiverseInputs:
    @pytest.fixture(scope='class')
    def expected_sizes(self):
        return {'ico_smooth': {'vertices': [42, 3], 'faces': [80, 3]},
                'ico_flat': {'vertices': [42, 3], 'faces': [80, 3]},
                'fox': {'vertices': [5002, 3], 'faces': [10000, 3]},
                'pizza': {'vertices': [482, 3], 'faces': [960, 3]},
                'armchair': {'vertices': [9204, 3], 'faces': [9200, 4]},
                'amsterdam': {'vertices': [1974, 3], 'faces': [1932, 4]}}

    @pytest.fixture(scope='class')
    def expected_material_counts(self):
        return {'ico_smooth': 1,
                'ico_flat': 1,
                'fox': 1,
                'pizza': 2,
                'armchair': 2,
                'amsterdam': 14}

    # TODO: add armchair
    @pytest.mark.parametrize('bname', ['ico_flat', 'ico_smooth', 'fox', 'pizza', 'amsterdam', 'armchair'])
    def test_read_write_read_consistency(self, bname, out_dir, expected_sizes, expected_material_counts):
        # Read the USD version, flattening all meshes into one
        fname = io_data_path(f'{bname}.usd')
        read_usd_mesh = usd.import_mesh(fname, with_normals=True, with_materials=True)
        assert check_tensor_attribute_shapes(read_usd_mesh, **expected_sizes[bname])

        # Read the OBJ version
        fname = io_data_path(f'{bname}.obj')
        read_obj_mesh = obj.import_mesh(fname, with_normals=True, with_materials=True)

        # DEBUG INFORMATION (uncomment to help diagnose failures)
        # stage = Usd.Stage.Open(io_data_path(f'{bname}.usd'))
        # paths = usd.utils.get_scene_paths(stage, prim_types=["Mesh"])
        # #assert len(paths) == 1
        # prim = stage.GetPrimAtPath(paths[0])
        # raw_usd = usd.get_raw_mesh_prim_geometry(prim, with_normals=True, with_uvs=True, time=0)
        # print_namedtuple_attributes(read_usd_mesh, f'Read USD mesh {bname}')
        # print_dict_attributes(raw_usd, name=f'RAW USD {bname}')
        # print_namedtuple_attributes(read_obj_mesh, f'Read OBJ mesh {bname}')

        # Ensure vertex order is consistent before performing any further checks
        check_allclose(read_obj_mesh.vertices, read_usd_mesh.vertices, atol=1e-04)

        # Check that final face values between the two meshes agree (note that OBJ and USD may store
        # and index uvs and faces differently, but the final per-face per-vertex values must agree)
        assert torch.allclose(read_usd_mesh.face_uvs, read_obj_mesh.face_uvs, atol=1e-04)
        assert torch.allclose(read_usd_mesh.face_normals, read_obj_mesh.face_normals, atol=1e-04, rtol=1e-03)

        # Check material consistency
        assert len(read_usd_mesh.materials) == expected_material_counts[bname]
        assert len(read_usd_mesh.materials) == len(read_obj_mesh.materials)
        assert len(read_usd_mesh.material_assignments) > 0
        assert torch.equal(read_usd_mesh.material_assignments, read_obj_mesh.material_assignments)

        # Now write the USD to file, read it back, and make sure the attributes are as expected
        out_path = os.path.join(out_dir, f'reexport_{bname}.usda')
        # TODO: the export fails with materials; add a test and fix this in test_materials.py and here
        # Note: the specular value is expected to be a tuple, not a single value as in this case
        usd.export_mesh(out_path, vertices=read_usd_mesh.vertices, faces=read_usd_mesh.faces,
                        uvs=read_usd_mesh.uvs, face_uvs_idx=read_usd_mesh.face_uvs_idx,
                        face_normals=read_usd_mesh.face_normals)

        # Because we don't want to compare materials, re-read the original mesh and the exported mesh
        fname = io_data_path(f'{bname}.usd')
        read_usd_mesh = usd.import_mesh(fname, with_normals=True)
        # Read exported mesh
        exported_usd_mesh = usd.import_mesh(out_path, with_normals=True)
        assert contained_torch_equal(read_usd_mesh, exported_usd_mesh, approximate=True, rtol=1e-5, atol=1e-8)
187
tests/python/kaolin/io/usd/test_pointcloud.py
Normal file
@@ -0,0 +1,187 @@
# Copyright (c) 2019,20-21-23 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import os
import shutil

import torch
import pytest

from pxr import Usd

from kaolin.io import usd

@pytest.fixture(scope='class')
def out_dir():
    # Create temporary output directory
    out_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '_out')
    os.makedirs(out_dir, exist_ok=True)
    yield out_dir
    shutil.rmtree(out_dir)

@pytest.fixture(scope='module')
def pointcloud():
    cur_dir = os.path.dirname(os.path.realpath(__file__))
    pointcloud, color, normals = usd.import_pointcloud(
        os.path.join(cur_dir, os.pardir, os.pardir, os.pardir, os.pardir,
                     'samples/rocket_pointcloud_GeomPoints.usda'),
        '/World/pointcloud')
    return pointcloud

@pytest.fixture(scope='module')
def pointcloud_instancer():
    cur_dir = os.path.dirname(os.path.realpath(__file__))
    pointcloud, color, normals = usd.import_pointcloud(
        os.path.join(cur_dir, os.pardir, os.pardir, os.pardir, os.pardir,
                     'samples/rocket_pointcloud.v0.9.0.usda'),
        '/World/pointcloud')
    return pointcloud

@pytest.fixture(scope='module')
def pointcloud_with_color():
    cur_dir = os.path.dirname(os.path.realpath(__file__))
    pointcloud, color, normals = usd.import_pointcloud(
        os.path.join(cur_dir, os.pardir, os.pardir, os.pardir, os.pardir,
                     'samples/golden/pointcloud_GeomPoints_colors.usda'),
        '/World/pointcloud')
    return (pointcloud, color)

class TestPointCloud:
    def setup_method(self):
        self.scene_path = '/World/pointcloud'
        self.num_multiple = 3

    def test_export_single(self, out_dir, pointcloud):
        out_path = os.path.join(out_dir, 'pointcloud.usda')
        usd.export_pointcloud(pointcloud=pointcloud, file_path=out_path, scene_path=self.scene_path, points_type='usd_geom_points')

        # Confirm exported USD matches golden file
        golden = os.path.join(out_dir, os.pardir, os.pardir, os.pardir, os.pardir, os.pardir,
                              'samples/golden/pointcloud_GeomPoints.usda')
        assert open(golden).read() == open(out_path).read()

    def test_export_single_instancer(self, out_dir, pointcloud):
        out_path = os.path.join(out_dir, 'pointcloud_instancer.usda')
        usd.export_pointcloud(pointcloud=pointcloud, file_path=out_path, scene_path=self.scene_path)

        # Confirm exported USD matches golden file
        golden = os.path.join(out_dir, os.pardir, os.pardir, os.pardir, os.pardir, os.pardir,
                              'samples/golden/pointcloud_PointInstancer.usda')
        assert open(golden).read() == open(out_path).read()

    def test_export_multiple(self, out_dir, pointcloud):
        out_path = os.path.join(out_dir, 'pointclouds.usda')

        # Export some pointclouds using default scene paths
        usd.export_pointclouds(pointclouds=[pointcloud for _ in range(self.num_multiple)],
                               file_path=out_path, points_type='usd_geom_points')

        # Test that we can get their scene paths later
        scene_paths = usd.get_pointcloud_scene_paths(out_path)
        assert len(scene_paths) == self.num_multiple

    def test_export_multiple_instancer(self, out_dir, pointcloud):
        out_path = os.path.join(out_dir, 'pointclouds_instancer.usda')

        usd.export_pointclouds(pointclouds=[pointcloud for _ in range(self.num_multiple)],
                               file_path=out_path)

        # Test that we can get their scene paths later
        scene_paths = usd.get_pointcloud_scene_paths(out_path)
        assert len(scene_paths) == self.num_multiple

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_single(self, out_dir, pointcloud, input_stage):
        out_path = os.path.join(out_dir, 'pointcloud.usda')
        if input_stage:
            path_or_stage = Usd.Stage.Open(out_path)
        else:
            path_or_stage = out_path
        pointcloud_in = usd.import_pointcloud(path_or_stage, scene_path=self.scene_path).points

        # Confirm imported pointcloud matches original input
        assert torch.allclose(pointcloud, pointcloud_in)

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_multiple(self, out_dir, pointcloud, input_stage):
        out_path = os.path.join(out_dir, 'pointclouds.usda')
        if input_stage:
            path_or_stage = Usd.Stage.Open(out_path)
        else:
            path_or_stage = out_path
        pointcloud_in_list = usd.import_pointclouds(path_or_stage)

        # Confirm each imported pointcloud matches original input
        assert len(pointcloud_in_list) == self.num_multiple
        for pointcloud_in, colors_in, normals_in in pointcloud_in_list:
            assert torch.allclose(pointcloud, pointcloud_in)

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_single_instancer(self, out_dir, pointcloud_instancer, input_stage):
        # Test that the read from UsdPointInstancer is the same as the read from UsdGeomPoints
        out_path = os.path.join(out_dir, 'pointcloud.usda')
        if input_stage:
            path_or_stage = Usd.Stage.Open(out_path)
        else:
            path_or_stage = out_path
        pointcloud_in, colors_in, normals_in = usd.import_pointcloud(
            path_or_stage, scene_path=self.scene_path)

        # Confirm imported pointcloud matches original input
        assert torch.allclose(pointcloud_instancer, pointcloud_in)

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_multiple_instancer(self, out_dir, pointcloud_instancer, input_stage):
        # Test that the read from UsdPointInstancer is the same as the read from UsdGeomPoints
        out_path = os.path.join(out_dir, 'pointclouds.usda')
        if input_stage:
            path_or_stage = Usd.Stage.Open(out_path)
        else:
            path_or_stage = out_path
        pointcloud_in_list = usd.import_pointclouds(path_or_stage)

        # Confirm each imported pointcloud matches original input
        assert len(pointcloud_in_list) == self.num_multiple
        for pointcloud_in, colors_in, normals_in in pointcloud_in_list:
            assert torch.allclose(pointcloud_instancer, pointcloud_in)

    def test_export_single_colors(self, out_dir, pointcloud_with_color):
        # Export a single pointcloud with colors
        pointcloud, color = pointcloud_with_color

        out_path = os.path.join(out_dir, 'pointcloud_colors.usda')
        usd.export_pointcloud(pointcloud=pointcloud, file_path=out_path, color=color,
                              scene_path=self.scene_path, points_type='usd_geom_points')

        # Confirm exported USD matches golden file
        golden = os.path.join(out_dir, os.pardir, os.pardir, os.pardir, os.pardir, os.pardir,
                              'samples/golden/pointcloud_GeomPoints_colors.usda')
        assert open(golden).read() == open(out_path).read()

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_single_color(self, out_dir, pointcloud, input_stage):
        out_path = os.path.join(out_dir, 'pointcloud_colors.usda')
        if input_stage:
            path_or_stage = Usd.Stage.Open(out_path)
        else:
            path_or_stage = out_path
        pointcloud_in, color, _ = usd.import_pointcloud(path_or_stage, scene_path=self.scene_path)

        # Confirm imported pointcloud matches original input
        assert torch.allclose(pointcloud, pointcloud_in)

        # Confirm that points have the same shape as colors
        assert pointcloud_in.shape == color.shape
106
tests/python/kaolin/io/usd/test_utils.py
Normal file
@@ -0,0 +1,106 @@
# Copyright (c) 2019,20-21-23 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pytest
import shutil
from pxr import Usd

from kaolin.io import usd, obj
from kaolin.ops.conversions import trianglemeshes_to_voxelgrids


__test_dir = os.path.dirname(os.path.realpath(__file__))
__samples_path = os.path.join(__test_dir, os.pardir, os.pardir, os.pardir, os.pardir, 'samples')

@pytest.fixture(scope='class')
def out_dir():
    # Create temporary output directory
    out_dir = os.path.join(__test_dir, '_out')
    os.makedirs(out_dir, exist_ok=True)
    yield out_dir
    shutil.rmtree(out_dir)

@pytest.fixture(scope='module')
def mesh():
    obj_mesh = obj.import_mesh(os.path.join(__samples_path, 'rocket.obj'),
                               with_normals=True, with_materials=True, error_handler=obj.skip_error_handler)
    return obj_mesh

@pytest.fixture(scope='module')
def mesh_path():
    return os.path.join(__samples_path, 'golden', 'mesh.usda')  # rocket  # TODO: rename file

@pytest.fixture(scope='module')
def pointcloud():
    pointcloud, color, normals = usd.import_pointcloud(
        os.path.join(__samples_path, 'rocket_pointcloud_GeomPoints.usda'),
        '/World/pointcloud')
    return pointcloud

class TestVoxelGrid:
    def setup_method(self):
        self.scene_path = '/World/voxelgrid'
        self.num_multiple = 3

    @staticmethod
    def make_voxelgrid(mesh):
        resolution = 64
        # Voxelize the (batched) mesh, then convert the occupancy grid to boolean
        voxelgrid = trianglemeshes_to_voxelgrids(mesh.vertices.unsqueeze(0), mesh.faces,
                                                 resolution)
        return voxelgrid[0].bool()

class TestMisc:
    @pytest.fixture(scope='class')
    def voxelgrid(self, mesh):
        return TestVoxelGrid.make_voxelgrid(mesh)

    def test_get_authored_time_samples_untimed(self, out_dir, mesh, voxelgrid):
        out_path = os.path.join(out_dir, 'untimed.usda')
        usd.export_voxelgrid(file_path=out_path, voxelgrid=voxelgrid, scene_path='/World/voxelgrid')
        usd.export_mesh(out_path, scene_path='/World/meshes', vertices=mesh.vertices, faces=mesh.faces)

        times = usd.get_authored_time_samples(out_path)
        assert times == []

    def test_get_authored_time_samples_timed(self, out_dir, mesh, voxelgrid, pointcloud):
        out_path = os.path.join(out_dir, 'timed.usda')
        usd.export_voxelgrid(file_path=out_path, voxelgrid=voxelgrid, scene_path='/World/voxelgrid')
        times = usd.get_authored_time_samples(out_path)
        assert times == []

        usd.export_voxelgrid(file_path=out_path, voxelgrid=voxelgrid, scene_path='/World/voxelgrid', time=1)
        times = usd.get_authored_time_samples(out_path)
        assert times == [1]

        usd.export_mesh(out_path, scene_path='/World/meshes', vertices=mesh.vertices, faces=mesh.faces, time=20)
        usd.export_mesh(out_path, scene_path='/World/meshes', vertices=mesh.vertices, faces=None, time=250)
        times = usd.get_authored_time_samples(out_path)
        assert times == [1.0, 20.0, 250.0]

        usd.export_pointcloud(out_path, pointcloud)

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_get_scene_paths(self, mesh_path, input_stage):
        # Exercise both file-path and stage inputs
        path_or_stage = Usd.Stage.Open(mesh_path) if input_stage else mesh_path
        paths = usd.get_scene_paths(path_or_stage)
        assert len(paths) == 2

        paths = usd.get_scene_paths(path_or_stage, prim_types="Mesh")
        assert len(paths) == 1

        paths = usd.get_scene_paths(path_or_stage, prim_types=["Mesh"])
        assert len(paths) == 1

        paths = usd.get_scene_paths(path_or_stage, scene_path_regex=".*World.*")
        assert len(paths) == 2
98
tests/python/kaolin/io/usd/test_voxelgrid.py
Normal file
@@ -0,0 +1,98 @@
# Copyright (c) 2019,20-21-23 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import shutil

import torch
import pytest

from pxr import Usd

from kaolin.io import usd, obj
from kaolin.ops.conversions import trianglemeshes_to_voxelgrids


@pytest.fixture(scope='class')
def out_dir():
    # Create temporary output directory
    out_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '_out')
    os.makedirs(out_dir, exist_ok=True)
    yield out_dir
    shutil.rmtree(out_dir)


@pytest.fixture(scope='module')
def mesh():
    cur_dir = os.path.dirname(os.path.realpath(__file__))
    obj_mesh = obj.import_mesh(
        os.path.join(cur_dir, os.pardir, os.pardir, os.pardir, os.pardir, 'samples/rocket.obj'),
        with_normals=True, with_materials=True, error_handler=obj.skip_error_handler)
    return obj_mesh

class TestVoxelGrid:
    def setup_method(self):
        self.scene_path = '/World/voxelgrid'
        self.num_multiple = 3

    @staticmethod
    def make_voxelgrid(mesh):
        resolution = 64
        voxelgrid = trianglemeshes_to_voxelgrids(mesh.vertices.unsqueeze(0), mesh.faces,
                                                 resolution)
        return voxelgrid[0].bool()

    @pytest.fixture(scope='class')
    def voxelgrid(self, mesh):
        return TestVoxelGrid.make_voxelgrid(mesh)

    def test_export_single(self, out_dir, voxelgrid):
        out_path = os.path.join(out_dir, 'voxelgrid.usda')
        usd.export_voxelgrid(file_path=out_path, voxelgrid=voxelgrid, scene_path=self.scene_path)

        # Confirm exported USD matches golden file
        golden = os.path.join(out_dir, os.pardir, os.pardir, os.pardir, os.pardir, os.pardir,
                              'samples/golden/voxelgrid.usda')
        assert open(golden).read() == open(out_path).read()

    def test_export_multiple(self, out_dir, voxelgrid):
        out_path = os.path.join(out_dir, 'voxelgrids.usda')

        # Export multiple voxelgrids using default paths
        usd.export_voxelgrids(file_path=out_path, voxelgrids=[voxelgrid for _ in range(self.num_multiple)])

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_single(self, out_dir, voxelgrid, input_stage):
        out_path = os.path.join(out_dir, 'voxelgrid.usda')
        if input_stage:
            path_or_stage = Usd.Stage.Open(out_path)
        else:
            path_or_stage = out_path
        voxelgrid_in = usd.import_voxelgrid(path_or_stage, scene_path=self.scene_path)
        assert torch.equal(voxelgrid, voxelgrid_in)

    @pytest.mark.parametrize('input_stage', [False, True])
    def test_import_multiple(self, out_dir, voxelgrid, input_stage):
        out_path = os.path.join(out_dir, 'voxelgrids.usda')
        if input_stage:
            path_or_stage = Usd.Stage.Open(out_path)
        else:
            path_or_stage = out_path
        voxelgrid_in_list = usd.import_voxelgrids(path_or_stage)

        # Confirm imported voxelgrid matches original input
        assert len(voxelgrid_in_list) == self.num_multiple
        for voxelgrid_in in voxelgrid_in_list:
            assert torch.equal(voxelgrid, voxelgrid_in)
0
tests/python/kaolin/metrics/__init__.py
Normal file
347
tests/python/kaolin/metrics/test_pointcloud.py
Normal file
@@ -0,0 +1,347 @@
# Copyright (c) 2019,20-21 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#    http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest

import torch

from kaolin.metrics import pointcloud as pc
from kaolin.utils.testing import FLOAT_DTYPES, with_seed
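
# Note: as exercised by the tests below, pc.sided_distance(p1, p2) returns a
# (distances, indices) pair: for each point in p1, the squared distance to its
# nearest neighbor in p2 and that neighbor's index, batched over the first dim.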

@pytest.mark.parametrize('dtype', FLOAT_DTYPES)
@pytest.mark.parametrize('device', ['cuda'])
class TestSidedDistance:
    @pytest.fixture(autouse=True)
    def get_tol(self, device, dtype):
        if dtype == torch.half:
            return 1e-3, 1e-3
        elif dtype == torch.float:
            return 1e-5, 1e-4
        elif dtype == torch.double:
            return 1e-6, 1e-5

    @with_seed(torch_seed=0)
    @pytest.fixture(autouse=True)
    def input_double_p1(self, device, dtype):
        return torch.randn((5, 20, 3), requires_grad=True, device='cuda', dtype=torch.double)

    @with_seed(torch_seed=0)
    @pytest.fixture(autouse=True)
    def input_double_p2(self, device, dtype):
        return torch.randn((5, 15, 3), requires_grad=True, device='cuda', dtype=torch.double)

    @pytest.fixture(autouse=True)
    def get_input(self, device, dtype):
        p1 = torch.tensor([[[8.8977, 4.1709, 1.2839],
                            [8.5640, 7.7767, 9.4214]],
                           [[0.5431, 6.4495, 11.4914],
                            [3.2126, 8.0865, 3.1018]]], dtype=dtype, device=device)

        p2 = torch.tensor([[[6.9340, 6.1152, 3.4435],
                            [0.1032, 9.8181, 11.3350]],
                           [[11.4006, 2.2154, 7.9589],
                            [4.2586, 1.4133, 7.2606]]], dtype=dtype, device=device)

        return p1, p2

    @with_seed(torch_seed=0)
    @pytest.fixture(autouse=True)
    def get_large_input(self, device, dtype):
        N = 100
        B = 3
        M = 50

        p1 = torch.randint(0, 100, (B, N, 3), dtype=dtype, device=device)

        p2 = torch.randint(0, 100, (B, M, 3), dtype=dtype, device=device)
        return p1, p2

    @pytest.fixture(autouse=True)
    def target_grad_double(self, input_double_p1, input_double_p2):
        # If test_gradcheck passed, the gradient computed from torch.double inputs is trustworthy
        outputs = torch.sum(pc.sided_distance(input_double_p1, input_double_p2)[0])
        outputs.backward()
        return input_double_p1.grad.clone(), input_double_p2.grad.clone()

    @pytest.fixture(autouse=True)
    def target_grad_double_2(self, get_input):
        # If test_gradcheck passed, the gradient computed from torch.double inputs is trustworthy
        p1, p2 = get_input
        p1 = p1.detach()
        p2 = p2.detach()
        p1.requires_grad = True
        p2.requires_grad = True

        outputs = torch.sum(pc.sided_distance(p1, p2)[0])
        outputs.backward()
        return p1.grad.clone(), p2.grad.clone()

    @pytest.fixture(autouse=True)
    def target_grad_double_large(self, get_large_input):
        # If test_gradcheck passed, the gradient computed from torch.double inputs is trustworthy
        p1, p2 = get_large_input
        p1 = p1.detach()
        p2 = p2.detach()
        p1.requires_grad = True
        p2.requires_grad = True

        outputs = torch.sum(pc.sided_distance(p1, p2)[0])
        outputs.backward()
        return p1.grad.clone(), p2.grad.clone()

    def test_sided_distance(self, device, dtype, get_input, get_tol):
        p1, p2 = get_input
        output_p1, output_idx_p1 = pc.sided_distance(p1, p2)
        expected_p1 = torch.tensor([[12.3003, 41.1528], [57.0679, 62.9213]], device=device, dtype=dtype)
        expected_idx_p1 = torch.tensor([[0, 0], [1, 1]], device=device, dtype=torch.long)

        atol, rtol = get_tol
        assert torch.allclose(output_p1, expected_p1, atol=atol, rtol=rtol)
        assert torch.equal(output_idx_p1, expected_idx_p1)

    def test_sided_distance_large_input(self, device, dtype, get_large_input, get_tol):
        p1, p2 = get_large_input
        output_p1, output_idx_p1 = pc.sided_distance(p1, p2)

        expected_p1 = pc._sided_distance(p1, p2)

        atol, rtol = get_tol
        assert torch.allclose(output_p1, expected_p1, atol=atol, rtol=rtol)

    @with_seed(torch_seed=0)
    def test_directed_distance_batch_size(self, device, dtype):
        with pytest.raises(RuntimeError,
                           match=r"Expected tensor of size \[3, 3, 3\], but got tensor "
                                 r"of size \[2, 3, 3\] for argument #2 'p2' "
                                 r"\(while checking arguments for sided_distance_forward_cuda\)"):
            p1 = torch.randint(0, 10, (3, 2, 3), dtype=dtype, device=device)
            p2 = torch.randint(0, 10, (2, 3, 3), dtype=dtype, device=device)
            pc.sided_distance(p1, p2)

    @with_seed(torch_seed=0)
    def test_directed_distance_dims(self, device, dtype):
        with pytest.raises(RuntimeError,
                           match="Expected 3-dimensional tensor, but got "
                                 "4-dimensional tensor for argument #1 'p1' "
                                 r"\(while checking arguments for sided_distance_forward_cuda\)"):
            p1 = torch.randint(0, 10, (3, 2, 3, 4), dtype=dtype, device=device)
            p2 = torch.randint(0, 10, (2, 3, 3), dtype=dtype, device=device)
            pc.sided_distance(p1, p2)

        with pytest.raises(RuntimeError,
                           match=r"Expected tensor of size \[2, 2, 3\], but got "
                                 r"tensor of size \[2, 2, 2\] for argument #1 'p1' "
                                 r"\(while checking arguments for sided_distance_forward_cuda\)"):
            p1 = torch.randint(0, 10, (2, 2, 2), dtype=dtype, device=device)
            p2 = torch.randint(0, 10, (2, 3, 3), dtype=dtype, device=device)
            pc.sided_distance(p1, p2)

    def test_grad_check(self, device, dtype, input_double_p1, input_double_p2):
        if dtype != torch.double:
            pytest.skip("Gradient check only works in double.")

        input_points = (input_double_p1, input_double_p2)

        grad_result = torch.autograd.gradcheck(pc.sided_distance, input_points, eps=1e-6, atol=1e-6)

        assert grad_result

    def test_grad_check_2(self, device, dtype, get_input):
        # Test for gradient accumulation w.r.t. p2
        if dtype != torch.double:
            pytest.skip("Gradient check only works in double.")

        p1, p2 = get_input
        p1.requires_grad = True
        p2.requires_grad = True

        grad_result = torch.autograd.gradcheck(pc.sided_distance, (p1, p2), eps=1e-6, atol=1e-6)

        assert grad_result

    def test_grad_check_large(self, device, dtype, get_large_input):
        # Test for gradient accumulation w.r.t. p2
        if dtype != torch.double:
            pytest.skip("Gradient check only works in double.")

        p1, p2 = get_large_input
        p1.requires_grad = True
        p2.requires_grad = True

        grad_result = torch.autograd.gradcheck(pc.sided_distance, (p1, p2), eps=1e-6, atol=1e-6)

        assert grad_result

    def test_grad_check_other_type(self, device, dtype, input_double_p1, input_double_p2, target_grad_double):
        if dtype == torch.double:
            pytest.skip("Gradient check for double already tested.")

        p1 = input_double_p1.to(dtype).detach()
        p2 = input_double_p2.to(dtype).detach()
        p1.requires_grad = True
        p2.requires_grad = True

        output = pc.sided_distance(p1, p2)[0]
        torch.sum(output).backward()
        target_grad_p1, target_grad_p2 = target_grad_double
        target_grad_p1 = target_grad_p1.to(dtype)
        target_grad_p2 = target_grad_p2.to(dtype)

        assert torch.allclose(p1.grad, target_grad_p1, rtol=1e-2, atol=1e-2)
        assert torch.allclose(p2.grad, target_grad_p2, rtol=1e-2, atol=1e-2)

    def test_grad_check_other_type_2(self, device, dtype, get_input, target_grad_double_2):
        if dtype == torch.double:
            pytest.skip("Gradient check for double already tested.")

        p1, p2 = get_input
        p1.requires_grad = True
        p2.requires_grad = True

        output = pc.sided_distance(p1, p2)[0]
        torch.sum(output).backward()
        target_grad_p1, target_grad_p2 = target_grad_double_2
        target_grad_p1 = target_grad_p1.to(dtype)
        target_grad_p2 = target_grad_p2.to(dtype)

        assert torch.allclose(p1.grad, target_grad_p1, rtol=1e-2, atol=1e-2)
        assert torch.allclose(p2.grad, target_grad_p2, rtol=1e-2, atol=1e-2)

    def test_grad_check_other_type_large(self, device, dtype, get_large_input, target_grad_double_large):
        if dtype == torch.double:
            pytest.skip("Gradient check for double already tested.")

        p1, p2 = get_large_input
        p1.requires_grad = True
        p2.requires_grad = True

        output = pc.sided_distance(p1, p2)[0]
        torch.sum(output).backward()
        target_grad_p1, target_grad_p2 = target_grad_double_large
        target_grad_p1 = target_grad_p1.to(dtype)
        target_grad_p2 = target_grad_p2.to(dtype)

        assert torch.allclose(p1.grad, target_grad_p1, rtol=1e-2, atol=1e-2)
        assert torch.allclose(p2.grad, target_grad_p2, rtol=1e-2, atol=1e-2)


@pytest.mark.parametrize('dtype', FLOAT_DTYPES)
@pytest.mark.parametrize('device', ['cuda'])
class TestChamferDistance:
    @pytest.fixture(autouse=True)
    def tolerances(self, device, dtype):
        if dtype == torch.half:
            return 1e-3, 1e-3
        elif dtype == torch.float:
            return 1e-5, 1e-4
        elif dtype == torch.double:
            return 1e-6, 1e-5

    @pytest.fixture(autouse=True)
    def p1(self, device, dtype):
        return torch.tensor([[[8.8977, 4.1709, 1.2839],
                              [8.5640, 7.7767, 9.4214]],
                             [[0.5431, 6.4495, 11.4914],
                              [3.2126, 8.0865, 3.1018]]],
                            dtype=dtype, device=device)

    @pytest.fixture(autouse=True)
    def p2(self, device, dtype):
        return torch.tensor([[[6.9340, 6.1152, 3.4435],
                              [0.1032, 9.8181, 11.3350]],
                             [[11.4006, 2.2154, 7.9589],
                              [4.2586, 1.4133, 7.2606]]],
                            dtype=dtype, device=device)

    def test_chamfer_distance(self, device, dtype, p1, p2, tolerances):
        output = pc.chamfer_distance(p1, p2)

        expected = torch.tensor([72.5838, 151.0809], dtype=dtype, device=device)

        atol, rtol = tolerances
        assert torch.allclose(output, expected, atol=atol, rtol=rtol)

    def test_weighted_chamfer_distance(self, device, dtype, p1, p2, tolerances):
        output = pc.chamfer_distance(p1, p2, w1=1.3, w2=0.8)
        expected = torch.tensor([71.4303, 150.8620], dtype=dtype, device=device)

        atol, rtol = tolerances
        assert torch.allclose(output, expected, atol=atol, rtol=rtol)

    def test_chamfer_distance_not_squared(self, device, dtype, p1, p2, tolerances):
        output = pc.chamfer_distance(p1, p2, squared=False)
        expected = torch.tensor([11.1704, 17.1130], dtype=dtype, device=device)

        atol, rtol = tolerances
        assert torch.allclose(output, expected, atol=atol, rtol=rtol)

@pytest.mark.parametrize('dtype', FLOAT_DTYPES)
@pytest.mark.parametrize('device', ['cuda'])
class TestFScore:
    @pytest.fixture(autouse=True)
    def get_tol(self, device, dtype):
        if dtype == torch.half:
            return 1e-3, 1e-3
        elif dtype == torch.float:
            return 1e-5, 1e-4
        elif dtype == torch.double:
            return 1e-6, 1e-5

    def test_FScore(self, device, dtype, get_tol):
        gt_points = torch.tensor([[[8.8977, 4.1709, 1.2839],
                                   [8.5640, 7.7767, 9.4214]],
                                  [[0.5431, 6.4495, 11.4914],
                                   [3.2126, 8.0865, 3.1018]]], dtype=dtype, device=device)

        pred_points = torch.tensor([[[8.8914, 4.1788, 1.2176],
                                     [8.5291, 7.5513, 9.5412]],
                                    [[0.4010, 6.4602, 11.5183],
                                     [3.2977, 8.0325, 3.1180]]], dtype=dtype, device=device)
        output1 = pc.f_score(gt_points, pred_points, radius=0.2)
        output2 = pc.f_score(gt_points, pred_points, radius=0.12)

        expected1 = torch.tensor([0.5, 1], device=device, dtype=dtype)
        expected2 = torch.tensor([0.5, 0.5], device=device, dtype=dtype)

        atol, rtol = get_tol
        assert torch.allclose(output1, expected1, atol=atol, rtol=rtol)
        assert torch.allclose(output2, expected2, atol=atol, rtol=rtol)

    def test_FScore_heterogeneous(self, device, dtype, get_tol):
        gt_points = torch.tensor([[[8.8977, 4.1709, 1.2839],
                                   [8.5640, 7.7767, 9.4214]],
                                  [[0.5431, 6.4495, 11.4914],
                                   [3.2126, 8.0865, 3.1018]]], dtype=dtype, device=device)

        pred_points = torch.tensor([[[8.8914, 4.1788, 1.2176],
                                     [8.5291, 7.5513, 9.5412],
                                     [3.7831, 6.0182, 4.1208]],
                                    [[0.4010, 6.4602, 11.5183],
                                     [3.2977, 8.0325, 3.1180],
                                     [2.4987, 5.8763, 3.1987]]], dtype=dtype, device=device)
        output1 = pc.f_score(gt_points, pred_points, radius=0.2)
        output2 = pc.f_score(gt_points, pred_points, radius=0.12)

        expected1 = torch.tensor([0.4, 0.8], device=device, dtype=dtype)
        expected2 = torch.tensor([0.4, 0.4], device=device, dtype=dtype)

        atol, rtol = get_tol
        assert torch.allclose(output1, expected1, atol=atol, rtol=rtol)
        assert torch.allclose(output2, expected2, atol=atol, rtol=rtol)
52
tests/python/kaolin/metrics/test_render.py
Normal file
@@ -0,0 +1,52 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#    http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch
import random

from kaolin.metrics import render
from kaolin.utils.testing import FLOAT_TYPES

@pytest.mark.parametrize('device,dtype', FLOAT_TYPES)
class TestRender:

    @pytest.fixture(autouse=True)
    def lhs_mask(self, device, dtype):
        return torch.tensor([[[0., 0.2, 0.1, 1.],
                              [0.5, 0.5, 0.9, 0.9],
                              [0., 1., 1., 0.9],
                              [0.8, 0.7, 0.2, 0.1]],
                             [[1., 1., 1., 1.],
                              [1., 1., 1., 1.],
                              [1., 1., 1., 1.],
                              [1., 1., 1., 1.]]],
                            dtype=dtype, device=device)

    @pytest.fixture(autouse=True)
    def rhs_mask(self, device, dtype):
        return torch.tensor([[[0.1, 0.3, 0.3, 0.9],
                              [0.5, 0.5, 1., 0.3],
                              [0., 0.9, 0.9, 0.8],
                              [1., 1., 0., 0.]],
                             [[0.3, 0.6, 0.7, 0.7],
                              [0.8, 0.9, 0.9, 1.],
                              [1., 0.9, 0.9, 0.5],
                              [0.8, 0.7, 0.8, 0.5]]],
                            dtype=dtype, device=device)

    def test_mask_iou(self, lhs_mask, rhs_mask, device, dtype):
        loss = render.mask_iou(lhs_mask, rhs_mask)
        assert torch.allclose(loss, torch.tensor([0.3105],
                                                 dtype=dtype, device=device))
79
tests/python/kaolin/metrics/test_tetmesh.py
Normal file
@@ -0,0 +1,79 @@
# Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import torch

from kaolin.metrics import tetmesh


class TestTetMeshMetrics:

    def test_tetrahedron_volume(self):
        tetrahedrons = torch.tensor([[[[0.5000, 0.5000, 0.4500],
                                       [0.4500, 0.5000, 0.5000],
                                       [0.4750, 0.4500, 0.4500],
                                       [0.5000, 0.5000, 0.5000]]]])
        assert torch.allclose(tetmesh.tetrahedron_volume(tetrahedrons), torch.tensor([[-2.0833e-05]]))
|
||||
|
||||
def test_amips(self):
|
||||
tetrahedrons = torch.tensor([[[
|
||||
[1.7000, 2.3000, 4.4500],
|
||||
[3.4800, 0.2000, 5.3000],
|
||||
[4.9000, 9.4500, 6.4500],
|
||||
[6.2000, 8.5000, 7.1000]],
|
||||
[[-1.3750, 1.4500, 3.2500],
|
||||
[4.9000, 1.8000, 2.7000],
|
||||
[3.6000, 1.9000, 2.3000],
|
||||
[1.5500, 1.3500, 2.9000]]],
|
||||
[[[1.7000, 2.3000, 4.4500],
|
||||
[3.4800, 0.2000, 5.3000],
|
||||
[4.9000, 9.4500, 6.4500],
|
||||
[6.2000, 8.5000, 7.1000]],
|
||||
[[-1.3750, 1.4500, 3.2500],
|
||||
[4.9000, 1.8000, 2.7000],
|
||||
[3.6000, 1.9000, 2.3000],
|
||||
[1.5500, 1.3500, 2.9000]]]])
|
||||
inverse_offset_matrix = torch.tensor([[[[-1.1561, -1.1512, -1.9049],
|
||||
[1.5138, 1.0108, 3.4302],
|
||||
[1.6538, 1.0346, 4.2223]],
|
||||
[[2.9020, -1.0995, -1.8744],
|
||||
[1.1554, 1.1519, 1.7780],
|
||||
[-0.0766, 1.6350, 1.1064]]],
|
||||
[[[-0.9969, 1.4321, -0.3075],
|
||||
[-1.3414, 1.5795, -1.6571],
|
||||
[-0.1775, -0.4349, 1.1772]],
|
||||
[[-1.1077, -1.2441, 1.8037],
|
||||
[-0.5722, 0.1755, -2.4364],
|
||||
[-0.5263, 1.5765, 1.5607]]]])
|
||||
torch.allclose(tetmesh.amips(tetrahedrons, inverse_offset_matrix), torch.tensor([[13042.3408], [2376.2517]]))
|
||||
|
||||
def test_equivolume(self):
|
||||
tetrahedrons = torch.tensor([[[[0.5000, 0.5000, 0.7500],
|
||||
[0.4500, 0.8000, 0.6000],
|
||||
[0.4750, 0.4500, 0.2500],
|
||||
[0.5000, 0.3000, 0.3000]],
|
||||
[[0.4750, 0.4500, 0.2500],
|
||||
[0.5000, 0.9000, 0.3000],
|
||||
[0.4500, 0.4000, 0.9000],
|
||||
[0.4500, 0.4500, 0.7000]]],
|
||||
[[[0.7000, 0.3000, 0.4500],
|
||||
[0.4800, 0.2000, 0.3000],
|
||||
[0.9000, 0.4500, 0.4500],
|
||||
[0.2000, 0.5000, 0.1000]],
|
||||
[[0.3750, 0.4500, 0.2500],
|
||||
[0.9000, 0.8000, 0.7000],
|
||||
[0.6000, 0.9000, 0.3000],
|
||||
[0.5500, 0.3500, 0.9000]]]])
|
||||
assert torch.allclose(tetmesh.equivolume(tetrahedrons, pow=4), torch.tensor([[2.2898e-15], [2.9661e-10]]))
|
||||
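# Annotation (not part of the original tests): tetrahedron_volume returns a
# signed volume, i.e. det([v1 - v0, v2 - v0, v3 - v0]) / 6 up to vertex-ordering
# sign. For the sliver above the determinant has magnitude 1.25e-4, so
# |V| = 1.25e-4 / 6 ~= 2.0833e-05, matching the expected value; the negative
# sign just reflects the orientation of the listed vertices.
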
201
tests/python/kaolin/metrics/test_trianglemesh.py
Normal file
@@ -0,0 +1,201 @@
# Copyright (c) 2019,20-21 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch
import random

from kaolin.metrics import trianglemesh
from kaolin.ops.mesh import index_vertices_by_faces
from kaolin.utils.testing import FLOAT_TYPES, CUDA_FLOAT_TYPES

@pytest.mark.parametrize('device', ['cuda'])
@pytest.mark.parametrize('dtype', [torch.float, torch.double])
def test_unbatched_naive_triangle_distance(device, dtype):
    pointcloud = torch.tensor([[0., -1., -1.],
                               [1., -1., -1.],
                               [-1., -1., -1.],
                               [0., -1., 2.],
                               [1., -1., 2.],
                               [-1., -1., 2.],
                               [0., 2., 0.5],
                               [1., 2., 0.5],
                               [-1., 2., 0.5],
                               [0., -1., 0.5],
                               [1., -1., 0.5],
                               [-1., -1., 0.5],
                               [0., 1., 1.],
                               [1., 1., 1.],
                               [-1., 1., 1.],
                               [0., 1., 0.],
                               [1., 1., 0.],
                               [-1., 1., 0.],
                               [1., 0.5, 0.5],
                               [-1., 0.5, 0.5]],
                              device=device, dtype=dtype)

    vertices = torch.tensor([[0., 0., 0.],
                             [0., 0., 1.],
                             [0., 1., 0.5],
                             [0.5, 0., 0.],
                             [0.5, 0., 1.],
                             [0.5, 1., 0.5]],
                            device=device, dtype=dtype)

    faces = torch.tensor([[0, 1, 2], [3, 4, 5]], device=device, dtype=torch.long)

    face_vertices = index_vertices_by_faces(vertices.unsqueeze(0), faces)[0]

    expected_dist = torch.tensor(
        [2.0000, 2.2500, 3.0000, 2.0000, 2.2500, 3.0000, 1.0000, 1.2500, 2.0000,
         1.0000, 1.2500, 2.0000, 0.2000, 0.4500, 1.2000, 0.2000, 0.4500, 1.2000,
         0.2500, 1.0000], device=device, dtype=dtype)

    expected_face_idx = torch.tensor(
        [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0],
        device=device, dtype=torch.long)

    expected_dist_type = torch.tensor(
        [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 0, 0],
        device=device, dtype=torch.int)

    dist, face_idx, dist_type = trianglemesh._unbatched_naive_point_to_mesh_distance(
        pointcloud, face_vertices)

    assert torch.allclose(dist, expected_dist)
    assert torch.equal(face_idx, expected_face_idx)
    assert torch.equal(dist_type, expected_dist_type)

@pytest.mark.parametrize('num_points', [1025])
@pytest.mark.parametrize('num_faces', [1025])
@pytest.mark.parametrize('dtype', [torch.float, torch.double])
class TestUnbatchedTriangleDistanceCuda:
    @pytest.fixture(autouse=True)
    def pointcloud(self, num_points, dtype):
        return torch.randn((num_points, 3), device='cuda', dtype=dtype)

    @pytest.fixture(autouse=True)
    def face_vertices(self, num_faces, dtype):
        return torch.randn((num_faces, 3, 3), device='cuda', dtype=dtype)

    def test_face_vertices(self, pointcloud, face_vertices):
        dist, face_idx, dist_type = trianglemesh._UnbatchedTriangleDistanceCuda.apply(
            pointcloud, face_vertices)
        dist2, face_idx2, dist_type2 = trianglemesh._unbatched_naive_point_to_mesh_distance(
            pointcloud, face_vertices)
        assert torch.allclose(dist, dist2)
        assert torch.equal(face_idx, face_idx2)
        assert torch.equal(dist_type, dist_type2)

    def test_face_vertices_grad(self, pointcloud, face_vertices):
        pointcloud = pointcloud.detach()
        pointcloud.requires_grad = True
        face_vertices = face_vertices.detach()
        face_vertices.requires_grad = True
        pointcloud2 = pointcloud.detach()
        pointcloud2.requires_grad = True
        face_vertices2 = face_vertices.detach()
        face_vertices2.requires_grad = True
        dist, face_idx, dist_type = trianglemesh._UnbatchedTriangleDistanceCuda.apply(
            pointcloud, face_vertices)
        dist2, face_idx2, dist_type2 = trianglemesh._unbatched_naive_point_to_mesh_distance(
            pointcloud2, face_vertices2)
        grad_out = torch.rand_like(dist)
        dist.backward(grad_out)
        dist2.backward(grad_out)
        # Unused in the assertions below; kept from the original as a debugging aid.
        diff_idxs = torch.where(~torch.isclose(pointcloud.grad, pointcloud2.grad))
        assert torch.allclose(pointcloud.grad, pointcloud2.grad,
                              rtol=1e-5, atol=1e-5)
        assert torch.allclose(face_vertices.grad, face_vertices2.grad,
                              rtol=1e-5, atol=1e-5)

@pytest.mark.parametrize('device', ['cuda'])
@pytest.mark.parametrize('dtype', [torch.float, torch.double])
@pytest.mark.parametrize('batch_size', [1, 3])
@pytest.mark.parametrize('num_points', [11, 1025])
@pytest.mark.parametrize('num_faces', [11, 1025])
def test_triangle_distance(batch_size, num_points, num_faces, device, dtype):
    pointclouds = torch.randn((batch_size, num_points, 3), device=device,
                              dtype=dtype)
    face_vertices = torch.randn((batch_size, num_faces, 3, 3), device=device,
                                dtype=dtype)
    expected_dist = []
    expected_face_idx = []
    expected_dist_type = []

    for i in range(batch_size):
        _expected_dist, _expected_face_idx, _expected_dist_type = \
            trianglemesh._unbatched_naive_point_to_mesh_distance(
                pointclouds[i], face_vertices[i])
        expected_dist.append(_expected_dist)
        expected_face_idx.append(_expected_face_idx)
        expected_dist_type.append(_expected_dist_type)
    expected_dist = torch.stack(expected_dist, dim=0)
    expected_face_idx = torch.stack(expected_face_idx, dim=0)
    expected_dist_type = torch.stack(expected_dist_type, dim=0)
    dist, face_idx, dist_type = trianglemesh.point_to_mesh_distance(
        pointclouds, face_vertices)
    assert torch.allclose(dist, expected_dist)
    assert torch.equal(face_idx, expected_face_idx)
    assert torch.equal(dist_type, expected_dist_type)

@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
class TestEdgeLength:

    @pytest.fixture(autouse=True)
    def get_tol(self, device, dtype):
        if dtype == torch.half:
            return 1e-2, 1e-2
        elif dtype == torch.float:
            return 1e-5, 1e-4
        elif dtype == torch.double:
            return 1e-6, 1e-5

    def test_edge_length(self, device, dtype, get_tol):
        atol, rtol = get_tol
        vertices = torch.tensor([[[1, 0, 0],
                                  [0, 1, 0],
                                  [0, 0, 1]],

                                 [[3, 0, 0],
                                  [0, 4, 0],
                                  [0, 0, 5]]], dtype=dtype, device=device)

        faces = torch.tensor([[0, 1, 2]], dtype=torch.long, device=device)

        output = trianglemesh.average_edge_length(vertices, faces)
        expected = torch.tensor([[1.4142], [5.7447]], device=device, dtype=dtype)

        assert torch.allclose(output, expected, atol=atol, rtol=rtol)

@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
def test_laplacian_smooth(device, dtype):
    vertices = torch.tensor([[[0, 0, 1],
                              [2, 1, 2],
                              [3, 1, 2]],
                             [[3, 1, 2],
                              [0, 0, 3],
                              [0, 3, 3]]], dtype=dtype, device=device)
    faces = torch.tensor([[0, 1, 2]], dtype=torch.long, device=device)
    output = trianglemesh.uniform_laplacian_smoothing(vertices, faces)
    expected = torch.tensor([[[2.5000, 1.0000, 2.0000],
                              [1.5000, 0.5000, 1.5000],
                              [1.0000, 0.5000, 1.5000]],
                             [[0.0000, 1.5000, 3.0000],
                              [1.5000, 2.0000, 2.5000],
                              [1.5000, 0.5000, 2.5000]]], dtype=dtype, device=device)

    assert torch.equal(output, expected)

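# Annotation (not part of the original tests): the average_edge_length
# expectations check out by hand. First triangle (1,0,0)-(0,1,0)-(0,0,1): all
# three edges have length sqrt(2) ~= 1.4142. Second triangle
# (3,0,0)-(0,4,0)-(0,0,5): edges are 5, sqrt(41) ~= 6.4031 and
# sqrt(34) ~= 5.8310, whose mean is ~5.7447.
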
76
tests/python/kaolin/metrics/test_voxelgrid.py
Normal file
@@ -0,0 +1,76 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch

from kaolin.utils.testing import FLOAT_DTYPES, ALL_DEVICES
from kaolin.metrics import voxelgrid as vg_metrics

@pytest.mark.parametrize('dtype', FLOAT_DTYPES)
@pytest.mark.parametrize('device', ALL_DEVICES)
class TestIoU:
    def test_handmade_input(self, device, dtype):
        pred = torch.tensor([[[[0, 1, 1],
                               [1, 0, 0],
                               [1, 0, 0]],

                              [[0, 0, 1],
                               [0, 1, 1],
                               [1, 1, 1]],

                              [[1, 0, 0],
                               [0, 1, 1],
                               [0, 0, 1]]],

                             [[[1, 0, 0],
                               [1, 0, 1],
                               [1, 0, 0]],

                              [[1, 0, 0],
                               [0, 1, 0],
                               [0, 1, 0]],

                              [[0, 1, 0],
                               [0, 1, 1],
                               [0, 0, 1]]]], dtype=dtype, device=device)

        gt = torch.tensor([[[[0, 0, 0],
                             [0, 0, 1],
                             [1, 0, 1]],

                            [[1, 1, 1],
                             [0, 1, 1],
                             [1, 1, 1]],

                            [[1, 0, 0],
                             [1, 1, 0],
                             [0, 1, 0]]],

                           [[[1, 0, 1],
                             [0, 1, 1],
                             [1, 0, 1]],

                            [[0, 1, 0],
                             [1, 1, 1],
                             [0, 0, 1]],

                            [[1, 0, 0],
                             [1, 0, 0],
                             [1, 1, 1]]]], dtype=dtype, device=device)

        expected = torch.tensor((0.4500, 0.2273), device=device, dtype=torch.float)
        output = vg_metrics.iou(pred, gt)

        assert torch.allclose(expected, output, atol=1e-4)

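# Annotation (not part of the original test): with binary occupancies,
# iou = |pred AND gt| / |pred OR gt| per batch element. The first pair of grids
# above overlaps on 9 voxels out of a union of 20 (14 + 15 - 9), giving 0.4500;
# the second pair overlaps on 5 of 22 (11 + 16 - 5), giving ~0.2273.
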
295
tests/python/kaolin/non_commercial/flexicubes/test_flexicubes.py
Normal file
@@ -0,0 +1,295 @@
# Copyright (c) 2023 YOUR_ORGANIZATION_NAME.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest

import torch
from kaolin.non_commercial import FlexiCubes


def cube_sdf(x_nx3):
    sdf_values = 0.5 - torch.abs(x_nx3)
    sdf_values = torch.clamp(sdf_values, min=0.0)
    sdf_values = sdf_values[:, 0] * sdf_values[:, 1] * sdf_values[:, 2]
    sdf_values = -1.0 * sdf_values

    return sdf_values.view(-1)


def cube_sdf_gradient(x_nx3):
    gradients = []
    for i in range(x_nx3.shape[0]):
        x, y, z = x_nx3[i]
        grad_x, grad_y, grad_z = 0, 0, 0

        max_val = max(abs(x) - 0.5, abs(y) - 0.5, abs(z) - 0.5)

        if max_val == abs(x) - 0.5:
            grad_x = 1.0 if x > 0 else -1.0
        if max_val == abs(y) - 0.5:
            grad_y = 1.0 if y > 0 else -1.0
        if max_val == abs(z) - 0.5:
            grad_z = 1.0 if z > 0 else -1.0

        gradients.append(torch.tensor([grad_x, grad_y, grad_z]))

    return torch.stack(gradients).to(x_nx3.device)

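# Annotation (not part of the original file): cube_sdf is not a metric SDF.
# Each component of 0.5 - |x| is clamped to >= 0 before the product, so the
# value is strictly negative only inside the axis-aligned cube [-0.5, 0.5]^3
# and exactly zero everywhere on or outside its boundary, which is all the
# extraction needs to locate sign changes. cube_sdf_gradient likewise returns
# the outward normal of the dominant cube face rather than a true gradient.
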
@pytest.mark.parametrize('device', ['cpu', 'cuda'])
class TestFlexiCubes:

    @pytest.fixture(autouse=True)
    def input_data(self, device):
        sdf_n = torch.tensor([0.6660, 0.5500, 0.5071, 0.5500, 0.6660, 0.5500, 0.4124, 0.3590,
                              0.4124, 0.5500, 0.5071, 0.3590, 0.3000, 0.3590, 0.5071, 0.5500,
                              0.4124, 0.3590, 0.4124, 0.5500, 0.6660, 0.5500, 0.5071, 0.5500,
                              0.6660, 0.5500, 0.4124, 0.3590, 0.4124, 0.5500, 0.4124, 0.2330,
                              0.1536, 0.2330, 0.4124, 0.3590, 0.1536, 0.0500, 0.1536, 0.3590,
                              0.4124, 0.2330, 0.1536, 0.2330, 0.4124, 0.5500, 0.4124, 0.3590,
                              0.4124, 0.5500, 0.5071, 0.3590, 0.3000, 0.3590, 0.5071, 0.3590,
                              0.1536, 0.0500, 0.1536, 0.3590, 0.3000, 0.0500, -0.2000, 0.0500,
                              0.3000, 0.3590, 0.1536, 0.0500, 0.1536, 0.3590, 0.5071, 0.3590,
                              0.3000, 0.3590, 0.5071, 0.5500, 0.4124, 0.3590, 0.4124, 0.5500,
                              0.4124, 0.2330, 0.1536, 0.2330, 0.4124, 0.3590, 0.1536, 0.0500,
                              0.1536, 0.3590, 0.4124, 0.2330, 0.1536, 0.2330, 0.4124, 0.5500,
                              0.4124, 0.3590, 0.4124, 0.5500, 0.6660, 0.5500, 0.5071, 0.5500,
                              0.6660, 0.5500, 0.4124, 0.3590, 0.4124, 0.5500, 0.5071, 0.3590,
                              0.3000, 0.3590, 0.5071, 0.5500, 0.4124, 0.3590, 0.4124, 0.5500,
                              0.6660, 0.5500, 0.5071, 0.5500, 0.6660],
                             dtype=torch.float,
                             device=device)
        return sdf_n

    @pytest.fixture(autouse=True)
    def expected_trimesh_output(self, device):
        expected_vertices = torch.tensor([[-0.0667, -0.0667, -0.0667],
                                          [-0.0667, -0.0667, 0.0667],
                                          [-0.0667, 0.0667, -0.0667],
                                          [-0.0667, 0.0667, 0.0667],
                                          [0.0667, -0.0667, -0.0667],
                                          [0.0667, -0.0667, 0.0667],
                                          [0.0667, 0.0667, -0.0667],
                                          [0.0667, 0.0667, 0.0667]],
                                         dtype=torch.float,
                                         device=device)

        expected_faces = torch.tensor([[0, 1, 2],
                                       [2, 1, 3],
                                       [0, 2, 4],
                                       [4, 2, 6],
                                       [2, 3, 6],
                                       [6, 3, 7],
                                       [4, 5, 0],
                                       [0, 5, 1],
                                       [5, 7, 1],
                                       [1, 7, 3],
                                       [6, 7, 4],
                                       [4, 7, 5]],
                                      dtype=torch.long,
                                      device=device)
        return expected_vertices, expected_faces

    @pytest.fixture(autouse=True)
    def expected_tetmesh_output(self, device):
        expected_vertices = torch.tensor([[-0.0667, -0.0667, -0.0667],
                                          [-0.0667, -0.0667, 0.0667],
                                          [-0.0667, 0.0667, -0.0667],
                                          [-0.0667, 0.0667, 0.0667],
                                          [0.0667, -0.0667, -0.0667],
                                          [0.0667, -0.0667, 0.0667],
                                          [0.0667, 0.0667, -0.0667],
                                          [0.0667, 0.0667, 0.0667],
                                          [0.0000, 0.0000, 0.0000]],
                                         dtype=torch.float,
                                         device=device)

        expected_tets = torch.tensor([[0, 1, 2, 8],
                                      [2, 1, 3, 8],
                                      [0, 2, 4, 8],
                                      [4, 2, 6, 8],
                                      [2, 3, 6, 8],
                                      [6, 3, 7, 8],
                                      [4, 5, 0, 8],
                                      [0, 5, 1, 8],
                                      [5, 7, 1, 8],
                                      [1, 7, 3, 8],
                                      [6, 7, 4, 8],
                                      [4, 7, 5, 8]],
                                     dtype=torch.long,
                                     device=device)
        return expected_vertices, expected_tets

    @pytest.fixture(autouse=True)
    def expected_grid(self, device):
        expected_x_nx3 = torch.tensor([[-0.5000, -0.5000, -0.5000],
                                       [-0.5000, -0.5000, 0.0000],
                                       [-0.5000, -0.5000, 0.5000],
                                       [-0.5000, 0.0000, -0.5000],
                                       [-0.5000, 0.0000, 0.0000],
                                       [-0.5000, 0.0000, 0.5000],
                                       [-0.5000, 0.5000, -0.5000],
                                       [-0.5000, 0.5000, 0.0000],
                                       [-0.5000, 0.5000, 0.5000],
                                       [0.0000, -0.5000, -0.5000],
                                       [0.0000, -0.5000, 0.0000],
                                       [0.0000, -0.5000, 0.5000],
                                       [0.0000, 0.0000, -0.5000],
                                       [0.0000, 0.0000, 0.0000],
                                       [0.0000, 0.0000, 0.5000],
                                       [0.0000, 0.5000, -0.5000],
                                       [0.0000, 0.5000, 0.0000],
                                       [0.0000, 0.5000, 0.5000],
                                       [0.5000, -0.5000, -0.5000],
                                       [0.5000, -0.5000, 0.0000],
                                       [0.5000, -0.5000, 0.5000],
                                       [0.5000, 0.0000, -0.5000],
                                       [0.5000, 0.0000, 0.0000],
                                       [0.5000, 0.0000, 0.5000],
                                       [0.5000, 0.5000, -0.5000],
                                       [0.5000, 0.5000, 0.0000],
                                       [0.5000, 0.5000, 0.5000]],
                                      dtype=torch.float,
                                      device=device)

        expected_cube_fx8 = torch.tensor([[0, 9, 3, 12, 1, 10, 4, 13],
                                          [1, 10, 4, 13, 2, 11, 5, 14],
                                          [3, 12, 6, 15, 4, 13, 7, 16],
                                          [4, 13, 7, 16, 5, 14, 8, 17],
                                          [9, 18, 12, 21, 10, 19, 13, 22],
                                          [10, 19, 13, 22, 11, 20, 14, 23],
                                          [12, 21, 15, 24, 13, 22, 16, 25],
                                          [13, 22, 16, 25, 14, 23, 17, 26]],
                                         dtype=torch.long,
                                         device=device)
        return expected_x_nx3, expected_cube_fx8

    @pytest.fixture(autouse=True)
    def expected_qef_vertices(self, device):
        return torch.tensor([[-0.5, -0.5, -0.5],
                             [-0.5, -0.5, 0.0],
                             [-0.5, -0.5, 0.5],
                             [-0.5, 0.0, -0.5],
                             [-0.5, 0.0, 0.0],
                             [-0.5, 0.0, 0.5],
                             [-0.5, 0.5, -0.5],
                             [-0.5, 0.5, 0.0],
                             [-0.5, 0.5, 0.5],
                             [0.0, -0.5, -0.5],
                             [0.0, -0.5, 0.0],
                             [0.0, -0.5, 0.5],
                             [0.0, 0.0, -0.5],
                             [0.0, 0.0, 0.5],
                             [0.0, 0.5, -0.5],
                             [0.0, 0.5, 0.0],
                             [0.0, 0.5, 0.5],
                             [0.5, -0.5, -0.5],
                             [0.5, -0.5, 0.0],
                             [0.5, -0.5, 0.5],
                             [0.5, 0.0, -0.5],
                             [0.5, 0.0, 0.0],
                             [0.5, 0.0, 0.5],
                             [0.5, 0.5, -0.5],
                             [0.5, 0.5, 0.0],
                             [0.5, 0.5, 0.5]],
                            dtype=torch.float,
                            device=device)

    @pytest.fixture(autouse=True)
    def expected_qef_possible_tri(self, device):
        quad = torch.tensor([
            [3, 4, 1, 0],
            [4, 5, 2, 1],
            [6, 7, 4, 3],
            [7, 8, 5, 4],
            [9, 12, 3, 0],
            [9, 10, 1, 0],
            [10, 11, 2, 1],
            [11, 13, 5, 2],
            [12, 14, 6, 3],
            [13, 16, 8, 5],
            [14, 15, 7, 6],
            [15, 16, 8, 7],
            [17, 20, 12, 9],
            [17, 18, 10, 9],
            [20, 21, 18, 17],
            [18, 19, 11, 10],
            [19, 22, 13, 11],
            [21, 22, 19, 18],
            [20, 23, 14, 12],
            [23, 24, 21, 20],
            [22, 25, 16, 13],
            [24, 25, 22, 21],
            [23, 24, 15, 14],
            [24, 25, 16, 15]
        ], dtype=torch.long, device=device)
        tri_00 = torch.sort(quad[:, [0, 1, 2]], dim=1)[0]
        tri_01 = torch.sort(quad[:, [0, 2, 3]], dim=1)[0]
        tri_10 = torch.sort(quad[:, [0, 1, 3]], dim=1)[0]
        tri_11 = torch.sort(quad[:, [1, 2, 3]], dim=1)[0]
        return tri_00, tri_01, tri_10, tri_11

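    # Annotation (not part of the original file): the four tri_* fixtures above
    # enumerate both triangulations of each expected quad. A quad (a, b, c, d)
    # can be split along either diagonal, giving (a, b, c) + (a, c, d) or
    # (a, b, d) + (b, c, d); test_qef_extraction_grad_func below accepts
    # whichever split the extraction happened to choose, per quad.
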
    def test_grid_construction(self, expected_grid, device):
        fc = FlexiCubes(device)
        x_nx3, cube_fx8 = fc.construct_voxel_grid(2)
        assert torch.allclose(x_nx3, expected_grid[0], atol=1e-4)
        assert torch.equal(cube_fx8, expected_grid[1])

    def test_trimesh_extraction(self, input_data, expected_trimesh_output, device):
        fc = FlexiCubes(device)
        x_nx3, cube_fx8 = fc.construct_voxel_grid(4)
        output = fc(x_nx3, input_data, cube_fx8, 4)

        assert torch.allclose(output[0], expected_trimesh_output[0], atol=1e-4)
        assert torch.equal(output[1], expected_trimesh_output[1])

    def test_tetmesh_extraction(self, input_data, expected_tetmesh_output, device):
        fc = FlexiCubes(device)
        x_nx3, cube_fx8 = fc.construct_voxel_grid(4)
        output = fc(x_nx3, input_data, cube_fx8, 4, output_tetmesh=True)

        assert torch.allclose(output[0], expected_tetmesh_output[0], atol=1e-4)
        assert torch.equal(output[1], expected_tetmesh_output[1])

    def test_qef_extraction_grad_func(self, expected_qef_vertices,
                                      expected_qef_possible_tri, device):
        fc = FlexiCubes(device)
        x_nx3, cube_fx8 = fc.construct_voxel_grid(3)
        sdf_n = cube_sdf(x_nx3)
        output = fc(x_nx3, sdf_n, cube_fx8, 3, grad_func=cube_sdf_gradient)

        assert torch.allclose(output[0], expected_qef_vertices, atol=1e-4)
        # There are many possible triangulations
        tri_00, tri_01, tri_10, tri_11 = expected_qef_possible_tri
        sorted_tri_mesh = torch.sort(output[1], dim=1)[0]
        has_tri_00 = torch.any(torch.all(
            tri_00.reshape(1, -1, 3) == sorted_tri_mesh.reshape(-1, 1, 3), dim=-1), dim=0)
        has_tri_01 = torch.any(torch.all(
            tri_01.reshape(1, -1, 3) == sorted_tri_mesh.reshape(-1, 1, 3), dim=-1), dim=0)
        has_tri_10 = torch.any(torch.all(
            tri_10.reshape(1, -1, 3) == sorted_tri_mesh.reshape(-1, 1, 3), dim=-1), dim=0)
        has_tri_11 = torch.any(torch.all(
            tri_11.reshape(1, -1, 3) == sorted_tri_mesh.reshape(-1, 1, 3), dim=-1), dim=0)
        has_tri_0 = torch.logical_and(has_tri_00, has_tri_01)
        has_tri_1 = torch.logical_and(has_tri_10, has_tri_11)
        has_tri = torch.logical_or(has_tri_0, has_tri_1)
        has_all_tri = torch.all(has_tri)
        # The original computed has_all_tri without asserting it; the assert was missing.
        assert has_all_tri
        reconstructed_mesh = torch.cat([
            tri_00[has_tri_00], tri_01[has_tri_01],
            tri_10[has_tri_10], tri_11[has_tri_11]
        ], dim=0)
        assert reconstructed_mesh.shape[0] == tri_00.shape[0] * 2
        assert torch.unique(sorted_tri_mesh, dim=0).shape == sorted_tri_mesh.shape
        assert torch.all(torch.unique(reconstructed_mesh, dim=0) == torch.unique(sorted_tri_mesh, dim=0))

0
tests/python/kaolin/ops/conversions/__init__.py
Normal file
297
tests/python/kaolin/ops/conversions/test_pointcloud.py
Normal file
@@ -0,0 +1,297 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest

import torch

from kaolin.ops.conversions import pointclouds_to_voxelgrids, unbatched_pointcloud_to_spc
from kaolin.utils.testing import FLOAT_TYPES, BOOL_DTYPES, INT_DTYPES, FLOAT_DTYPES, ALL_DTYPES, check_spc_octrees

@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
class TestPointcloudToVoxelgrid:

    def test_pointclouds_to_voxelgrids(self, device, dtype):
        pointclouds = torch.tensor([[[0, 0, 0],
                                     [1, 1, 1],
                                     [2, 2, 2],
                                     [0, 2, 2]],

                                    [[0, 1, 2],
                                     [2, 0, 0],
                                     [1, 2, 0],
                                     [1, 1, 2]]], device=device, dtype=dtype)

        expected_vg = torch.tensor([[[[1., 0., 0.],
                                      [0., 0., 0.],
                                      [0., 0., 1.]],

                                     [[0., 0., 0.],
                                      [0., 1., 0.],
                                      [0., 0., 0.]],

                                     [[0., 0., 0.],
                                      [0., 0., 0.],
                                      [0., 0., 1.]]],

                                    [[[0., 0., 0.],
                                      [0., 0., 1.],
                                      [0., 0., 0.]],

                                     [[0., 0., 0.],
                                      [0., 0., 1.],
                                      [1., 0., 0.]],

                                     [[1., 0., 0.],
                                      [0., 0., 0.],
                                      [0., 0., 0.]]]], device=device, dtype=dtype)

        output_vg = pointclouds_to_voxelgrids(pointclouds, 3)

        assert torch.equal(output_vg, expected_vg)

    def test_pointclouds_to_voxelgrids_origin(self, device, dtype):
        pointclouds = torch.tensor([[[0, 0, 0],
                                     [1, 1, 1],
                                     [2, 2, 2],
                                     [0, 2, 2]],

                                    [[0, 1, 2],
                                     [2, 0, 0],
                                     [1, 2, 0],
                                     [1, 1, 2]]], device=device, dtype=dtype)

        expected_vg = torch.tensor([[[[1., 0., 0.],
                                      [0., 0., 0.],
                                      [0., 0., 0.]],

                                     [[0., 0., 0.],
                                      [0., 0., 0.],
                                      [0., 0., 0.]],

                                     [[0., 0., 0.],
                                      [0., 0., 0.],
                                      [0., 0., 1.]]],

                                    [[[0., 0., 1.],
                                      [0., 0., 0.],
                                      [0., 0., 0.]],

                                     [[0., 0., 0.],
                                      [0., 0., 0.],
                                      [0., 0., 0.]],

                                     [[0., 0., 0.],
                                      [0., 0., 0.],
                                      [0., 0., 0.]]]], device=device, dtype=dtype)

        output_vg = pointclouds_to_voxelgrids(pointclouds, 3, origin=torch.ones((2, 3), device=device, dtype=dtype))

        assert torch.equal(output_vg, expected_vg)

    def test_pointclouds_to_voxelgrids_scale(self, device, dtype):
        pointclouds = torch.tensor([[[0, 0, 0],
                                     [1, 1, 1],
                                     [2, 2, 2],
                                     [0, 2, 2]],

                                    [[0, 1, 2],
                                     [2, 0, 0],
                                     [1, 2, 0],
                                     [1, 1, 2]]], device=device, dtype=dtype)

        expected_vg = torch.tensor([[[[1., 0., 0.],
                                      [0., 1., 0.],
                                      [0., 0., 0.]],

                                     [[0., 0., 0.],
                                      [0., 1., 0.],
                                      [0., 0., 0.]],

                                     [[0., 0., 0.],
                                      [0., 0., 0.],
                                      [0., 0., 0.]]],

                                    [[[0., 1., 0.],
                                      [1., 0., 0.],
                                      [0., 0., 0.]],

                                     [[1., 0., 0.],
                                      [0., 0., 0.],
                                      [0., 0., 0.]],

                                     [[0., 0., 0.],
                                      [0., 0., 0.],
                                      [0., 0., 0.]]]], device=device, dtype=dtype)

        output_vg = pointclouds_to_voxelgrids(pointclouds, 3, scale=torch.ones((2), device=device, dtype=dtype) * 4)

        assert torch.equal(output_vg, expected_vg)


@pytest.mark.parametrize('device', ['cuda'])
@pytest.mark.parametrize('level', list(range(1, 6)))
class TestUnbatchedPointcloudToSpc:

    @pytest.fixture
    def pointcloud(self, device):
        return torch.tensor([[-1, -1, -1],
                             [-1, -1, 0],
                             [0, -1, -1],
                             [-1, 0, -1],
                             [0, 0, 0],
                             [1, 1, 1],
                             [0.999, 0.999, 0.999]], device=device)

    @pytest.fixture
    def typed_pointcloud(self, pointcloud, dtype):
        return pointcloud.to(dtype)

    @pytest.fixture(autouse=True)
    def expected_octree(self, device, level):
        level_cutoff_mapping = [1, 6, 12, 18, 24]
        level_cutoff = level_cutoff_mapping[level - 1]
        full_octree = torch.tensor([151,
                                    1, 1, 1, 1, 129,
                                    1, 1, 1, 1, 1, 128,
                                    1, 1, 1, 1, 1, 128,
                                    1, 1, 1, 1, 1, 128], device=device)
        expected_octree = full_octree[:level_cutoff]
        return expected_octree.byte()

    @pytest.fixture(autouse=True)
    def bool_features(self):
        def _bool_features(device, booltype):
            return torch.tensor([[0],
                                 [1],
                                 [1],
                                 [1],
                                 [0],
                                 [1],
                                 [1]], device=device).to(booltype)
        return _bool_features

    @pytest.fixture(autouse=True)
    def expected_bool_features(self):
        def _expected_bool_features(device, booltype, level):
            if level == 1:
                return torch.tensor([[0],
                                     [1],
                                     [1],
                                     [1],
                                     [1]], device=device).to(booltype)
            else:
                return torch.tensor([[0],
                                     [1],
                                     [1],
                                     [1],
                                     [0],
                                     [1]], device=device).to(booltype)
        return _expected_bool_features

    @pytest.fixture(autouse=True)
    def int_features(self):
        def _int_features(device, inttype):
            return torch.tensor([[1],
                                 [4],
                                 [7],
                                 [10],
                                 [20],
                                 [37],
                                 [1]], device=device).to(inttype)
        return _int_features

    @pytest.fixture(autouse=True)
    def expected_int_features(self):
        def _expected_int_features(device, inttype, level):
            if level == 1:
                return torch.tensor([[1],
                                     [4],
                                     [10],
                                     [7],
                                     [19]], device=device).to(inttype)
            else:
                return torch.tensor([[1],
                                     [4],
                                     [10],
                                     [7],
                                     [20],
                                     [19]], device=device).to(inttype)
        return _expected_int_features

    @pytest.fixture(autouse=True)
    def fp_features(self):
        def _fp_features(device, fptype):
            return torch.tensor([[1, 2, 3],
                                 [4, 5, 6],
                                 [7, 8, 9],
                                 [10, 10, 10],
                                 [20, 20, 20],
                                 [37, 37, 37],
                                 [1, 2, 3]], device=device).to(fptype)
        return _fp_features

    @pytest.fixture(autouse=True)
    def expected_fp_features(self):
        def _expected_fp_features(device, fptype, level):
            if level == 1:
                return torch.tensor([[1, 2, 3],
                                     [4, 5, 6],
                                     [10, 10, 10],
                                     [7, 8, 9],
                                     [58/3, 59/3, 60/3]], device=device).to(fptype)
            else:
                return torch.tensor([[1, 2, 3],
                                     [4, 5, 6],
                                     [10, 10, 10],
                                     [7, 8, 9],
                                     [20, 20, 20],
                                     [19, 19.5, 20]], device=device).to(fptype)
        return _expected_fp_features

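    # Annotation (not part of the original file): the expected_* fixtures above
    # encode how features are aggregated when several input points quantize to
    # the same SPC cell. At level 1, [20, 20, 20], [37, 37, 37] and [1, 2, 3]
    # share a cell and are mean-pooled to (58/3, 59/3, 60/3); at deeper levels
    # only the two points near (1, 1, 1) still collide, averaging to
    # (19, 19.5, 20). The integer fixtures appear to follow the same average
    # cast back to the integer dtype (e.g. 19), though that is a reading of the
    # fixtures, not of the implementation.
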
    @pytest.mark.parametrize('dtype', FLOAT_DTYPES)
    def test_unbatched_pointcloud_to_spc(self, typed_pointcloud, level, expected_octree):
        output_spc = unbatched_pointcloud_to_spc(typed_pointcloud, level)
        assert check_spc_octrees(output_spc.octrees, output_spc.lengths,
                                 batch_size=output_spc.batch_size,
                                 level=level,
                                 device=typed_pointcloud.device.type)
        assert torch.equal(output_spc.octrees, expected_octree)

    @pytest.mark.parametrize('booltype', BOOL_DTYPES)
    def test_unbatched_pointcloud_to_spc_with_bool_features(self, pointcloud, device, booltype, level,
                                                            bool_features, expected_bool_features):
        features_arg = bool_features(device, booltype)
        expected_features_arg = expected_bool_features(device, booltype, level)
        output_spc = unbatched_pointcloud_to_spc(pointcloud, level, features_arg)
        assert torch.equal(output_spc.features, expected_features_arg)

    @pytest.mark.parametrize('inttype', INT_DTYPES)
    def test_unbatched_pointcloud_to_spc_with_int_features(self, pointcloud, device, inttype, level,
                                                           int_features, expected_int_features):
        features_arg = int_features(device, inttype)
        expected_features_arg = expected_int_features(device, inttype, level)
        output_spc = unbatched_pointcloud_to_spc(pointcloud, level, features_arg)
        assert torch.equal(output_spc.features, expected_features_arg)

    @pytest.mark.parametrize('fptype', FLOAT_DTYPES)
    def test_unbatched_pointcloud_to_spc_with_fp_features(self, pointcloud, device, fptype, level,
                                                          fp_features, expected_fp_features):
        features_arg = fp_features(device, fptype)
        expected_features_arg = expected_fp_features(device, fptype, level)
        output_spc = unbatched_pointcloud_to_spc(pointcloud, level, features_arg)
        assert torch.allclose(output_spc.features, expected_features_arg)

94
tests/python/kaolin/ops/conversions/test_sdf.py
Normal file
@@ -0,0 +1,94 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import sys

import torch

from kaolin.ops.conversions import sdf


class TestSdfToVoxelgrids:

    def sphere(self, points, center=0, radius=0.5):
        return torch.sum((points - center) ** 2, 1) ** 0.5 - radius

    def two_spheres(self, points):
        dis1 = self.sphere(points, 0.1, 0.4)
        dis2 = self.sphere(points, -0.1, 0.4)
        dis = torch.zeros_like(dis1)
        # Outside both spheres: keep the smaller positive distance.
        mask = (dis1 > 0) & (dis2 > 0)
        dis[mask] = torch.min(dis1[mask], dis2[mask])
        # Covered by exactly one sphere: negative, i.e. treated as interior.
        mask = (dis1 < 0) ^ (dis2 < 0)
        dis[mask] = torch.max(-torch.abs(dis1[mask]), -torch.abs(dis2[mask]))
        # Covered by both spheres: positive, i.e. treated as exterior.
        mask = (dis1 < 0) & (dis2 < 0)
        dis[mask] = torch.min(torch.abs(dis1[mask]), torch.abs(dis2[mask]))
        return dis

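    # Annotation (not part of the original file): note the sign convention above.
    # Points covered by exactly one sphere get a negative value and points
    # covered by both get a positive one, so the implicit shape two_spheres
    # describes is the symmetric difference (XOR) of the two spheres, not their
    # union, and the magnitudes are only rough distances. The naive voxelizer
    # below just tests `sdf(points) <= 0`, so only the sign matters here.
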
    def sdf_to_voxelgrids_naive(self, sdf, res):
        outputs = []
        for i_batch in range(len(sdf)):
            output = torch.ones((res, res, res))
            grid_pts = torch.nonzero(output).float() / (res - 1) - 0.5
            outputs.append((sdf[i_batch](grid_pts) <= 0).float().reshape(output.shape))
        return torch.stack(outputs)

    def test_sdf_type(self):
        with pytest.raises(TypeError,
                           match=r"Expected sdf to be list "
                                 r"but got <class 'int'>."):
            sdf.sdf_to_voxelgrids(0)

    def test_each_sdf_type(self):
        with pytest.raises(TypeError,
                           match=r"Expected sdf\[0\] to be callable "
                                 r"but got <class 'int'>."):
            sdf.sdf_to_voxelgrids([0])

    def test_bbox_center_type(self):
        with pytest.raises(TypeError,
                           match=r"Expected bbox_center to be int or float "
                                 r"but got <class 'str'>."):
            sdf.sdf_to_voxelgrids([self.sphere], bbox_center=' ')

    def test_bbox_dim_type(self):
        with pytest.raises(TypeError,
                           match=r"Expected bbox_dim to be int or float "
                                 r"but got <class 'str'>."):
            sdf.sdf_to_voxelgrids([self.sphere], bbox_dim=' ')

    def test_init_res_type(self):
        with pytest.raises(TypeError,
                           match=r"Expected init_res to be int "
                                 r"but got <class 'float'>."):
            sdf.sdf_to_voxelgrids([self.sphere], init_res=0.5)

    def test_upsampling_steps_type(self):
        with pytest.raises(TypeError,
                           match=r"Expected upsampling_steps to be int "
                                 r"but got <class 'float'>."):
            sdf.sdf_to_voxelgrids([self.sphere], upsampling_steps=0.5)

    @pytest.mark.parametrize('init_res', [4, 8, 32])
    @pytest.mark.parametrize('upsampling_steps', [0, 2, 4])
    def test_sphere(self, init_res, upsampling_steps):
        final_res = init_res * 2 ** upsampling_steps + 1
        assert torch.equal(
            sdf.sdf_to_voxelgrids([self.sphere], init_res=init_res, upsampling_steps=upsampling_steps),
            self.sdf_to_voxelgrids_naive([self.sphere], final_res))

    @pytest.mark.parametrize('init_res', [4, 8, 32])
    @pytest.mark.parametrize('upsampling_steps', [0, 2, 4])
    def test_two_spheres(self, init_res, upsampling_steps):
        final_res = init_res * 2 ** upsampling_steps + 1
        assert torch.equal(
            sdf.sdf_to_voxelgrids([self.two_spheres], init_res=init_res, upsampling_steps=upsampling_steps),
            self.sdf_to_voxelgrids_naive([self.two_spheres], final_res))

153
tests/python/kaolin/ops/conversions/test_tetmesh.py
Normal file
@@ -0,0 +1,153 @@
# Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest

import torch
from kaolin.ops.conversions import tetmesh as tm


@pytest.mark.parametrize('device', ['cpu', 'cuda'])
class TestMarchingTetrahedra:

    @pytest.fixture(autouse=True)
    def vertices(self, device):
        vertices = torch.tensor([[-1., -1., -1.],
                                 [1., -1., -1.],
                                 [-1., 1., -1.],
                                 [1., 1., -1.],
                                 [-1., -1., 1.],
                                 [1., -1., 1.],
                                 [-1., 1., 1.],
                                 [1., 1., 1.]],
                                dtype=torch.float,
                                device=device).unsqueeze(0).expand(4, -1, -1)
        return vertices

    @pytest.fixture(autouse=True)
    def tets(self, device):
        tets = torch.tensor([[0, 1, 3, 5],
                             [4, 5, 0, 6],
                             [0, 3, 2, 6],
                             [5, 3, 6, 7],
                             [0, 5, 3, 6]],
                            dtype=torch.long,
                            device=device)
        return tets

    @pytest.fixture(autouse=True)
    def sdf(self, device):
        sdf = torch.tensor([[1, 1, 1, 1, 1, 1, 1, 1],        # 1st case: empty
                            [1, 1, 1, 1, -1, 1, 1, 1],       # 2nd case: one triangle
                            [1, 1, 1, 1, -1, -1, 1, 1],      # 3rd case: multiple triangles
                            [1, 1, 1, 1, -0.5, -0.7, 1, 1]], # 4th case: same topology as 3rd case but different zero-crossings
                           dtype=torch.float,
                           device=device)
        return sdf

    @pytest.fixture(autouse=True)
    def expected_verts(self, device):
        expected_verts = []
        expected_verts.append(torch.zeros((0, 3), device=device))
        expected_verts.append(torch.tensor([[-1., -1., 0.],
                                            [0., -1., 1.],
                                            [-1., 0., 1.]],
                                           dtype=torch.float,
                                           device=device))

        expected_verts.append(torch.tensor([[-1., -1., 0.],
                                            [0., -1., 0.],
                                            [1., -1., 0.],
                                            [1., 0., 0.],
                                            [-1., 0., 1.],
                                            [0., 0., 1.],
                                            [1., 0., 1.]],
                                           dtype=torch.float,
                                           device=device))

        expected_verts.append(torch.tensor([[-1.0000, -1.0000, 0.3333],
                                            [0.1765, -1.0000, 0.1765],
                                            [1.0000, -1.0000, 0.1765],
                                            [1.0000, -0.1765, 0.1765],
                                            [-1.0000, -0.3333, 1.0000],
                                            [0.1765, -0.1765, 1.0000],
                                            [1.0000, -0.1765, 1.0000]],
                                           dtype=torch.float,
                                           device=device))

        return expected_verts

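    # Annotation (not part of the original file): the fractional coordinates in
    # the 4th case follow from linear interpolation along each sign-changing
    # edge: the crossing sits at t = sdf_a / (sdf_a - sdf_b) from vertex a.
    # E.g. with sdf values 1 and -0.5, t = 1 / 1.5 = 2/3, which maps an edge
    # from z = -1 to z = 1 onto z = 1/3 ~= 0.3333; with values 1 and -0.7,
    # t = 1 / 1.7 ~= 0.588 yields the 0.1765 coordinates above.
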
    @pytest.fixture(autouse=True)
    def expected_faces(self, device):
        expected_faces = []

        expected_faces.append(torch.zeros(
            (0, 3), dtype=torch.long, device=device))

        expected_faces.append(torch.tensor([[2, 1, 0]],
                                           dtype=torch.long,
                                           device=device))

        expected_faces.append(torch.tensor([[2, 1, 3],
                                            [6, 3, 5],
                                            [3, 1, 5],
                                            [5, 0, 4],
                                            [5, 1, 0]],
                                           dtype=torch.long,
                                           device=device))

        expected_faces.append(torch.tensor([[2, 1, 3],
                                            [6, 3, 5],
                                            [3, 1, 5],
                                            [5, 0, 4],
                                            [5, 1, 0]],
                                           dtype=torch.long,
                                           device=device))

        return expected_faces

    @pytest.fixture(autouse=True)
    def expected_tet_idx(self, device):
        expected_tet_idx = []
        expected_tet_idx.append(torch.zeros(
            (0), dtype=torch.long, device=device))

        expected_tet_idx.append(torch.tensor([1],
                                             dtype=torch.long,
                                             device=device))

        expected_tet_idx.append(torch.tensor([0, 3, 4, 1, 1],
                                             dtype=torch.long,
                                             device=device))

        expected_tet_idx.append(torch.tensor([0, 3, 4, 1, 1],
                                             dtype=torch.long,
                                             device=device))

        return expected_tet_idx

    def test_output_value(self, vertices, tets, sdf, expected_verts, expected_faces, expected_tet_idx):
        verts_list, faces_list, tet_idx_list = tm.marching_tetrahedra(vertices, tets, sdf, True)
        for i in range(0, 4):
            assert torch.allclose(
                verts_list[i], expected_verts[i], atol=1e-4)
            assert torch.equal(
                faces_list[i], expected_faces[i])
            assert torch.equal(
                tet_idx_list[i], expected_tet_idx[i])

370
tests/python/kaolin/ops/conversions/test_trianglemesh.py
Normal file
@@ -0,0 +1,370 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest

import torch

import kaolin as kal
from kaolin.ops.conversions import trianglemeshes_to_voxelgrids, unbatched_mesh_to_spc
from kaolin.utils.testing import FLOAT_TYPES


@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
@pytest.mark.parametrize('return_sparse', [True, False])
class TestTriangleMeshToVoxelgrid:

    def test_resolution_type(self, device, dtype, return_sparse):
        vertices = torch.tensor([[[0, 0, 0],
                                  [1, 0, 0],
                                  [0, 0, 1]]], dtype=dtype, device=device)

        faces = torch.tensor([[0, 1, 2]], dtype=torch.long, device=device)

        origins = torch.zeros((1, 3), dtype=dtype, device=device)

        scale = torch.ones((1), dtype=dtype, device=device)

        with pytest.raises(TypeError, match=r"Expected resolution to be int "
                                            r"but got .*"):
            trianglemeshes_to_voxelgrids(
                vertices, faces, 2.3, origins, scale, return_sparse
            )

    def test_mesh_to_voxel_batched(self, device, dtype, return_sparse):
        vertices = torch.tensor([[[0, 0, 0],
                                  [1, 0, 0],
                                  [0, 0, 1]],

                                 [[0, 0, 0],
                                  [0, 1, 0],
                                  [1, 0, 1]]], dtype=dtype, device=device)

        faces = torch.tensor([[0, 1, 2]], dtype=torch.long, device=device)

        origins = torch.zeros((2, 3), dtype=dtype, device=device)

        scale = torch.ones((2), dtype=dtype, device=device)

        output = trianglemeshes_to_voxelgrids(
            vertices, faces, 3, origins, scale, return_sparse
        )

        # each output voxelgrid should be occupied only around the corner its triangle covers
        expected = torch.tensor([[[[1., 1., 1.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]],

                                  [[1., 1., 0.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]],

                                  [[1., 0., 0.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]]],

                                 [[[1., 0., 0.],
                                   [1., 0., 0.],
                                   [1., 0., 0.]],

                                  [[0., 1., 0.],
                                   [0., 1., 0.],
                                   [0., 0., 0.]],

                                  [[0., 0., 1.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]]]], device=device, dtype=dtype)

        if return_sparse:
            output = output.to_dense()
        assert torch.equal(output, expected)

    def test_mesh_to_voxel_origins(self, device, dtype, return_sparse):
        vertices = torch.tensor([[[0, 0, 0],
                                  [1, 0, 0],
                                  [0, 0, 1]]], dtype=dtype, device=device)

        faces = torch.tensor([[0, 1, 2]], dtype=torch.long, device=device)

        origins = torch.zeros((1, 3), dtype=dtype, device=device)
        origins[0][2] = 0.6

        scale = torch.ones((1), dtype=dtype, device=device)

        output = trianglemeshes_to_voxelgrids(
            vertices, faces, 3, origins, scale, return_sparse
        )

        expected = torch.tensor([[[[1., 1., 0.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]],

                                  [[1., 0., 0.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]],

                                  [[0., 0., 0.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]]]], device=device, dtype=dtype)

        if return_sparse:
            output = output.to_dense()
        assert torch.equal(output, expected)

    def test_mesh_to_voxel_scale(self, device, dtype, return_sparse):
        vertices = torch.tensor([[[0, 0, 0],
                                  [1, 0, 0],
                                  [0, 0, 1]]], dtype=dtype, device=device)

        faces = torch.tensor([[0, 1, 2]], dtype=torch.long, device=device)

        origins = torch.zeros((1, 3), dtype=dtype, device=device)

        scale = torch.ones((1), dtype=dtype, device=device) * 2

        output = trianglemeshes_to_voxelgrids(
            vertices, faces, 3, origins, scale, return_sparse
        )

        expected = torch.tensor([[[[1., 1., 0.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]],

                                  [[1., 0., 0.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]],

                                  [[0., 0., 0.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]]]], device=device, dtype=dtype)

        if return_sparse:
            output = output.to_dense()
        assert torch.equal(output, expected)

    def test_mesh_to_voxel_resolution_3(self, device, dtype, return_sparse):
        vertices = torch.tensor([[[0, 0, 0],
                                  [1, 0, 0],
                                  [0, 0, 1]]], dtype=dtype, device=device)

        faces = torch.tensor([[0, 1, 2]], dtype=torch.long, device=device)

        origins = torch.zeros((1, 3), dtype=dtype, device=device)

        scale = torch.ones((1), dtype=dtype, device=device)

        output = trianglemeshes_to_voxelgrids(
            vertices, faces, 3, origins, scale, return_sparse
        )

        expected = torch.tensor([[[[1., 1., 1.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]],

                                  [[1., 1., 0.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]],

                                  [[1., 0., 0.],
                                   [0., 0., 0.],
                                   [0., 0., 0.]]]], device=device, dtype=dtype)

        if return_sparse:
            output = output.to_dense()
        assert torch.equal(output, expected)

    def test_rectangle(self, device, dtype, return_sparse):
        vertices = torch.tensor([[0, 0, 0],
                                 [8, 0, 0],
                                 [0, 8, 0],
                                 [8, 8, 0],
                                 [0, 0, 12],
                                 [8, 0, 12],
                                 [0, 8, 12],
                                 [8, 8, 12]], dtype=dtype, device=device)

        faces = torch.tensor([[0, 3, 1],
                              [0, 2, 3],
                              [0, 1, 5],
                              [0, 5, 4],
                              [1, 7, 5],
                              [1, 3, 7],
                              [0, 6, 2],
                              [0, 4, 6],
                              [4, 7, 6],
                              [4, 5, 7],
                              [2, 7, 3],
                              [2, 6, 7]], dtype=torch.long, device=device)

        origin = torch.zeros((1, 3), dtype=dtype, device=device)

        origin[0][2] = 2

        scale = torch.ones((1), dtype=dtype, device=device) * 8

        output = trianglemeshes_to_voxelgrids(
            vertices.unsqueeze(0), faces, 4, origin, scale, return_sparse
        )

        expected = torch.tensor([[[[1., 1., 1., 1.],
                                   [1., 1., 1., 1.],
                                   [1., 1., 1., 1.],
                                   [1., 1., 1., 1.]],

                                  [[1., 1., 1., 1.],
                                   [0., 0., 0., 0.],
                                   [0., 0., 0., 0.],
                                   [1., 1., 1., 1.]],

                                  [[1., 1., 1., 1.],
                                   [0., 0., 0., 0.],
                                   [0., 0., 0., 0.],
                                   [1., 1., 1., 1.]],

                                  [[1., 1., 1., 1.],
                                   [1., 1., 1., 1.],
                                   [1., 1., 1., 1.],
                                   [1., 1., 1., 1.]]]], device=device, dtype=dtype)

        if return_sparse:
            output = output.to_dense()
        assert torch.equal(output, expected)

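# Annotation (not part of the original file): taken together, the origin and
# scale tests above pin down the voxelization frame: a grid of resolution^3
# voxels spans the cube of side `scale` anchored at `origin`, so shifting the
# origin by +0.6 in z, or doubling the scale, shrinks the set of voxels the
# same unit triangle touches. This is a reading of the expected fixtures, not
# of the implementation itself.
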
@pytest.mark.parametrize('level', [3])
class TestUnbatchedMeshToSpc:
    @pytest.fixture(autouse=True)
    def faces(self):
        return torch.tensor([
            [0, 1, 2],
            [2, 1, 3],
            [4, 5, 6],
            [7, 8, 9]
        ], device='cuda', dtype=torch.long)

    @pytest.fixture(autouse=True)
    def vertices(self):
        return torch.tensor([[
            [-0.4272, 0.0795, 0.3548],
            [-0.9217, 0.3106, 0.1516],
            [-0.2636, 0.3794, -0.7979],
            [0.1259, 0.9089, 0.7439],
            [0.0710, -0.6947, -0.0480],
            [0.6215, 0.2809, -0.0480],
            [0.4972, 0.3347, 0.4422],
            [-0.4374, 0.4967, -0.6047],
            [0.0397, 0.1230, -0.7417],
            [-0.3534, 0.9970, -0.4558]
        ]], device='cuda')

    @pytest.fixture(autouse=True)
    def face_vertices(self, vertices, faces):
        return kal.ops.mesh.index_vertices_by_faces(vertices, faces)

    @pytest.fixture(autouse=True)
    def expected_octree(self, level):
        if level == 1:
            return torch.tensor([], dtype=torch.uint8, device='cuda')
        elif level == 3:
            return torch.tensor([
                252, 242, 213, 10, 5, 35, 29, 232, 172, 79, 170, 55, 245, 48,
                7, 179, 81, 8, 162, 4, 209, 2, 32, 10, 176, 11, 4, 15
            ], dtype=torch.uint8, device='cuda')

    @pytest.fixture(autouse=True)
    def expected_face_idx(self, level):
        if level == 3:
            return torch.tensor([
                0, 0, 0, 0, 0, 0, 3, 1, 0, 0, 0, 0, 1, 3, 3, 3, 3, 1, 1, 3, 1, 1,
                0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2,
                2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 2, 2, 2, 2
            ], device='cuda', dtype=torch.long)
        elif level == 1:
            return torch.tensor([0], device='cuda', dtype=torch.long)

    @pytest.fixture(autouse=True)
    def expected_bary_w(self, level):
        return torch.tensor([
            [4.5012e-08, 7.7766e-01],
            [2.8764e-01, 4.0506e-01],
            [3.5860e-08, 4.7760e-01],
            [1.0753e-01, 5.5666e-01],
            [2.5024e-08, 3.0500e-04],
            [5.3537e-03, 1.7265e-01],
            [2.4690e-01, 7.5310e-01],
            [8.8672e-01, 4.2031e-09],
            [4.0263e-01, 1.9203e-05],
            [6.0202e-01, 3.6483e-08],
            [2.2252e-01, 1.5161e-01],
            [4.3968e-01, 1.3058e-01],
            [7.3768e-01, 2.0286e-02],
            [6.2631e-01, 8.5154e-02],
            [2.8269e-01, 2.0198e-02],
            [7.7711e-08, 4.6322e-01],
            [9.7272e-08, 2.4475e-01],
            [6.3429e-01, 1.7624e-01],
            [4.4455e-01, 2.6633e-01],
            [1.8181e-01, 2.8813e-08],
            [7.0239e-01, 3.2221e-08],
            [5.5041e-01, 2.5328e-02],
            [1.7266e-01, 8.1010e-01],
            [1.7232e-08, 9.5489e-01],
            [5.0481e-01, 3.8403e-01],
            [6.9277e-01, 3.0723e-01],
            [3.2469e-01, 5.3563e-01],
            [2.7070e-08, 7.3333e-01],
            [1.4894e-01, 5.9743e-01],
            [5.6993e-09, 6.5052e-01],
            [8.0139e-01, 4.1631e-08],
            [1.0000e+00, 0.0000e+00],
            [2.5233e-01, 4.4148e-01],
            [2.5480e-01, 3.5643e-01],
            [6.5063e-02, 4.4652e-01],
            [3.6067e-01, 1.1542e-01],
            [1.7093e-01, 2.0551e-01],
            [1.7340e-01, 1.2046e-01],
            [4.2470e-08, 4.2354e-01],
            [2.3278e-08, 2.7855e-01],
            [4.6319e-08, 1.9574e-01],
            [9.2212e-01, 7.7879e-02],
            [7.2775e-01, 2.7225e-01],
            [6.1808e-01, 3.8192e-01],
            [4.2371e-01, 5.7629e-01],
            [8.7880e-01, 2.5213e-08],
            [7.0510e-01, 8.7381e-09],
            [6.1462e-01, 1.1334e-01],
            [4.1944e-01, 2.4449e-01],
            [3.7678e-01, 3.8143e-08],
            [3.8031e-08, 9.9842e-01],
            [2.2935e-01, 7.7065e-01],
            [1.1967e-01, 8.8033e-01],
            [0.0000e+00, 1.0000e+00],
            [2.2426e-01, 3.7564e-01],
            [2.0308e-01, 1.5850e-08],
            [2.3061e-02, 3.8613e-02],
            [3.9331e-01, 1.1329e-08],
            [2.5610e-01, 1.6592e-08],
            [2.0898e-01, 1.9427e-08],
            [7.1771e-02, 1.6657e-08],
            [1.1603e-01, 5.9735e-01],
            [1.1001e-01, 1.2918e-01],
            [2.4004e-08, 6.5422e-01],
            [2.3279e-08, 1.8040e-01]
        ], device='cuda', dtype=torch.float)

    def test_octree(self, face_vertices, level, expected_octree, expected_face_idx, expected_bary_w):
        octree, face_idx, bary_w = unbatched_mesh_to_spc(face_vertices.squeeze(0), level)
        assert torch.equal(octree, expected_octree)
        assert torch.equal(face_idx, expected_face_idx)
        assert torch.allclose(bary_w, expected_bary_w, atol=1e-3, rtol=1e-3)

1143
tests/python/kaolin/ops/conversions/test_voxelgrid.py
Normal file
185
tests/python/kaolin/ops/mesh/test_check_sign.py
Normal file
@@ -0,0 +1,185 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import math
import pytest

import torch

from kaolin.utils.testing import FLOAT_TYPES
from kaolin.ops import mesh

@pytest.mark.parametrize('device', ['cpu', 'cuda'])
@pytest.mark.parametrize('dtype', [torch.float, torch.double])
class TestCheckSign:

    @pytest.fixture(autouse=True)
    def verts(self, device, dtype):
        verts = torch.tensor(
            [[[1., 0., 0.],
              [1., 0., 1.],
              [1., -1., -1.],
              [1., 1., -1.],

              [-1., 0., 0.],
              [-1., 0., -4.],
              [-1., -4., 4.],
              [-1., 4., 4.]]], device=device, dtype=dtype)
        # TODO(cfujitsang): fix cpu and extend test with torch.flip(verts, dims=(-1,))
        return torch.cat([verts, -verts], dim=0)

    @pytest.fixture(autouse=True)
    def faces(self, device):
        return torch.tensor(
            [[0, 1, 2],
             [0, 2, 3],
             [0, 3, 1],

             [1, 2, 6],
             [2, 3, 5],
             [3, 1, 7],

             [5, 6, 2],
             [6, 7, 1],
             [7, 5, 3],

             [4, 6, 5],
             [4, 7, 6],
             [4, 5, 7]], device=device, dtype=torch.long)

    @pytest.fixture(autouse=True)
    def points(self, device, dtype):
        points = torch.tensor([[
            # Inside
            [ 0.9, 0., 0. ],    # inside, overlap with vertices 0 and 4
            [ 0.9, 0., -1. ],   # inside, overlap with edges 2-3 and 4-5
            [ 0.9, 0., -0.9],   # inside, overlap with edge 4-5
            [ 0.9, 0.1, -1.0],  # inside, overlap with edge 2-3
            [ 0.9, 0., 1. ],    # inside, overlap with vertex 1
            [ 0.9, -1., -1. ],  # inside, overlap with vertex 2
            [ 0.9, 1., -1. ],   # inside, overlap with vertex 3
            [-0.99, 0., -3.9],  # inside, near vertex 5
            [-0.99, -3.9, 3.9], # inside, near vertex 6
            [-0.99, 3.9, 3.9],  # inside, near vertex 7
            # Outside
            [ 0.9, 0., -4. ],   # outside, overlap with vertex 5
            [ 0.9, -4., 4. ],   # outside, overlap with vertex 6
            [ 0.9, 4., 4. ],    # outside, overlap with vertex 7
            [ 0.9, 0., 4. ],    # outside, overlap with edge 6-7
            [-0.9, 0., -3.9],   # outside, aligned with edge 2-3 and overlapping edge 4-5
            [-0.9, -3.9, 3.9],  # outside, near vertex 6
            [-0.9, 3.9, 3.9],   # outside, near vertex 7
            [ 0.5, 0., 5. ],    # outside, aligned with edges 2-3 and 4-5
            [ 0.5, -5., 4. ],   # outside, aligned with edge 6-7
            [ 1.1, 0., 0. ],    # in front, overlap with vertices 0 and 4
            [ 1.1, 0., -1. ],   # in front, overlap with edges 2-3 and 4-5
            [ 1.1, 0., -0.9],   # in front, overlap with edge 4-5
            [ 1.1, 0.1, -1.0],  # in front, overlap with edge 2-3
            [ 1.1, 0., 1. ],    # in front, overlap with vertex 1
            [ 1.1, -1., -1. ],  # in front, overlap with vertex 2
            [ 1.1, 1., -1. ],   # in front, overlap with vertex 3
            [-1.1, 0., 0. ],    # behind, overlap with vertices 0 and 4
            [-1.1, 0., -1. ],   # behind, overlap with edges 2-3 and 4-5
            [-1.1, 0., -0.9],   # behind, overlap with edge 4-5
            [-1.1, 0.1, -1.0],  # behind, overlap with edge 2-3
            [-1.1, 0., 1. ],    # behind, overlap with vertex 1
            [-1.1, -1., -1. ],  # behind, overlap with vertex 2
            [-1.1, 1., -1. ],   # behind, overlap with vertex 3
        ]], device=device, dtype=dtype)
        return torch.cat([
            points, torch.flip(-points, dims=(1,))], dim=0)

    @pytest.fixture(autouse=True)
    def expected(self, device):
        expected = torch.tensor(
            [[True, True, True, True, True, True, True, True, True, True,
              False, False, False, False, False, False, False, False, False, False,
              False, False, False, False, False, False, False, False, False, False,
              False, False, False]], device=device)
        return torch.cat([expected, torch.flip(expected, dims=(1,))], dim=0)

def test_faces_type(self, verts, faces, points):
|
||||
with pytest.raises(TypeError,
|
||||
match=r"Expected faces entries to be torch.int64 "
|
||||
r"but got torch.int32."):
|
||||
faces = faces.int()
|
||||
mesh.check_sign(verts, faces, points)
|
||||
|
||||
def test_hash_resolution_type(self, verts, faces, points):
|
||||
with pytest.raises(TypeError,
|
||||
match=r"Expected hash_resolution to be int "
|
||||
r"but got <class 'float'>."):
|
||||
mesh.check_sign(verts, faces, points, 512.0)
|
||||
|
||||
def test_verts_ndim(self, verts, faces, points):
|
||||
with pytest.raises(ValueError,
|
||||
match=r"Expected verts to have 3 dimensions "
|
||||
r"but got 4 dimensions."):
|
||||
verts = verts.unsqueeze(-1)
|
||||
mesh.check_sign(verts, faces, points)
|
||||
|
||||
def test_faces_ndim(self, verts, faces, points):
|
||||
with pytest.raises(ValueError,
|
||||
match=r"Expected faces to have 2 dimensions "
|
||||
r"but got 3 dimensions."):
|
||||
faces = faces.unsqueeze(-1)
|
||||
mesh.check_sign(verts, faces, points)
|
||||
|
||||
def test_points_ndim(self, verts, faces, points):
|
||||
with pytest.raises(ValueError,
|
||||
match=r"Expected points to have 3 dimensions "
|
||||
r"but got 4 dimensions."):
|
||||
points = points.unsqueeze(-1)
|
||||
mesh.check_sign(verts, faces, points)
|
||||
|
||||
def test_verts_shape(self, verts, faces, points):
|
||||
with pytest.raises(ValueError,
|
||||
match=r"Expected verts to have 3 coordinates "
|
||||
r"but got 2 coordinates."):
|
||||
verts = verts[...,:2]
|
||||
mesh.check_sign(verts, faces, points)
|
||||
|
||||
def test_faces_shape(self, verts, faces, points):
|
||||
with pytest.raises(ValueError,
|
||||
match=r"Expected faces to have 3 vertices "
|
||||
r"but got 2 vertices."):
|
||||
faces = faces[:,:2]
|
||||
mesh.check_sign(verts, faces, points)
|
||||
|
||||
def test_points_shape(self, verts, faces, points):
|
||||
with pytest.raises(ValueError,
|
||||
match=r"Expected points to have 3 coordinates "
|
||||
r"but got 2 coordinates."):
|
||||
points = points[...,:2]
|
||||
mesh.check_sign(verts, faces, points)
|
||||
|
||||
def test_single_batch(self, verts, faces, points, expected):
|
||||
output = mesh.check_sign(verts[:1], faces, points[:1])
|
||||
diff_idxs = torch.where(output != expected[:1])
|
||||
assert(torch.equal(output, expected[:1]))
|
||||
|
||||
def test_meshes(self, verts, faces, points, expected):
|
||||
output = mesh.check_sign(verts, faces, points)
|
||||
assert(torch.equal(output, expected))
|
||||
|
||||
def test_faces_with_zero_area(self, verts, faces, points, expected):
|
||||
faces = torch.cat([faces, torch.tensor([[1, 1, 1],
|
||||
[0, 0, 0],
|
||||
[2, 2, 2],
|
||||
[3, 3, 3]]).to(faces.device)])
|
||||
output = mesh.check_sign(verts, faces, points)
|
||||
assert(torch.equal(output, expected))
|
||||
|
||||
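As a quick reference for readers of these tests, here is a minimal usage sketch of the operator they cover. The shapes follow the checks above (batched `verts` and `points`, unbatched `torch.long` faces); the tetrahedron data here is illustrative, not taken from the test suite.

```python
import torch
from kaolin.ops import mesh

# A single tetrahedron: verts is (batch, num_verts, 3), faces is (num_faces, 3) int64.
verts = torch.tensor([[[0., 0., 0.], [1., 0., 0.],
                       [0., 1., 0.], [0., 0., 1.]]])
faces = torch.tensor([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]],
                     dtype=torch.long)
points = torch.tensor([[[0.1, 0.1, 0.1],    # inside the tetrahedron
                        [2.0, 2.0, 2.0]]])  # outside

occupancy = mesh.check_sign(verts, faces, points)  # BoolTensor of shape (1, 2)
# inside points map to True, outside points to False
```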
132
tests/python/kaolin/ops/mesh/test_mesh.py
Normal file
@@ -0,0 +1,132 @@
# Copyright (c) 2019-2023, NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import math
import pytest
import os

import torch

from kaolin.utils.testing import FLOAT_TYPES
from kaolin.ops import mesh
from kaolin.io import obj

ROOT_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                        os.pardir, os.pardir, os.pardir, os.pardir, 'samples/')


@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
def test_adjacency_matrix_sparse(device, dtype):
    num_vertices = 5
    faces = torch.tensor([[1, 3, 2],
                          [1, 4, 0]], dtype=torch.long, device=device)

    output = mesh.adjacency_matrix(num_vertices, faces).to_dense()
    expected = torch.tensor([[0, 1, 0, 0, 1],
                             [1, 0, 1, 1, 1],
                             [0, 1, 0, 1, 0],
                             [0, 1, 1, 0, 0],
                             [1, 1, 0, 0, 0]], dtype=torch.float, device=device)

    assert torch.equal(output, expected)


@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
def test_adjacency_matrix_dense(device, dtype):
    num_vertices = 5
    faces = torch.tensor([[1, 3, 2],
                          [1, 4, 0]], dtype=torch.long, device=device)

    output = mesh.adjacency_matrix(num_vertices, faces, sparse=False)
    expected = torch.tensor([[0, 1, 0, 0, 1],
                             [1, 0, 1, 1, 1],
                             [0, 1, 0, 1, 0],
                             [0, 1, 1, 0, 0],
                             [1, 1, 0, 0, 0]], dtype=torch.float, device=device)
    assert torch.equal(output, expected)


@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
def test_adjacency_consistent(device, dtype):
    test_mesh = obj.import_mesh(os.path.join(ROOT_DIR, 'model.obj'))
    vertices = test_mesh.vertices
    faces = test_mesh.faces

    num_vertices = vertices.shape[0]

    sparse = mesh.adjacency_matrix(num_vertices, faces)
    sparse_to_dense = sparse.to_dense()
    dense = mesh.adjacency_matrix(num_vertices, faces, sparse=False)

    assert torch.equal(sparse_to_dense, dense)


@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
class TestUniformLaplacian:

    def test_uniform_laplacian(self, device, dtype):
        num_vertices = 5
        faces = torch.tensor([[1, 3, 2],
                              [1, 4, 0]], dtype=torch.long, device=device)

        output = mesh.uniform_laplacian(num_vertices, faces)
        expected = torch.tensor([[-1, 0.5, 0, 0, 0.5],
                                 [0.25, -1, 0.25, 0.25, 0.25],
                                 [0, 0.5, -1, 0.5, 0],
                                 [0, 0.5, 0.5, -1, 0],
                                 [0.5, 0.5, 0, 0, -1]], dtype=torch.float, device=device)

        assert torch.equal(output, expected)

    def test_not_connected_mesh(self, device, dtype):
        num_vertices = 4
        faces = torch.tensor([[0, 1, 2]], dtype=torch.long, device=device)

        result = mesh.uniform_laplacian(num_vertices, faces)

        # Every row and column related to the unconnected vertex 3 is zero.
        assert torch.equal(result[3, :3], torch.zeros((3), device=device, dtype=torch.float))
        assert torch.equal(result[:3, 3], torch.zeros((3), device=device, dtype=torch.float))


@pytest.mark.parametrize('device,dtype', FLOAT_TYPES)
class TestComputeVertexNormals:
    def test_compute_vertex_normals(self, device, dtype):
        # Faces are a fan around the 0th vertex
        faces = torch.tensor([[0, 2, 1],
                              [0, 3, 2],
                              [0, 4, 3]],
                             device=device, dtype=torch.long)
        B = 3
        F = faces.shape[0]
        FSize = faces.shape[1]
        V = 6  # one vertex not in faces
        face_normals = torch.rand((B, F, FSize, 3), device=device, dtype=dtype)

        expected = torch.zeros((B, V, 3), device=device, dtype=dtype)
        for b in range(B):
            expected[b, 0, :] = (face_normals[b, 0, 0, :] + face_normals[b, 1, 0, :] + face_normals[b, 2, 0, :]) / 3
            expected[b, 1, :] = face_normals[b, 0, 2, :]
            expected[b, 2, :] = (face_normals[b, 0, 1, :] + face_normals[b, 1, 2, :]) / 2
            expected[b, 3, :] = (face_normals[b, 1, 1, :] + face_normals[b, 2, 2, :]) / 2
            expected[b, 4, :] = face_normals[b, 2, 1, :]
            expected[b, 5, :] = 0  # does not appear in faces

        vertex_normals = mesh.compute_vertex_normals(faces, face_normals, num_vertices=V)
        assert torch.allclose(expected, vertex_normals)

        # Without num_vertices, no normals are returned for the trailing vertex
        # that does not appear in faces.
        vertex_normals = mesh.compute_vertex_normals(faces, face_normals)
        assert torch.allclose(expected[:, :5, :], vertex_normals)
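Since these tests double as the reference behavior for the adjacency and Laplacian operators, a compact usage sketch may help; it reuses the same toy mesh as the tests above.

```python
import torch
from kaolin.ops import mesh

faces = torch.tensor([[1, 3, 2],
                      [1, 4, 0]], dtype=torch.long)
num_vertices = 5

adj = mesh.adjacency_matrix(num_vertices, faces)              # sparse by default
dense = mesh.adjacency_matrix(num_vertices, faces, sparse=False)
assert torch.equal(adj.to_dense(), dense)

# Uniform Laplacian: -1 on the diagonal, 1/degree for each neighbor,
# so every row of a connected vertex sums to zero.
lap = mesh.uniform_laplacian(num_vertices, faces)
assert torch.allclose(lap.sum(dim=1), torch.zeros(num_vertices))
```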
143
tests/python/kaolin/ops/mesh/test_tetmesh.py
Normal file
@@ -0,0 +1,143 @@
# Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch

from kaolin.ops.mesh import tetmesh


class TestTetMeshOps:

    def test_validate_tetrahedrons_wrong_ndim(self):
        wrong_ndim_tet = torch.randn(size=(2, 2))
        with pytest.raises(Exception):
            tetmesh._validate_tetrahedrons(wrong_ndim_tet)

    def test_validate_tetrahedrons_wrong_third_dimension(self):
        wrong_third_dim_tet = torch.randn(size=(2, 2, 3))
        with pytest.raises(Exception):
            tetmesh._validate_tetrahedrons(wrong_third_dim_tet)

    def test_validate_tetrahedrons_wrong_fourth_dimension(self):
        wrong_fourth_dim_tet = torch.randn(size=(2, 2, 4, 2))
        with pytest.raises(Exception):
            tetmesh._validate_tetrahedrons(wrong_fourth_dim_tet)

    def test_inverse_vertices_offset(self):
        tetrahedrons = torch.tensor([[[[-0.0500, 0.0000, 0.0500],
                                       [-0.0250, -0.0500, 0.0000],
                                       [0.0000, 0.0000, 0.0500],
                                       [0.5000, 0.5000, 0.4500]]]])
        oracle = torch.tensor([[[[0.0000, 20.0000, 0.0000],
                                 [79.9999, -149.9999, 10.0000],
                                 [-99.9999, 159.9998, -10.0000]]]])
        assert torch.allclose(tetmesh.inverse_vertices_offset(tetrahedrons), oracle)


@pytest.mark.parametrize('device', ['cpu', 'cuda'])
class TestSubdivideTetmesh:

    @pytest.fixture(autouse=True)
    def vertices_single_tet(self, device):
        return torch.tensor([[[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]], dtype=torch.float, device=device)

    @pytest.fixture(autouse=True)
    def faces_single_tet(self, device):
        return torch.tensor([[0, 1, 2, 3]], dtype=torch.long, device=device)

    @pytest.fixture(autouse=True)
    def expected_vertices_single_tet(self, device):
        return torch.tensor([[[0.0000, 0.0000, 0.0000],
                              [1.0000, 0.0000, 0.0000],
                              [0.0000, 1.0000, 0.0000],
                              [0.0000, 0.0000, 1.0000],
                              [0.5000, 0.0000, 0.0000],
                              [0.0000, 0.5000, 0.0000],
                              [0.0000, 0.0000, 0.5000],
                              [0.5000, 0.5000, 0.0000],
                              [0.5000, 0.0000, 0.5000],
                              [0.0000, 0.5000, 0.5000]]], dtype=torch.float, device=device)

    @pytest.fixture(autouse=True)
    def expected_faces_single_tet(self, device):
        return torch.tensor([[0, 4, 5, 6],
                             [1, 7, 4, 8],
                             [2, 5, 7, 9],
                             [3, 6, 9, 8],
                             [4, 5, 6, 8],
                             [4, 5, 8, 7],
                             [9, 5, 8, 6],
                             [9, 5, 7, 8]], dtype=torch.long, device=device)

    @pytest.fixture(autouse=True)
    def faces_two_tets(self, device):
        return torch.tensor([[0, 1, 2, 3], [0, 1, 2, 3]], dtype=torch.long, device=device)

    @pytest.fixture(autouse=True)
    def expected_faces_two_tets(self, device):
        return torch.tensor([[0, 4, 5, 6],
                             [0, 4, 5, 6],
                             [1, 7, 4, 8],
                             [1, 7, 4, 8],
                             [2, 5, 7, 9],
                             [2, 5, 7, 9],
                             [3, 6, 9, 8],
                             [3, 6, 9, 8],
                             [4, 5, 6, 8],
                             [4, 5, 6, 8],
                             [4, 5, 8, 7],
                             [4, 5, 8, 7],
                             [9, 5, 8, 6],
                             [9, 5, 8, 6],
                             [9, 5, 7, 8],
                             [9, 5, 7, 8]], dtype=torch.long, device=device)

    @pytest.fixture(autouse=True)
    def features_single_tet(self, device):
        return torch.tensor([[[-1, 2], [-1, 4], [0.5, -2], [0.5, -3]]], dtype=torch.float, device=device)

    @pytest.fixture(autouse=True)
    def expected_features_single_tet(self, device):
        return torch.tensor([[[-1.0000, 2.0000],
                              [-1.0000, 4.0000],
                              [0.5000, -2.0000],
                              [0.5000, -3.0000],
                              [-1.0000, 3.0000],
                              [-0.2500, 0.0000],
                              [-0.2500, -0.5000],
                              [-0.2500, 1.0000],
                              [-0.2500, 0.5000],
                              [0.5000, -2.5000]]], dtype=torch.float, device=device)

    def test_subdivide_tetmesh_no_features(self, vertices_single_tet, faces_single_tet, expected_vertices_single_tet, expected_faces_single_tet):
        new_vertices, new_faces = tetmesh.subdivide_tetmesh(vertices_single_tet, faces_single_tet)
        assert torch.equal(new_vertices, expected_vertices_single_tet)
        assert torch.equal(new_faces, expected_faces_single_tet)

    def test_subdivide_tetmesh_with_features(self, vertices_single_tet, faces_single_tet, expected_vertices_single_tet, expected_faces_single_tet, features_single_tet, expected_features_single_tet):
        new_vertices, new_faces, new_features = tetmesh.subdivide_tetmesh(
            vertices_single_tet, faces_single_tet, features_single_tet)
        assert torch.equal(new_vertices, expected_vertices_single_tet)
        assert torch.equal(new_faces, expected_faces_single_tet)
        assert torch.equal(new_features, expected_features_single_tet)

    def test_subdivide_tetmesh_shared_verts(self, vertices_single_tet, faces_two_tets, expected_vertices_single_tet, expected_faces_two_tets, features_single_tet, expected_features_single_tet):
        # check that no redundant vertices are generated for shared edges
        new_vertices, new_faces, new_features = tetmesh.subdivide_tetmesh(
            vertices_single_tet, faces_two_tets, features_single_tet)
        assert torch.equal(new_vertices, expected_vertices_single_tet)
        assert torch.equal(new_faces, expected_faces_two_tets)
        assert torch.equal(new_features, expected_features_single_tet)
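For reference, a minimal sketch of the subdivision call these fixtures feed; the counts in the asserts follow directly from the expected fixtures above.

```python
import torch
from kaolin.ops.mesh import tetmesh

vertices = torch.tensor([[[0., 0., 0.], [1., 0., 0.],
                          [0., 1., 0.], [0., 0., 1.]]])
tets = torch.tensor([[0, 1, 2, 3]], dtype=torch.long)

new_vertices, new_tets = tetmesh.subdivide_tetmesh(vertices, tets)
# Each tetrahedron splits into 8; edge midpoints are added once and shared.
assert new_tets.shape[0] == 8 * tets.shape[0]
assert new_vertices.shape[1] == 10  # 4 original vertices + 6 edge midpoints
```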
830
tests/python/kaolin/ops/mesh/test_trianglemesh.py
Normal file
@@ -0,0 +1,830 @@
# Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import math

import torch

import kaolin
from kaolin.utils.testing import FLOAT_TYPES, check_allclose, check_tensor, with_seed
from kaolin.ops.mesh.trianglemesh import _unbatched_subdivide_vertices


@pytest.mark.parametrize("device,dtype", FLOAT_TYPES)
class TestFaceAreas:
    def test_face_areas(self, device, dtype):
        vertices = torch.tensor([[[0., 0., 0.],
                                  [0., 0., 1.],
                                  [0., 1., 0.],
                                  [2., 0., 0.2]],
                                 [[-1., -1., -1.],
                                  [-1., -1., 1.],
                                  [-1, 1., -1.],
                                  [3, -1., -0.6]]],
                                device=device, dtype=dtype)
        faces = torch.tensor([[0, 1, 2],
                              [1, 0, 3]],
                             device=device, dtype=torch.long)
        output = kaolin.ops.mesh.face_areas(vertices, faces)
        expected_output = torch.tensor([[0.5, 1.], [2., 4.]], device=device, dtype=dtype)
        assert torch.equal(output, expected_output)

    def test_packed_face_areas(self, device, dtype):
        vertices = torch.tensor([[0., 0., 0.],
                                 [0., 0., 1.],
                                 [0., 1., 0.],
                                 [2., 0., 0.2],
                                 [0., 0., 0.],
                                 [0., 1., 1.],
                                 [2., 0., 0.]],
                                device=device, dtype=dtype)
        faces = torch.tensor([[0, 1, 2],
                              [1, 0, 3],
                              [0, 1, 2]], device=device, dtype=torch.long)
        first_idx_vertices = torch.LongTensor([0, 4, 7], device='cpu')
        num_faces_per_mesh = torch.LongTensor([2, 1], device='cpu')
        output = kaolin.ops.mesh.packed_face_areas(vertices, first_idx_vertices,
                                                   faces, num_faces_per_mesh)
        expected_output = torch.tensor([0.5, 1., math.sqrt(2.)], device=device, dtype=dtype)
        check_allclose(output, expected_output)


@pytest.mark.parametrize("device,dtype", FLOAT_TYPES)
class TestSamplePoints:

    @pytest.fixture(autouse=True)
    def vertices(self, device, dtype):
        # TODO(cfujitsang): extend the test with Z variation
        return torch.tensor([[[0., 0., 0.],
                              [0., 1., 0.],
                              [1., 0., 0.],
                              [-1, 0., 0.]],
                             [[1., 1., 3.],
                              [1., 1.5, 3.],
                              [1.5, 1., 3.],
                              [0.5, 1., 3.]]],
                            device=device, dtype=dtype)

    @pytest.fixture(autouse=True)
    def faces(self, device, dtype):
        return torch.tensor([[0, 1, 2],
                             [1, 0, 3]],
                            device=device, dtype=torch.long)

    @pytest.fixture(autouse=True)
    def face_features(self, device, dtype):
        return torch.tensor(
            [[[[0., 0.], [0., 1.], [0., 2.]],
              [[1., 3.], [1., 4.], [1., 5.]]],
             [[[2., 6.], [2., 7.], [2., 8.]],
              [[3., 9.], [3., 10.], [3., 11.]]]],
            device=device, dtype=dtype)

    ######## FIXED ########
    @pytest.mark.parametrize('use_features', [False, True])
    def test_sample_points(self, vertices, faces, face_features,
                           use_features, device, dtype):
        batch_size, num_vertices = vertices.shape[:2]
        num_faces = faces.shape[0]
        num_samples = 1000

        if use_features:
            points, face_choices, interpolated_features = kaolin.ops.mesh.sample_points(
                vertices, faces, num_samples, face_features=face_features)
        else:
            points, face_choices = kaolin.ops.mesh.sample_points(
                vertices, faces, num_samples)

        check_tensor(points, shape=(batch_size, num_samples, 3),
                     dtype=dtype, device=device)
        check_tensor(face_choices, shape=(batch_size, num_samples),
                     dtype=torch.long, device=device)

        # check that all faces are sampled
        num_0 = torch.sum(face_choices == 0, dim=1)
        assert torch.all(num_0 + torch.sum(face_choices == 1, dim=1) == num_samples)
        sampling_prob = num_samples / 2
        tolerance = sampling_prob * 0.2
        assert torch.all(num_0 < sampling_prob + tolerance) and \
            torch.all(num_0 > sampling_prob - tolerance)

        face_vertices = kaolin.ops.mesh.index_vertices_by_faces(vertices, faces)

        face_vertices_choices = torch.gather(
            face_vertices, 1, face_choices[:, :, None, None].repeat(1, 1, 3, 3))

        # compute the distance from each point to the plane of its picked face
        face_normals = kaolin.ops.mesh.face_normals(face_vertices_choices, unit=True)

        v0_p = points - face_vertices_choices[:, :, 0]  # batch_size x num_points x 3
        point_to_face_dist = torch.matmul(
            v0_p.reshape(-1, 1, 3),
            face_normals.reshape(-1, 3, 1)
        ).reshape(batch_size, num_samples)

        if dtype == torch.half:
            atol = 1e-2
            rtol = 1e-3
        else:
            atol = 1e-4
            rtol = 1e-5

        # check that each point is close to the plane of its face
        check_allclose(point_to_face_dist,
                       torch.zeros((batch_size, num_samples), device=device, dtype=dtype),
                       atol=atol, rtol=rtol)

        # check that the points lie inside their triangles
        edges0 = face_vertices_choices[:, :, 1] - face_vertices_choices[:, :, 0]
        edges1 = face_vertices_choices[:, :, 2] - face_vertices_choices[:, :, 1]
        edges2 = face_vertices_choices[:, :, 0] - face_vertices_choices[:, :, 2]

        v0_p = points - face_vertices_choices[:, :, 0]
        v1_p = points - face_vertices_choices[:, :, 1]
        v2_p = points - face_vertices_choices[:, :, 2]

        # Normals of the triangle formed by an edge and the point
        normals1 = torch.cross(edges0, v0_p)
        normals2 = torch.cross(edges1, v1_p)
        normals3 = torch.cross(edges2, v2_p)
        # the dot product of each of those normals with the face normal must be
        # non-negative (the point is on the inner side of every edge)
        margin = -5e-3 if dtype == torch.half else 0.
        assert torch.all(torch.matmul(normals1.reshape(-1, 1, 3),
                                      face_normals.reshape(-1, 3, 1)) >= margin)
        assert torch.all(torch.matmul(normals2.reshape(-1, 1, 3),
                                      face_normals.reshape(-1, 3, 1)) >= margin)
        assert torch.all(torch.matmul(normals3.reshape(-1, 1, 3),
                                      face_normals.reshape(-1, 3, 1)) >= margin)
        if use_features:
            feat_dim = face_features.shape[-1]
            check_tensor(interpolated_features, shape=(batch_size, num_samples, feat_dim),
                         dtype=dtype, device=device)
            # face_vertices_choices (batch_size, num_samples, 3, 3)
            # points (batch_size, num_samples, 3)
            ax = face_vertices_choices[:, :, 0, 0]
            ay = face_vertices_choices[:, :, 0, 1]
            bx = face_vertices_choices[:, :, 1, 0]
            by = face_vertices_choices[:, :, 1, 1]
            cx = face_vertices_choices[:, :, 2, 0]
            cy = face_vertices_choices[:, :, 2, 1]
            m = bx - ax
            p = by - ay
            n = cx - ax
            q = cy - ay
            s = points[:, :, 0] - ax
            t = points[:, :, 1] - ay

            # barycentric weights recovered from the (x, y) coordinates
            k1 = s * q - n * t
            k2 = m * t - s * p
            k3 = m * q - n * p
            w1 = k1 / (k3 + 1e-7)
            w2 = k2 / (k3 + 1e-7)
            w0 = (1. - w1) - w2
            weights = torch.stack([w0, w1, w2], dim=-1)

            gt_points = torch.sum(
                face_vertices_choices * weights.unsqueeze(-1), dim=-2)
            check_allclose(points, gt_points, atol=atol, rtol=rtol)

            _face_choices = face_choices[..., None, None].repeat(1, 1, 3, feat_dim)
            face_features_choices = torch.gather(face_features, 1, _face_choices)

            gt_interpolated_features = torch.sum(
                face_features_choices * weights.unsqueeze(-1), dim=-2)
            check_allclose(interpolated_features, gt_interpolated_features,
                           atol=atol, rtol=rtol)

    def test_sample_points_with_areas(self, vertices, faces, dtype, device):
        num_samples = 1000
        face_areas = kaolin.ops.mesh.face_areas(vertices, faces)
        points1, face_choices1 = with_seed(1234)(
            kaolin.ops.mesh.sample_points)(vertices, faces, num_samples, face_areas)
        points2, face_choices2 = with_seed(1234)(
            kaolin.ops.mesh.sample_points)(vertices, faces, num_samples)
        check_allclose(points1, points2)
        assert torch.equal(face_choices1, face_choices2)

    def test_sample_points_with_areas_with_features(self, vertices, faces,
                                                    face_features, dtype, device):
        num_samples = 1000
        face_areas = kaolin.ops.mesh.face_areas(vertices, faces)
        points1, face_choices1, interpolated_features1 = with_seed(1234)(
            kaolin.ops.mesh.sample_points)(vertices, faces, num_samples, face_areas,
                                           face_features=face_features)
        points2, face_choices2, interpolated_features2 = with_seed(1234)(
            kaolin.ops.mesh.sample_points)(vertices, faces, num_samples,
                                           face_features=face_features)
        check_allclose(points1, points2)
        assert torch.equal(face_choices1, face_choices2)
        check_allclose(interpolated_features1, interpolated_features2)

    def test_diff_sample_points(self, vertices, faces, device, dtype):
        num_samples = 1000
        points1, face_choices1 = with_seed(1234)(
            kaolin.ops.mesh.sample_points)(vertices, faces, num_samples)
        points2, face_choices2 = with_seed(1235)(
            kaolin.ops.mesh.sample_points)(vertices, faces, num_samples)
        assert not torch.equal(points1, points2)
        assert not torch.equal(face_choices1, face_choices2)

    ######## PACKED ########
    @pytest.fixture(autouse=True)
    def packed_vertices_info(self, device, dtype):
        vertices = torch.tensor([[0., 0., 0.],
                                 [0., 0., 1.],
                                 [0., 1., 0.],
                                 [2., 0., 0.2],
                                 [0., 0., 0.],
                                 [0., 1., 1.],
                                 [2., 0., 0.]],
                                device=device, dtype=dtype)
        first_idx_vertices = torch.LongTensor([0, 4, 7], device='cpu')
        return vertices, first_idx_vertices

    @pytest.fixture(autouse=True)
    def packed_faces_info(self, device, dtype):
        faces = torch.tensor([[0, 1, 2],
                              [1, 0, 3],
                              [0, 1, 2]], device=device, dtype=torch.long)
        num_faces_per_mesh = torch.LongTensor([2, 1], device='cpu')
        return faces, num_faces_per_mesh

    def test_packed_sample_points(self, packed_vertices_info, packed_faces_info,
                                  device, dtype):
        vertices, first_idx_vertices = packed_vertices_info
        faces, num_faces_per_mesh = packed_faces_info

        total_num_vertices = vertices.shape[0]
        total_num_faces = faces.shape[0]
        batch_size = num_faces_per_mesh.shape[0]
        num_samples = 1000

        points, face_choices = kaolin.ops.mesh.packed_sample_points(
            vertices, first_idx_vertices, faces, num_faces_per_mesh, num_samples)

        check_tensor(points, shape=(batch_size, num_samples, 3),
                     dtype=dtype, device=device)
        check_tensor(face_choices, shape=(batch_size, num_samples),
                     dtype=torch.long, device=device)

        # check that all faces are sampled
        assert torch.all(face_choices[1] == 2)
        num_0 = torch.sum(face_choices[0] == 0)
        assert num_0 + torch.sum(face_choices[0] == 1) == num_samples
        sampling_prob = num_samples / 3.
        tolerance = sampling_prob * 0.2
        assert (num_0 < sampling_prob + tolerance) and \
            (num_0 > sampling_prob - tolerance)

        merged_faces = faces + kaolin.ops.batch.tile_to_packed(
            first_idx_vertices[:-1].to(vertices.device),
            num_faces_per_mesh)

        face_vertices = torch.index_select(
            vertices, 0, merged_faces.reshape(-1)).reshape(total_num_faces, 3, 3)

        face_vertices_choices = torch.gather(
            face_vertices, 0, face_choices.reshape(-1, 1, 1).repeat(1, 3, 3)
        ).reshape(batch_size, num_samples, 3, 3)

        # compute the distance from each point to the plane of its picked face
        face_normals = kaolin.ops.mesh.face_normals(face_vertices_choices, unit=True)
        v0_p = points - face_vertices_choices[:, :, 0]  # batch_size x num_points x 3
        point_to_face_dist = torch.matmul(
            v0_p.reshape(-1, 1, 3),
            face_normals.reshape(-1, 3, 1)
        ).reshape(batch_size, num_samples)

        if dtype == torch.half:
            atol = 1e-2
            rtol = 1e-3
        else:
            atol = 1e-4
            rtol = 1e-5

        # check that each point is close to the plane of its face
        check_allclose(point_to_face_dist,
                       torch.zeros((batch_size, num_samples), device=device, dtype=dtype),
                       atol=atol, rtol=rtol)

        # check that the points lie inside their triangles
        edges0 = face_vertices_choices[:, :, 1] - face_vertices_choices[:, :, 0]
        edges1 = face_vertices_choices[:, :, 2] - face_vertices_choices[:, :, 1]
        edges2 = face_vertices_choices[:, :, 0] - face_vertices_choices[:, :, 2]

        v0_p = points - face_vertices_choices[:, :, 0]
        v1_p = points - face_vertices_choices[:, :, 1]
        v2_p = points - face_vertices_choices[:, :, 2]

        # Normals of the triangle formed by an edge and the point
        normals1 = torch.cross(edges0, v0_p)
        normals2 = torch.cross(edges1, v1_p)
        normals3 = torch.cross(edges2, v2_p)
        # the dot product of each of those normals with the face normal must be
        # non-negative (the point is on the inner side of every edge)
        margin = -2e-3 if dtype == torch.half else 0.
        assert torch.all(torch.matmul(normals1.reshape(-1, 1, 3),
                                      face_normals.reshape(-1, 3, 1)) >= margin)
        assert torch.all(torch.matmul(normals2.reshape(-1, 1, 3),
                                      face_normals.reshape(-1, 3, 1)) >= margin)
        assert torch.all(torch.matmul(normals3.reshape(-1, 1, 3),
                                      face_normals.reshape(-1, 3, 1)) >= margin)

    def test_packed_sample_points_with_areas(self, packed_vertices_info, packed_faces_info,
                                             dtype, device):
        num_samples = 1000
        vertices, first_idx_vertices = packed_vertices_info
        faces, num_faces_per_mesh = packed_faces_info

        face_areas = kaolin.ops.mesh.packed_face_areas(
            vertices, first_idx_vertices, faces, num_faces_per_mesh)

        points1, face_choices1 = with_seed(1234)(kaolin.ops.mesh.packed_sample_points)(
            vertices, first_idx_vertices, faces, num_faces_per_mesh, num_samples, face_areas)

        points2, face_choices2 = with_seed(1234)(kaolin.ops.mesh.packed_sample_points)(
            vertices, first_idx_vertices, faces, num_faces_per_mesh, num_samples)

        check_allclose(points1, points2)
        assert torch.equal(face_choices1, face_choices2)

    def test_diff_packed_sample_points(self, packed_vertices_info, packed_faces_info,
                                       dtype, device):
        num_samples = 1000
        vertices, first_idx_vertices = packed_vertices_info
        faces, num_faces_per_mesh = packed_faces_info

        points1, face_choices1 = with_seed(1234)(kaolin.ops.mesh.packed_sample_points)(
            vertices, first_idx_vertices, faces, num_faces_per_mesh, num_samples)
        points2, face_choices2 = with_seed(1235)(kaolin.ops.mesh.packed_sample_points)(
            vertices, first_idx_vertices, faces, num_faces_per_mesh, num_samples)

        assert not torch.equal(points1, points2)
        assert not torch.equal(face_choices1, face_choices2)


@pytest.mark.parametrize('device,dtype', FLOAT_TYPES)
class TestVertexTangents:
    def test_tangents(self, device, dtype):
        # Faces are a fan around the 0th vertex
        faces = torch.tensor([
            [0, 1, 2],
            [0, 2, 3],
            [0, 3, 4]
        ], device=device, dtype=torch.long)
        vertices = torch.tensor([[
            [0., 0., 0.],
            [0., 0., 1.],
            [0., 1., 0.],
            [1., 0., 0.],
            [1., -1., 0.]
        ]], device=device, dtype=dtype)
        face_vertices = kaolin.ops.mesh.index_vertices_by_faces(
            vertices, faces).squeeze(0)
        vertex_normals = torch.tensor([
            [-1., 0., -1.],
            [-1., 0., 1.],
            [-1., 1., -1.],
            [ 1., 0., -1.],
            [ 0., 0., -1.]
        ], device=device, dtype=dtype)
        face_uvs = torch.tensor([
            [[0.5, 0.5], [1., 1.], [0., 1.]],
            [[0.5, 0.5], [0., 1.], [0., 0.]],
            [[0.5, 0.5], [0., 0.], [1., 0.]]
        ], device=device, dtype=dtype)
        tangents = kaolin.ops.mesh.vertex_tangents(
            faces, face_vertices, face_uvs, vertex_normals)
        expected_tangents = torch.tensor([
            [-0.3015, -0.9045, 0.3015],
            [ 0.7071, -0.7071, 0.0000],
            [-0.9487, 0.0000, -0.3162],
            [ 0.0000, -0.8944, -0.4472],
            [ 0.0000, -1.0000, 0.0000]
        ], device=device, dtype=dtype)
        check_allclose(tangents, expected_tangents, atol=1e-3, rtol=1e-3)


@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
class TestSubdivide:

    def test_subdivide(self, device, dtype):
        vertices = torch.tensor([[0, 0, 0],
                                 [1, 0, 0],
                                 [0, 0, 1]], dtype=dtype, device=device)

        faces = torch.tensor([[0, 1, 2]], dtype=torch.long, device=device)

        new_vertices = _unbatched_subdivide_vertices(vertices, faces, 3)
        expected_vertices = torch.tensor([[0.0000, 0.0000, 0.0000],
                                          [0.0000, 0.0000, 0.1250],
                                          [0.0000, 0.0000, 0.2500],
                                          [0.0000, 0.0000, 0.3750],
                                          [0.0000, 0.0000, 0.5000],
                                          [0.0000, 0.0000, 0.6250],
                                          [0.0000, 0.0000, 0.7500],
                                          [0.0000, 0.0000, 0.8750],
                                          [0.0000, 0.0000, 1.0000],
                                          [0.1250, 0.0000, 0.0000],
                                          [0.1250, 0.0000, 0.1250],
                                          [0.1250, 0.0000, 0.2500],
                                          [0.1250, 0.0000, 0.3750],
                                          [0.1250, 0.0000, 0.5000],
                                          [0.1250, 0.0000, 0.6250],
                                          [0.1250, 0.0000, 0.7500],
                                          [0.1250, 0.0000, 0.8750],
                                          [0.2500, 0.0000, 0.0000],
                                          [0.2500, 0.0000, 0.1250],
                                          [0.2500, 0.0000, 0.2500],
                                          [0.2500, 0.0000, 0.3750],
                                          [0.2500, 0.0000, 0.5000],
                                          [0.2500, 0.0000, 0.6250],
                                          [0.2500, 0.0000, 0.7500],
                                          [0.3750, 0.0000, 0.0000],
                                          [0.3750, 0.0000, 0.1250],
                                          [0.3750, 0.0000, 0.2500],
                                          [0.3750, 0.0000, 0.3750],
                                          [0.3750, 0.0000, 0.5000],
                                          [0.3750, 0.0000, 0.6250],
                                          [0.5000, 0.0000, 0.0000],
                                          [0.5000, 0.0000, 0.1250],
                                          [0.5000, 0.0000, 0.2500],
                                          [0.5000, 0.0000, 0.3750],
                                          [0.5000, 0.0000, 0.5000],
                                          [0.6250, 0.0000, 0.0000],
                                          [0.6250, 0.0000, 0.1250],
                                          [0.6250, 0.0000, 0.2500],
                                          [0.6250, 0.0000, 0.3750],
                                          [0.7500, 0.0000, 0.0000],
                                          [0.7500, 0.0000, 0.1250],
                                          [0.7500, 0.0000, 0.2500],
                                          [0.8750, 0.0000, 0.0000],
                                          [0.8750, 0.0000, 0.1250],
                                          [1.0000, 0.0000, 0.0000]], dtype=dtype, device=device)

        assert torch.equal(new_vertices, expected_vertices)

    def test_subdivide_2(self, device, dtype):
        vertices = torch.tensor([[0, 0, 0],
                                 [1, 0, 0],
                                 [0, 0, 1]], dtype=dtype, device=device)

        faces = torch.tensor([[0, 1, 2]], dtype=torch.long, device=device)

        # Note: for this triangle, resolution 2 yields the same grid as
        # resolution 3 in test_subdivide above.
        new_vertices = _unbatched_subdivide_vertices(vertices, faces, 2)
        expected_vertices = torch.tensor([[0.0000, 0.0000, 0.0000],
                                          [0.0000, 0.0000, 0.1250],
                                          [0.0000, 0.0000, 0.2500],
                                          [0.0000, 0.0000, 0.3750],
                                          [0.0000, 0.0000, 0.5000],
                                          [0.0000, 0.0000, 0.6250],
                                          [0.0000, 0.0000, 0.7500],
                                          [0.0000, 0.0000, 0.8750],
                                          [0.0000, 0.0000, 1.0000],
                                          [0.1250, 0.0000, 0.0000],
                                          [0.1250, 0.0000, 0.1250],
                                          [0.1250, 0.0000, 0.2500],
                                          [0.1250, 0.0000, 0.3750],
                                          [0.1250, 0.0000, 0.5000],
                                          [0.1250, 0.0000, 0.6250],
                                          [0.1250, 0.0000, 0.7500],
                                          [0.1250, 0.0000, 0.8750],
                                          [0.2500, 0.0000, 0.0000],
                                          [0.2500, 0.0000, 0.1250],
                                          [0.2500, 0.0000, 0.2500],
                                          [0.2500, 0.0000, 0.3750],
                                          [0.2500, 0.0000, 0.5000],
                                          [0.2500, 0.0000, 0.6250],
                                          [0.2500, 0.0000, 0.7500],
                                          [0.3750, 0.0000, 0.0000],
                                          [0.3750, 0.0000, 0.1250],
                                          [0.3750, 0.0000, 0.2500],
                                          [0.3750, 0.0000, 0.3750],
                                          [0.3750, 0.0000, 0.5000],
                                          [0.3750, 0.0000, 0.6250],
                                          [0.5000, 0.0000, 0.0000],
                                          [0.5000, 0.0000, 0.1250],
                                          [0.5000, 0.0000, 0.2500],
                                          [0.5000, 0.0000, 0.3750],
                                          [0.5000, 0.0000, 0.5000],
                                          [0.6250, 0.0000, 0.0000],
                                          [0.6250, 0.0000, 0.1250],
                                          [0.6250, 0.0000, 0.2500],
                                          [0.6250, 0.0000, 0.3750],
                                          [0.7500, 0.0000, 0.0000],
                                          [0.7500, 0.0000, 0.1250],
                                          [0.7500, 0.0000, 0.2500],
                                          [0.8750, 0.0000, 0.0000],
                                          [0.8750, 0.0000, 0.1250],
                                          [1.0000, 0.0000, 0.0000]], device=device, dtype=dtype)

        assert torch.equal(new_vertices, expected_vertices)

    def test_subdivide_3(self, device, dtype):
        vertices = torch.tensor([[0, 0, 0],
                                 [0, 0.5, 0],
                                 [0, 0, 1]], dtype=dtype, device=device)

        faces = torch.tensor([[0, 1, 2]], dtype=torch.long, device=device)

        new_vertices = _unbatched_subdivide_vertices(vertices, faces, 2)
        expected_vertices = torch.tensor([[0.0000, 0.0000, 0.0000],
                                          [0.0000, 0.0000, 0.1250],
                                          [0.0000, 0.0000, 0.2500],
                                          [0.0000, 0.0000, 0.3750],
                                          [0.0000, 0.0000, 0.5000],
                                          [0.0000, 0.0000, 0.6250],
                                          [0.0000, 0.0000, 0.7500],
                                          [0.0000, 0.0000, 0.8750],
                                          [0.0000, 0.0000, 1.0000],
                                          [0.0000, 0.0625, 0.0000],
                                          [0.0000, 0.0625, 0.1250],
                                          [0.0000, 0.0625, 0.2500],
                                          [0.0000, 0.0625, 0.3750],
                                          [0.0000, 0.0625, 0.5000],
                                          [0.0000, 0.0625, 0.6250],
                                          [0.0000, 0.0625, 0.7500],
                                          [0.0000, 0.0625, 0.8750],
                                          [0.0000, 0.1250, 0.0000],
                                          [0.0000, 0.1250, 0.1250],
                                          [0.0000, 0.1250, 0.2500],
                                          [0.0000, 0.1250, 0.3750],
                                          [0.0000, 0.1250, 0.5000],
                                          [0.0000, 0.1250, 0.6250],
                                          [0.0000, 0.1250, 0.7500],
                                          [0.0000, 0.1875, 0.0000],
                                          [0.0000, 0.1875, 0.1250],
                                          [0.0000, 0.1875, 0.2500],
                                          [0.0000, 0.1875, 0.3750],
                                          [0.0000, 0.1875, 0.5000],
                                          [0.0000, 0.1875, 0.6250],
                                          [0.0000, 0.2500, 0.0000],
                                          [0.0000, 0.2500, 0.1250],
                                          [0.0000, 0.2500, 0.2500],
                                          [0.0000, 0.2500, 0.3750],
                                          [0.0000, 0.2500, 0.5000],
                                          [0.0000, 0.3125, 0.0000],
                                          [0.0000, 0.3125, 0.1250],
                                          [0.0000, 0.3125, 0.2500],
                                          [0.0000, 0.3125, 0.3750],
                                          [0.0000, 0.3750, 0.0000],
                                          [0.0000, 0.3750, 0.1250],
                                          [0.0000, 0.3750, 0.2500],
                                          [0.0000, 0.4375, 0.0000],
                                          [0.0000, 0.4375, 0.1250],
                                          [0.0000, 0.5000, 0.0000]], dtype=dtype, device=device)

        assert torch.equal(new_vertices, expected_vertices)


@pytest.mark.parametrize('device', ['cpu', 'cuda'])
class TestSubdivideTrianglemesh:

    @pytest.fixture(autouse=True)
    def vertices_icosahedron(self, device):
        return torch.tensor([[[-0.5257, 0.8507, 0.0000],
                              [0.5257, 0.8507, 0.0000],
                              [-0.5257, -0.8507, 0.0000],
                              [0.5257, -0.8507, 0.0000],
                              [0.0000, -0.5257, 0.8507],
                              [0.0000, 0.5257, 0.8507],
                              [0.0000, -0.5257, -0.8507],
                              [0.0000, 0.5257, -0.8507],
                              [0.8507, 0.0000, -0.5257],
                              [0.8507, 0.0000, 0.5257],
                              [-0.8507, 0.0000, -0.5257],
                              [-0.8507, 0.0000, 0.5257]]], dtype=torch.float, device=device)

    @pytest.fixture(autouse=True)
    def faces_icosahedron(self, device):
        return torch.tensor([[0, 11, 5],
                             [0, 5, 1],
                             [0, 1, 7],
                             [0, 7, 10],
                             [0, 10, 11],
                             [1, 5, 9],
                             [5, 11, 4],
                             [11, 10, 2],
                             [10, 7, 6],
                             [7, 1, 8],
                             [3, 9, 4],
                             [3, 4, 2],
                             [3, 2, 6],
                             [3, 6, 8],
                             [3, 8, 9],
                             [4, 9, 5],
                             [2, 4, 11],
                             [6, 2, 10],
                             [8, 6, 7],
                             [9, 8, 1]], dtype=torch.long, device=device)

    @pytest.fixture(autouse=True)
    def expected_vertices_default_alpha(self, device):
        return torch.tensor([[[-0.4035, 0.6529, 0.0000],
                              [0.4035, 0.6529, 0.0000],
                              [-0.4035, -0.6529, 0.0000],
                              [0.4035, -0.6529, 0.0000],
                              [0.0000, -0.4035, 0.6529],
                              [0.0000, 0.4035, 0.6529],
                              [0.0000, -0.4035, -0.6529],
                              [0.0000, 0.4035, -0.6529],
                              [0.6529, 0.0000, -0.4035],
                              [0.6529, 0.0000, 0.4035],
                              [-0.6529, 0.0000, -0.4035],
                              [-0.6529, 0.0000, 0.4035],
                              [0.0000, 0.7694, 0.0000],
                              [-0.2378, 0.6225, 0.3847],
                              [-0.2378, 0.6225, -0.3847],
                              [-0.6225, 0.3847, -0.2378],
                              [-0.6225, 0.3847, 0.2378],
                              [0.2378, 0.6225, 0.3847],
                              [0.2378, 0.6225, -0.3847],
                              [0.6225, 0.3847, -0.2378],
                              [0.6225, 0.3847, 0.2378],
                              [0.0000, -0.7694, 0.0000],
                              [-0.2378, -0.6225, 0.3847],
                              [-0.2378, -0.6225, -0.3847],
                              [-0.6225, -0.3847, -0.2378],
                              [-0.6225, -0.3847, 0.2378],
                              [0.2378, -0.6225, 0.3847],
                              [0.2378, -0.6225, -0.3847],
                              [0.6225, -0.3847, -0.2378],
                              [0.6225, -0.3847, 0.2378],
                              [0.0000, 0.0000, 0.7694],
                              [0.3847, -0.2378, 0.6225],
                              [-0.3847, -0.2378, 0.6225],
                              [0.3847, 0.2378, 0.6225],
                              [-0.3847, 0.2378, 0.6225],
                              [0.0000, 0.0000, -0.7694],
                              [0.3847, -0.2378, -0.6225],
                              [-0.3847, -0.2378, -0.6225],
                              [0.3847, 0.2378, -0.6225],
                              [-0.3847, 0.2378, -0.6225],
                              [0.7694, 0.0000, 0.0000],
                              [-0.7694, 0.0000, 0.0000]]], dtype=torch.float, device=device)

    @pytest.fixture(autouse=True)
    def expected_vertices_zero_alpha(self, device):
        return torch.tensor([[[-0.5257, 0.8507, 0.0000],
                              [0.5257, 0.8507, 0.0000],
                              [-0.5257, -0.8507, 0.0000],
                              [0.5257, -0.8507, 0.0000],
                              [0.0000, -0.5257, 0.8507],
                              [0.0000, 0.5257, 0.8507],
                              [0.0000, -0.5257, -0.8507],
                              [0.0000, 0.5257, -0.8507],
                              [0.8507, 0.0000, -0.5257],
                              [0.8507, 0.0000, 0.5257],
                              [-0.8507, 0.0000, -0.5257],
                              [-0.8507, 0.0000, 0.5257],
                              [0.0000, 0.7694, 0.0000],
                              [-0.2378, 0.6225, 0.3847],
                              [-0.2378, 0.6225, -0.3847],
                              [-0.6225, 0.3847, -0.2378],
                              [-0.6225, 0.3847, 0.2378],
                              [0.2378, 0.6225, 0.3847],
                              [0.2378, 0.6225, -0.3847],
                              [0.6225, 0.3847, -0.2378],
                              [0.6225, 0.3847, 0.2378],
                              [0.0000, -0.7694, 0.0000],
                              [-0.2378, -0.6225, 0.3847],
                              [-0.2378, -0.6225, -0.3847],
                              [-0.6225, -0.3847, -0.2378],
                              [-0.6225, -0.3847, 0.2378],
                              [0.2378, -0.6225, 0.3847],
                              [0.2378, -0.6225, -0.3847],
                              [0.6225, -0.3847, -0.2378],
                              [0.6225, -0.3847, 0.2378],
                              [0.0000, 0.0000, 0.7694],
                              [0.3847, -0.2378, 0.6225],
                              [-0.3847, -0.2378, 0.6225],
                              [0.3847, 0.2378, 0.6225],
                              [-0.3847, 0.2378, 0.6225],
                              [0.0000, 0.0000, -0.7694],
                              [0.3847, -0.2378, -0.6225],
                              [-0.3847, -0.2378, -0.6225],
                              [0.3847, 0.2378, -0.6225],
                              [-0.3847, 0.2378, -0.6225],
                              [0.7694, 0.0000, 0.0000],
                              [-0.7694, 0.0000, 0.0000]]], dtype=torch.float, device=device)

    @pytest.fixture(autouse=True)
    def expected_faces_icosahedron_1_iter(self, device):
        return torch.tensor([[11, 34, 16],
                             [0, 16, 13],
                             [5, 13, 34],
                             [13, 16, 34],
                             [5, 17, 13],
                             [0, 13, 12],
                             [1, 12, 17],
                             [12, 13, 17],
                             [1, 18, 12],
                             [0, 12, 14],
                             [7, 14, 18],
                             [14, 12, 18],
                             [7, 39, 14],
                             [0, 14, 15],
                             [10, 15, 39],
                             [15, 14, 39],
                             [10, 41, 15],
                             [0, 15, 16],
                             [11, 16, 41],
                             [16, 15, 41],
                             [5, 33, 17],
                             [1, 17, 20],
                             [9, 20, 33],
                             [20, 17, 33],
                             [11, 32, 34],
                             [5, 34, 30],
                             [4, 30, 32],
                             [30, 34, 32],
                             [10, 24, 41],
                             [11, 41, 25],
                             [2, 25, 24],
                             [25, 41, 24],
                             [7, 35, 39],
                             [10, 39, 37],
                             [6, 37, 35],
                             [37, 39, 35],
                             [1, 19, 18],
                             [7, 18, 38],
                             [8, 38, 19],
                             [38, 18, 19],
                             [9, 31, 29],
                             [3, 29, 26],
                             [4, 26, 31],
                             [26, 29, 31],
                             [4, 22, 26],
                             [3, 26, 21],
                             [2, 21, 22],
                             [21, 26, 22],
                             [2, 23, 21],
                             [3, 21, 27],
                             [6, 27, 23],
                             [27, 21, 23],
                             [6, 36, 27],
                             [3, 27, 28],
                             [8, 28, 36],
                             [28, 27, 36],
                             [8, 40, 28],
                             [3, 28, 29],
                             [9, 29, 40],
                             [29, 28, 40],
                             [9, 33, 31],
                             [4, 31, 30],
                             [5, 30, 33],
                             [30, 31, 33],
                             [4, 32, 22],
                             [2, 22, 25],
                             [11, 25, 32],
                             [25, 22, 32],
                             [2, 24, 23],
                             [6, 23, 37],
                             [10, 37, 24],
                             [37, 23, 24],
                             [6, 35, 36],
                             [8, 36, 38],
                             [7, 38, 35],
                             [38, 36, 35],
                             [8, 19, 40],
                             [9, 40, 20],
                             [1, 20, 19],
                             [20, 40, 19]], dtype=torch.long, device=device)

    def test_subdivide_trianglemesh_1_iter_default_alpha(self, vertices_icosahedron, faces_icosahedron, expected_vertices_default_alpha, expected_faces_icosahedron_1_iter):
        new_vertices, new_faces = kaolin.ops.mesh.subdivide_trianglemesh(
            vertices_icosahedron, faces_icosahedron, 1)
        check_allclose(new_vertices, expected_vertices_default_alpha, atol=1e-04)
        assert torch.equal(new_faces, expected_faces_icosahedron_1_iter)

    def test_subdivide_trianglemesh_1_iter_zero_alpha(self, vertices_icosahedron, faces_icosahedron, expected_vertices_zero_alpha, expected_faces_icosahedron_1_iter):
        alpha = torch.zeros_like(vertices_icosahedron[..., 0])
        new_vertices, new_faces = kaolin.ops.mesh.subdivide_trianglemesh(
            vertices_icosahedron, faces_icosahedron, 1, alpha)
        check_allclose(new_vertices, expected_vertices_zero_alpha, atol=1e-04)
        assert torch.equal(new_faces, expected_faces_icosahedron_1_iter)

    def test_subdivide_trianglemesh_5_iter(self, vertices_icosahedron, faces_icosahedron):
        new_vertices, new_faces = kaolin.ops.mesh.subdivide_trianglemesh(
            vertices_icosahedron, faces_icosahedron, 5)
        # check total area of all faces
        check_allclose(
            kaolin.ops.mesh.face_areas(new_vertices, new_faces).sum(),
            torch.tensor([6.2005], dtype=new_vertices.dtype, device=new_faces.device),
            atol=1e-4)
        assert new_faces.shape[0] == faces_icosahedron.shape[0] * 4 ** 5
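To make the sampling contract explicit, here is a minimal sketch matching the fixed-topology path tested above; the precomputed face areas are passed positionally, as in the tests.

```python
import torch
import kaolin

vertices = torch.tensor([[[0., 0., 0.], [0., 1., 0.],
                          [1., 0., 0.], [-1., 0., 0.]]])
faces = torch.tensor([[0, 1, 2],
                      [1, 0, 3]], dtype=torch.long)

areas = kaolin.ops.mesh.face_areas(vertices, faces)
points, face_choices = kaolin.ops.mesh.sample_points(
    vertices, faces, 1000, areas)
# points: (1, 1000, 3) sample positions; face_choices: (1, 1000) face index
# per sample, drawn proportionally to face area.
```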
342
tests/python/kaolin/ops/spc/test_conv.py
Normal file
@@ -0,0 +1,342 @@
|
||||
# Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES.
|
||||
# All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import math
|
||||
import pytest
|
||||
import os
|
||||
from itertools import product
|
||||
|
||||
import torch
|
||||
from kaolin.ops.spc.uint8 import bits_to_uint8, uint8_bits_sum, uint8_to_bits
|
||||
from kaolin.ops.random import random_spc_octrees
|
||||
from kaolin.rep import Spc
|
||||
|
||||
from kaolin.ops import spc
|
||||
|
||||
from kaolin.utils.testing import FLOAT_TYPES, with_seed, check_tensor
|
||||
|
||||
os.environ['NVIDIA_TF32_OVERRIDE'] = '0'
|
||||
|
||||
@pytest.mark.parametrize('batch_size', [1, 3])
|
||||
@pytest.mark.parametrize('height,width,depth,threshold',
|
||||
[(27, 37, 37, 0.7), (64, 64, 64, 0.)])
|
||||
@pytest.mark.parametrize('in_channels', [1, 5])
|
||||
@pytest.mark.parametrize('out_channels', [1, 7])
|
||||
@pytest.mark.parametrize('kernel_size,kernel_offset', [(1, 0), (2, 0), (3, 0), (3, 1), (4, 0), (5, 0), (5, 2)])
|
||||
@pytest.mark.parametrize('with_bias', [False, True])
|
||||
class TestConv3D:
|
||||
@pytest.fixture(autouse=True)
|
||||
def sparsity_masks(self, batch_size, height, width, depth, threshold):
|
||||
return torch.rand(batch_size, height, width, depth,
|
||||
device='cuda') > threshold
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def feature_grids(self, sparsity_masks, batch_size, in_channels, height, width, depth):
|
||||
return torch.rand(batch_size, in_channels, height, width, depth,
|
||||
device='cuda') * sparsity_masks.unsqueeze(1)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def kernel_vectors(self, kernel_size, kernel_offset):
|
||||
return torch.tensor(
|
||||
list(product(range(-kernel_offset, kernel_size - kernel_offset), repeat=3)),
|
||||
dtype=torch.int16, device='cuda')
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def dense_weight(self, in_channels, out_channels, kernel_size):
|
||||
return torch.rand(out_channels, in_channels,
|
||||
kernel_size, kernel_size, kernel_size,
|
||||
device='cuda')
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def spc_weight(self, dense_weight, in_channels, out_channels):
|
||||
return dense_weight.reshape(out_channels, in_channels, -1).permute(2, 1, 0)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def bias(self, with_bias, out_channels):
|
||||
if with_bias:
|
||||
return torch.rand(out_channels, device='cuda')
|
||||
else:
|
||||
return None
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def octrees_lengths_features(self, feature_grids, sparsity_masks):
|
||||
return spc.feature_grids_to_spc(feature_grids, sparsity_masks)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def octrees(self, octrees_lengths_features):
|
||||
return octrees_lengths_features[0]
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def lengths(self, octrees_lengths_features):
|
||||
return octrees_lengths_features[1]
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def coalescent_features(self, octrees_lengths_features):
|
||||
return octrees_lengths_features[2]
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def max_level_pyramids_exsum(self, octrees, lengths):
|
||||
return spc.scan_octrees(octrees, lengths)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def max_level(self, max_level_pyramids_exsum):
|
||||
return max_level_pyramids_exsum[0]
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def pyramids(self, max_level_pyramids_exsum):
|
||||
return max_level_pyramids_exsum[1]
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def exsum(self, max_level_pyramids_exsum):
|
||||
return max_level_pyramids_exsum[2]
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def point_hierarchies(self, octrees, pyramids, exsum):
|
||||
return spc.generate_points(octrees, pyramids, exsum)
|
||||
|
||||
@pytest.mark.parametrize('with_spc_to_dict', [False, True])
|
||||
@pytest.mark.parametrize('jump', [0, 1, 2])
|
||||
def test_conv3d(self, height, width, depth, in_channels, out_channels, kernel_size,
|
||||
feature_grids, sparsity_masks, dense_weight, bias,
|
||||
octrees, lengths, coalescent_features, max_level,
|
||||
pyramids, exsum, point_hierarchies,
|
||||
kernel_vectors, kernel_offset, spc_weight, jump, with_spc_to_dict):
|
||||
stride = 2 ** jump
|
||||
coalescent_features = coalescent_features.detach()
|
||||
coalescent_features.requires_grad = True
|
||||
spc_weight = spc_weight.detach()
|
||||
spc_weight.requires_grad = True
|
||||
|
||||
if with_spc_to_dict:
|
||||
input_spc = Spc(octrees, lengths)
|
||||
output_features, output_level = spc.conv3d(
|
||||
**input_spc.to_dict(), level=input_spc.max_level, input=coalescent_features,
|
||||
weight=spc_weight, kernel_vectors=kernel_vectors, jump=jump, bias=bias)
|
||||
output = spc.to_dense(**input_spc.to_dict(), input=output_features,
|
||||
level=output_level)
|
||||
output_sparsity_masks = spc.to_dense(
|
||||
**input_spc.to_dict(),
|
||||
input=torch.ones_like(output_features, requires_grad=False),
|
||||
level=output_level)
|
||||
else:
|
||||
output_features, output_level = spc.conv3d(
|
||||
octrees, point_hierarchies, max_level, pyramids, exsum, coalescent_features,
|
||||
spc_weight, kernel_vectors, jump=jump, bias=bias)
|
||||
output = spc.to_dense(point_hierarchies, pyramids, output_features, output_level)
|
||||
output_sparsity_masks = spc.to_dense(
|
||||
point_hierarchies, pyramids, torch.ones_like(output_features, requires_grad=False),
|
||||
output_level)
|
||||
|
||||
feature_grids = feature_grids.detach()
|
||||
feature_grids.requires_grad = True
|
||||
dense_weight = dense_weight.detach()
|
||||
dense_weight.requires_grad = True
|
||||
|
||||
padded_input = torch.nn.functional.pad(feature_grids,
|
||||
(kernel_offset, kernel_size - 1 - kernel_offset,
|
||||
kernel_offset, kernel_size - 1 - kernel_offset,
|
||||
kernel_offset, kernel_size - 1 - kernel_offset))
|
||||
expected_output = torch.nn.functional.conv3d(padded_input, dense_weight, stride=stride, bias=bias)
|
||||
expected_height, expected_width, expected_depth = expected_output.shape[2:]
|
||||
expected_output *= output_sparsity_masks[:, :, :expected_height, :expected_width, :expected_depth]
|
||||
assert torch.allclose(output[:, :, :expected_height, :expected_width, :expected_depth],
|
||||
expected_output, atol=1e-3, rtol=1e-3)
|
||||
grad_output = torch.rand_like(output)
|
||||
output.backward(grad_output)
|
||||
expected_output.backward(grad_output[:, :, :expected_height, :expected_width, :expected_depth])
|
||||
|
||||
_, _, sparsified_grad = spc.feature_grids_to_spc(feature_grids.grad, sparsity_masks)
|
||||
|
||||
assert torch.allclose(coalescent_features.grad, sparsified_grad, rtol=1e-3, atol=1e-3)
|
||||
assert torch.allclose(spc_weight.grad,
|
||||
dense_weight.grad.reshape(out_channels, in_channels, -1).permute(2, 1, 0),
|
||||
rtol=5e-2, atol=5e-2)
|
||||
|
||||
    @pytest.mark.parametrize('with_spc_to_dict', [False, True])
    @pytest.mark.parametrize('jump', [0, 1, 2])
    def test_conv_transpose3d(self, height, width, depth, in_channels, out_channels,
                              sparsity_masks, dense_weight, bias,
                              octrees, lengths, max_level, pyramids, exsum, point_hierarchies,
                              kernel_vectors, kernel_size, kernel_offset, spc_weight, jump,
                              with_spc_to_dict):
        stride = 2 ** jump

        if stride > kernel_size:
            pytest.skip('stride higher than kernel_size is not tested')

        out_sparsity_masks = sparsity_masks
        in_level = max_level - jump
        in_num_nodes = torch.sum(pyramids[:, 0, -(2 + jump)])
        coalescent_features = torch.rand((in_num_nodes, in_channels), device='cuda',
                                         requires_grad=True)

        dense_weight = dense_weight.detach()
        dense_weight.requires_grad = True
        spc_weight = spc_weight.detach()
        spc_weight.requires_grad = True
        if with_spc_to_dict:
            input_spc = Spc(octrees, lengths)
            feature_grids = spc.to_dense(**input_spc.to_dict(), input=coalescent_features,
                                         level=in_level)
        else:
            feature_grids = spc.to_dense(point_hierarchies, pyramids, coalescent_features, in_level)
        feature_grids = feature_grids[:, :, :math.ceil(height / stride),
                                      :math.ceil(width / stride), :math.ceil(depth / stride)]
        feature_grids = feature_grids.detach()
        feature_grids.requires_grad = True
        if with_spc_to_dict:
            sparsity_masks = spc.to_dense(
                **input_spc.to_dict(), input=torch.ones_like(coalescent_features),
                level=in_level).bool()
        else:
            sparsity_masks = spc.to_dense(point_hierarchies, pyramids,
                                          torch.ones_like(coalescent_features),
                                          in_level).bool()
        sparsity_masks = sparsity_masks[:, 0, :math.ceil(height / stride),
                                        :math.ceil(width / stride), :math.ceil(depth / stride)]

        # test forward
        if with_spc_to_dict:
            output_features, output_level = spc.conv_transpose3d(
                **input_spc.to_dict(), level=in_level, input=coalescent_features,
                weight=spc_weight, kernel_vectors=kernel_vectors, jump=jump, bias=bias)
            output = spc.to_dense(**input_spc.to_dict(), input=output_features, level=output_level)
        else:
            output_features, output_level = spc.conv_transpose3d(
                octrees, point_hierarchies, in_level, pyramids, exsum,
                coalescent_features,
                spc_weight, kernel_vectors, jump=jump, bias=bias)
            output = spc.to_dense(point_hierarchies, pyramids, output_features, output_level)

        output = output[:, :, :height, :width, :depth]

        expected_output = torch.nn.functional.conv_transpose3d(
            feature_grids, dense_weight.permute(1, 0, 2, 3, 4),
            stride=stride, bias=bias,
            output_padding=stride - 1)[:, :,
                                       kernel_offset:height + kernel_offset,
                                       kernel_offset:width + kernel_offset,
                                       kernel_offset:depth + kernel_offset]
        expected_output *= out_sparsity_masks.unsqueeze(1)
        assert output_level == max_level
        assert torch.allclose(output, expected_output, rtol=1e-3, atol=1e-3)
        # test backward
        grad_out = torch.rand_like(expected_output)
        expected_output.backward(grad_out)
        output.backward(grad_out)
        _, _, sparsified_grad = spc.feature_grids_to_spc(feature_grids.grad, sparsity_masks)
        assert torch.allclose(coalescent_features.grad, sparsified_grad,
                              rtol=5e-2, atol=5e-2)
        assert torch.allclose(spc_weight.grad,
                              dense_weight.grad.reshape(out_channels, in_channels, -1).permute(2, 1, 0),
                              rtol=5e-2, atol=5e-2)
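
    # The nn.Module wrappers are expected to match the functional API exactly:
    # spc.Conv3d / spc.ConvTranspose3d register 'weight' (and optionally
    # 'bias') as parameters and 'kernel_vectors' as a buffer, and their
    # forward should be equivalent to spc.conv3d / spc.conv_transpose3d.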
    @pytest.mark.parametrize('with_spc_to_dict', [False, True])
    @pytest.mark.parametrize('jump', [0, 1, 2])
    def test_module_conv3d(self, height, width, depth, in_channels, out_channels, with_bias,
                           octrees, lengths, coalescent_features, max_level, pyramids, exsum,
                           point_hierarchies, kernel_vectors, jump, with_spc_to_dict):
        conv = spc.Conv3d(in_channels, out_channels, kernel_vectors,
                          jump, bias=with_bias).cuda()
        params = dict(conv.named_parameters())
        weight = params['weight']
        check_tensor(weight, shape=(kernel_vectors.shape[0],
                                    in_channels, out_channels),
                     dtype=torch.float, device='cuda')
        if with_bias:
            assert len(params) == 2
            bias = params['bias']
            check_tensor(bias, shape=(out_channels,), dtype=torch.float,
                         device='cuda')
        else:
            assert len(params) == 1
            bias = None

        buffers = dict(conv.named_buffers())
        assert len(buffers) == 1
        assert torch.equal(buffers['kernel_vectors'], kernel_vectors)

        assert repr(conv) == f'Conv3d(in={in_channels}, out={out_channels}, ' \
                             f'kernel_vector_size={kernel_vectors.shape[0]})'

        if with_spc_to_dict:
            input_spc = Spc(octrees, lengths)
            output, output_level = conv(**input_spc.to_dict(), level=max_level,
                                        input=coalescent_features)
        else:
            output, output_level = conv(
                octrees, point_hierarchies, max_level, pyramids, exsum,
                coalescent_features)

        expected_output, expected_output_level = spc.conv3d(
            octrees, point_hierarchies, max_level, pyramids, exsum, coalescent_features,
            weight, kernel_vectors, jump=jump, bias=bias)
        assert torch.equal(output, expected_output)
        assert output_level == expected_output_level

    @pytest.mark.parametrize('with_spc_to_dict', [False, True])
    @pytest.mark.parametrize('jump', [0, 1, 2])
    def test_module_conv_transpose3d(self, height, width, depth, in_channels, out_channels, with_bias,
                                     octrees, lengths, max_level, pyramids, exsum, point_hierarchies,
                                     kernel_size, kernel_vectors, jump, with_spc_to_dict):
        stride = 2 ** jump

        if stride > kernel_size:
            pytest.skip('stride higher than kernel_size is not tested')

        in_level = max_level - jump
        in_num_nodes = torch.sum(pyramids[:, 0, -(2 + jump)])
        coalescent_features = torch.rand((in_num_nodes, in_channels), device='cuda',
                                         requires_grad=True)

        conv = spc.ConvTranspose3d(in_channels, out_channels, kernel_vectors,
                                   jump, bias=with_bias).cuda()
        params = dict(conv.named_parameters())
        weight = params['weight']
        check_tensor(weight, shape=(kernel_vectors.shape[0],
                                    in_channels, out_channels),
                     dtype=torch.float, device='cuda')
        if with_bias:
            assert len(params) == 2
            bias = params['bias']
            check_tensor(bias, shape=(out_channels,), dtype=torch.float,
                         device='cuda')
        else:
            assert len(params) == 1
            bias = None

        buffers = dict(conv.named_buffers())
        assert len(buffers) == 1
        assert torch.equal(buffers['kernel_vectors'], kernel_vectors)

        assert repr(conv) == f'ConvTranspose3d(in={in_channels}, ' \
                             f'out={out_channels}, ' \
                             f'kernel_vector_size={kernel_vectors.shape[0]})'

        if with_spc_to_dict:
            input_spc = Spc(octrees, lengths)
            output, output_level = conv(**input_spc.to_dict(), level=in_level,
                                        input=coalescent_features)
        else:
            output, output_level = conv(
                octrees, point_hierarchies, in_level, pyramids, exsum,
                coalescent_features)

        expected_output, expected_output_level = spc.conv_transpose3d(
            octrees, point_hierarchies, in_level, pyramids, exsum, coalescent_features,
            weight, kernel_vectors, jump=jump, bias=bias)
        assert torch.equal(output, expected_output)
        assert output_level == expected_output_level
308
tests/python/kaolin/ops/spc/test_points.py
Normal file
@@ -0,0 +1,308 @@
# Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import math
import pytest
import os
import itertools

import torch

from kaolin.utils.testing import check_allclose
from kaolin.ops.spc import points_to_morton, morton_to_points, points_to_corners, \
    coords_to_trilinear_coeffs, quantize_points, unbatched_query, \
    scan_octrees, unbatched_points_to_octree, generate_points, \
    unbatched_make_trinkets, unbatched_make_dual, unbatched_interpolate_trilinear


class TestPoints:
    @pytest.fixture(autouse=True)
    def points(self):
        return torch.tensor([
            [0, 0, 0],
            [0, 0, 1],
            [0, 0, 2],
            [0, 0, 3],
            [0, 1, 0]], device='cuda', dtype=torch.int16)

    @pytest.fixture(autouse=True)
    def morton(self):
        return torch.tensor([0, 1, 8, 9, 2], device='cuda', dtype=torch.long)
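
    # quantize_points appears to map [-1, 1] coordinates to 2^level cells via
    # floor((x + 1) / 2 * 2^level), clamped to [0, 2^level - 1]: with level=3,
    # 0.1 -> floor(4.4) = 4 and 1.1 clamps to 7, as the test below asserts.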
    def test_quantize_points(self):
        x = torch.tensor([
            [-1.1, -1.1, -1.1],
            [-1., -1., -1.],
            [0., 0., 0.],
            [0.1, 0.3, 0.6],
            [0.1, -1.1, 1.1],
            [0.1, -1., 1.],
            [1., 1., 1.],
            [1.1, 1.1, 1.1]], device='cuda', dtype=torch.float)

        points = quantize_points(x, 3)
        expected_points = torch.tensor([
            [0, 0, 0],
            [0, 0, 0],
            [4, 4, 4],
            [4, 5, 6],
            [4, 0, 7],
            [4, 0, 7],
            [7, 7, 7],
            [7, 7, 7]], device='cuda', dtype=torch.int16)

        assert torch.equal(points, expected_points)
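
    # Morton codes (Z-order) interleave the coordinate bits, here with x in
    # the highest bit of each triplet, so the fixture's (0, 1, 0) -> 0b010 = 2
    # and (0, 0, 2) -> 0b1000 = 8.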
    def test_points_to_morton(self, points, morton):
        assert torch.equal(points_to_morton(points), morton)

    def test_morton_to_points(self, morton, points):
        assert torch.equal(morton_to_points(morton), points)

    def test_points_to_corners(self, points):
        expected_corners = []
        for offset in itertools.product([0, 1], repeat=3):
            expected_corners.append(points + torch.tensor([offset], device='cuda', dtype=torch.int16))
        expected_corners = torch.stack(expected_corners, dim=-2)
        assert torch.equal(points_to_corners(points), expected_corners)
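
    # Reference formula: for a point at fractional offset (wx, wy, wz) inside
    # its cell, the trilinear weight of corner (i, j, k) in {0, 1}^3 is the
    # product over axes of (w if the corner bit is 1 else 1 - w), yielding the
    # 8 coefficients below with z varying fastest.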
    def test_coords_to_trilinear_coeffs(self, points):
        w = torch.rand(points.shape, device='cuda')
        x = points + w
        expected_coeffs = torch.stack([
            (1 - w[:, 0]) * (1 - w[:, 1]) * (1 - w[:, 2]),
            (1 - w[:, 0]) * (1 - w[:, 1]) * w[:, 2],
            (1 - w[:, 0]) * w[:, 1] * (1 - w[:, 2]),
            (1 - w[:, 0]) * w[:, 1] * w[:, 2],
            w[:, 0] * (1 - w[:, 1]) * (1 - w[:, 2]),
            w[:, 0] * (1 - w[:, 1]) * w[:, 2],
            w[:, 0] * w[:, 1] * (1 - w[:, 2]),
            w[:, 0] * w[:, 1] * w[:, 2]
        ], dim=-1)

        level = 3
        coords = (x / (2 ** level)) * 2.0 - 1.0
        check_allclose(coords_to_trilinear_coeffs(coords, points, level), expected_coeffs, rtol=1e-4, atol=1e-4)

    def test_interpolate_trilinear_forward(self, points):
        w = torch.rand(points.shape, device='cuda')
        x = torch.cat([
            points + w,
            -torch.rand((4, 3), device='cuda')
        ], dim=0)

        level = 3

        octree = unbatched_points_to_octree(points, level)
        length = torch.tensor([len(octree)], dtype=torch.int32)
        _, pyramid, prefix = scan_octrees(octree, length)
        point_hierarchy = generate_points(octree, pyramid, prefix)

        pyramid = pyramid[0]
        point_hierarchy_dual, pyramid_dual = unbatched_make_dual(point_hierarchy, pyramid)
        trinkets, parents = unbatched_make_trinkets(point_hierarchy, pyramid, point_hierarchy_dual, pyramid_dual)

        coords = (x / (2 ** level)) * 2.0 - 1.0
        pidx = unbatched_query(octree, prefix, coords, level, with_parents=False)

        feats = torch.rand([pyramid_dual[0, level], 16], device='cuda')

        corner_feats = feats.index_select(0, trinkets[pidx].view(-1)).view(-1, 8, 16)
        coeffs = coords_to_trilinear_coeffs(coords, points, level)
        expected_results = (corner_feats * coeffs[..., None]).sum(-2)
        expected_results[points.shape[0]:] = 0.

        results = unbatched_interpolate_trilinear(
            coords[:, None], pidx.int(), point_hierarchy, trinkets, feats, level
        )[:, 0]

        check_allclose(results, expected_results, rtol=1e-5, atol=1e-5)

    def test_interpolate_trilinear_forward_dtypes(self, points):
        w = torch.rand(points.shape, device='cuda')
        x = torch.cat([
            points + w,
            -torch.rand((4, 3), device='cuda')
        ], dim=0)

        level = 3

        octree = unbatched_points_to_octree(points, level)
        length = torch.tensor([len(octree)], dtype=torch.int32)
        _, pyramid, prefix = scan_octrees(octree, length)
        point_hierarchy = generate_points(octree, pyramid, prefix)

        pyramid = pyramid[0]
        point_hierarchy_dual, pyramid_dual = unbatched_make_dual(point_hierarchy, pyramid)
        trinkets, parents = unbatched_make_trinkets(point_hierarchy, pyramid, point_hierarchy_dual, pyramid_dual)

        coords = (x / (2 ** level)) * 2.0 - 1.0
        pidx = unbatched_query(octree, prefix, coords, level, with_parents=False)

        feats = torch.rand([pyramid_dual[0, level], 16], device='cuda')

        results_float = unbatched_interpolate_trilinear(coords[:, None], pidx.int(), point_hierarchy, trinkets, feats, level)[:, 0]
        results_double = unbatched_interpolate_trilinear(coords[:, None], pidx.int(), point_hierarchy, trinkets, feats.double(), level)[:, 0]
        results_half = unbatched_interpolate_trilinear(coords[:, None], pidx.int(), point_hierarchy, trinkets, feats.half(), level)[:, 0]

        check_allclose(results_float, results_double.float(), rtol=1e-4, atol=1e-4)
        check_allclose(results_float.half(), results_half, rtol=1e-3, atol=1e-3)

    def test_interpolate_trilinear_backward(self, points):
        w = torch.rand(points.shape, device='cuda')
        x = torch.cat([
            points + w,
            -torch.rand((4, 3), device='cuda')
        ], dim=0)

        level = 3

        octree = unbatched_points_to_octree(points, level)
        length = torch.tensor([len(octree)], dtype=torch.int32)
        _, pyramid, prefix = scan_octrees(octree, length)
        point_hierarchy = generate_points(octree, pyramid, prefix)

        pyramid = pyramid[0]
        point_hierarchy_dual, pyramid_dual = unbatched_make_dual(point_hierarchy, pyramid)
        trinkets, parents = unbatched_make_trinkets(point_hierarchy, pyramid, point_hierarchy_dual, pyramid_dual)

        coords = (x / (2 ** level)) * 2.0 - 1.0
        pidx = unbatched_query(octree, prefix, coords, level, with_parents=False)

        feats = torch.rand([pyramid_dual[0, level], 16], device='cuda')
        feats.requires_grad_(True)
        if feats.grad is not None:
            feats.grad.detach_()
            feats.grad.zero_()

        corner_feats = feats.index_select(0, trinkets[pidx].view(-1)).view(-1, 8, 16)
        coeffs = coords_to_trilinear_coeffs(coords, points, level)
        expected_results = (corner_feats * coeffs[..., None]).sum(-2)
        expected_results[points.shape[0]:] = 0.

        loss = expected_results.sum()
        loss.backward()
        expected_grad = feats.grad.clone()

        if feats.grad is not None:
            feats.grad.detach_()
            feats.grad.zero_()

        results = unbatched_interpolate_trilinear(
            coords[:, None], pidx.int(), point_hierarchy, trinkets, feats, level
        )[:, 0]
        loss = results.sum()
        loss.backward()
        grad = feats.grad.clone()

        check_allclose(grad, expected_grad, rtol=1e-5, atol=1e-5)

    def test_interpolate_trilinear_by_coords_backward(self, points):
        w = torch.rand(points.shape, device='cuda')
        x = torch.cat([
            points + w,
            -torch.rand((4, 3), device='cuda')
        ], dim=0)

        level = 3

        octree = unbatched_points_to_octree(points, level)
        length = torch.tensor([len(octree)], dtype=torch.int32)
        _, pyramid, prefix = scan_octrees(octree, length)
        point_hierarchy = generate_points(octree, pyramid, prefix)

        pyramid = pyramid[0]
        point_hierarchy_dual, pyramid_dual = unbatched_make_dual(point_hierarchy, pyramid)
        trinkets, parents = unbatched_make_trinkets(
            point_hierarchy, pyramid, point_hierarchy_dual, pyramid_dual)

        coords = (x / (2 ** level)) * 2.0 - 1.0
        pidx = unbatched_query(octree, prefix, coords, level, with_parents=False)
        feats = torch.rand([pyramid_dual[0, level], 16], device='cuda')

        # w is the relative position inside a cell
        w = w.detach()
        w.requires_grad_(True)
        if w.grad is not None:
            w.grad.detach_()
            w.grad.zero_()

        # (5, 8, 16)
        corner_feats = feats.index_select(0, trinkets[pidx].view(-1)).view(-1, 8, 16)
        corner_feats[points.shape[0]:] = 0.

        # (5, 8)
        expected_coeffs = torch.cat([torch.stack([
            (1 - w[:, 0]) * (1 - w[:, 1]) * (1 - w[:, 2]),
            (1 - w[:, 0]) * (1 - w[:, 1]) * w[:, 2],
            (1 - w[:, 0]) * w[:, 1] * (1 - w[:, 2]),
            (1 - w[:, 0]) * w[:, 1] * w[:, 2],
            w[:, 0] * (1 - w[:, 1]) * (1 - w[:, 2]),
            w[:, 0] * (1 - w[:, 1]) * w[:, 2],
            w[:, 0] * w[:, 1] * (1 - w[:, 2]),
            w[:, 0] * w[:, 1] * w[:, 2]
        ], dim=-1),
            torch.zeros((4, 8), device='cuda', dtype=torch.float)
        ], dim=0)
        # ensures grads flow (avoids the "element 0 of tensors does not require grad" error)
        expected_coeffs = expected_coeffs.requires_grad_(True)
        expected_results = (corner_feats * expected_coeffs[..., None]).sum(1)
        expected_results[points.shape[0]:] = 0.

        loss = expected_results.sum()
        loss.backward()
        expected_grad = torch.zeros_like(x)
        expected_grad[:points.shape[0]] = w.grad.clone()

        coords.requires_grad_(True)
        if coords.grad is not None:
            coords.grad.detach_()
            coords.grad.zero_()
        results = unbatched_interpolate_trilinear(
            coords[:, None], pidx.int(), point_hierarchy, trinkets, feats, level)
        loss = results[:, 0].sum()
        loss.backward()
        coords_grad = coords.grad.clone()

        assert torch.allclose(coords_grad, expected_grad, rtol=1e-4, atol=1e-3)

    def test_interpolate_trilinear_by_coords_toggleable(self, points):
        # Test that requiring grad only on the features does not generate a grad for coords
        w = torch.rand(points.shape, device='cuda')
        x = torch.cat([
            points + w,
            -torch.rand((4, 3), device='cuda')
        ], dim=0)

        level = 3

        octree = unbatched_points_to_octree(points, level)
        length = torch.tensor([len(octree)], dtype=torch.int32)
        _, pyramid, prefix = scan_octrees(octree, length)
        point_hierarchy = generate_points(octree, pyramid, prefix)

        pyramid = pyramid[0]
        point_hierarchy_dual, pyramid_dual = unbatched_make_dual(point_hierarchy, pyramid)
        trinkets, parents = unbatched_make_trinkets(point_hierarchy, pyramid, point_hierarchy_dual, pyramid_dual)

        coords = (x / (2 ** level)) * 2.0 - 1.0
        pidx = unbatched_query(octree, prefix, coords, level, with_parents=False)
        feats = torch.rand([pyramid_dual[0, level], 16], device='cuda')

        feats.requires_grad_(True)
        coords.requires_grad_(False)
        results = unbatched_interpolate_trilinear(coords[:, None], pidx.int(), point_hierarchy, trinkets, feats, level)
        loss = results[:, 0].sum()
        loss.backward()

        assert coords.grad is None
482
tests/python/kaolin/ops/spc/test_spc.py
Normal file
@@ -0,0 +1,482 @@
# Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import math
import pytest
import os

import torch
from kaolin.ops.spc.uint8 import bits_to_uint8, uint8_bits_sum, uint8_to_bits
from kaolin.ops.random import random_spc_octrees
from kaolin.rep import Spc

from kaolin.ops.spc import scan_octrees, generate_points, to_dense, feature_grids_to_spc
from kaolin.ops.spc import unbatched_query, unbatched_points_to_octree
from kaolin.ops.spc import unbatched_get_level_points, unbatched_make_dual, unbatched_make_trinkets
from kaolin.ops.spc import points_to_corners

from kaolin.utils.testing import FLOAT_TYPES, with_seed, check_tensor


@pytest.mark.parametrize('device', ['cuda'])
class TestSimpleBase:
    @pytest.fixture(autouse=True)
    def octrees(self, device):
        bits_t = torch.tensor([
            [0, 0, 0, 1, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 1, 1, 0], [0, 0, 1, 0, 0, 0, 0, 0],
            [1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0],

            [1, 0, 0, 0, 0, 0, 0, 0],
            [0, 1, 1, 1, 0, 0, 0, 0],
            [0, 0, 1, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1], [0, 1, 0, 1, 0, 1, 0, 1]],
            device='cuda', dtype=torch.float)
        return bits_to_uint8(torch.flip(bits_t, dims=(-1,)))

    @pytest.fixture(autouse=True)
    def lengths(self):
        return torch.tensor([6, 5], dtype=torch.int)

    def test_scan_octrees(self, octrees, lengths):
        expected_pyramids = torch.tensor(
            [[[1, 2, 3, 3, 0], [0, 1, 3, 6, 9]],
             [[1, 1, 3, 13, 0], [0, 1, 2, 5, 18]]], dtype=torch.int32)
        expected_exsum = torch.tensor(
            [0, 2, 4, 5, 6, 7, 8, 0, 1, 4, 5, 13, 17],
            dtype=torch.int32, device='cuda')
        max_level, pyramids, exsum = scan_octrees(octrees, lengths)
        assert max_level == 3
        assert torch.equal(pyramids, expected_pyramids)
        assert torch.equal(exsum, expected_exsum)

    def test_generate_points(self, octrees, lengths):
        max_level, pyramids, exsum = scan_octrees(octrees, lengths)
        expected_point_hierarchies = torch.tensor([
            [0, 0, 0],
            [0, 0, 0], [1, 0, 0],
            [0, 0, 1], [0, 1, 0], [3, 0, 1],
            [1, 1, 3], [1, 3, 1], [6, 1, 3],

            [0, 0, 0],
            [1, 1, 1],
            [3, 2, 2], [3, 2, 3], [3, 3, 2],
            [7, 4, 5], [6, 4, 6], [6, 4, 7], [6, 5, 6], [6, 5, 7], [7, 4, 6],
            [7, 4, 7], [7, 5, 6], [7, 5, 7], [6, 6, 4], [6, 7, 4],
            [7, 6, 4], [7, 7, 4]
        ], device='cuda', dtype=torch.int16)

        point_hierarchies = generate_points(octrees, pyramids, exsum)

        assert torch.equal(point_hierarchies, expected_point_hierarchies)


@pytest.mark.parametrize('device', ['cuda'])
@pytest.mark.parametrize('max_level', [1, 4])
@pytest.mark.parametrize('batch_size', [1, 3])
class TestBase:
    @pytest.fixture(autouse=True)
    def octrees_and_lengths(self, batch_size, max_level, device):
        return random_spc_octrees(batch_size, max_level, device)

    @pytest.fixture(autouse=True)
    def octrees(self, octrees_and_lengths):
        return octrees_and_lengths[0]

    @pytest.fixture(autouse=True)
    def lengths(self, octrees_and_lengths):
        return octrees_and_lengths[1]

    def test_scan_octrees(self, octrees, lengths, max_level):
        # Naive implementation
        num_childrens_per_node = uint8_bits_sum(octrees).cpu()
        octree_start_idx = 0
        num_childrens_per_level = []
        levels_first_idx = []
        expected_exsum = torch.zeros((num_childrens_per_node.shape[0] +
                                      lengths.shape[0], ),
                                     dtype=torch.int32)
        for bs, length in enumerate(lengths):
            cur_num_childrens_per_node = \
                num_childrens_per_node[octree_start_idx:octree_start_idx + length]
            num_childrens_per_level.append([1])
            levels_first_idx.append([0])
            for i in range(max_level):
                cur_idx = levels_first_idx[-1][-1]
                cur_num_childrens = num_childrens_per_level[-1][-1]
                num_childrens_per_level[-1].append(int(torch.sum(
                    cur_num_childrens_per_node[cur_idx:cur_idx + cur_num_childrens])))
                levels_first_idx[-1].append(cur_idx + cur_num_childrens)
            levels_first_idx[-1].append(levels_first_idx[-1][-1] +
                                        num_childrens_per_level[-1][-1])
            num_childrens_per_level[-1].append(0)
            # + bs + 1 because torch.cumsum is inclusive
            expected_exsum[octree_start_idx + bs + 1:octree_start_idx + bs + 1 + length] = \
                torch.cumsum(cur_num_childrens_per_node, dim=0)
            octree_start_idx += length
        num_childrens_per_level = torch.tensor(num_childrens_per_level, dtype=torch.int32)
        levels_first_idx = torch.tensor(levels_first_idx, dtype=torch.int32)
        expected_pyramids = torch.stack([num_childrens_per_level, levels_first_idx], dim=1)
        expected_exsum = expected_exsum.cuda()

        out_level, pyramids, exsum = scan_octrees(octrees, lengths)

        assert out_level == max_level
        assert torch.equal(pyramids, expected_pyramids)
        assert torch.equal(exsum, expected_exsum)

    def test_generate_points(self, octrees, lengths, max_level):
        out_level, pyramids, exsum = scan_octrees(octrees, lengths)
        point_hierarchies = generate_points(octrees, pyramids, exsum)
        expected_point_hierarchies = []
        bits_t = uint8_to_bits(octrees).reshape(-1, 2, 2, 2).cpu()
        octree_first_idx = 0
        for bs, length in enumerate(lengths):
            expected_point_hierarchies.append(torch.tensor([[0, 0, 0]], dtype=torch.long))
            cur_bits_t = bits_t[octree_first_idx:octree_first_idx + length]
            offsets = torch.tensor([[0, 0, 0]], dtype=torch.int32)
            for i in range(max_level):
                next_offset = []
                cur_level_num_nodes = pyramids[bs, 0, i]
                level_first_idx = pyramids[bs, 1, i]
                for cur_level_node_idx in range(cur_level_num_nodes):
                    node_bits = cur_bits_t[level_first_idx + cur_level_node_idx]
                    offset = offsets[cur_level_node_idx]
                    point_coords = torch.nonzero(node_bits, as_tuple=False) + offset.unsqueeze(0)
                    expected_point_hierarchies.append(point_coords)
                    next_offset.append(point_coords * 2)
                offsets = torch.cat(next_offset, dim=0)
            octree_first_idx += length
        expected_point_hierarchies = torch.cat(expected_point_hierarchies,
                                               dim=0).cuda().short()
        assert torch.equal(point_hierarchies, expected_point_hierarchies)


@pytest.mark.parametrize('device', ['cuda'])
@pytest.mark.parametrize('max_level', [4, 6, 1])
@pytest.mark.parametrize('batch_size', [1])
class TestTrinkets:
    @pytest.fixture(autouse=True)
    def octrees_and_lengths(self, batch_size, max_level, device):
        return random_spc_octrees(batch_size, max_level, device)

    @pytest.fixture(autouse=True)
    def octrees(self, octrees_and_lengths):
        return octrees_and_lengths[0]

    @pytest.fixture(autouse=True)
    def lengths(self, octrees_and_lengths):
        return octrees_and_lengths[1]

    def test_unbatched_make_trinkets(self, octrees, lengths, max_level):
        out_level, pyramid, exsum = scan_octrees(octrees, lengths)
        point_hierarchy = generate_points(octrees, pyramid, exsum)
        pyramid = pyramid[0]
        point_hierarchy_dual, pyramid_dual = unbatched_make_dual(point_hierarchy, pyramid)
        trinkets, parents = unbatched_make_trinkets(point_hierarchy, pyramid, point_hierarchy_dual, pyramid_dual)

        for i in range(0, max_level + 1):
            _idx = pyramid_dual[1, i] + unbatched_get_level_points(trinkets, pyramid, i)
            pts = point_hierarchy_dual.index_select(0, _idx.view(-1)).view(-1, 8, 3)
            expected_pts = points_to_corners(unbatched_get_level_points(point_hierarchy, pyramid, i))
            assert torch.equal(pts, expected_pts)

        assert parents[0] == -1

        for i in range(1, max_level + 1):
            parent = point_hierarchy.index_select(0, unbatched_get_level_points(parents, pyramid, i))
            assert torch.equal(parent, torch.div(unbatched_get_level_points(point_hierarchy, pyramid, i), 2, rounding_mode='trunc'))
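

# unbatched_query maps query coordinates to indices into the point hierarchy
# at the requested level, returning -1 for queries that fall into empty space;
# with with_parents=True it returns one column per level from the root down,
# as the multiscale test below checks against per-level queries.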
class TestQuery:
    def test_query(self):
        points = torch.tensor(
            [[3, 2, 0],
             [3, 1, 1],
             [0, 0, 0],
             [3, 3, 3]], device='cuda', dtype=torch.short)
        level = 2
        resolution = 2 ** level
        octree = unbatched_points_to_octree(points, level)
        length = torch.tensor([len(octree)], dtype=torch.int32)
        _, pyramid, prefix = scan_octrees(octree, length)

        query_points = torch.tensor(
            [[3, 2, 0],
             [3, 1, 1],
             [0, 0, 0],
             [3, 3, 3],
             [2, 2, 2],
             [1, 1, 1]], device='cuda', dtype=torch.short)
        query_coords_float = (2.0 * (query_points.float() / resolution) - 1.0)
        query_coords_int = query_points

        point_hierarchy = generate_points(octree, pyramid, prefix)

        results_float = unbatched_query(octree, prefix, query_coords_float, 2)
        results_int = unbatched_query(octree, prefix, query_coords_int, 2)

        expected_results = torch.tensor(
            [7, 6, 5, 8, -1, -1], dtype=torch.long, device='cuda')

        assert torch.equal(expected_results, results_float)
        assert torch.equal(expected_results, results_int)
        assert torch.equal(point_hierarchy[results_float[:-2]], query_points[:-2])
        assert torch.equal(point_hierarchy[results_int[:-2]], query_points[:-2])

    def test_query_flooredge(self):
        points = torch.tensor(
            [[0, 0, 0]], device='cuda', dtype=torch.short)
        level = 1
        octree = unbatched_points_to_octree(points, level)
        length = torch.tensor([len(octree)], dtype=torch.int32)
        _, pyramid, prefix = scan_octrees(octree, length)
        query_coords = torch.tensor(
            [[-3.0, -3.0, -3.0],
             [-2.5, -2.5, -2.5],
             [2.5, 2.5, 2.5],
             [3.0, 3.0, 3.0],
             [0.0, 0.0, 0.0],
             [0.5, 0.5, 0.5]], device='cuda', dtype=torch.float)
        results = unbatched_query(octree, prefix, query_coords, 0)
        expected_results = torch.tensor(
            [-1, -1, -1, -1, 0, 0], dtype=torch.long, device='cuda')
        assert torch.equal(expected_results, results)

    def test_query_multiscale(self):
        points = torch.tensor(
            [[3, 2, 0],
             [3, 1, 1],
             [0, 0, 0],
             [3, 3, 3]], device='cuda', dtype=torch.short)
        level = 3
        resolution = 2 ** level
        octree = unbatched_points_to_octree(points, level)
        length = torch.tensor([len(octree)], dtype=torch.int32)
        _, pyramid, prefix = scan_octrees(octree, length)

        query_points = torch.tensor(
            [[3, 2, 0],
             [3, 1, 1],
             [0, 0, 0],
             [0, 4, 4],
             [3, 3, 3],
             [2, 2, 2],
             [1, 1, 1],
             [16, 16, 16]], device='cuda', dtype=torch.short)
        query_coords_float = (2.0 * (query_points.float() / resolution) - 1.0)
        query_coords_int = query_points

        point_hierarchy = generate_points(octree, pyramid, prefix)

        expected_results0 = unbatched_query(octree, prefix, query_coords_float, 0)
        expected_results1 = unbatched_query(octree, prefix, query_coords_float, 1)
        expected_results2 = unbatched_query(octree, prefix, query_coords_float, 2)
        expected_results3 = unbatched_query(octree, prefix, query_coords_float, 3)

        results03_float = unbatched_query(octree, prefix, query_coords_float, level, with_parents=True)
        results02_float = unbatched_query(octree, prefix, query_coords_float, level - 1, with_parents=True)

        assert torch.equal(expected_results0, results03_float[:, 0])
        assert torch.equal(expected_results1, results03_float[:, 1])
        assert torch.equal(expected_results2, results03_float[:, 2])
        assert torch.equal(expected_results3, results03_float[:, 3])

        assert torch.equal(expected_results0, results02_float[:, 0])
        assert torch.equal(expected_results1, results02_float[:, 1])
        assert torch.equal(expected_results2, results02_float[:, 2])

        expected_results3 = unbatched_query(octree, prefix, query_coords_int, 3)

        results03_int = unbatched_query(octree, prefix, query_coords_int, level, with_parents=True)

        assert torch.equal(expected_results3, results03_int[:, 3])
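

# to_dense scatters per-point features into a dense
# (batch, feature_dim, 2^level, 2^level, 2^level) grid, leaving empty voxels
# at zero; feature_grids_to_spc is its sparsity-aware inverse, which the
# cycle-conversion tests below exercise in both directions.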
class TestToDense:
    @pytest.mark.parametrize('with_spc_to_dict', [False, True])
    def test_simple(self, with_spc_to_dict):
        bits_t = torch.tensor([
            [0, 0, 0, 1, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 1, 1, 0], [0, 0, 1, 0, 0, 0, 0, 0],
            [1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0],

            [1, 0, 0, 0, 0, 0, 0, 0],
            [0, 1, 1, 1, 0, 0, 0, 0],
            [0, 0, 1, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1], [0, 1, 0, 1, 0, 1, 0, 1]],
            device='cuda', dtype=torch.float)
        octrees = bits_to_uint8(torch.flip(bits_t, dims=(-1,)))
        lengths = torch.tensor([6, 5], dtype=torch.int)
        max_level, pyramids, exsum = scan_octrees(octrees, lengths)
        point_hierarchies = generate_points(octrees, pyramids, exsum)
        coalescent_features = torch.tensor([
            1., 2., 3.,
            4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16.
        ], device='cuda', dtype=torch.float).reshape(-1, 1)

        feat_idx = torch.tensor([
            [0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
            [1, 1, 6, 7, 6, 6, 6, 6, 7, 7, 7, 7, 6, 6, 7, 7],
            [1, 3, 1, 4, 4, 4, 5, 5, 4, 4, 5, 5, 6, 7, 6, 7],
            [3, 1, 3, 5, 6, 7, 6, 7, 6, 7, 6, 7, 4, 4, 4, 4]
        ], dtype=torch.long)

        expected_feature_grids = torch.zeros((2, 1, 8, 8, 8), dtype=torch.float, device='cuda')
        expected_feature_grids[feat_idx[0], :, feat_idx[1], feat_idx[2], feat_idx[3]] = coalescent_features
        if with_spc_to_dict:
            feature_grids = to_dense(**Spc(octrees, lengths).to_dict(),
                                     input=coalescent_features)
        else:
            feature_grids = to_dense(point_hierarchies, pyramids, coalescent_features, max_level)

        assert torch.equal(feature_grids, expected_feature_grids)

    @pytest.mark.parametrize('max_level', [1, 4])
    @pytest.mark.parametrize('batch_size', [1, 3])
    @pytest.mark.parametrize('feature_dim', [1, 4])
    def test_to_dense(self, batch_size, max_level, feature_dim):
        octrees, lengths = random_spc_octrees(batch_size, max_level, 'cuda')

        max_level, pyramids, exsum = scan_octrees(octrees, lengths)
        point_hierarchies = generate_points(octrees, pyramids, exsum)
        in_num_nodes = torch.sum(pyramids[:, 0, -2])
        coalescent_features = torch.rand((in_num_nodes, feature_dim), device='cuda',
                                         requires_grad=True)
        expected_size = 2 ** max_level
        feat_idx = []
        bs_start_idx = 0
        for bs in range(batch_size):
            start_idx = pyramids[bs, 1, -2] + bs_start_idx
            num_points = pyramids[bs, 0, -2]
            feat_idx.append(torch.nn.functional.pad(
                point_hierarchies[start_idx:start_idx + num_points],
                (1, 0), value=bs))
            bs_start_idx += pyramids[bs, 1, -1]
        feat_idx = torch.cat(feat_idx, dim=0).permute(1, 0).long()
        expected_feature_grids = torch.zeros((batch_size, feature_dim, expected_size,
                                              expected_size, expected_size), device='cuda')
        expected_feature_grids[feat_idx[0], :, feat_idx[1], feat_idx[2], feat_idx[3]] = coalescent_features

        # test forward
        feature_grids = to_dense(point_hierarchies, pyramids, coalescent_features, max_level)
        assert torch.equal(expected_feature_grids, feature_grids)

        grad_out = torch.rand_like(feature_grids)
        feature_grids.backward(grad_out)
        octrees, lengths, coalescent_expected_grad = feature_grids_to_spc(
            grad_out, torch.any(feature_grids != 0, dim=1))
        assert torch.equal(coalescent_features.grad, coalescent_expected_grad)


@pytest.mark.parametrize('device,height,width,depth,threshold', [
    ('cpu', 2, 2, 2, 0.1),
    ('cuda', 2, 2, 2, 0.1),
    ('cuda', 113, 251, 251, 0.9)])
@pytest.mark.parametrize('batch_size', [1, 5])
@pytest.mark.parametrize('feature_dim', [1, 3])
@pytest.mark.parametrize('dtype', [torch.float])
class TestCycleConversionsFeatureGrids:
    @pytest.fixture(autouse=True)
    def expected_out_size(self, height, width, depth):
        max_level = math.ceil(math.log2(max(height, width, depth)))
        return 2 ** max_level

    @pytest.fixture(autouse=True)
    def sparsity_masks(self, batch_size, height, width, depth,
                       threshold, device):
        # We want the array to be quite sparse so even at high level
        # (near the root) there is sparsity
        return torch.rand(batch_size, height, width, depth,
                          device=device) > threshold

    @pytest.fixture(autouse=True)
    def feature_grids(self, batch_size, feature_dim, height,
                      width, depth, dtype, device):
        return torch.rand((
            batch_size,
            feature_dim,
            height,
            width,
            depth,
        ), dtype=dtype, device=device)

    @pytest.fixture(autouse=True)
    def sparse_feature_grids(self, feature_grids, sparsity_masks):
        return feature_grids * sparsity_masks.unsqueeze(1)

    @pytest.fixture(autouse=True)
    def expected_out_feature_grids(self, sparse_feature_grids, batch_size,
                                   feature_dim, height, width, depth,
                                   expected_out_size):
        out = torch.zeros((batch_size, feature_dim, expected_out_size,
                           expected_out_size, expected_out_size),
                          device='cuda',
                          dtype=sparse_feature_grids.dtype)
        out[:, :, :height, :width, :depth] = sparse_feature_grids
        return out

    def test_feature_grids_to_spc(self, sparse_feature_grids,
                                  expected_out_feature_grids,
                                  device):
        octrees, lengths, features = feature_grids_to_spc(
            sparse_feature_grids)
        assert octrees.device.type == device
        assert features.device.type == device
        octrees = octrees.cuda()
        features = features.cuda()
        max_level, pyramids, exsum = scan_octrees(octrees, lengths)
        point_hierarchies = generate_points(octrees, pyramids, exsum)
        out_feature_grids = to_dense(point_hierarchies, pyramids, features, max_level)

        assert torch.equal(out_feature_grids, expected_out_feature_grids)

    def test_feature_grids_to_spc_with_masks(self, feature_grids, sparsity_masks,
                                             expected_out_feature_grids, device):
        octrees, lengths, features = feature_grids_to_spc(feature_grids,
                                                          sparsity_masks)
        assert octrees.device.type == device
        assert features.device.type == device
        octrees = octrees.cuda()
        features = features.cuda()
        max_level, pyramids, exsum = scan_octrees(octrees, lengths)
        point_hierarchies = generate_points(octrees, pyramids, exsum)
        out_feature_grids = to_dense(point_hierarchies, pyramids, features, max_level)

        assert torch.equal(out_feature_grids, expected_out_feature_grids)

    def test_zeros(self, batch_size, feature_dim,
                   height, width, depth, dtype, device):
        feature_grids = torch.zeros((batch_size, feature_dim, height, width, depth),
                                    dtype=dtype, device=device)
        octrees, lengths, features = feature_grids_to_spc(feature_grids)
        assert torch.equal(octrees, torch.zeros((batch_size), dtype=torch.uint8,
                                                device=device))
        assert torch.equal(lengths, torch.ones((batch_size), dtype=torch.int,
                                               device='cpu'))
        assert torch.equal(features, torch.empty((0, feature_dim), dtype=dtype,
                                                 device=device))

    def test_ones(self, batch_size, feature_dim,
                  height, width, depth, dtype, device):
        feature_grids = torch.ones((batch_size, feature_dim, height, width, depth),
                                   dtype=dtype, device=device)
        octrees, lengths, features = feature_grids_to_spc(feature_grids)
        assert octrees.device.type == device
        assert features.device.type == device
        octrees = octrees.cuda()
        features = features.cuda()
        max_level, pyramids, exsum = scan_octrees(octrees, lengths)
        point_hierarchies = generate_points(octrees, pyramids, exsum)
        out_feature_grids = to_dense(point_hierarchies, pyramids, features, max_level)
        assert torch.all(out_feature_grids[:, :, :height, :width, :depth] == 1)
        assert torch.all(out_feature_grids[:, :, height:] == 0)
        assert torch.all(out_feature_grids[:, :, :, width:] == 0)
        assert torch.all(out_feature_grids[..., depth:] == 0)
61
tests/python/kaolin/ops/spc/test_uint8.py
Normal file
@@ -0,0 +1,61 @@
# Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
from itertools import product

import torch

from kaolin.ops.spc import uint8_to_bits, uint8_bits_sum, \
    bits_to_uint8
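

# Each octree node is encoded as a uint8 bitfield with one bit per child
# octant; the fixtures below list bits most-significant-first and flip them
# to the left-to-right order expected by bits_to_uint8.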
@pytest.mark.parametrize('test_all', [False, True])
@pytest.mark.parametrize('device', ['cpu', 'cuda'])
class TestUint8:
    @pytest.fixture(autouse=True)
    def bits_t(self, test_all, device):
        if test_all:
            bits_t = torch.tensor(list(product([False, True], repeat=8)),
                                  dtype=torch.bool, device=device)
        else:
            bits_t = torch.tensor([
                [0, 0, 0, 0, 0, 0, 0, 0],
                [0, 0, 0, 0, 0, 0, 0, 1],
                [0, 0, 0, 0, 1, 1, 1, 1],
                [1, 1, 1, 1, 1, 1, 1, 1],
                [1, 0, 0, 0, 0, 0, 0, 1]
            ], dtype=torch.bool, device=device)
        # convert to left-to-right binary
        return torch.flip(bits_t, dims=(-1,)).contiguous()

    @pytest.fixture(autouse=True)
    def uint8_t(self, test_all, device):
        if test_all:
            return torch.arange(256, dtype=torch.uint8, device=device)
        else:
            return torch.tensor([0, 1, 15, 255, 129],
                                dtype=torch.uint8, device=device)

    def test_uint8_to_bits(self, uint8_t, bits_t):
        out = uint8_to_bits(uint8_t)
        assert torch.equal(out, bits_t)

    def test_uint8_bits_sum(self, uint8_t, bits_t):
        out = uint8_bits_sum(uint8_t)
        assert torch.equal(out, torch.sum(bits_t, dim=-1))

    def test_bits_to_uint8(self, uint8_t, bits_t):
        out = bits_to_uint8(bits_t)
        assert torch.equal(out, uint8_t)
333
tests/python/kaolin/ops/test_batch.py
Normal file
@@ -0,0 +1,333 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch

from kaolin.ops import batch
from kaolin.ops.random import random_shape_per_tensor, random_tensor
from kaolin.utils.testing import FLOAT_DTYPES, INT_DTYPES, ALL_TYPES, NUM_TYPES, \
    check_packed_tensor, check_padded_tensor


# Same naive implementation as packed_simple_sum for cpu input,
# as it's pretty straightforward
def _torch_tile_to_packed(values, numel_per_tensor):
    return torch.cat(
        [torch.full((int(numel),), fill_value=value.item(), dtype=values.dtype,
                    device=values.device)
         for value, numel in zip(values, numel_per_tensor)], dim=0).unsqueeze(
        -1)


@pytest.mark.parametrize("numel_per_tensor",
                         [torch.LongTensor([1]),
                          torch.LongTensor([1, 100000]),
                          torch.arange(257, dtype=torch.long)])
class Test_TileToPackedCuda:
    @pytest.fixture(autouse=True)
    def total_numel(self, numel_per_tensor):
        return torch.sum(numel_per_tensor)

    @pytest.fixture(autouse=True)
    def inputs_double(self, numel_per_tensor):
        return torch.rand((numel_per_tensor.shape[0]), dtype=torch.double,
                          device='cuda',
                          requires_grad=True)

    @pytest.fixture(autouse=True)
    def target_output_double(self, inputs_double, numel_per_tensor):
        return _torch_tile_to_packed(inputs_double, numel_per_tensor)

    @pytest.fixture(autouse=True)
    def target_grad_double(self, inputs_double, numel_per_tensor, total_numel):
        # if test_gradcheck passes, the gradient using torch.double inputs is trustworthy
        outputs = torch.sum(
            batch._TileToPackedCuda.apply(inputs_double, numel_per_tensor,
                                          total_numel))
        outputs.backward()
        return inputs_double.grad.clone()

    @pytest.fixture(autouse=True)
    def inputs_long(self, numel_per_tensor):
        return torch.randint(0, 32, size=(numel_per_tensor.shape[0],),
                             dtype=torch.long, device='cuda')

    @pytest.fixture(autouse=True)
    def target_output_long(self, inputs_long, numel_per_tensor):
        return _torch_tile_to_packed(inputs_long, numel_per_tensor)

    def test_gradcheck(self, numel_per_tensor, total_numel):
        # gradcheck only for double
        inputs = torch.rand((numel_per_tensor.shape[0],), dtype=torch.double,
                            device='cuda', requires_grad=True)
        torch.autograd.gradcheck(batch._TileToPackedCuda.apply,
                                 (inputs, numel_per_tensor, total_numel))

    @pytest.mark.parametrize("dtype", FLOAT_DTYPES)
    def test_float_types(self, inputs_double, numel_per_tensor, total_numel,
                         dtype,
                         target_output_double, target_grad_double):
        inputs = inputs_double.type(dtype).detach()
        inputs.requires_grad = True
        output = batch._TileToPackedCuda.apply(inputs, numel_per_tensor,
                                               total_numel)
        target_output = target_output_double.to(dtype)
        assert torch.equal(output, target_output)
        torch.sum(output).backward()
        target_grad = target_grad_double.to(dtype)
        assert torch.allclose(inputs.grad, target_grad, rtol=1e-2, atol=1e-2)

    @pytest.mark.parametrize("dtype", INT_DTYPES)
    def test_int_types(self, inputs_long, numel_per_tensor, total_numel, dtype,
                       target_output_long):
        inputs = inputs_long.type(dtype)
        output = batch._TileToPackedCuda.apply(inputs, numel_per_tensor,
                                               total_numel)
        target_output = target_output_long.to(dtype)
        assert torch.equal(output, target_output)

    def test_cpu_fail(self, inputs_double, numel_per_tensor, total_numel):
        inputs = inputs_double.cpu()
        with pytest.raises(RuntimeError,
                           match="values_tensor must be a CUDA tensor"):
            batch._TileToPackedCuda.apply(inputs, numel_per_tensor, total_numel)


@pytest.mark.parametrize("device,dtype", NUM_TYPES)
@pytest.mark.parametrize("numel_per_tensor",
                         [torch.LongTensor([1]),
                          torch.LongTensor([1, 100000]),
                          torch.arange(257, dtype=torch.long)])
class TestTileToPacked:
    @pytest.fixture(autouse=True)
    def total_numel(self, numel_per_tensor):
        return torch.sum(numel_per_tensor)

    @pytest.fixture(autouse=True)
    def high_val(self, dtype):
        if dtype.is_floating_point:
            return 1
        else:
            return 32

    @pytest.fixture(autouse=True)
    def inputs(self, high_val, numel_per_tensor, dtype, device):
        return random_tensor(0, high_val, shape=(numel_per_tensor.shape[0],),
                             dtype=dtype, device=device)

    def test_packed_simple_sum(self, inputs, numel_per_tensor, device, dtype):
        tiled_tensor = batch.tile_to_packed(inputs, numel_per_tensor)
        target = _torch_tile_to_packed(inputs, numel_per_tensor)
        assert torch.allclose(tiled_tensor, target)
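

# A packed tensor concatenates the flattened (numel_i, last_dim) samples along
# dim 0, with shape_per_tensor / first_idx recording how to slice them back
# out; e.g. shapes [(2, 3), (4, 5)] with last_dim=1 pack into a (26, 1) tensor
# with first_idx = [0, 6, 26].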
class TestBatching:
    @pytest.fixture(autouse=True)
    def batch_size(self):
        return 1

    @pytest.fixture(autouse=True)
    def min_shape(self):
        return None

    @pytest.fixture(autouse=True)
    def max_shape(self):
        return (3, 3, 3)

    @pytest.fixture(autouse=True)
    def dtype(self):
        return torch.float

    @pytest.fixture(autouse=True)
    def device(self):
        return 'cpu'

    @pytest.fixture(autouse=True)
    def last_dim(self):
        return 1

    @pytest.fixture(autouse=True)
    def shape_per_tensor(self, batch_size, min_shape, max_shape):
        return random_shape_per_tensor(batch_size, min_shape=min_shape,
                                       max_shape=max_shape)

    @pytest.fixture(autouse=True)
    def high_val(self, dtype):
        return 1 if dtype == torch.bool else 32

    @pytest.fixture(autouse=True)
    def tensor_list(self, dtype, device, high_val, shape_per_tensor, last_dim):
        return [random_tensor(0, high_val, shape=tuple(shape) + (last_dim,),
                              dtype=dtype, device=device)
                for shape in shape_per_tensor]

    @pytest.fixture(autouse=True)
    def numel_per_tensor(self, shape_per_tensor):
        if shape_per_tensor.shape[1] == 1:
            output = shape_per_tensor.squeeze(1)
        else:
            output = torch.prod(shape_per_tensor, dim=1)
        return output

    @pytest.fixture(autouse=True)
    def padding_value(self, dtype):
        if dtype == torch.bool:
            val = False
        elif dtype == torch.uint8:
            val = 0
        else:
            val = -1
        return val

    @pytest.fixture(autouse=True)
    def first_idx(self, numel_per_tensor):
        return batch.get_first_idx(numel_per_tensor)

    @pytest.mark.parametrize("device,dtype", ALL_TYPES)
    @pytest.mark.parametrize("batch_size", [1, 8])
    @pytest.mark.parametrize("min_shape,max_shape",
                             [(None, (2,)), ((2, 2, 2), (3, 3, 3))])
    @pytest.mark.parametrize("last_dim", [1, 8])
    def test_get_shape_per_tensor(self, tensor_list, shape_per_tensor):
        output_shape_per_tensor = batch.get_shape_per_tensor(tensor_list)
        assert torch.equal(output_shape_per_tensor, shape_per_tensor)

    def test_get_shape_per_tensor_fail(self):
        tensor_list = [
            random_tensor(0, 32, shape=(2, 2, 2)),
            random_tensor(0, 32, shape=(3, 3, 3, 3))
        ]
        with pytest.raises(ValueError,
                           match='Expected all tensors to have 3 dimensions but got 4 at index 1'):
            output_shape_per_tensor = batch.get_shape_per_tensor(tensor_list)

    @pytest.mark.parametrize("shape_per_tensor", [
        torch.LongTensor([[1, 1, 1, 4], [1, 2, 1, 1], [1, 1, 3, 1]])])
    @pytest.mark.parametrize("partial_max_shape,expected_max_shape",
                             [(None, (1, 2, 3, 4)),
                              ((-1, -1, -1, 6), (1, 2, 3, 6))])
    def test_fill_max_shape(self, shape_per_tensor, partial_max_shape,
                            expected_max_shape):
        expected_max_shape = torch.LongTensor(expected_max_shape)
        max_shape = batch.fill_max_shape(shape_per_tensor, partial_max_shape)
        assert torch.equal(max_shape, expected_max_shape)

    @pytest.mark.parametrize("numel_per_tensor",
                             [torch.LongTensor([1, 5, 2, 8, 9, 2])])
    def test_get_first_idx(self, numel_per_tensor, first_idx, device):
        first_idx = batch.get_first_idx(numel_per_tensor)
        assert first_idx.device.type == device
        assert first_idx[0] == 0
        for i, numel in enumerate(numel_per_tensor):
            assert (first_idx[i + 1] - first_idx[i]) == numel

    @pytest.mark.parametrize("device,dtype", ALL_TYPES)
    @pytest.mark.parametrize("batch_size", [1, 8])
    @pytest.mark.parametrize("last_dim", [1, 8])
    @pytest.mark.parametrize("min_shape,max_shape",
                             [((1,), (1,)), ((1, 1, 1), (1, 1, 1)),
                              ((2, 6), (5, 10))])
    def test_list_to_packed_to_list(self, tensor_list, shape_per_tensor,
                                    first_idx,
                                    last_dim, dtype, device):
        packed_tensor, output_shape_per_tensor = batch.list_to_packed(
            tensor_list)
        assert torch.equal(output_shape_per_tensor, shape_per_tensor)
        check_packed_tensor(packed_tensor, total_numel=first_idx[-1],
                            last_dim=last_dim,
                            dtype=dtype, device=device)
        for i, tensor in enumerate(tensor_list):
            assert torch.equal(packed_tensor[first_idx[i]:first_idx[i + 1]],
                               tensor.reshape(-1, last_dim))
        output_tensor_list = batch.packed_to_list(packed_tensor,
                                                  shape_per_tensor, first_idx)
        for output_tensor, expected_tensor in zip(output_tensor_list,
                                                  tensor_list):
            assert torch.equal(output_tensor, expected_tensor)

    def test_list_to_packed_fail1(self):
        tensor_list = [
            random_tensor(0, 32, shape=(2, 2, 2)),
            random_tensor(0, 32, shape=(3, 3, 3))
        ]
        with pytest.raises(ValueError,
                           match='Expected all tensor to have last dimension 2 but '
                                 'got 3 at index 1'):
            _ = batch.list_to_packed(tensor_list)

    def test_list_to_packed_fail2(self):
        tensor_list = [
            random_tensor(0, 32, shape=(2, 2, 2), dtype=torch.long,
                          device='cpu'),
            random_tensor(0, 32, shape=(2, 2, 2), dtype=torch.float,
                          device='cuda')
        ]
        with pytest.raises(ValueError,
                           match='Expected all tensor to have type torch.LongTensor but '
                                 'got torch.cuda.FloatTensor at index 1'):
            _ = batch.list_to_packed(tensor_list)

    @pytest.mark.parametrize("device,dtype", ALL_TYPES)
    @pytest.mark.parametrize("batch_size", [1, 8])
    @pytest.mark.parametrize("last_dim", [1, 8])
    @pytest.mark.parametrize("min_shape,max_shape",
                             [((1,), (1,)), ((1, 1, 1), (1, 1, 1)),
                              ((2, 6), (5, 10))])
    def test_list_to_padded_to_list(self, tensor_list, batch_size,
                                    padding_value,
                                    shape_per_tensor, max_shape, last_dim,
                                    dtype, device):
        padded_tensor, output_shape_per_tensor = batch.list_to_padded(
            tensor_list, padding_value, max_shape)

        assert torch.equal(output_shape_per_tensor, shape_per_tensor)
        check_padded_tensor(padded_tensor, batch_size=batch_size,
                            shape_per_tensor=shape_per_tensor,
                            padding_value=padding_value, max_shape=max_shape,
                            last_dim=last_dim,
                            dtype=dtype, device=device)
        for i, shape in enumerate(shape_per_tensor):
            assert torch.equal(
                padded_tensor[[i] + [slice(dim) for dim in shape]],
                tensor_list[i])
        output_tensor_list = batch.padded_to_list(padded_tensor,
                                                  shape_per_tensor)
        for output_tensor, expected_tensor in zip(output_tensor_list,
                                                  tensor_list):
            assert torch.equal(output_tensor, expected_tensor)

    @pytest.mark.parametrize("device,dtype", ALL_TYPES)
    @pytest.mark.parametrize("batch_size", [1, 8])
    @pytest.mark.parametrize("last_dim", [1, 8])
    @pytest.mark.parametrize("min_shape,max_shape",
                             [((1,), (1,)), ((1, 1, 1), (1, 1, 1)),
                              ((2, 6), (5, 10))])
    def test_packed_to_padded_packed(self, tensor_list, batch_size,
                                     padding_value,
                                     shape_per_tensor, first_idx, max_shape,
                                     last_dim,
                                     dtype, device):
        padded_tensor, _ = batch.list_to_padded(tensor_list, padding_value,
                                                max_shape)
        packed_tensor, output_shape_per_tensor = batch.list_to_packed(
            tensor_list)
        _padded_tensor = batch.packed_to_padded(packed_tensor,
                                                output_shape_per_tensor,
                                                first_idx, padding_value,
                                                max_shape)
        assert torch.equal(padded_tensor, _padded_tensor)
        _packed_tensor = batch.padded_to_packed(padded_tensor,
                                                output_shape_per_tensor)
        assert torch.equal(packed_tensor, _packed_tensor)
98
tests/python/kaolin/ops/test_coords.py
Normal file
@@ -0,0 +1,98 @@
# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import math
import torch

from kaolin.utils.testing import FLOAT_TYPES, check_tensor
from kaolin.ops.coords import cartesian2spherical, spherical2cartesian
@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
|
||||
class TestCartesian2Spherical:
|
||||
@pytest.fixture(autouse=True)
|
||||
def coords(self, device, dtype):
|
||||
coords = torch.rand((11, 7, 3), device=device, dtype=dtype) * 10. - 5.
|
||||
return {
|
||||
'x': coords[..., 0],
|
||||
'y': coords[..., 1],
|
||||
'z': coords[..., 2]
|
||||
}
|
||||
|
||||
def test_cartesian2spherical(self, coords, dtype):
|
||||
x = coords['x']
|
||||
y = coords['y']
|
||||
z = coords['z']
|
||||
|
||||
azimuth, elevation, distance = cartesian2spherical(x, y, z)
|
||||
# This is pretty much how it is currently implemented in the function
|
||||
# but this is very simple
|
||||
expected_distance = torch.sqrt(
|
||||
x ** 2 + y ** 2 + z ** 2)
|
||||
expected_elevation = torch.asin(z / distance)
|
||||
expected_azimuth = torch.atan2(y, z)
|
||||
assert torch.allclose(azimuth, expected_azimuth)
|
||||
assert torch.allclose(elevation, expected_elevation)
|
||||
assert torch.allclose(distance, expected_distance)
|
||||
|
||||
def test_cartesian2spherical2cartesian(self, coords):
|
||||
x = coords['x']
|
||||
y = coords['y']
|
||||
z = coords['z']
|
||||
|
||||
azimuth, elevation, distance = cartesian2spherical(x, y, z)
|
||||
out_x, out_y, out_z = spherical2cartesian(azimuth, elevation, distance)
|
||||
assert torch.allclose(x, out_x)
|
||||
assert torch.allclose(y, out_y)
|
||||
assert torch.allclose(z, out_z)
|
||||
|
||||
@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
|
||||
class TestCartesian2Spherical:
|
||||
@pytest.fixture(autouse=True)
|
||||
def coords(self, device, dtype):
|
||||
# Not uniform but good enough
|
||||
return {
|
||||
'azimuth': (torch.rand((11, 7), device=device, dtype=dtype) * 2. - 1.) * math.pi,
|
||||
'elevation': (torch.rand((11, 7), device=device, dtype=dtype) - 0.5) * math.pi,
|
||||
'distance': torch.rand((11, 7), device=device, dtype=dtype) * 10. + 0.1
|
||||
}
|
||||
|
||||
def test_spherical2cartesian(self, coords, dtype):
|
||||
azimuth = coords['azimuth']
|
||||
elevation = coords['elevation']
|
||||
distance = coords['distance']
|
||||
|
||||
x, y, z = spherical2cartesian(azimuth, elevation, distance)
|
||||
# This is pretty much how it is currently implemented in the function
|
||||
# but this is very simple
|
||||
expected_z = torch.sin(elevation) * distance
|
||||
temp = torch.cos(elevation) * distance
|
||||
expected_x = torch.cos(azimuth) * temp
|
||||
expected_y = torch.sin(azimuth) * temp
|
||||
assert torch.equal(x, expected_x)
|
||||
assert torch.equal(y, expected_y)
|
||||
assert torch.equal(z, expected_z)
|
||||
|
||||
def test_spherical2cartesian2spherical(self, coords):
|
||||
azimuth = coords['azimuth']
|
||||
elevation = coords['elevation']
|
||||
distance = coords['distance']
|
||||
|
||||
x, y, z = spherical2cartesian(azimuth, elevation, distance)
|
||||
out_azimuth, out_elevation, out_distance = cartesian2spherical(
|
||||
x, y, z)
|
||||
assert torch.allclose(azimuth, out_azimuth, rtol=1e-3, atol=1e-3)
|
||||
assert torch.allclose(elevation, out_elevation, rtol=1e-1, atol=1e-1)
|
||||
assert torch.allclose(distance, out_distance, rtol=1e-3, atol=1e-3)
|
||||
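Aside: the convention these tests encode is elevation measured from the xy-plane and azimuth measured around +z starting from +x. A minimal reference in plain torch (not kaolin's API):
```
import torch

def cartesian_to_spherical_reference(x, y, z):
    # Matches the assertions above: distance is the Euclidean norm,
    # elevation is measured from the xy-plane, azimuth rotates around +z
    # starting at +x.
    distance = torch.sqrt(x ** 2 + y ** 2 + z ** 2)
    elevation = torch.asin(z / distance)
    azimuth = torch.atan2(y, x)
    return azimuth, elevation, distance
```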
173
tests/python/kaolin/ops/test_gcn.py
Normal file
@@ -0,0 +1,173 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch
import os

from kaolin.ops.gcn import sparse_bmm, normalize_adj, GraphConv
from kaolin.utils.testing import ALL_DEVICES

os.environ['NVIDIA_TF32_OVERRIDE'] = '0'

@pytest.mark.parametrize('device', ALL_DEVICES)
def test_sparse_bmm(device):
    i = torch.LongTensor([[0, 1, 1, 2, 2, 0], [1, 0, 2, 1, 0, 2]])
    v = torch.FloatTensor([1, 2, 3, 4, 5, 6])
    sparse = torch.sparse.FloatTensor(i, v, torch.Size([4, 3])).to(device)
    dense = torch.tensor(
        [[[0.47605860, 0.97254932, 0.93103176],
          [0.56519330, 0.03351519, 0.02914280],
          [0.16332115, 0.24698994, 0.17907326]],

         [[0.57908791, 0.72093546, 0.19004048],
          [0.51033562, 0.15572953, 0.24628967],
          [0.41850159, 0.87904519, 0.06477704]],

         [[0.42210183, 0.37572026, 0.62902039],
          [0.03129875, 0.26592126, 0.95092678],
          [0.87077409, 0.28091857, 0.12425283]]],
        device=device)
    result = sparse_bmm(sparse, dense)
    expected = torch.tensor(
        [[[1.54512024, 1.51545477, 1.10358238],
          [1.44208074, 2.68606853, 2.39928341],
          [4.64106607, 4.99680758, 4.77173042],
          [0.0, 0.0, 0.0]],

         [[3.02134514, 5.43000031, 0.63495189],
          [2.41368055, 4.07900620, 0.57441211],
          [4.93678188, 4.22759533, 1.93536115],
          [0.0, 0.0, 0.0]],

         [[5.25594330, 1.95143259, 1.69644380],
          [3.45652604, 1.59419620, 1.63079929],
          [2.23570418, 2.94228649, 6.94880915],
          [0.0, 0.0, 0.0]]],
        device=device)
    assert torch.allclose(result, expected)


@pytest.mark.parametrize('device', ALL_DEVICES)
class TestNormalizeAdj(object):

    @pytest.fixture(autouse=True)
    def adj(self, device):
        i = torch.LongTensor([[0, 1, 1, 2, 2, 0], [1, 0, 2, 1, 0, 2]])
        v = torch.FloatTensor([1, 2, 3, 4, 5, 6])
        return torch.sparse.FloatTensor(i, v, torch.Size([3, 3])).to(device)

    def test_normalize_adj_sparse(self, device, adj):
        result = normalize_adj(adj)

        norm = torch.sparse.mm(adj, torch.ones((adj.shape[0], 1),
                                               device=device))
        expected = torch.sparse.mm(adj, torch.eye(3, device=device)) / norm

        assert torch.allclose(
            torch.sparse.mm(result, torch.eye(3, device=device)),
            expected)

    def test_normalize_adj_dense(self, device, adj):
        dense_adj = torch.sparse.mm(adj, torch.eye(3, device=device))
        result = normalize_adj(dense_adj)

        expected = torch.tensor(
            [[0.0, 0.14285714, 0.85714285],
             [0.4, 0.0, 0.6],
             [0.55555555, 0.44444444, 0.0]],
            device=device)

        assert torch.allclose(result, expected)


@pytest.mark.parametrize('device', ALL_DEVICES)
@pytest.mark.parametrize('self_layer', [True, False])
class TestGraphConv(object):

    @pytest.fixture(autouse=True)
    def gcn(self, device, self_layer):
        model = GraphConv(3, 5, self_layer=self_layer)
        model.to(device)
        model.linear.weight.data.copy_(torch.tensor(
            [[-0.61831456, 0.57409757, -0.14574467],
             [0.00189979, 0.77582508, 0.36306566],
             [-0.27461752, -0.69267106, 0.61524123],
             [-0.46579394, -0.00121037, 0.72196031],
             [0.54187351, -0.42773548, 0.59835148]],
            device=device))
        model.linear.bias.data.copy_(torch.tensor(
            [0.40155911, -0.45286083, -0.19249618, 0.21454012, -0.17628896],
            device=device))

        if self_layer:
            model.linear_self.weight.data.copy_(torch.tensor(
                [[0.81866288, 0.24061465, 0.55818230],
                 [0.37344468, 0.07631248, 0.34876764],
                 [0.51045960, 0.73214161, 0.15645593],
                 [0.01274079, 0.44412971, 0.59611768],
                 [0.31227762, 0.13015020, 0.77652276]],
                device=device))
            model.linear_self.bias.data.copy_(torch.tensor(
                [0.54663211, 0.38193095, 0.71667391, 0.14995629, 0.27089202],
                device=device))

        return model

    @pytest.fixture(autouse=True)
    def adj(self, device):
        i = torch.LongTensor(
            [[0, 1, 1, 2, 2, 0, 0, 1, 2], [1, 0, 2, 1, 0, 2, 0, 1, 2]])
        v = torch.FloatTensor([1, 1, 1, 1, 1, 1, 1, 1, 1])
        return torch.sparse.FloatTensor(i, v, torch.Size([3, 3])).to(device)

    @pytest.fixture(autouse=True)
    def node_feat_in(self, device):
        return torch.tensor(
            [[[0.17502755, 0.01767362, 0.43572336],
              [0.84568930, 0.50088108, 0.65273631],
              [0.18389270, 0.30413085, 0.71014285]]],
            device=device)

    @pytest.fixture(autouse=True)
    def expected(self, device, self_layer):
        result = torch.tensor(
            [[[0.22333825, -0.02167436, -0.12385714, 0.46001482, 0.28272793],
              [0.22333825, -0.02167436, -0.12385714, 0.46001482, 0.28272793],
              [0.22333825, -0.02167436, -0.12385714, 0.46001482, 0.28272793]]],
            device=device)

        if self_layer:
            result += torch.tensor(
                [[0.93738627, 0.60060900, 0.88712949, 0.41977805, 0.66619855],
                 [1.72383165, 0.96362591, 1.61720443, 0.77229488, 1.10703623],
                 [1.16674566, 0.72148854, 1.14431667, 0.71070147, 0.91934240]],
                device=device)

        return result

    def test_gcn_sparse(self, device, gcn, adj, node_feat_in, expected):
        node_feat_out = gcn(node_feat_in, adj, normalize_adj=True)
        assert torch.allclose(node_feat_out, expected, rtol=1e-3, atol=1e-3)
        adj = normalize_adj(adj)
        node_feat_out_2 = gcn(node_feat_in, adj, normalize_adj=False)
        assert torch.allclose(node_feat_out, node_feat_out_2, rtol=1e-4, atol=1e-4)

    def test_gcn_dense(self, device, gcn, adj, node_feat_in, expected):
        dense_adj = torch.sparse.mm(adj, torch.eye(3, device=device))
        node_feat_out = gcn(node_feat_in, dense_adj, normalize_adj=True)
        assert torch.allclose(node_feat_out, expected, rtol=1e-3, atol=1e-3)
        dense_adj = normalize_adj(dense_adj)
        node_feat_out_2 = gcn(node_feat_in, dense_adj, normalize_adj=False)
        assert torch.allclose(node_feat_out, node_feat_out_2, rtol=1e-4, atol=1e-4)
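Aside: the dense values expected by `TestNormalizeAdj` correspond to plain row-normalization, i.e. dividing each row of the adjacency by its row sum. A minimal dense-only sketch (kaolin's `normalize_adj` also handles sparse inputs):
```
import torch

def row_normalize(adj):
    # Divide each row by its sum; e.g. row [0, 1, 6] becomes [0, 1/7, 6/7],
    # matching the expected tensor in test_normalize_adj_dense above.
    return adj / adj.sum(dim=-1, keepdim=True)
```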
69
tests/python/kaolin/ops/test_pointcloud.py
Normal file
@@ -0,0 +1,69 @@
# Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import pytest
import torch

from kaolin.utils.testing import FLOAT_TYPES, with_seed
import kaolin.ops.pointcloud


@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
def test_center_points(device, dtype):
    with_seed(9, 9, 9)
    if dtype == torch.half:
        rtol, atol = 1e-3, 1e-3
    else:
        rtol, atol = 1e-5, 1e-8  # default torch values

    B = 4
    N = 20
    points = torch.rand((B, N, 3), device=device, dtype=dtype)  # 0..1
    points[:, 0, :] = 1.0  # make sure 1 is included
    points[:, 1, :] = 0.0  # make sure 0 is included
    points = points - 0.5  # -0.5...0.5

    factors = 0.2 + 2 * torch.rand((B, 1, 1), device=device, dtype=dtype)
    translations = torch.rand((B, 1, 3), device=device, dtype=dtype) - 0.5

    # Points are already centered
    assert torch.allclose(points, kaolin.ops.pointcloud.center_points(points), atol=atol, rtol=rtol)
    assert torch.allclose(points * factors, kaolin.ops.pointcloud.center_points(points * factors), atol=atol, rtol=rtol)

    # Points translated
    assert torch.allclose(points, kaolin.ops.pointcloud.center_points(points + 0.5), atol=atol, rtol=rtol)

    points_centered = kaolin.ops.pointcloud.center_points(points + translations)
    assert torch.allclose(points, points_centered, atol=atol, rtol=rtol)

    points_centered = kaolin.ops.pointcloud.center_points(points * factors + translations)
    assert torch.allclose(points * factors, points_centered, atol=atol, rtol=rtol)

    # Now let's also try to normalize
    points_centered = kaolin.ops.pointcloud.center_points(points * factors + translations, normalize=True)
    assert torch.allclose(points, points_centered, atol=atol, rtol=rtol)

    # Now let's test normalizing when there is zero range in one of the dimensions
    points[:, :, 1] = 1.0
    points_centered = kaolin.ops.pointcloud.center_points(points * factors + translations, normalize=True)
    points[:, :, 1] = 0.0
    assert torch.allclose(points, points_centered, atol=atol, rtol=rtol)

    # Now let's try normalizing when one element of the batch is degenerate
    points[0, :, :] = torch.tensor([0, 2., 4.], dtype=dtype, device=device).reshape((1, 3))
    points_centered = kaolin.ops.pointcloud.center_points(points * factors + translations, normalize=True)
    points[0, :, :] = 0
    assert torch.allclose(points, points_centered, atol=atol, rtol=rtol)
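Aside: the assertions above pin down the behavior of `center_points` without showing it. A reference sketch of what they imply (an assumption for illustration, not the actual kaolin implementation):
```
import torch

def center_points_reference(points, normalize=False):
    # Shift each (B, N, 3) cloud so its bounding-box midpoint sits at the
    # origin; with normalize=True also divide by the largest extent, so a
    # cloud spanning [-0.5, 0.5] is returned unchanged (as the test expects).
    pmin = points.min(dim=1, keepdim=True)[0]
    pmax = points.max(dim=1, keepdim=True)[0]
    centered = points - (pmin + pmax) / 2.
    if normalize:
        # clamp guards the degenerate case where all points coincide
        extent = (pmax - pmin).max(dim=-1, keepdim=True)[0].clamp(min=1e-12)
        centered = centered / extent
    return centered
```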
110
tests/python/kaolin/ops/test_random.py
Normal file
@@ -0,0 +1,110 @@
# Copyright (c) 2019,20-21 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch

import kaolin as kal
from kaolin.utils.testing import BOOL_TYPES, NUM_TYPES, FLOAT_TYPES, \
    check_tensor, check_spc_octrees

@pytest.mark.parametrize("batch_size", [1, 8])
@pytest.mark.parametrize("min_shape,max_shape",
                         [(None, (3, 3)), ((5, 5), (5, 5))])
def test_random_shape_per_tensor(batch_size, min_shape, max_shape):
    old_seed = torch.initial_seed()
    torch.manual_seed(0)
    shape_per_tensor = kal.ops.random.random_shape_per_tensor(batch_size, min_shape, max_shape)
    if min_shape is None:
        min_shape = tuple([1] * len(max_shape))
    min_shape = torch.tensor(min_shape).unsqueeze(0)
    max_shape = torch.tensor(max_shape).unsqueeze(0)
    assert shape_per_tensor.shape[0] == batch_size
    assert (min_shape <= shape_per_tensor).all() and (
        shape_per_tensor <= max_shape).all()


@pytest.mark.parametrize("batch_size", [8])
@pytest.mark.parametrize("min_shape,max_shape", [((5, 5, 5), (30, 30, 30))])
def test_random_shape_per_tensor_seed(batch_size, min_shape, max_shape):
    threshold = batch_size * len(max_shape) * 0.9
    kal.ops.random.manual_seed(0)
    shape_per_tensor1 = kal.ops.random.random_shape_per_tensor(batch_size, min_shape,
                                                               max_shape)
    shape_per_tensor2 = kal.ops.random.random_shape_per_tensor(batch_size, min_shape,
                                                               max_shape)
    assert torch.sum(shape_per_tensor1 != shape_per_tensor2) > threshold
    kal.ops.random.manual_seed(0)
    shape_per_tensor3 = kal.ops.random.random_shape_per_tensor(batch_size, min_shape,
                                                               max_shape)
    assert torch.equal(shape_per_tensor1, shape_per_tensor3)
    kal.ops.random.manual_seed(1)
    shape_per_tensor4 = kal.ops.random.random_shape_per_tensor(batch_size, min_shape,
                                                               max_shape)
    assert torch.sum(shape_per_tensor1 != shape_per_tensor4) > threshold


@pytest.mark.parametrize("device,dtype", NUM_TYPES)
@pytest.mark.parametrize("low,high", [(0, 1), (3, 5), (10, 10)])
@pytest.mark.parametrize("shape", [(1,), (3, 3)])
def test_random_tensor(low, high, shape, dtype, device):
    tensor = kal.ops.random.random_tensor(low, high, shape, dtype, device)
    check_tensor(tensor, shape, dtype, device)
    assert (low <= tensor).all()
    assert (tensor <= high).all()


@pytest.mark.parametrize("device,dtype", BOOL_TYPES)
@pytest.mark.parametrize("low,high", [(0, 1)])
@pytest.mark.parametrize("shape", [(1,), (3, 3)])
def test_random_tensor_bool(low, high, shape, dtype, device):
    tensor = kal.ops.random.random_tensor(low, high, shape, dtype, device)
    check_tensor(tensor, shape, dtype, device)
    assert (low <= tensor).all()
    assert (tensor <= high).all()


@pytest.mark.parametrize("low,high", [(0, 1), (5, 10)])
@pytest.mark.parametrize("shape", [(10, 10)])
def test_random_tensor_seed(low, high, shape):
    threshold = shape[0] * shape[1] * 0.9
    kal.ops.random.manual_seed(0)
    tensor1 = kal.ops.random.random_tensor(low, high, shape)
    tensor2 = kal.ops.random.random_tensor(low, high, shape)
    assert torch.sum(tensor1 != tensor2) > threshold
    kal.ops.random.manual_seed(0)
    tensor3 = kal.ops.random.random_tensor(low, high, shape)
    assert torch.equal(tensor1, tensor3)
    kal.ops.random.manual_seed(1)
    tensor4 = kal.ops.random.random_tensor(low, high, shape)
    assert torch.sum(tensor1 != tensor4) > threshold

@pytest.mark.parametrize("batch_size", [1, 8])
@pytest.mark.parametrize("level", [1, 3])
@pytest.mark.parametrize("device", ["cpu", "cuda"])
def test_random_spc_octree(batch_size, level, device):
    octrees, lengths = kal.ops.random.random_spc_octrees(batch_size, level, device)
    check_spc_octrees(octrees, lengths, batch_size, level, device)

@pytest.mark.parametrize("device,dtype", FLOAT_TYPES)
def test_sample_spherical_coords(device, dtype):
    azimuth, elevation = kal.ops.random.sample_spherical_coords(
        (11, 7), azimuth_low=0.1, azimuth_high=0.3,
        elevation_low=0.3, elevation_high=0.6, device=device, dtype=dtype
    )
    check_tensor(azimuth, (11, 7), dtype, device)
    check_tensor(elevation, (11, 7), dtype, device)
    assert torch.all(azimuth >= 0.1) and torch.all(azimuth <= 0.3)
    assert torch.all(elevation >= 0.3) and torch.all(elevation <= 0.6)
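Aside: the seed tests rely on kaolin's own RNG state, controlled through `kal.ops.random.manual_seed` rather than `torch.manual_seed`. Minimal usage, mirroring the calls above:
```
import kaolin as kal

kal.ops.random.manual_seed(0)
a = kal.ops.random.random_tensor(0, 10, (4, 4))
kal.ops.random.manual_seed(0)
b = kal.ops.random.random_tensor(0, 10, (4, 4))
assert (a == b).all()  # reseeding reproduces the same draw
```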
135
tests/python/kaolin/ops/test_reduction.py
Normal file
@@ -0,0 +1,135 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch

from kaolin.ops import batch, reduction
from kaolin.ops.random import random_shape_per_tensor, random_tensor

TEST_TYPES = [('cuda', dtype) for dtype in [torch.half, torch.float, torch.double, torch.bool, torch.int, torch.long]] + \
             [('cpu', dtype) for dtype in [torch.float, torch.double, torch.bool, torch.int, torch.long]]


# Same naive implementation as packed_simple_sum uses for cpu input,
# as it's pretty straightforward
def _torch_packed_simple_sum(inputs, numel_per_tensor):
    outputs = []
    last_id = 0
    for i, numel in enumerate(numel_per_tensor):
        first_id = last_id
        last_id += int(numel)
        outputs.append(torch.sum(inputs[first_id:last_id]))
    return torch.stack(outputs, dim=0)


@pytest.mark.parametrize("numel_per_tensor",
                         [torch.LongTensor([1]),
                          torch.LongTensor([1, 100000]),
                          torch.arange(257, dtype=torch.long)])
class Test_PackedSimpleSumCuda:
    @pytest.fixture(autouse=True)
    def total_numel(self, numel_per_tensor):
        return torch.sum(numel_per_tensor)

    @pytest.fixture(autouse=True)
    def inputs_double(self, total_numel):
        return torch.rand((total_numel, 1), dtype=torch.double, device='cuda',
                          requires_grad=True)

    @pytest.fixture(autouse=True)
    def target_output_double(self, inputs_double, numel_per_tensor):
        return _torch_packed_simple_sum(inputs_double, numel_per_tensor)

    @pytest.fixture(autouse=True)
    def target_grad_double(self, inputs_double, numel_per_tensor):
        # if test_gradcheck passed, the gradient using torch.double inputs is trustworthy
        outputs = torch.sum(reduction._PackedSimpleSumCuda.apply(inputs_double,
                                                                 numel_per_tensor))
        outputs.backward()
        return inputs_double.grad.clone()

    @pytest.fixture(autouse=True)
    def inputs_long(self, total_numel):
        return torch.randint(0, 33, size=(total_numel, 1), dtype=torch.long, device='cuda')

    @pytest.fixture(autouse=True)
    def target_output_long(self, inputs_long, numel_per_tensor):
        return _torch_packed_simple_sum(inputs_long, numel_per_tensor)

    def test_gradcheck(self, numel_per_tensor, total_numel):
        # gradcheck only for double
        inputs = torch.rand((total_numel, 1), dtype=torch.double, device='cuda',
                            requires_grad=True)
        torch.autograd.gradcheck(reduction._PackedSimpleSumCuda.apply,
                                 (inputs, numel_per_tensor))

    @pytest.mark.parametrize("dtype", [torch.double, torch.float, torch.half])
    def test_float_types(self, inputs_double, numel_per_tensor, dtype,
                         target_output_double, target_grad_double):
        inputs = inputs_double.type(dtype).detach()
        inputs.requires_grad = True
        output = reduction._PackedSimpleSumCuda.apply(inputs, numel_per_tensor)
        target_output = target_output_double.to(dtype)
        assert torch.allclose(output, target_output, rtol=1e-3, atol=1e-4)
        torch.sum(output).backward()
        target_grad = target_grad_double.to(dtype)
        assert torch.allclose(inputs.grad, target_grad, rtol=1e-2, atol=1e-2)

    @pytest.mark.parametrize("dtype", [torch.long, torch.int])
    def test_int_types(self, inputs_long, numel_per_tensor, dtype, target_output_long):
        inputs = inputs_long.type(dtype)
        output = reduction._PackedSimpleSumCuda.apply(inputs, numel_per_tensor)
        target_output = target_output_long
        assert torch.equal(output, target_output)

    def test_bool_type(self, total_numel, numel_per_tensor):
        inputs = torch.randint(0, 2, size=(total_numel, 1), dtype=torch.bool, device='cuda')
        target_outputs = _torch_packed_simple_sum(inputs, numel_per_tensor)
        outputs = reduction._PackedSimpleSumCuda.apply(inputs, numel_per_tensor)
        assert torch.equal(outputs, target_outputs)

    def test_cpu_fail(self, inputs_double, numel_per_tensor):
        inputs = inputs_double.cpu()
        with pytest.raises(RuntimeError,
                           match="packed_tensor must be a CUDA tensor"):
            reduction._PackedSimpleSumCuda.apply(inputs, numel_per_tensor)


@pytest.mark.parametrize("device,dtype", TEST_TYPES)
@pytest.mark.parametrize("numel_per_tensor",
                         [torch.LongTensor([1]),
                          torch.LongTensor([1, 100000]),
                          torch.arange(257, dtype=torch.long)])
class TestPackedSimpleSum:
    @pytest.fixture(autouse=True)
    def total_numel(self, numel_per_tensor):
        return torch.sum(numel_per_tensor)

    @pytest.fixture(autouse=True)
    def high_val(self, dtype):
        if dtype.is_floating_point or dtype == torch.bool:
            return 1
        else:
            return 32

    @pytest.fixture(autouse=True)
    def inputs(self, high_val, total_numel, dtype, device):
        return random_tensor(0, high_val, shape=(total_numel, 1), dtype=dtype,
                             device=device)

    def test_packed_simple_sum(self, inputs, numel_per_tensor, device, dtype):
        sum_tensor = reduction.packed_simple_sum(inputs, numel_per_tensor)
        target = _torch_packed_simple_sum(inputs, numel_per_tensor)
        assert torch.allclose(sum_tensor, target)
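Aside: `_torch_packed_simple_sum` above doubles as documentation for the op: one scalar per sub-tensor of the packed input. A tiny concrete example (the shapes are illustrative assumptions):
```
import torch
from kaolin.ops import reduction

inputs = torch.arange(6, dtype=torch.float).reshape(-1, 1)  # rows 0..5
numel_per_tensor = torch.LongTensor([2, 4])                 # split as 2 + 4
out = reduction.packed_simple_sum(inputs, numel_per_tensor)
# out == tensor([1., 14.]): 0 + 1 and 2 + 3 + 4 + 5
```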
377
tests/python/kaolin/ops/test_voxelgrid.py
Normal file
@@ -0,0 +1,377 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest

import torch
import random
from kaolin.ops import voxelgrid as vg
from kaolin.utils.testing import BOOL_TYPES, FLOAT_TYPES, INT_TYPES


@pytest.mark.parametrize('device,dtype', FLOAT_TYPES + BOOL_TYPES)
class TestDownsample:

    def test_scale_val_1(self, device, dtype):
        # The scale should be smaller than or equal to the size of the input.
        with pytest.raises(ValueError,
                           match="Downsample ratio must be less than voxelgrids "
                                 "shape of 6 at index 2, but got 7."):
            voxelgrids = torch.ones([2, 6, 6, 6], device=device, dtype=dtype)
            vg.downsample(voxelgrids, [1, 2, 7])

    def test_scale_val_2(self, device, dtype):
        # Every element in the scale should be greater than or equal to one.
        with pytest.raises(ValueError,
                           match="Downsample ratio must be at least 1 "
                                 "along every dimension but got -1 at "
                                 "index 0."):
            voxelgrids = torch.ones([2, 6, 6, 6], device=device, dtype=dtype)
            vg.downsample(voxelgrids, [-1, 3, 2])

    def test_voxelgrids_dim(self, device, dtype):
        # The dimension of voxelgrids should be 4 (batched).
        with pytest.raises(ValueError,
                           match="Expected voxelgrids to have 4 dimensions "
                                 "but got 3 dimensions."):
            voxelgrids = torch.ones([6, 6, 6], device=device, dtype=dtype)
            vg.downsample(voxelgrids, [2, 2, 2])

    def test_scale_dim(self, device, dtype):
        # The dimension of scale should be 3 if it is a list.
        with pytest.raises(ValueError,
                           match="Expected scale to have 3 dimensions "
                                 "but got 2 dimensions."):
            voxelgrids = torch.ones([2, 6, 6, 6], device=device, dtype=dtype)
            vg.downsample(voxelgrids, [2, 2])

        with pytest.raises(TypeError,
                           match="Expected scale to be type list or int "
                                 "but got <class 'str'>."):
            voxelgrids = torch.ones([2, 6, 6, 6], device=device, dtype=dtype)
            vg.downsample(voxelgrids, "2")

    def test_output_size(self, device, dtype):
        # The size of the output should be input.shape / scale
        voxelgrids = torch.ones([3, 6, 6, 6], device=device, dtype=dtype)
        output = vg.downsample(voxelgrids, [1, 2, 3])
        assert (output.shape == torch.Size([3, 6, 3, 2]))

    def test_output_batch(self, device, dtype):
        if dtype == torch.bool:
            pytest.skip("This test won't work for torch.bool.")

        # The size of the batched output should be correct.
        # For example, if the input size is [2, 6, 6, 6] and the scale is
        # [3, 3, 3], the output size should be [2, 2, 2, 2].
        # Also, test that the function is numerically correct.
        voxelgrid1 = torch.ones([4, 4, 4], device=device, dtype=dtype)
        voxelgrid2 = torch.ones((4, 4, 4), device=device, dtype=dtype)
        voxelgrid2[1, :2] = 0.8
        voxelgrid2[1, 2:] = 0.4
        voxelgrid2[3] = 0
        batched_voxelgrids = torch.stack((voxelgrid1, voxelgrid2))
        output = vg.downsample(batched_voxelgrids, [2, 2, 2])

        expected1 = torch.ones((2, 2, 2), device=device, dtype=dtype)
        expected2 = torch.tensor([[[0.9, 0.9],
                                   [0.7, 0.7]],

                                  [[0.5000, 0.5000],
                                   [0.5000, 0.5000]]], device=device, dtype=dtype)
        expected = torch.stack((expected1, expected2))
        assert torch.allclose(output, expected)


    def test_bool_input(self, device, dtype):
        if dtype != torch.bool:
            pytest.skip("This test is only for torch.bool.")

        voxelgrids = torch.ones((2, 4, 4, 4), device=device, dtype=dtype)
        voxelgrids[:, :, 1, :] = 0
        voxelgrids[:, :, 3, :] = 0

        output = vg.downsample(voxelgrids, 2)

        expected_dtype = torch.half if device == "cuda" else torch.float
        expected = torch.ones((2, 2, 2, 2), device=device, dtype=expected_dtype) * 0.5
        assert torch.equal(output, expected)


@pytest.mark.parametrize('device,dtype', FLOAT_TYPES + BOOL_TYPES)
@pytest.mark.parametrize('mode', ['wide', 'thin'])
class TestExtractSurface:

    def test_valid_mode(self, device, dtype, mode):
        voxelgrids = torch.ones((1, 1, 1, 1), device=device, dtype=dtype)
        with pytest.raises(ValueError, match='mode "this is not a valid mode" is not supported.'):
            vg.extract_surface(voxelgrids, mode="this is not a valid mode")

    def test_input_size(self, device, dtype, mode):
        voxelgrids = torch.ones((3, 3, 3), device=device, dtype=dtype)
        with pytest.raises(ValueError,
                           match="Expected voxelgrids to have 4 dimensions "
                                 "but got 3 dimensions."):
            vg.extract_surface(voxelgrids, mode=mode)

    def test_output_value(self, device, dtype, mode):
        voxelgrids = torch.ones((2, 4, 4, 4), device=device, dtype=dtype)
        voxelgrids[0, 0, 0, 0] = 0  # Remove a voxel on a corner
        voxelgrids[1, 1, 0, 0] = 0  # Remove a voxel on an edge
        expected = voxelgrids.clone().bool()

        surface = vg.extract_surface(voxelgrids, mode=mode)

        expected[0, 1:3, 1:3, 1:3] = 0
        expected[1, 1:3, 1:3, 1:3] = 0
        if mode == 'wide':
            expected[0, 1, 1, 1] = 1
            expected[1, 1:3, 1, 1] = 1

        assert torch.equal(surface, expected)


@pytest.mark.parametrize('device,dtype', FLOAT_TYPES + BOOL_TYPES)
class TestExtractOdms:

    def test_handmade_input(self, device, dtype):
        # The input is hand-made.
        voxelgrid1 = torch.tensor([[[1, 0, 0],
                                    [0, 1, 1],
                                    [0, 1, 1]],

                                   [[1, 0, 0],
                                    [0, 1, 1],
                                    [0, 0, 1]],

                                   [[0, 1, 0],
                                    [1, 1, 0],
                                    [0, 0, 1]]], device=device, dtype=dtype)
        voxelgrid2 = voxelgrid1.transpose(0, 1)

        expected1 = torch.tensor([[[2, 0, 0],
                                   [2, 0, 0],
                                   [1, 1, 0]],

                                  [[0, 1, 1],
                                   [0, 1, 2],
                                   [1, 0, 2]],

                                  [[2, 0, 0],
                                   [2, 1, 0],
                                   [1, 1, 0]],

                                  [[0, 1, 1],
                                   [0, 1, 1],
                                   [1, 0, 2]],

                                  [[1, 0, 3],
                                   [0, 0, 1],
                                   [3, 2, 0]],

                                  [[0, 2, 3],
                                   [2, 0, 0],
                                   [3, 0, 0]]], device=device, dtype=torch.long)

        expected2 = torch.tensor([[[2, 2, 1],
                                   [0, 0, 1],
                                   [0, 0, 0]],

                                  [[0, 0, 1],
                                   [1, 1, 0],
                                   [1, 2, 2]],

                                  [[1, 0, 3],
                                   [0, 0, 1],
                                   [3, 2, 0]],

                                  [[0, 2, 3],
                                   [2, 0, 0],
                                   [3, 0, 0]],

                                  [[2, 0, 0],
                                   [2, 1, 0],
                                   [1, 1, 0]],

                                  [[0, 1, 1],
                                   [0, 1, 1],
                                   [1, 0, 2]]], device=device, dtype=torch.long)

        voxelgrids = torch.stack([voxelgrid1, voxelgrid2])
        expected = torch.stack([expected1, expected2])
        output = vg.extract_odms(voxelgrids)

        assert torch.equal(output, expected)

@pytest.mark.parametrize('device', ['cpu'])
@pytest.mark.parametrize('dtype', [torch.float, torch.double])
class TestFill:

    def test_complex_value(self, device, dtype):
        # The center of the voxelgrids should be filled; other places unchanged.
        voxelgrid1 = torch.zeros((5, 5, 5), dtype=dtype, device=device)
        voxelgrid1[1:4, 1:4, 1:4] = 1
        voxelgrid1[2, 2, 2] = 0

        voxelgrid2 = torch.ones((5, 5, 5), dtype=dtype, device=device)
        voxelgrid2[1:4, 1:4, 1:4] = 0

        # With 0 in the middle, but not enclosed.
        voxelgrid3 = torch.zeros((5, 5, 5), dtype=dtype, device=device)
        voxelgrid3[1:4, 1:4, 1:4] = 1
        voxelgrid3[2, 2, 2:4] = 0

        batch_voxelgrids = torch.stack((voxelgrid1, voxelgrid2, voxelgrid3))


        output = vg.fill(batch_voxelgrids)

        # Only the center is changed for batch sample 1.
        expected1 = voxelgrid1
        expected1[2, 2, 2] = 1

        # The batch sample 2 should be all ones.
        expected2 = torch.ones((5, 5, 5), dtype=dtype, device=device)

        # The batch sample 3 should be unchanged.
        expected3 = voxelgrid3

        expected = torch.stack((expected1, expected2, expected3)).type(torch.bool)

        assert torch.equal(output, expected)

    def test_voxelgrids_dim(self, device, dtype):
        # The dimension of voxelgrids should be 4 (batched).
        with pytest.raises(ValueError,
                           match="Expected voxelgrids to have 4 dimensions "
                                 "but got 3 dimensions."):
            voxelgrids = torch.ones([6, 6, 6], device=device, dtype=dtype)
            vg.fill(voxelgrids)

@pytest.mark.parametrize('device,dtype', INT_TYPES)
class TestProjectOdms:

    def test_batch_match(self, device, dtype):
        # The batch size of voxelgrids and odms must match.
        with pytest.raises(ValueError,
                           match="Expected voxelgrids and odms' batch size to be the same, "
                                 "but got 2 for odms and 3 for voxelgrid."):
            voxelgrids = torch.ones((3, 3, 3, 3), device=device, dtype=dtype)
            odms = torch.ones((2, 6, 3, 3), device=device, dtype=dtype)
            vg.project_odms(odms, voxelgrids)

    def test_dimension_match(self, device, dtype):
        # The dimension of voxelgrids and odms must match.
        with pytest.raises(ValueError,
                           match="Expected voxelgrids and odms' dimension size to be the same, "
                                 "but got 3 for odms and 4 for voxelgrid."):
            voxelgrids = torch.ones((2, 4, 4, 4), device=device, dtype=dtype)
            odms = torch.ones((2, 6, 3, 3), device=device, dtype=dtype)
            vg.project_odms(odms, voxelgrids)

    def test_empty_filled_odms(self, device, dtype):
        # If the input is an empty odms, the output should be a filled voxel grid.
        odms1 = torch.zeros((6, 3, 3), device=device, dtype=dtype)

        # If the input odms is a filled odms, the output should be an empty voxel grid.
        odms2 = torch.ones((6, 3, 3), device=device, dtype=dtype) * 3

        odms = torch.stack((odms1, odms2))
        voxelgrids = vg.project_odms(odms)

        assert torch.equal(voxelgrids[0], torch.ones((3, 3, 3), device=device, dtype=torch.bool))
        assert torch.equal(voxelgrids[1], torch.zeros((3, 3, 3), device=device, dtype=torch.bool))

    def test_handmade_input_vote1(self, device, dtype):
        # The input is hand-made.
        odms = torch.tensor([[[[2, 0, 0],
                               [2, 0, 0],
                               [1, 1, 0]],

                              [[0, 1, 1],
                               [0, 1, 2],
                               [1, 0, 2]],

                              [[2, 0, 0],
                               [2, 1, 0],
                               [1, 1, 0]],

                              [[0, 1, 1],
                               [0, 1, 1],
                               [1, 0, 2]],

                              [[1, 0, 3],
                               [0, 0, 1],
                               [3, 2, 0]],

                              [[0, 2, 3],
                               [2, 0, 0],
                               [3, 0, 0]]]], device=device, dtype=dtype)

        expected = torch.tensor([[[[1, 0, 0],
                                   [0, 1, 1],
                                   [0, 1, 1]],

                                  [[1, 0, 0],
                                   [0, 1, 1],
                                   [0, 0, 1]],

                                  [[0, 1, 0],
                                   [1, 1, 0],
                                   [0, 0, 1]]]], device=device, dtype=torch.bool)

        output = vg.project_odms(odms)
        assert torch.equal(output, expected)

    def test_handmade_input_vote4(self, device, dtype):
        # The input is hand-made.
        odms = torch.tensor([[[[2, 0, 0],
                               [2, 0, 0],
                               [1, 1, 0]],

                              [[0, 1, 1],
                               [0, 1, 2],
                               [1, 0, 2]],

                              [[2, 0, 0],
                               [2, 1, 0],
                               [1, 1, 0]],

                              [[0, 1, 1],
                               [0, 1, 1],
                               [1, 0, 2]],

                              [[1, 0, 3],
                               [0, 0, 1],
                               [3, 2, 0]],

                              [[0, 2, 3],
                               [2, 0, 0],
                               [3, 0, 0]]]], device=device, dtype=dtype)

        expected_votes = torch.tensor([[[[1, 1, 0],
                                         [1, 1, 1],
                                         [0, 1, 1]],

                                        [[1, 1, 0],
                                         [1, 1, 1],
                                         [0, 1, 1]],

                                        [[1, 1, 0],
                                         [1, 1, 1],
                                         [0, 1, 1]]]], device=device, dtype=torch.bool)

        output_votes = vg.project_odms(odms, votes=4)
        assert torch.equal(output_votes, expected_votes)
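Aside: an ODM (orthographic depth map) stores, for each of the six axis-aligned viewing directions, how many empty voxels a ray crosses before hitting an occupied one, saturating at the grid resolution when it hits nothing; `project_odms` carves a solid grid back from such maps. A sketch for a single face under that reading of the hand-made inputs (the exact face ordering and scan direction are kaolin implementation details, assumed here for illustration):
```
import torch

def single_face_odm(voxelgrid):
    # Depth along dim 0, scanning from the high-index side of the grid.
    res = voxelgrid.shape[0]
    hits = torch.flip(voxelgrid.bool(), dims=(0,))
    first_hit = torch.argmax(hits.int(), dim=0)  # empties before the hit
    missed = ~hits.any(dim=0)                    # ray crossed the whole grid
    return first_hit.masked_fill(missed, res)
```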
224
tests/python/kaolin/render/camera/test_camera.py
Normal file
@@ -0,0 +1,224 @@
# Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import pytest
import itertools
import numpy as np
import torch
import kaolin
from kaolin.render.camera import Camera
from kaolin.utils.testing import FLOAT_TYPES


_CAM_DATA_IDX = (0, 1, 2, 3, 4, 5, 6)

@pytest.fixture(params=itertools.product(_CAM_DATA_IDX, FLOAT_TYPES))
def camera_data(request):
    data_idx = request.param[0]
    device, dtype = request.param[1]
    camera = None
    if data_idx == 0:
        camera = Camera.from_args(view_matrix=torch.tensor(
            [[[-5.5742e-01, 1.3878e-17, -8.3023e-01, 0.0000e+00],
              [1.4097e-01, 9.8548e-01, -9.4651e-02, 0.0000e+00],
              [8.1817e-01, -1.6980e-01, -5.4933e-01, -2.0000e+00],
              [0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]],

             [[9.6585e-01, 0.0000e+00, 2.5910e-01, 0.0000e+00],
              [1.8479e-01, 7.0098e-01, -6.8883e-01, 0.0000e+00],
              [-1.8163e-01, 7.1318e-01, 6.7704e-01, -2.0000e+00],
              [0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]],

             [[-5.3161e-01, -3.4694e-18, 8.4699e-01, 0.0000e+00],
              [-5.7488e-02, 9.9769e-01, -3.6082e-02, 0.0000e+00],
              [-8.4504e-01, -6.7873e-02, -5.3038e-01, -2.0000e+00],
              [0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]]],
            device=device),
            width=256, height=256,
            fov=0.8232465982437134, dtype=dtype, device=device)
    elif data_idx == 1:
        camera = Camera.from_args(view_matrix=torch.tensor(
            [[[-5.5742e-01, 1.3878e-17, -8.3023e-01, 0.0000e+00],
              [1.4097e-01, 9.8548e-01, -9.4651e-02, 0.0000e+00],
              [8.1817e-01, -1.6980e-01, -5.4933e-01, -2.0000e+00],
              [0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]]],
            device=device),
            width=256, height=256,
            fov=0.8232465982437134, dtype=dtype, device=device)
    elif data_idx == 2:
        camera = Camera.from_args(
            eye=torch.tensor([4.0, 4.0, 4.0]),
            at=torch.tensor([0.0, 0.0, 0.0]),
            up=torch.tensor([0.0, 1.0, 0.0]),
            fov=30 * np.pi / 180,  # In radians
            width=800, height=800,
            dtype=dtype,
            device=device
        )
    elif data_idx == 3:
        camera = Camera.from_args(
            eye=torch.tensor([[4.0, 4.0, 4.0], [4.0, 4.0, 4.0]]),
            at=torch.tensor([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
            up=torch.tensor([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]),
            fov=30 * np.pi / 180,  # In radians
            width=800, height=800,
            dtype=dtype,
            device=device
        )
    elif data_idx == 4:
        camera = Camera.from_args(
            eye=torch.tensor([[4.0, 4.0, 4.0], [4.0, 4.0, 4.0]]),
            at=torch.tensor([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
            up=torch.tensor([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]),
            fov=30 * np.pi / 180,  # In radians
            x0=12,
            y0=23,
            width=800, height=800,
            dtype=dtype,
            device=device
        )
    elif data_idx == 5:
        camera = Camera.from_args(
            eye=torch.tensor([4.0, 4.0, 4.0]),
            at=torch.tensor([0.0, 0.0, 0.0]),
            up=torch.tensor([0.0, 1.0, 0.0]),
            width=800, height=800,
            dtype=dtype,
            device=device
        )
    elif data_idx == 6:
        camera = Camera.from_args(
            eye=torch.tensor([[4.0, 4.0, 4.0], [4.0, 4.0, 4.0]]),
            at=torch.tensor([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
            up=torch.tensor([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]),
            width=800, height=800,
            dtype=dtype,
            device=device
        )
    return dict(camera=camera)


class TestCameraTransforms:

    def test_transform(self, camera_data):
        cam = camera_data['camera']

        # check various camera types
        # 2 input types supported by cameras
        vertices_b_3 = torch.rand((5, 3), device=cam.device, dtype=cam.dtype)
        vertices_c_b_3 = vertices_b_3.unsqueeze(0).expand(len(cam), 5, 3)
        transform_result_b_3 = cam.transform(vertices_b_3)
        transform_result_c_b_3 = cam.transform(vertices_c_b_3)

        # transform() should give the same result regardless of input shape
        if len(cam) > 1:
            # for multiple cameras in batch, shape output always broadcasts to (C, B, 3)
            assert torch.allclose(transform_result_c_b_3, transform_result_b_3)
        else:
            # for single camera in batch, shape output depends on input (C, B, 3) or (B, 3)
            assert torch.allclose(transform_result_c_b_3, transform_result_b_3[None])

        # validate transform through direct view_projection_matrix:
        vertices_b_4 = kaolin.render.camera.intrinsics.up_to_homogeneous(vertices_b_3)
        vertices_c_b_4 = kaolin.render.camera.intrinsics.up_to_homogeneous(vertices_c_b_3)
        view_projection = cam.view_projection_matrix()
        per_cam_mat_result = []
        # carefully perform test: for batched cameras multiply vectors per single camera matrix
        for cam_idx in range(len(cam)):
            mat_result = view_projection[cam_idx] @ vertices_b_4[:, :, None]  # 4x4 mat @ Bx4x1 vec = Bx4x1 vec
            mat_result = mat_result.squeeze(-1)
            mat_result = kaolin.render.camera.intrinsics.down_from_homogeneous(mat_result)
            per_cam_mat_result.append(mat_result)
        if len(cam) == 1:  # Single camera, result should be of shape (B, 3)
            per_cam_mat_result = per_cam_mat_result[0]
        else:
            per_cam_mat_result = torch.stack(per_cam_mat_result)

        # check that transform and matrix multiplication yield the same result
        assert torch.allclose(per_cam_mat_result, transform_result_b_3, rtol=1e-3, atol=1e-2)

        # intrinsics also accept (B, 4) and (C, B, 4) shapes, validate as well against (B, 3) and (C, B, 3)
        ext_transformed_vertices_b_3 = cam.extrinsics.transform(vertices_b_3)
        ext_transformed_vertices_b_4 = kaolin.render.camera.intrinsics.up_to_homogeneous(
            ext_transformed_vertices_b_3)
        ext_transformed_vertices_c_b_3 = cam.extrinsics.transform(vertices_c_b_3)
        ext_transformed_vertices_c_b_4 = kaolin.render.camera.intrinsics.up_to_homogeneous(
            ext_transformed_vertices_c_b_3)
        int_transformed_b_3 = cam.intrinsics.transform(ext_transformed_vertices_b_3)
        int_transformed_b_4 = cam.intrinsics.transform(ext_transformed_vertices_b_4)
        int_transformed_c_b_3 = cam.intrinsics.transform(ext_transformed_vertices_c_b_3)
        int_transformed_c_b_4 = cam.intrinsics.transform(ext_transformed_vertices_c_b_4)
        assert torch.allclose(int_transformed_b_3, int_transformed_b_4)
        assert torch.allclose(int_transformed_b_3, int_transformed_c_b_3)
        assert torch.allclose(int_transformed_b_3, int_transformed_c_b_4)


class TestCameraProperties:

    def test_set_width(self, camera_data):
        cam = camera_data['camera']

        width = cam.width
        height = cam.height
        cam.width *= 0.5
        assert (width / 2 == cam.width)
        assert (height == cam.height)

    def test_set_height(self, camera_data):
        cam = camera_data['camera']

        width = cam.width
        height = cam.height
        cam.height *= 0.5
        assert (height / 2 == cam.height)
        assert (width == cam.width)


class TestViewportMatrix:

    def test_viewport(self, camera_data):
        cam = camera_data['camera']

        C, B = len(cam), 100

        # vertices to (C, B, 4, 1)
        vertices = torch.rand((C, B, 1, 3), device=cam.device, dtype=cam.dtype)
        vertices = kaolin.render.camera.intrinsics.up_to_homogeneous(vertices)
        vertices = vertices.transpose(-1, -2)

        vp_matrix = cam.view_projection_matrix()[:, None].expand(C, B, 4, 4)
        viewport_matrix = cam.viewport_matrix()[:, None].expand(C, B, 4, 4)

        clip_coordinates = vp_matrix @ vertices
        ndc_coordinates = clip_coordinates / clip_coordinates[:, :, -1:]

        screen_space_coords = viewport_matrix @ ndc_coordinates
        ndc_coordinates = ndc_coordinates.squeeze(-1)
        x_clip = (ndc_coordinates[:, :, 0] >= -1) & (ndc_coordinates[:, :, 0] <= 1)
        y_clip = (ndc_coordinates[:, :, 1] >= -1) & (ndc_coordinates[:, :, 1] <= 1)
        z_clip = (ndc_coordinates[:, :, 2] >= cam.ndc_min) & (ndc_coordinates[:, :, 2] <= cam.ndc_max)

        expected_screen_coords_x = ((ndc_coordinates[:, :, 0] + 1) * (cam.width / 2)).unsqueeze(-1)
        expected_screen_coords_y = ((ndc_coordinates[:, :, 1] + 1) * (cam.height / 2)).unsqueeze(-1)
        assert torch.allclose(screen_space_coords[:, :, 0], expected_screen_coords_x, rtol=1e-3, atol=1e-4)
        assert torch.allclose(screen_space_coords[:, :, 1], expected_screen_coords_y, rtol=1e-3, atol=1e-4)

        verts_in_frustum = screen_space_coords[x_clip & y_clip & z_clip].squeeze(-1)
        assert (verts_in_frustum[:, 0:2] >= 0).all()
        assert (verts_in_frustum[:, 0] <= cam.width).all()
        assert (verts_in_frustum[:, 1] <= cam.height).all()
        assert (verts_in_frustum[:, 2] >= 0).all()
        assert (verts_in_frustum[:, 2] <= 1).all()
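Aside: the viewport assertions above encode the standard NDC-to-screen mapping, stretching [-1, 1] to [0, width] and [0, height]:
```
def ndc_to_screen(x_ndc, y_ndc, width, height):
    # Matches expected_screen_coords_x / _y in test_viewport above.
    return (x_ndc + 1) * width / 2, (y_ndc + 1) * height / 2
```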
1069
tests/python/kaolin/render/camera/test_extrinsics.py
Normal file
191
tests/python/kaolin/render/camera/test_pinhole.py
Normal file
@@ -0,0 +1,191 @@
# Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import copy
import itertools
import numpy as np
import torch
import kaolin
from kaolin.render.camera import Camera
from kaolin.utils.testing import FLOAT_TYPES

_CAM_DATA_IDX = (0, 1, 2, 3, 4)

@pytest.fixture(params=itertools.product(_CAM_DATA_IDX, FLOAT_TYPES))
def camera_data(request):
    data_idx = request.param[0]
    device, dtype = request.param[1]
    camera = None
    if data_idx == 0:
        camera = Camera.from_args(view_matrix=torch.tensor(
            [[[-5.5742e-01, 1.3878e-17, -8.3023e-01, 0.0000e+00],
              [1.4097e-01, 9.8548e-01, -9.4651e-02, 0.0000e+00],
              [8.1817e-01, -1.6980e-01, -5.4933e-01, -2.0000e+00],
              [0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]],

             [[9.6585e-01, 0.0000e+00, 2.5910e-01, 0.0000e+00],
              [1.8479e-01, 7.0098e-01, -6.8883e-01, 0.0000e+00],
              [-1.8163e-01, 7.1318e-01, 6.7704e-01, -2.0000e+00],
              [0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]],

             [[-5.3161e-01, -3.4694e-18, 8.4699e-01, 0.0000e+00],
              [-5.7488e-02, 9.9769e-01, -3.6082e-02, 0.0000e+00],
              [-8.4504e-01, -6.7873e-02, -5.3038e-01, -2.0000e+00],
              [0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]]],
            device=device),
            width=256, height=256,
            fov=0.8232465982437134, dtype=dtype, device=device)
    elif data_idx == 1:
        camera = Camera.from_args(view_matrix=torch.tensor(
            [[[-5.5742e-01, 1.3878e-17, -8.3023e-01, 0.0000e+00],
              [1.4097e-01, 9.8548e-01, -9.4651e-02, 0.0000e+00],
              [8.1817e-01, -1.6980e-01, -5.4933e-01, -2.0000e+00],
              [0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+00]]],
            device=device),
            width=256, height=256,
            fov=0.8232465982437134, dtype=dtype, device=device)
    elif data_idx == 2:
        camera = Camera.from_args(
            eye=torch.tensor([4.0, 4.0, 4.0]),
            at=torch.tensor([0.0, 0.0, 0.0]),
            up=torch.tensor([0.0, 1.0, 0.0]),
            fov=30 * np.pi / 180,  # In radians
            width=800, height=800,
            dtype=dtype,
            device=device
        )
    elif data_idx == 3:
        camera = Camera.from_args(
            eye=torch.tensor([[4.0, 4.0, 4.0], [4.0, 4.0, 4.0]]),
            at=torch.tensor([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
            up=torch.tensor([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]),
            fov=30 * np.pi / 180,  # In radians
            width=800, height=800,
            dtype=dtype,
            device=device
        )
    elif data_idx == 4:
        camera = Camera.from_args(
            eye=torch.tensor([[4.0, 4.0, 4.0], [4.0, 4.0, 4.0]]),
            at=torch.tensor([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
            up=torch.tensor([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]),
            fov=30 * np.pi / 180,  # In radians
            x0=12,
            y0=23,
            width=800, height=800,
            dtype=dtype,
            device=device
        )
    return dict(camera=camera)


class TestPinhole:

    def test_project(self, camera_data):
        cam = camera_data['camera']
        # 2 input types supported by cameras
        vertices_b_3 = torch.rand((5, 3), device=cam.device, dtype=cam.dtype)
        vertices_c_b_3 = vertices_b_3.unsqueeze(0).expand(len(cam), 5, 3)
        project_result_b_4 = cam.project(cam.extrinsics.transform(vertices_b_3))
        project_result_c_b_4 = cam.project(cam.extrinsics.transform(vertices_c_b_3))

        # project() should give the same result regardless of input shape
        if len(cam) > 1:
            # for multiple cameras in batch, shape output always broadcasts to (C, B, 3)
            assert torch.allclose(project_result_c_b_4, project_result_b_4)
        else:
            # for single camera in batch, shape output depends on input (C, B, 3) or (B, 3)
            assert torch.allclose(project_result_c_b_4, project_result_b_4[None])

        # validate transform through direct view_projection_matrix:
        vertices_b_4 = kaolin.render.camera.intrinsics.up_to_homogeneous(vertices_b_3)
        view_projection = cam.view_projection_matrix()
        per_cam_mat_result = []
        # carefully perform test: for batched cameras multiply vectors per single camera matrix
        for cam_idx in range(len(cam)):
            mat_result = view_projection[cam_idx] @ vertices_b_4[:, :, None]  # 4x4 mat @ Bx4x1 vec = Bx4x1 vec
            mat_result = mat_result.squeeze(-1)
            per_cam_mat_result.append(mat_result)
        if len(cam) == 1:  # Single camera, result should be of shape (B, 3)
            per_cam_mat_result = per_cam_mat_result[0]
        else:
            per_cam_mat_result = torch.stack(per_cam_mat_result)

        # check that transform and matrix multiplication yield the same result
        assert torch.allclose(per_cam_mat_result, project_result_b_4, rtol=1e-3, atol=1e-3)

        # intrinsics also accept (B, 4) and (C, B, 4) shapes, validate as well against (B, 3) and (C, B, 3)
        ext_transformed_vertices_b_3 = cam.extrinsics.transform(vertices_b_3)
        ext_transformed_vertices_b_4 = kaolin.render.camera.intrinsics.up_to_homogeneous(
            ext_transformed_vertices_b_3)
        ext_transformed_vertices_c_b_3 = cam.extrinsics.transform(vertices_c_b_3)
        ext_transformed_vertices_c_b_4 = kaolin.render.camera.intrinsics.up_to_homogeneous(
            ext_transformed_vertices_c_b_3)
        int_projected_b_3 = cam.intrinsics.project(ext_transformed_vertices_b_3)
        int_projected_b_4 = cam.intrinsics.project(ext_transformed_vertices_b_4)
        int_projected_c_b_3 = cam.intrinsics.project(ext_transformed_vertices_c_b_3)
        int_projected_c_b_4 = cam.intrinsics.project(ext_transformed_vertices_c_b_4)
        assert torch.allclose(int_projected_b_3, int_projected_b_4)
        assert torch.allclose(int_projected_b_3, int_projected_c_b_3)
        assert torch.allclose(int_projected_b_3, int_projected_c_b_4)

    def test_get_principal_point_properties(self, camera_data):
        cam = camera_data['camera']
        half_w, half_h = cam.width / 2, cam.height / 2
        assert (cam.cx == (half_w + cam.x0)).all()
        assert (cam.cy == (half_h + cam.y0)).all()

    def test_set_principal_point_properties(self, camera_data):
        cam = camera_data['camera']
        half_w, half_h = cam.width / 2, cam.height / 2
        cam.x0 += 67.0
        cam.y0 -= 45.0
        assert (cam.cx == (half_w + cam.x0)).all()
        assert (cam.cy == (half_h + cam.y0)).all()

    def test_set_width(self, camera_data):
        cam = camera_data['camera']

        focal_x = copy.deepcopy(cam.focal_x)
        focal_y = copy.deepcopy(cam.focal_y)
        fov_x = copy.deepcopy(cam.fov_x)
        fov_y = copy.deepcopy(cam.fov_y)
        width = cam.width
        height = cam.height
        cam.width *= 0.5
        assert (torch.allclose(cam.fov_x, fov_x, rtol=1e-3))
        assert (torch.allclose(cam.fov_y, fov_y, rtol=1e-3))
        assert (torch.allclose(cam.focal_x, (focal_x / 2), rtol=1e-3))
        assert (torch.allclose(cam.focal_y, focal_y, rtol=1e-5))
        assert (cam.width == (width * 0.5))
        assert (cam.height == height)

    def test_set_height(self, camera_data):
        cam = camera_data['camera']

        focal_x = copy.deepcopy(cam.focal_x)
        focal_y = copy.deepcopy(cam.focal_y)
        fov_x = copy.deepcopy(cam.fov_x)
        fov_y = copy.deepcopy(cam.fov_y)
        width = cam.width
        height = cam.height
        cam.height *= 0.5
        assert (torch.allclose(cam.fov_x, fov_x, rtol=1e-3))
        assert (torch.allclose(cam.fov_y, fov_y, rtol=1e-3))
        assert (torch.allclose(cam.focal_x, focal_x, rtol=1e-5))
        assert (torch.allclose(cam.focal_y, (focal_y / 2), rtol=1e-3))
        assert (cam.width == width)
        assert (cam.height == (height * 0.5))
370
tests/python/kaolin/render/lighting/test_sg.py
Normal file
@@ -0,0 +1,370 @@
# Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import math
import pytest
import numpy as np
import torch

from PIL import Image

import kaolin as kal

ROOT_DIR = os.path.join(
    os.path.dirname(os.path.abspath(__file__)),
    os.pardir, os.pardir, os.pardir, os.pardir, 'samples'
)

def _naive_sg_inner_product(intensity, direction, sharpness,
                            other_intensity, other_direction, other_sharpness):
    dm = math.sqrt(sum([(sharpness * direction[i] + other_sharpness * other_direction[i]) ** 2
                        for i in range(3)]))
    lm = sharpness + other_sharpness
    mul = math.exp(dm - lm)
    expo = [mul * intensity[i] * other_intensity[i] for i in range(3)]
    other = 1. - math.exp(-2. * dm)
    return [2. * math.pi * expo[i] * other / dm for i in range(3)]

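# Editorial note: the naive reference above matches the standard closed-form
# identity for the spherical integral of a product of two spherical Gaussians:
#   with d_m = ||lambda_1 * mu_1 + lambda_2 * mu_2|| and lambda_m = lambda_1 + lambda_2,
#   integral = 2 * pi * a_1 * a_2 * exp(d_m - lambda_m) * (1 - exp(-2 * d_m)) / d_m
# computed independently per color channel.
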
@pytest.mark.parametrize("device", ["cpu", "cuda"])
|
||||
@pytest.mark.parametrize("dtype", [torch.float, torch.double])
|
||||
@pytest.mark.parametrize("num_sg", [1])
|
||||
@pytest.mark.parametrize("num_other", [1])
|
||||
class TestUnbatchedSgInnerProduct:
|
||||
@pytest.fixture(autouse=True)
|
||||
def intensity(self, num_sg, device, dtype):
|
||||
return torch.rand((num_sg, 3), device=device, dtype=dtype)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def direction(self, num_sg, device, dtype):
|
||||
return torch.rand((num_sg, 3), device=device, dtype=dtype)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def sharpness(self, num_sg, device, dtype):
|
||||
return torch.rand((num_sg), device=device, dtype=dtype)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def other_intensity(self, num_other, device, dtype):
|
||||
return torch.rand((num_other, 3), device=device, dtype=dtype)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def other_direction(self, num_other, device, dtype):
|
||||
return torch.rand((num_other, 3), device=device, dtype=dtype)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def other_sharpness(self, num_other, device, dtype):
|
||||
return torch.rand((num_other), device=device, dtype=dtype)
|
||||
|
||||
def test_forward(self, intensity, direction, sharpness,
|
||||
other_intensity, other_direction, other_sharpness):
|
||||
with torch.no_grad():
|
||||
expected_output = []
|
||||
for i in range(other_intensity.shape[0]):
|
||||
expected_output.append([])
|
||||
for j in range(intensity.shape[0]):
|
||||
expected_output[-1].append(_naive_sg_inner_product(
|
||||
intensity[j], direction[j], sharpness[j],
|
||||
other_intensity[i], other_direction[i], other_sharpness[i]
|
||||
))
|
||||
expected_output = torch.tensor(expected_output,
|
||||
device=intensity.device,
|
||||
dtype=intensity.dtype)
|
||||
|
||||
output = kal.render.lighting.sg.unbatched_sg_inner_product(
|
||||
intensity, direction, sharpness,
|
||||
other_intensity, other_direction, other_sharpness)
|
||||
assert torch.allclose(output, expected_output, rtol=1e-4, atol=1e-4)
|
||||
|
||||
|
||||
@pytest.mark.parametrize("device", ["cuda"])
|
||||
@pytest.mark.parametrize("dtype", [torch.float])
|
||||
@pytest.mark.parametrize("num_sg", [1, 17, 32, 511, 10000])
|
||||
@pytest.mark.parametrize("num_other", [1, 17, 32, 511])
|
||||
class TestUnbatchedReducedSgInnerProduct:
|
||||
@pytest.fixture(autouse=True)
|
||||
def intensity(self, num_sg, device, dtype):
|
||||
return torch.rand((num_sg, 3), device=device, dtype=dtype,
|
||||
requires_grad=True)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def direction(self, num_sg, device, dtype):
|
||||
return torch.rand((num_sg, 3), device=device, dtype=dtype,
|
||||
requires_grad=True)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def sharpness(self, num_sg, device, dtype):
|
||||
return torch.rand((num_sg), device=device, dtype=dtype,
|
||||
requires_grad=True)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def other_intensity(self, num_other, device, dtype):
|
||||
return torch.rand((num_other, 3), device=device, dtype=dtype,
|
||||
requires_grad=True)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def other_direction(self, num_other, device, dtype):
|
||||
return torch.rand((num_other, 3), device=device, dtype=dtype,
|
||||
requires_grad=True)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def other_sharpness(self, num_other, device, dtype):
|
||||
return torch.rand((num_other), device=device, dtype=dtype,
|
||||
requires_grad=True)
|
||||
|
||||
def test_forward(self, intensity, direction, sharpness,
|
||||
other_intensity, other_direction, other_sharpness):
|
||||
with torch.no_grad():
|
||||
expected_output = kal.render.lighting.sg.unbatched_sg_inner_product(
|
||||
intensity, direction, sharpness,
|
||||
other_intensity, other_direction, other_sharpness).sum(1)
|
||||
output = kal.render.lighting.sg.unbatched_reduced_sg_inner_product(
|
||||
intensity, direction, sharpness,
|
||||
other_intensity, other_direction, other_sharpness)
|
||||
assert torch.allclose(output, expected_output, rtol=1e-4, atol=1e-4)
|
||||
|
||||
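
    # Editorial reading: unbatched_reduced_sg_inner_product is expected to equal
    # the pairwise unbatched product summed over its second axis (the .sum(1)
    # above), i.e. it fuses the inner products with their reduction.
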
    def test_backward(self, intensity, direction, sharpness,
                      other_intensity, other_direction, other_sharpness):
        gt_intensity = intensity.detach()
        gt_intensity.requires_grad = True
        gt_direction = direction.detach()
        gt_direction.requires_grad = True
        gt_sharpness = sharpness.detach()
        gt_sharpness.requires_grad = True
        gt_other_intensity = other_intensity.detach()
        gt_other_intensity.requires_grad = True
        gt_other_direction = other_direction.detach()
        gt_other_direction.requires_grad = True
        gt_other_sharpness = other_sharpness.detach()
        gt_other_sharpness.requires_grad = True
        gt_output = kal.render.lighting.sg.unbatched_sg_inner_product(
            gt_intensity, gt_direction, gt_sharpness,
            gt_other_intensity, gt_other_direction, gt_other_sharpness).sum(1)
        output = kal.render.lighting.sg.unbatched_reduced_sg_inner_product(
            intensity, direction, sharpness,
            other_intensity, other_direction, other_sharpness)
        grad_out = torch.rand_like(gt_output)
        gt_output.backward(grad_out)
        output.backward(grad_out)
        assert torch.allclose(intensity.grad, gt_intensity.grad,
                              rtol=1e-4, atol=1e-4)
        assert torch.allclose(direction.grad, gt_direction.grad,
                              rtol=1e-4, atol=1e-4)
        assert torch.allclose(sharpness.grad, gt_sharpness.grad,
                              rtol=1e-4, atol=1e-4)
        assert torch.allclose(other_intensity.grad, gt_other_intensity.grad,
                              rtol=1e-4, atol=1e-4)
        assert torch.allclose(other_direction.grad, gt_other_direction.grad,
                              rtol=1e-4, atol=1e-4)
        assert torch.allclose(other_sharpness.grad, gt_other_sharpness.grad,
                              rtol=1e-4, atol=1e-4)

def _generate_pinhole_rays_dir(camera, device='cuda'):
    """Ray direction generation function for pinhole cameras.

    This function assumes that the principal point (the pinhole location) is specified by a
    displacement (camera.x0, camera.y0) in pixel coordinates from the center of the image.
    The Kaolin camera class does not enforce a coordinate space for how the principal point is specified,
    so users will need to make sure that the correct principal point conventions are followed for
    the cameras passed into this function.

    Args:
        camera (kaolin.render.camera.Camera): The camera instance, should be of batch size 1.
        device (str): The device on which the rays are generated.

    Returns:
        (torch.Tensor): the ray directions, of shape (height, width, 3)
    """
    # Generate centered grid
    pixel_y, pixel_x = torch.meshgrid(
        torch.arange(camera.height, device=device),
        torch.arange(camera.width, device=device),
    )
    pixel_x = pixel_x + 0.5  # add bias to pixel center
    pixel_y = pixel_y + 0.5  # add bias to pixel center

    # Account for the principal point (offsets from the center)
    pixel_x = pixel_x - camera.x0
    pixel_y = pixel_y + camera.y0

    # Convert to NDC: pixel values are mapped into the range [-1, 1],
    # both tensors are of shape (res_y, res_x)
    pixel_x = 2 * (pixel_x / camera.width) - 1.0
    pixel_y = 2 * (pixel_y / camera.height) - 1.0

    ray_dir = torch.stack((pixel_x * camera.tan_half_fov(kal.render.camera.intrinsics.CameraFOV.HORIZONTAL),
                           -pixel_y * camera.tan_half_fov(kal.render.camera.intrinsics.CameraFOV.VERTICAL),
                           -torch.ones_like(pixel_x)), dim=-1)

    ray_dir = ray_dir.reshape(-1, 3)  # Flatten grid rays to 1D array
    ray_orig = torch.zeros_like(ray_dir)

    # Transform from camera to world coordinates
    ray_orig, ray_dir = camera.extrinsics.inv_transform_rays(ray_orig, ray_dir)
    ray_dir /= torch.linalg.norm(ray_dir, dim=-1, keepdim=True)

    return ray_dir[0].reshape(camera.height, camera.width, 3)

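# Example usage (editorial sketch, mirroring the fixture below):
#   cams = kal.render.camera.Camera.from_args(
#       eye=torch.tensor([[0., 0., 1.]], device='cuda'),
#       at=torch.tensor([[0., 0., 0.]], device='cuda'),
#       up=torch.tensor([[0., 1., 0.]], device='cuda'),
#       fov=70. * 2. * math.pi / 360, width=256, height=256, device='cuda')
#   rays_d = _generate_pinhole_rays_dir(cams[0])  # (256, 256, 3) unit directions
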
@pytest.mark.parametrize('scene_idx,azimuth,elevation,amplitude,sharpness', [
    (0, torch.tensor([0., math.pi / 2.], device='cuda'), torch.tensor([0., 0.], device='cuda'),
     torch.tensor([[5., 2., 2.], [5., 10., 5.]], device='cuda'), torch.tensor([6., 20.], device='cuda')),
    (1, torch.tensor([0., 0.], device='cuda'), torch.tensor([-math.pi / 2., math.pi / 2.], device='cuda'),
     torch.tensor([[3., 3., 7.], [8., 8., 1.]], device='cuda'), torch.tensor([5., 40.], device='cuda'))
])
class TestRenderLighting:

    @pytest.fixture(autouse=True, scope='class')
    def rasterization_output(self):
        MODEL_PATH = os.path.join(ROOT_DIR, 'colored_sphere.obj')
        obj = kal.io.obj.import_mesh(MODEL_PATH, with_materials=True, with_normals=True)

        vertices = obj.vertices.cuda().unsqueeze(0)
        # Normalize vertices into the [-0.5, 0.5] range
        vertices_max = vertices.max(dim=1, keepdim=True)[0]
        vertices_min = vertices.min(dim=1, keepdim=True)[0]
        vertices = ((vertices - vertices_min) / (vertices_max - vertices_min)) - 0.5

        faces = obj.faces.cuda()
        num_faces = faces.shape[0]
        num_vertices = vertices.shape[1]
        face_vertices = kal.ops.mesh.index_vertices_by_faces(vertices, faces)
        # Face normals w.r.t. the world coordinate system
        face_normals_idx = obj.face_normals_idx.cuda()
        normals = obj.normals.cuda().unsqueeze(0)
        face_world_normals = kal.ops.mesh.index_vertices_by_faces(normals, face_normals_idx)

        face_uvs_idx = obj.face_uvs_idx.cuda()
        uvs = obj.uvs.cuda().unsqueeze(0)
        face_uvs = kal.ops.mesh.index_vertices_by_faces(uvs, face_uvs_idx)
        # Take the diffuse texture map component from the materials
        diffuse_texture = obj.materials[0]['map_Kd'].cuda().float().permute(2, 0, 1).unsqueeze(0) / 255.
        cam_pos = torch.tensor([
            [0., 0., 1.],
            [0., -0.3, 0.9],
            [0., -1., 1.],
            [0., -0.999, 0.111],
            [0., 0.999, 0.111],
            [0.5, 0., 0.5]
        ], device='cuda')
        nb_views = cam_pos.shape[0]
        cam_pos = cam_pos / cam_pos.norm(dim=-1, keepdim=True)
        cams = kal.render.camera.Camera.from_args(
            eye=cam_pos,
            at=torch.tensor([[0., 0., 0.]], device='cuda').repeat(nb_views, 1),
            up=torch.tensor([[0., 1., 0.]], device='cuda').repeat(nb_views, 1),
            fov=70. * 2. * math.pi / 360,
            width=256, height=256, device='cuda'
        )
        vertices_camera = cams.extrinsics.transform(vertices)
        vertices_ndc = cams.intrinsics.transform(vertices_camera)
        face_vertices_camera = kal.ops.mesh.index_vertices_by_faces(vertices_camera, faces)
        face_vertices_image = kal.ops.mesh.index_vertices_by_faces(vertices_ndc[..., :2], faces)
        face_vertices_z = face_vertices_camera[..., -1]

        # Compute the rays
        rays_d = []
        for cam in cams:
            rays_d.append(_generate_pinhole_rays_dir(cam))
        # Rays must point toward the camera
        rays_d = -torch.stack(rays_d, dim=0)
        imsize = 256
        face_vertices = kal.ops.mesh.index_vertices_by_faces(vertices, faces)
        im_features, face_idx = kal.render.mesh.rasterize(
            imsize, imsize, face_vertices_camera[..., -1], face_vertices_image,
            [face_uvs.repeat(nb_views, 1, 1, 1), face_world_normals.repeat(nb_views, 1, 1, 1)]
        )
        hard_mask = face_idx != -1
        uv_map = im_features[0]
        im_world_normal = im_features[1] / torch.sqrt(torch.sum(im_features[1] * im_features[1], dim=-1, keepdim=True))
        albedo = kal.render.mesh.texture_mapping(uv_map, diffuse_texture.repeat(nb_views, 1, 1, 1))
        albedo = torch.clamp(albedo * hard_mask.unsqueeze(-1), min=0., max=1.)
        return {
            'albedo': albedo,
            'im_world_normal': im_world_normal,
            'hard_mask': hard_mask,
            'roughness': hard_mask * 0.1,
            'rays_d': rays_d
        }

    @pytest.fixture(autouse=True, scope='class')
    def albedo(self, rasterization_output):
        return rasterization_output['albedo']

    @pytest.fixture(autouse=True, scope='class')
    def im_world_normal(self, rasterization_output):
        return rasterization_output['im_world_normal']

    @pytest.fixture(autouse=True, scope='class')
    def hard_mask(self, rasterization_output):
        return rasterization_output['hard_mask']

    @pytest.fixture(autouse=True, scope='class')
    def roughness(self, rasterization_output):
        return rasterization_output['roughness']

    @pytest.fixture(autouse=True, scope='class')
    def rays_d(self, rasterization_output):
        return rasterization_output['rays_d']

    def test_diffuse_inner_product(self, scene_idx, azimuth, elevation, amplitude, sharpness,
                                   albedo, im_world_normal, hard_mask):
        directions = torch.stack(kal.ops.coords.spherical2cartesian(azimuth, elevation), dim=-1).cuda()
        img = torch.zeros_like(im_world_normal)
        lighting_effect = kal.render.lighting.sg_diffuse_inner_product(
            amplitude, directions, sharpness,
            im_world_normal[hard_mask], albedo[hard_mask]
        )
        img[hard_mask] = lighting_effect

        gt = torch.stack([
            torch.from_numpy(np.array(Image.open(os.path.join(ROOT_DIR, 'render', 'sg', f'diffuse_inner_product_{scene_idx}_{j}.png'))))
            for j in range(6)
        ], dim=0).cuda().float() / 255.

        assert torch.allclose(torch.clamp(img, 0., 1.), gt, rtol=0., atol=1. / 255.)
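
    # The golden images above and below are stored as 8-bit PNGs, so atol=1/255
    # tolerates exactly one quantization level of difference per channel
    # (editorial note).
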
    def test_diffuse_fitted(self, scene_idx, azimuth, elevation, amplitude, sharpness,
                            albedo, im_world_normal, hard_mask):
        directions = torch.stack(kal.ops.coords.spherical2cartesian(azimuth, elevation), dim=-1).cuda()
        img = torch.zeros_like(im_world_normal)
        lighting_effect = kal.render.lighting.sg_diffuse_fitted(
            amplitude, directions, sharpness,
            im_world_normal[hard_mask], albedo[hard_mask]
        )
        img[hard_mask] = lighting_effect

        gt = torch.stack([
            torch.from_numpy(np.array(Image.open(os.path.join(ROOT_DIR, 'render', 'sg', f'diffuse_fitted_{scene_idx}_{j}.png'))))
            for j in range(6)
        ], dim=0).cuda().float() / 255.

        assert torch.allclose(torch.clamp(img, 0., 1.), gt, rtol=0., atol=1. / 255.)

    def test_specular(self, scene_idx, azimuth, elevation, amplitude, sharpness,
                      albedo, im_world_normal, roughness, rays_d, hard_mask):
        directions = torch.stack(kal.ops.coords.spherical2cartesian(azimuth, elevation), dim=-1).cuda()
        img = torch.zeros_like(im_world_normal)
        lighting_effect = kal.render.lighting.sg_warp_specular_term(
            amplitude, directions, sharpness,
            im_world_normal[hard_mask], roughness[hard_mask], rays_d[hard_mask], albedo[hard_mask]
        )
        img[hard_mask] = lighting_effect

        gt = torch.stack([
            torch.from_numpy(np.array(Image.open(os.path.join(ROOT_DIR, 'render', 'sg', f'specular_{scene_idx}_{j}.png'))))
            for j in range(6)
        ], dim=0).cuda().float() / 255.

        assert torch.allclose(torch.clamp(img, 0., 1.), gt, rtol=0., atol=1. / 255.)
226
tests/python/kaolin/render/lighting/test_sh.py
Normal file
@@ -0,0 +1,226 @@
# Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import math
import pytest
import numpy as np
import torch

from PIL import Image

import kaolin as kal

ROOT_DIR = os.path.join(
    os.path.dirname(os.path.abspath(__file__)),
    os.pardir, os.pardir, os.pardir, os.pardir, 'samples'
)

@pytest.mark.parametrize('scene_idx,azimuth,elevation', [
    (0, torch.tensor([0.], device='cuda'), torch.tensor([0.], device='cuda')),
    (1, torch.tensor([math.pi / 4.], device='cuda'), torch.tensor([math.pi / 2.], device='cuda'))
])
class TestRenderLighting:

    @pytest.fixture(autouse=True, scope='class')
    def rasterization_output(self):
        MODEL_PATH = os.path.join(ROOT_DIR, 'colored_sphere.obj')
        obj = kal.io.obj.import_mesh(MODEL_PATH, with_materials=True, with_normals=True)

        vertices = obj.vertices.cuda().unsqueeze(0)
        # Normalize vertices into the [-0.5, 0.5] range
        vertices_max = vertices.max(dim=1, keepdim=True)[0]
        vertices_min = vertices.min(dim=1, keepdim=True)[0]
        vertices = ((vertices - vertices_min) / (vertices_max - vertices_min)) - 0.5

        faces = obj.faces.cuda()
        num_faces = faces.shape[0]
        num_vertices = vertices.shape[1]
        face_vertices = kal.ops.mesh.index_vertices_by_faces(vertices, faces)
        # Face normals w.r.t. the world coordinate system
        normals = obj.normals.cuda().unsqueeze(0)
        face_normals_idx = obj.face_normals_idx.cuda()
        face_world_normals = kal.ops.mesh.index_vertices_by_faces(normals, face_normals_idx)

        face_uvs_idx = obj.face_uvs_idx.cuda()
        uvs = obj.uvs.cuda().unsqueeze(0)
        face_uvs = kal.ops.mesh.index_vertices_by_faces(uvs, face_uvs_idx)
        # Take the diffuse texture map component from the materials
        diffuse_texture = obj.materials[0]['map_Kd'].cuda().float().permute(2, 0, 1).unsqueeze(0) / 255.
        cam_pos = torch.tensor([
            [0., 0., 1.],
            [0., -0.3, 0.9],
            [0., -1., 1.],
            [0., -0.999, 0.111],
            [0., 0.999, 0.111],
            [0.5, 0., 0.5]
        ], device='cuda')
        nb_views = cam_pos.shape[0]
        cam_pos = cam_pos / cam_pos.norm(dim=-1, keepdim=True)
        cams = kal.render.camera.Camera.from_args(
            eye=cam_pos,
            at=torch.tensor([[0., 0., 0.]], device='cuda').repeat(nb_views, 1),
            up=torch.tensor([[0., 1., 0.]], device='cuda').repeat(nb_views, 1),
            fov=70. * 2. * math.pi / 360,
            width=256, height=256, device='cuda'
        )
        vertices_camera = cams.extrinsics.transform(vertices)
        vertices_ndc = cams.intrinsics.transform(vertices_camera)
        face_vertices_camera = kal.ops.mesh.index_vertices_by_faces(vertices_camera, faces)
        face_vertices_image = kal.ops.mesh.index_vertices_by_faces(vertices_ndc[..., :2], faces)
        face_vertices_z = face_vertices_camera[..., -1]

        imsize = 256
        face_vertices = kal.ops.mesh.index_vertices_by_faces(vertices, faces)
        im_features, face_idx = kal.render.mesh.rasterize(
            imsize, imsize, face_vertices_camera[..., -1], face_vertices_image,
            [face_uvs.repeat(nb_views, 1, 1, 1), face_world_normals.repeat(nb_views, 1, 1, 1)]
        )
        hard_mask = face_idx != -1
        uv_map = im_features[0]
        im_world_normal = im_features[1] / torch.sqrt(torch.sum(im_features[1] * im_features[1], dim=-1, keepdim=True))
        albedo = kal.render.mesh.texture_mapping(uv_map, diffuse_texture.repeat(nb_views, 1, 1, 1))
        albedo = torch.clamp(albedo * hard_mask.unsqueeze(-1), min=0., max=1.)
        return {
            'albedo': albedo,
            'im_world_normal': im_world_normal,
            'hard_mask': hard_mask,
        }

    @pytest.fixture(autouse=True, scope='class')
    def albedo(self, rasterization_output):
        return rasterization_output['albedo']

    @pytest.fixture(autouse=True, scope='class')
    def im_world_normal(self, rasterization_output):
        return rasterization_output['im_world_normal']

    @pytest.fixture(autouse=True, scope='class')
    def hard_mask(self, rasterization_output):
        return rasterization_output['hard_mask']

    def test_diffuse_sh(self, scene_idx, azimuth, elevation,
                        albedo, im_world_normal, hard_mask):
        directions = torch.cat(kal.ops.coords.spherical2cartesian(azimuth, elevation), dim=-1).cuda()
        img = torch.zeros_like(im_world_normal)
        lighting_effect = kal.render.lighting.sh9_diffuse(
            directions, im_world_normal[hard_mask], albedo[hard_mask]
        )
        img[hard_mask] = lighting_effect

        gt = torch.stack([
            torch.from_numpy(np.array(Image.open(os.path.join(
                ROOT_DIR, 'render', 'sh', f'diffuse_{scene_idx}_{j}.png'
            )))) for j in range(6)
        ], dim=0).cuda().float() / 255.

        assert torch.allclose(torch.clamp(img, 0., 1.), gt, rtol=0., atol=1. / 255.)

@pytest.mark.parametrize('shape', [(1025,)])
class TestSh9:
    # These are simply regression tests
    @classmethod
    def _naive_project_onto_sh9(cls, directions):
        if isinstance(directions, torch.Tensor):
            assert directions.shape[-1] == 3
            x, y, z = torch.split(directions, 1, dim=-1)
            band0 = torch.full_like(x, 0.28209479177)
        elif isinstance(directions, list):
            assert len(directions) == 3
            x, y, z = directions
            band0 = 0.28209479177
        else:
            raise TypeError(f"directions is a {type(directions)}, "
                            "must be a list or a torch.Tensor")
        # Band 1
        band1_m1 = -0.4886025119 * y
        band1_0 = 0.4886025119 * z
        band1_p1 = -0.4886025119 * x

        # Band 2
        band2_m2 = 1.0925484305920792 * (x * y)
        band2_m1 = -1.0925484305920792 * (y * z)
        band2_0 = 0.94617469575 * (z * z) - 0.31539156525
        band2_p1 = -1.0925484305920792 * x * z
        band2_p2 = 0.5462742152960396 * (x * x - y * y)

        if isinstance(directions, torch.Tensor):
            return torch.cat([
                band0,
                band1_m1, band1_0, band1_p1,
                band2_m2, band2_m1, band2_0, band2_p1, band2_p2
            ], dim=-1)
        else:
            return torch.tensor([
                band0,
                band1_m1, band1_0, band1_p1,
                band2_m2, band2_m1, band2_0, band2_p1, band2_p2
            ])
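
    # Editorial note: the literals above are the standard real spherical-harmonics
    # basis constants: 0.28209479177 = sqrt(1 / (4 * pi)) for band 0,
    # 0.4886025119 = sqrt(3 / (4 * pi)) for band 1, and the band-2 factors are
    # sqrt(15 / (4 * pi)), 3 * sqrt(5 / (16 * pi)) and sqrt(15 / (16 * pi)).
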
    @classmethod
    def _naive_sh9_irradiance(cls, lights, normals):
        is_batched = lights.ndim == 3
        assert lights.shape[-1] == 9
        bands = cls._naive_project_onto_sh9(normals)
        if is_batched:
            assert lights.shape[0] == normals.shape[0]
            num_scenes = lights.shape[0]
            bands = bands.reshape(num_scenes, -1, 9)
        else:
            bands = bands.reshape(-1, 9)

        bands[..., 0] *= math.pi
        bands[..., 1:4] *= 2. * math.pi / 3.
        bands[..., 4:] *= math.pi / 4.

        return torch.sum(bands * lights.unsqueeze(-2), dim=-1).reshape(*normals.shape[:-1])
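
    # The per-band scale factors pi, 2 * pi / 3 and pi / 4 are the clamped-cosine
    # convolution coefficients from Ramamoorthi & Hanrahan's irradiance
    # environment map formulation (editorial note).
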
    @pytest.fixture(autouse=True)
    def point_directions(self, shape):
        directions = torch.rand((*shape, 3), device='cuda') - 0.5
        # Normalize to unit length
        directions /= (directions ** 2).sum(-1, keepdim=True).sqrt()
        return directions

    @pytest.fixture(autouse=True)
    def light_directions(self):
        directions = torch.rand((3), device='cuda') - 0.5
        # Normalize to unit length
        directions /= (directions ** 2).sum(-1, keepdim=True).sqrt()
        return directions

    @pytest.fixture(autouse=True)
    def albedo(self, shape):
        return torch.rand((*shape, 3), device='cuda')

    @pytest.fixture(autouse=True)
    def expected_lights_sh9(self, light_directions):
        return self._naive_project_onto_sh9(light_directions)

    def test_project_onto_sh9(self, light_directions, expected_lights_sh9):
        output = kal.render.lighting.project_onto_sh9(light_directions)
        assert torch.equal(expected_lights_sh9, output)

    @pytest.fixture(autouse=True)
    def expected_irradiance(self, expected_lights_sh9, point_directions):
        return self._naive_sh9_irradiance(expected_lights_sh9, point_directions)

    def test_sh9_irradiance(self, expected_lights_sh9, point_directions, expected_irradiance):
        output = kal.render.lighting.sh9_irradiance(expected_lights_sh9, point_directions)
        assert torch.equal(output, expected_irradiance)

    def test_sh9_diffuse(self, light_directions, point_directions, albedo, expected_irradiance):
        output = kal.render.lighting.sh9_diffuse(light_directions, point_directions, albedo)
        expected_diffuse = albedo * expected_irradiance.unsqueeze(-1)
        assert torch.equal(output, expected_diffuse)

551
tests/python/kaolin/render/mesh/test_deftet.py
Normal file
@@ -0,0 +1,551 @@
# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import random
import pytest

import numpy as np
import torch
import math
import os

from kaolin.render.camera import perspective_camera, rotate_translate_points
from kaolin.render.mesh import deftet_sparse_render
from kaolin.render.mesh.deftet import _naive_deftet_sparse_render
import kaolin as kal
from PIL import Image

ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
MODEL_DIR = os.path.join(ROOT_DIR, os.pardir, os.pardir, os.pardir, os.pardir, 'samples/')

@pytest.mark.parametrize("device", ["cuda"])
|
||||
@pytest.mark.parametrize("dtype", [torch.float, torch.double])
|
||||
class TestSimpleDeftetSparseRender:
|
||||
@pytest.fixture(autouse=True)
|
||||
def face_vertices_image(self, device, dtype):
|
||||
# Mesh 0:
|
||||
# three faces: (no intersection)
|
||||
# - two fully overlapped on left side (not same normal)
|
||||
# - one only overlapping two corners on right side
|
||||
|
||||
# Mesh 1:
|
||||
# three faces, fully overlapped (will have intersection)
|
||||
return torch.tensor(
|
||||
[[[[-1., 0. ], [0., -1.], [ 0., 1.]],
|
||||
[[-1., 0. ], [0., 1.], [ 0., -1.]],
|
||||
[[ 0., -1.], [0., 1.], [ 1., 0.]]],
|
||||
[[[-1., -1.], [1., -1.], [-1., 1.]],
|
||||
[[-1., -1.], [1., -1.], [-1., 1.]],
|
||||
[[-1., -1.], [1., -1.], [-1., 1.]]]],
|
||||
device=device, dtype=dtype)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def face_vertices_z(self, device, dtype):
|
||||
# Mesh 0:
|
||||
# The face on the right side is in-between
|
||||
# the two faces on the left side
|
||||
|
||||
# Mesh 1:
|
||||
# the three faces are intersecting
|
||||
return torch.tensor(
|
||||
[[[-2., -1., -1.],
|
||||
[-2.5, -3., -3.],
|
||||
[-2., -2., -2.]],
|
||||
[[-2., -1., -3.],
|
||||
[-2., -2., -2.],
|
||||
[-2., -3., -1.]]],
|
||||
device=device, dtype=dtype)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def face_features(self, device, dtype):
|
||||
features_per_face = torch.tensor(
|
||||
[[[[0.], [0.], [0.]],
|
||||
[[1.], [1.], [1.]],
|
||||
[[2.], [2.], [2.]]],
|
||||
[[[3.], [3.], [3.]],
|
||||
[[4.], [4.], [4.]],
|
||||
[[5.], [5.], [5.]]]],
|
||||
device=device, dtype=dtype)
|
||||
features_per_vertice = torch.tensor(
|
||||
[[[[0.], [1.], [2.]],
|
||||
[[3.], [4.], [5.]],
|
||||
[[6.], [7.], [8.]]],
|
||||
[[[9.], [10.], [11.]],
|
||||
[[12.], [13.], [14.]],
|
||||
[[15.], [16.], [17.]]]],
|
||||
device=device, dtype=dtype)
|
||||
return [features_per_face, features_per_vertice]
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def pixel_coords(self, device, dtype):
|
||||
# slightly shifting coords to stay away from corner cases
|
||||
return torch.tensor(
|
||||
[[[-0.999, 0.], [-0.001, -0.998], [0.001, 0.998], [0.999, 0.], # corners
|
||||
[-0.45, 0.], [0.45, 0.], # centers
|
||||
[-0.999, -0.999]], # void
|
||||
[[-0.998, -0.999], [0.998, -0.999], [-0.999, 0.998], # corners
|
||||
[-0.001, -0.], [0., -0.999], [-0.999, 0.], # center of edges
|
||||
[0.001, 0.001]]], # void
|
||||
device=device, dtype=dtype)
|
||||
|
||||
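
    # Editorial reading of the tests below: each pixel carries a (min, max) depth
    # render range in camera space, where visible geometry has negative z;
    # deftet_sparse_render gathers up to knum intersected faces per pixel inside
    # that slab, ordered from closest to farthest, padding with index -1.
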
    @pytest.mark.parametrize('cat_features', [False, True])
    @pytest.mark.parametrize('use_naive', [False, True])
    def test_full_render(self, pixel_coords, face_vertices_image, face_vertices_z,
                         face_features, device, dtype, use_naive, cat_features):
        render_ranges = torch.tensor([[[-4., 0.]]], device='cuda',
                                     dtype=dtype).repeat(2, 7, 1)
        if cat_features:
            face_features = torch.cat(face_features, dim=-1)
        if use_naive:
            interpolated_features, face_idx = _naive_deftet_sparse_render(
                pixel_coords, render_ranges, face_vertices_z,
                face_vertices_image, face_features, 5)
        else:
            interpolated_features, face_idx = deftet_sparse_render(
                pixel_coords, render_ranges, face_vertices_z,
                face_vertices_image, face_features, 5)
        gt_face_idx = torch.tensor(
            [[[0, 1, -1, -1, -1],
              [0, 1, -1, -1, -1],
              [2, -1, -1, -1, -1],
              [2, -1, -1, -1, -1],
              [0, 1, -1, -1, -1],
              [2, -1, -1, -1, -1],
              [-1, -1, -1, -1, -1]],
             [[0, 1, 2, -1, -1],
              [0, 1, 2, -1, -1],
              [2, 1, 0, -1, -1],
              [2, 1, 0, -1, -1],
              [0, 1, 2, -1, -1],
              [2, 1, 0, -1, -1],
              [-1, -1, -1, -1, -1]]],
            device=device, dtype=torch.long)
        assert torch.equal(face_idx, gt_face_idx)
        gt_interpolated_features0 = (
            gt_face_idx + torch.arange(2, device=device).view(2, 1, 1) * face_vertices_image.shape[1]
        ).to(dtype).unsqueeze(-1)
        gt_interpolated_features0[gt_face_idx == -1] = 0.
        gt_interpolated_features1 = torch.tensor(
            [[[[0.], [3.], [0.], [0.], [0.]],
              [[1.], [5.], [0.], [0.], [0.]],
              [[7.], [0.], [0.], [0.], [0.]],
              [[8.], [0.], [0.], [0.], [0.]],
              [[0.825], [3.825], [0.], [0.], [0.]],
              [[7.175], [0.], [0.], [0.], [0.]],
              [[0.], [0.], [0.], [0.], [0.]]],
             [[[9.], [12.], [15.], [0.], [0.]],
              [[10.], [13.], [16.], [0.], [0.]],
              [[17.], [14.], [11.], [0.], [0.]],
              [[16.5], [13.5], [10.5], [0.], [0.]],
              [[9.5], [12.5], [15.5], [0.], [0.]],
              [[16.], [13.], [10.], [0.], [0.]],
              [[0.], [0.], [0.], [0.], [0.]]]],
            device=device, dtype=dtype)

        if cat_features:
            gt_interpolated_features = torch.cat(
                [gt_interpolated_features0, gt_interpolated_features1], dim=-1)
            assert torch.allclose(interpolated_features, gt_interpolated_features,
                                  atol=3e-3, rtol=1e-5)
        else:
            assert torch.allclose(interpolated_features[0], gt_interpolated_features0)
            assert torch.allclose(interpolated_features[1], gt_interpolated_features1,
                                  atol=3e-3, rtol=1e-5)

    @pytest.mark.parametrize('cat_features', [False, True])
    @pytest.mark.parametrize('use_naive', [False, True])
    def test_restricted_range(self, pixel_coords, face_vertices_image, face_vertices_z,
                              face_features, device, dtype, use_naive, cat_features):
        render_ranges = torch.tensor([[[-2.1, 0.]]], device='cuda',
                                     dtype=dtype).repeat(2, 7, 1)
        if cat_features:
            face_features = torch.cat(face_features, dim=-1)
        if use_naive:
            interpolated_features, face_idx = _naive_deftet_sparse_render(
                pixel_coords, render_ranges, face_vertices_z,
                face_vertices_image, face_features, 5)
        else:
            interpolated_features, face_idx = deftet_sparse_render(
                pixel_coords, render_ranges, face_vertices_z,
                face_vertices_image, face_features, 5)

        gt_face_idx = torch.tensor(
            [[[0, -1, -1, -1, -1],
              [0, -1, -1, -1, -1],
              [2, -1, -1, -1, -1],
              [2, -1, -1, -1, -1],
              [0, -1, -1, -1, -1],
              [2, -1, -1, -1, -1],
              [-1, -1, -1, -1, -1]],
             [[0, 1, 2, -1, -1],
              [0, 1, -1, -1, -1],
              [2, 1, -1, -1, -1],
              [2, 1, 0, -1, -1],
              [0, 1, -1, -1, -1],
              [2, 1, -1, -1, -1],
              [-1, -1, -1, -1, -1]]],
            device=device, dtype=torch.long)
        assert torch.equal(face_idx, gt_face_idx)
        gt_interpolated_features0 = (
            gt_face_idx + torch.arange(2, device=device).view(2, 1, 1) * face_vertices_image.shape[1]
        ).to(dtype).unsqueeze(-1)
        gt_interpolated_features0[gt_face_idx == -1] = 0.
        gt_interpolated_features1 = torch.tensor(
            [[[[0.], [0.], [0.], [0.], [0.]],
              [[1.], [0.], [0.], [0.], [0.]],
              [[7.], [0.], [0.], [0.], [0.]],
              [[8.], [0.], [0.], [0.], [0.]],
              [[0.825], [0.], [0.], [0.], [0.]],
              [[7.175], [0.], [0.], [0.], [0.]],
              [[0.], [0.], [0.], [0.], [0.]]],
             [[[9.], [12.], [15.], [0.], [0.]],
              [[10.], [13.], [0.], [0.], [0.]],
              [[17.], [14.], [0.], [0.], [0.]],
              [[16.5], [13.5], [10.5], [0.], [0.]],
              [[9.5], [12.5], [0.], [0.], [0.]],
              [[16.], [13.], [0.], [0.], [0.]],
              [[0.], [0.], [0.], [0.], [0.]]]],
            device=device, dtype=dtype)

        if cat_features:
            gt_interpolated_features = torch.cat(
                [gt_interpolated_features0, gt_interpolated_features1], dim=-1)
            assert torch.allclose(interpolated_features, gt_interpolated_features,
                                  atol=3e-3, rtol=1e-5)
        else:
            assert torch.allclose(interpolated_features[0], gt_interpolated_features0)
            assert torch.allclose(interpolated_features[1], gt_interpolated_features1,
                                  atol=3e-3, rtol=1e-5)

    @pytest.mark.parametrize('cat_features', [False, True])
    def test_only_closest(self, pixel_coords, face_vertices_image, face_vertices_z,
                          face_features, device, dtype, cat_features):
        """Equivalent to rasterization"""
        render_ranges = torch.tensor([[[-4., 0.]]], device='cuda',
                                     dtype=dtype).repeat(2, 7, 1)
        if cat_features:
            face_features = torch.cat(face_features, dim=-1)
        interpolated_features, face_idx = _naive_deftet_sparse_render(
            pixel_coords, render_ranges, face_vertices_z,
            face_vertices_image, face_features, 1)
        gt_face_idx = torch.tensor(
            [[[0], [0], [2], [2], [0], [2], [-1]],
             [[0], [0], [2], [2], [0], [2], [-1]]],
            device=device, dtype=torch.long)
        assert torch.equal(face_idx, gt_face_idx)
        gt_interpolated_features0 = (
            gt_face_idx + torch.arange(2, device=device).view(2, 1, 1) * face_vertices_image.shape[1]
        ).to(dtype).unsqueeze(-1)
        gt_interpolated_features0[gt_face_idx == -1] = 0.
        gt_interpolated_features1 = torch.tensor(
            [[[[0.]], [[1.]], [[7.]], [[8.]], [[0.825]], [[7.175]], [[0.]]],
             [[[9.]], [[10.]], [[17.]], [[16.5]], [[9.5]], [[16.]], [[0.]]]],
            device=device, dtype=dtype)

        if cat_features:
            gt_interpolated_features = torch.cat(
                [gt_interpolated_features0, gt_interpolated_features1], dim=-1)
            assert torch.allclose(interpolated_features, gt_interpolated_features,
                                  atol=3e-3, rtol=1e-5)
        else:
            assert torch.allclose(interpolated_features[0], gt_interpolated_features0)
            assert torch.allclose(interpolated_features[1], gt_interpolated_features1,
                                  atol=3e-3, rtol=1e-5)

    @pytest.mark.parametrize('cat_features', [False, True])
    def test_with_valid_faces(self, pixel_coords, face_vertices_image, face_vertices_z,
                              face_features, device, dtype, cat_features):
        render_ranges = torch.tensor([[[-4., 0.]]], device='cuda',
                                     dtype=dtype).repeat(2, 7, 1)
        valid_faces = torch.tensor([[False, True, True], [True, False, True]], device='cuda',
                                   dtype=torch.bool)
        if cat_features:
            face_features = torch.cat(face_features, dim=-1)
        interpolated_features, face_idx = _naive_deftet_sparse_render(
            pixel_coords, render_ranges, face_vertices_z,
            face_vertices_image, face_features, 3, valid_faces=valid_faces)

        gt_face_idx = torch.tensor(
            [[[1, -1, -1],
              [1, -1, -1],
              [2, -1, -1],
              [2, -1, -1],
              [1, -1, -1],
              [2, -1, -1],
              [-1, -1, -1]],
             [[0, 2, -1],
              [0, 2, -1],
              [2, 0, -1],
              [2, 0, -1],
              [0, 2, -1],
              [2, 0, -1],
              [-1, -1, -1]]],
            device=device, dtype=torch.long)
        assert torch.equal(face_idx, gt_face_idx)

        gt_interpolated_features0 = (
            gt_face_idx + torch.arange(2, device=device).view(2, 1, 1) * face_vertices_image.shape[1]
        ).to(dtype).unsqueeze(-1)
        gt_interpolated_features0[gt_face_idx == -1] = 0.

        gt_interpolated_features1 = torch.tensor(
            [[[[3.], [0.], [0.]],
              [[5.], [0.], [0.]],
              [[7.], [0.], [0.]],
              [[8.], [0.], [0.]],
              [[3.825], [0.], [0.]],
              [[7.175], [0.], [0.]],
              [[0.], [0.], [0.]]],
             [[[9.], [15.], [0.]],
              [[10.], [16.], [0.]],
              [[17.], [11.], [0.]],
              [[16.5], [10.5], [0.]],
              [[9.5], [15.5], [0.]],
              [[16.], [10.], [0.]],
              [[0.], [0.], [0.]]]],
            device=device, dtype=dtype)

        if cat_features:
            gt_interpolated_features = torch.cat(
                [gt_interpolated_features0, gt_interpolated_features1], dim=-1)
            assert torch.allclose(interpolated_features, gt_interpolated_features,
                                  atol=3e-3, rtol=1e-5)
        else:
            assert torch.allclose(interpolated_features[0], gt_interpolated_features0)
            assert torch.allclose(interpolated_features[1], gt_interpolated_features1,
                                  atol=3e-3, rtol=1e-5)


@pytest.mark.parametrize("dtype", [torch.float, torch.double])
|
||||
@pytest.mark.parametrize("batch_size", [1, 3])
|
||||
@pytest.mark.parametrize("num_pixels", [1, 31, 1025])
|
||||
@pytest.mark.parametrize("render_up_to_center", [True, False])
|
||||
class TestDeftetSparseRender:
|
||||
@pytest.fixture(autouse=True)
|
||||
def mesh(self):
|
||||
mesh = kal.io.obj.import_mesh(os.path.join(MODEL_DIR, 'model.obj'),
|
||||
with_materials=True)
|
||||
return mesh
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def faces(self, mesh):
|
||||
return mesh.faces.cuda()
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def camera_pos(self, batch_size, dtype):
|
||||
return torch.tensor([[0.5, 0.5, 3.],
|
||||
[2., 2., -2.],
|
||||
[3., 0.5, 0.5]],
|
||||
device='cuda', dtype=dtype)[:batch_size]
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def look_at(self, batch_size, dtype):
|
||||
return torch.full((batch_size, 3), 0.5, device='cuda',
|
||||
dtype=dtype)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def camera_up(self, batch_size, dtype):
|
||||
return torch.tensor([[0., 1., 0.]], device='cuda',
|
||||
dtype=dtype).repeat(batch_size, 1)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def camera_proj(self, dtype):
|
||||
return kal.render.camera.generate_perspective_projection(
|
||||
fovyangle=math.pi / 4., dtype=dtype).cuda()
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def vertices_camera(self, mesh, camera_pos, look_at, camera_up, dtype):
|
||||
vertices = mesh.vertices.to('cuda', dtype).unsqueeze(0)
|
||||
min_vertices = vertices.min(dim=1, keepdims=True)[0]
|
||||
max_vertices = vertices.max(dim=1, keepdims=True)[0]
|
||||
vertices = (vertices - min_vertices) / (max_vertices - min_vertices)
|
||||
camera_rot, camera_trans = kal.render.camera.generate_rotate_translate_matrices(
|
||||
camera_pos, look_at, camera_up)
|
||||
return kal.render.camera.rotate_translate_points(
|
||||
vertices, camera_rot, camera_trans)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def vertices_image(self, vertices_camera, camera_proj):
|
||||
return kal.render.camera.perspective_camera(
|
||||
vertices_camera, camera_proj)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def face_vertices_z(self, vertices_camera, faces):
|
||||
return kal.ops.mesh.index_vertices_by_faces(
|
||||
vertices_camera[:, :, -1:], faces).squeeze(-1)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def face_vertices_image(self, vertices_image, faces):
|
||||
return kal.ops.mesh.index_vertices_by_faces(
|
||||
vertices_image, faces)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def texture_map(self, mesh, dtype):
|
||||
return mesh.materials[0]['map_Kd'].to('cuda', dtype).permute(
|
||||
2, 0, 1).unsqueeze(0) / 255.
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def face_uvs(self, mesh, batch_size, dtype):
|
||||
return kal.ops.mesh.index_vertices_by_faces(
|
||||
mesh.uvs.unsqueeze(0).to('cuda', dtype),
|
||||
mesh.face_uvs_idx.cuda()).repeat(batch_size, 1, 1, 1)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def pixel_coords(self, batch_size, num_pixels, dtype):
|
||||
return torch.rand((batch_size, num_pixels, 2), device='cuda',
|
||||
dtype=dtype) * 2. - 1.
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def render_ranges(self, vertices_camera, render_up_to_center, num_pixels):
|
||||
min_z = vertices_camera[:, :, -1].min(dim=1)[0]
|
||||
max_z = vertices_camera[:, :, -1].max(dim=1)[0]
|
||||
min_render_range = (min_z + max_z) / 2. if render_up_to_center else \
|
||||
min_z
|
||||
render_range = torch.nn.functional.pad(min_render_range.unsqueeze(-1), (0, 1),
|
||||
value=0.)
|
||||
return render_range.unsqueeze(1).repeat(1, num_pixels, 1)
|
||||
|
||||
    @pytest.mark.parametrize('knum', [20, 30])
    def test_forward(self, pixel_coords, render_ranges, face_vertices_z,
                     face_vertices_image, face_uvs, knum):
        interpolated_features, face_idx = deftet_sparse_render(
            pixel_coords, render_ranges, face_vertices_z,
            face_vertices_image, face_uvs, knum)
        gt_interpolated_features, gt_face_idx = _naive_deftet_sparse_render(
            pixel_coords, render_ranges, face_vertices_z,
            face_vertices_image, face_uvs, knum)
        assert torch.equal(face_idx, gt_face_idx)
        assert torch.allclose(interpolated_features,
                              gt_interpolated_features,
                              rtol=1e-4, atol=1e-4)

    @pytest.mark.parametrize('knum', [20, 30])
    def test_forward_with_mask(self, pixel_coords, render_ranges,
                               face_vertices_z, face_vertices_image,
                               face_uvs, knum):
        face_mask = torch.ones_like(face_uvs[:, :, :, :1])
        interpolated_features, face_idx = deftet_sparse_render(
            pixel_coords, render_ranges, face_vertices_z,
            face_vertices_image, [face_uvs, face_mask], knum)
        gt_interpolated_features, gt_face_idx = _naive_deftet_sparse_render(
            pixel_coords, render_ranges, face_vertices_z,
            face_vertices_image, [face_uvs, face_mask], knum)
        assert torch.equal(face_idx, gt_face_idx)
        assert torch.allclose(interpolated_features[0],
                              gt_interpolated_features[0],
                              rtol=1e-4, atol=1e-4)
        assert torch.allclose(interpolated_features[1],
                              gt_interpolated_features[1],
                              rtol=1e-4, atol=1e-4)

    @pytest.mark.parametrize('knum', [20, 30])
    def test_backward(self, pixel_coords, render_ranges, face_vertices_z,
                      face_vertices_image, face_uvs, knum):
        pixel_coords = pixel_coords.detach()
        pixel_coords.requires_grad = True
        render_ranges = render_ranges.detach()
        render_ranges.requires_grad = True
        face_vertices_z = face_vertices_z.detach()
        face_vertices_z.requires_grad = True
        face_vertices_image = face_vertices_image.detach()
        face_vertices_image.requires_grad = True
        face_uvs = face_uvs.detach()
        face_uvs.requires_grad = True
        pixel_coords2 = pixel_coords.detach()
        pixel_coords2.requires_grad = True
        render_ranges2 = render_ranges.detach()
        render_ranges2.requires_grad = True
        face_vertices_z2 = face_vertices_z.detach()
        face_vertices_z2.requires_grad = True
        face_vertices_image2 = face_vertices_image.detach()
        face_vertices_image2.requires_grad = True
        face_uvs2 = face_uvs.detach()
        face_uvs2.requires_grad = True

        interpolated_features, face_idx = deftet_sparse_render(
            pixel_coords, render_ranges, face_vertices_z,
            face_vertices_image, face_uvs, knum)
        gt_interpolated_features, gt_face_idx = _naive_deftet_sparse_render(
            pixel_coords2, render_ranges2, face_vertices_z2,
            face_vertices_image2, face_uvs2, knum)

        grad_out = torch.rand_like(interpolated_features)
        interpolated_features.backward(grad_out)
        gt_interpolated_features.backward(grad_out)

        assert pixel_coords.grad is None or torch.all(pixel_coords.grad == 0.)
        assert render_ranges.grad is None or torch.all(render_ranges.grad == 0.)
        assert face_vertices_z.grad is None or torch.all(face_vertices_z.grad == 0.)
        assert torch.allclose(face_vertices_image.grad,
                              face_vertices_image2.grad,
                              rtol=5e-3, atol=5e-3)
        assert torch.allclose(face_uvs.grad,
                              face_uvs2.grad,
                              rtol=1e-3, atol=1e-3)
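
    # Editorial note, consistent with the zero-gradient asserts above: only
    # face_vertices_image and the interpolated face features receive gradients;
    # pixel coordinates, render ranges and per-face depths only drive the
    # (non-differentiable) face selection.
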
    @pytest.mark.parametrize('knum', [20, 30])
    def test_backward_with_mask(self, pixel_coords, render_ranges,
                                face_vertices_z, face_vertices_image,
                                face_uvs, knum):
        pixel_coords = pixel_coords.detach()
        pixel_coords.requires_grad = True
        render_ranges = render_ranges.detach()
        render_ranges.requires_grad = True
        face_vertices_z = face_vertices_z.detach()
        face_vertices_z.requires_grad = True
        face_vertices_image = face_vertices_image.detach()
        face_vertices_image.requires_grad = True
        face_uvs = face_uvs.detach()
        face_uvs.requires_grad = True
        face_mask = torch.ones_like(face_uvs[:, :, :, -1:],
                                    requires_grad=True)
        pixel_coords2 = pixel_coords.detach()
        pixel_coords2.requires_grad = True
        render_ranges2 = render_ranges.detach()
        render_ranges2.requires_grad = True
        face_vertices_z2 = face_vertices_z.detach()
        face_vertices_z2.requires_grad = True
        face_vertices_image2 = face_vertices_image.detach()
        face_vertices_image2.requires_grad = True
        face_uvs2 = face_uvs.detach()
        face_uvs2.requires_grad = True
        face_mask2 = torch.ones_like(face_uvs2[:, :, :, -1:],
                                     requires_grad=True)

        interpolated_features, face_idx = deftet_sparse_render(
            pixel_coords, render_ranges, face_vertices_z,
            face_vertices_image, [face_uvs, face_mask], knum)
        gt_interpolated_features, gt_face_idx = _naive_deftet_sparse_render(
            pixel_coords2, render_ranges2, face_vertices_z2,
            face_vertices_image2, [face_uvs2, face_mask2], knum)

        interpolated_features = torch.cat(interpolated_features, dim=-1)
        gt_interpolated_features = torch.cat(gt_interpolated_features, dim=-1)
        grad_out = torch.rand_like(interpolated_features)
        interpolated_features.backward(grad_out)
        gt_interpolated_features.backward(grad_out)

        assert pixel_coords.grad is None or torch.all(pixel_coords.grad == 0.)
        assert render_ranges.grad is None or torch.all(render_ranges.grad == 0.)
        assert face_vertices_z.grad is None or torch.all(face_vertices_z.grad == 0.)
        assert torch.allclose(face_vertices_image.grad,
                              face_vertices_image2.grad,
                              rtol=5e-2, atol=5e-2)
        assert torch.allclose(face_uvs.grad,
                              face_uvs2.grad,
                              rtol=1e-3, atol=1e-3)
        assert torch.allclose(face_mask.grad,
                              face_mask2.grad,
                              rtol=1e-3, atol=1e-3)
529
tests/python/kaolin/render/mesh/test_dibr.py
Normal file
@@ -0,0 +1,529 @@
# Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import random
import pytest

import numpy as np
import torch
import math
import os

from kaolin.render.camera import perspective_camera, rotate_translate_points
from kaolin.render.mesh import rasterize
import kaolin as kal

ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
MODEL_DIR = os.path.join(ROOT_DIR, os.pardir, os.pardir,
                         os.pardir, os.pardir, 'samples/')
SIMPLE_GT_DIR = os.path.join(ROOT_DIR, os.pardir, os.pardir,
                             os.pardir, os.pardir, 'samples/dibr/simple/')
SPHERE_GT_DIR = os.path.join(ROOT_DIR, os.pardir, os.pardir,
                             os.pardir, os.pardir, 'samples/dibr/sphere/')

@pytest.mark.parametrize('device', ['cuda'])
@pytest.mark.parametrize('dtype', [torch.float, torch.double])
@pytest.mark.parametrize('height,width', [(35, 31)])
@pytest.mark.parametrize('sigmainv', [7000, 70])
@pytest.mark.parametrize('boxlen', [0.02, 0.2])
class TestSimpleDibrSoftMask:
    @pytest.fixture(autouse=True)
    def face_vertices_image(self, device, dtype):
        return torch.tensor(
            [[[[-0.7, 0.], [0., -0.7], [0., 0.7]],
              [[-0.7, 0.], [0., 0.7], [0., -0.7]],
              [[0., -0.7], [0., 0.7], [0.7, 0.]]],
             [[[-0.7, -0.7], [0.7, -0.7], [-0.7, 0.7]],
              [[-0.7, -0.7], [0.7, -0.7], [-0.7, 0.7]],
              [[-0.7, -0.7], [0.7, -0.7], [-0.7, 0.7]]]],
            device=device, dtype=dtype)

    @pytest.fixture(autouse=True)
    def face_vertices_z(self, device, dtype):
        return torch.tensor(
            [[[-2., -1., -1.],
              [-2.5, -3., -3.],
              [-2., -2., -2.]],
             [[-2., -1., -3.],
              [-2., -2., -2.],
              [-2., -3., -1.]]],
            device=device, dtype=dtype)

    @pytest.fixture(autouse=True)
    def selected_face_idx(self, height, width, face_vertices_image,
                          face_vertices_z):
        # this face_features is not really used
        # but we need it to run rasterize
        face_features = torch.zeros(face_vertices_z.shape + (1,),
                                    dtype=face_vertices_z.dtype,
                                    device=face_vertices_z.device)
        _, face_idx = kal.render.mesh.rasterize(
            height, width, face_vertices_z,
            face_vertices_image, face_features)
        return face_idx

    @pytest.fixture(autouse=True)
    def gt_soft_mask(self, height, width, sigmainv, boxlen, device, dtype):
        # From Kaolin V0.10.0
        return torch.load(os.path.join(
            SIMPLE_GT_DIR,
            f'soft_mask_{height}_{width}_{int(sigmainv)}_{boxlen}.pt'
        )).to(device=device, dtype=dtype)

    @pytest.fixture(autouse=True)
    def gt_close_face_idx(self, height, width, sigmainv, boxlen, device, dtype):
        # From Kaolin V0.10.0
        return torch.load(os.path.join(
            SIMPLE_GT_DIR,
            f'close_face_idx_{height}_{width}_{int(sigmainv)}_{boxlen}.pt'
        )).to(device=device, dtype=dtype).long() - 1

    @pytest.fixture(autouse=True)
    def gt_close_face_dist(self, height, width, sigmainv, boxlen, device, dtype):
        # From Kaolin V0.10.0
        return torch.load(os.path.join(
            SIMPLE_GT_DIR,
            f'close_face_dist_{height}_{width}_{int(sigmainv)}_{boxlen}.pt'
        )).to(device=device, dtype=dtype)

    @pytest.fixture(autouse=True)
    def gt_close_face_dist_type(self, height, width, sigmainv, boxlen, device, dtype):
        # From Kaolin V0.10.0
        return torch.load(os.path.join(
            SIMPLE_GT_DIR,
            f'close_face_dist_type_{height}_{width}_{int(sigmainv)}_{boxlen}.pt'
        )).to(device=device, dtype=torch.uint8)
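
    # Editorial reading of the arguments exercised below: sigmainv controls how
    # sharply the soft silhouette falls off with distance from a face, boxlen
    # enlarges the per-face bounding boxes (scaled by multiplier) used to gather
    # candidate faces per pixel, and knum caps how many candidates are kept.
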
@pytest.mark.parametrize('multiplier', [1000, 100, 1])
|
||||
@pytest.mark.parametrize('knum', [30, 20])
|
||||
def test_C_dibr_soft_mask_forward(
|
||||
self, face_vertices_image, selected_face_idx, sigmainv, boxlen,
|
||||
knum, multiplier, gt_soft_mask, gt_close_face_idx,
|
||||
gt_close_face_dist, gt_close_face_dist_type):
|
||||
# This is testing the CUDA Op so we can also check for stored tensors
|
||||
face_vertices_image = face_vertices_image * multiplier
        points_min = torch.min(face_vertices_image, dim=-2)[0]
        points_max = torch.max(face_vertices_image, dim=-2)[0]
        face_large_bboxes = torch.cat([
            points_min - boxlen * multiplier,
            points_max + boxlen * multiplier
        ], dim=-1)
        soft_mask, close_face_dist, close_face_idx, close_face_dist_type = \
            kal._C.render.mesh.dibr_soft_mask_forward_cuda(
                face_vertices_image,
                face_large_bboxes,
                selected_face_idx,
                sigmainv,
                knum,
                multiplier)

        assert torch.allclose(
            soft_mask, gt_soft_mask, atol=1e-5, rtol=1e-5)
        assert torch.equal(
            close_face_idx, gt_close_face_idx[..., :knum])
        assert torch.allclose(
            close_face_dist, gt_close_face_dist[..., :knum],
            atol=1e-5, rtol=1e-5)
        assert torch.equal(
            close_face_dist_type, gt_close_face_dist_type[..., :knum])

    @pytest.mark.parametrize('multiplier', [1000, 100])
    @pytest.mark.parametrize('knum', [30, 20])
    def test_dibr_soft_mask_forward(self, face_vertices_image, selected_face_idx,
                                    sigmainv, boxlen, knum, multiplier, gt_soft_mask):
        soft_mask = kal.render.mesh.dibr_soft_mask(
            face_vertices_image,
            selected_face_idx,
            sigmainv,
            boxlen,
            knum,
            multiplier
        )

        assert torch.allclose(
            soft_mask, gt_soft_mask, atol=1e-5, rtol=1e-5)

    @pytest.fixture(autouse=True)
    def gt_grad_face_vertices_image(self, height, width, sigmainv,
                                    boxlen, device, dtype):
        # From Kaolin V0.10.0
        return torch.load(os.path.join(
            SIMPLE_GT_DIR,
            f'grad_face_vertices_image_{height}_{width}_{int(sigmainv)}_{boxlen}.pt'
        )).to(device=device, dtype=dtype)

    @pytest.mark.parametrize('multiplier', [1000, 100, 1])
    @pytest.mark.parametrize('knum', [30, 20])
    def test_dibr_soft_mask_backward(self, face_vertices_image, selected_face_idx,
                                     sigmainv, boxlen, knum, multiplier,
                                     gt_grad_face_vertices_image):
        face_vertices_image = face_vertices_image.detach()
        face_vertices_image.requires_grad = True
        soft_mask = kal.render.mesh.dibr_soft_mask(
            face_vertices_image,
            selected_face_idx,
            sigmainv,
            boxlen,
            knum,
            multiplier
        )
        mask = selected_face_idx != -1
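        # Shift the hard silhouette by 5 pixels along the width so the IoU
        # target differs from the rendered soft mask; otherwise the loss would
        # sit near its optimum and the gradients would be uninformative.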
        shifted_mask = torch.nn.functional.pad(
            mask, (0, 5)
        )[..., 5:]
        loss = kal.metrics.render.mask_iou(soft_mask, shifted_mask)
        loss.backward()

        assert torch.allclose(
            face_vertices_image.grad, gt_grad_face_vertices_image,
            rtol=1e-5, atol=1e-5)


@pytest.mark.parametrize('device', ['cuda'])
@pytest.mark.parametrize('dtype', [torch.float, torch.double])
@pytest.mark.parametrize("batch_size", [1, 3])
@pytest.mark.parametrize("height,width", [(35, 31)])
@pytest.mark.parametrize("flip", [False, True])
@pytest.mark.parametrize('sigmainv', [7000, 70])
@pytest.mark.parametrize('boxlen', [0.02, 0.01])
class TestDibrSoftMask:
    @pytest.fixture(autouse=True)
    def mesh(self):
        mesh = kal.io.obj.import_mesh(os.path.join(MODEL_DIR, 'model.obj'),
                                      with_materials=False)
        return mesh

    @pytest.fixture(autouse=True)
    def faces(self, mesh, flip):
        out = mesh.faces.cuda()
        if flip:
            out = torch.flip(out, dims=(-1,))
        return out

    @pytest.fixture(autouse=True)
    def camera_pos(self, batch_size, dtype):
        return torch.tensor([[0.5, 0.5, 3.],
                             [2., 2., -2.],
                             [3., 0.5, 0.5]],
                            device='cuda', dtype=dtype)[:batch_size]

    @pytest.fixture(autouse=True)
    def look_at(self, batch_size, dtype):
        return torch.full((batch_size, 3), 0.5, device='cuda',
                          dtype=dtype)

    @pytest.fixture(autouse=True)
    def camera_up(self, batch_size, dtype):
        return torch.tensor([[0., 1., 0.]], device='cuda',
                            dtype=dtype).repeat(batch_size, 1)

    @pytest.fixture(autouse=True)
    def camera_proj(self, dtype):
        return kal.render.camera.generate_perspective_projection(
            fovyangle=math.pi / 4., dtype=dtype).cuda()

    @pytest.fixture(autouse=True)
    def vertices_camera(self, mesh, camera_pos, look_at, camera_up, dtype):
        vertices = mesh.vertices.to('cuda', dtype).unsqueeze(0)
        min_vertices = vertices.min(dim=1, keepdims=True)[0]
        max_vertices = vertices.max(dim=1, keepdims=True)[0]
        vertices = (vertices - min_vertices) / (max_vertices - min_vertices)
        camera_rot, camera_trans = kal.render.camera.generate_rotate_translate_matrices(
            camera_pos, look_at, camera_up)
        return kal.render.camera.rotate_translate_points(
            vertices, camera_rot, camera_trans)

    @pytest.fixture(autouse=True)
    def vertices_image(self, vertices_camera, camera_proj):
        return kal.render.camera.perspective_camera(
            vertices_camera, camera_proj)

    @pytest.fixture(autouse=True)
    def face_vertices_z(self, vertices_camera, faces):
        return kal.ops.mesh.index_vertices_by_faces(
            vertices_camera[:, :, -1:], faces).squeeze(-1)

    @pytest.fixture(autouse=True)
    def face_vertices_image(self, vertices_image, faces):
        return kal.ops.mesh.index_vertices_by_faces(
            vertices_image, faces)

    @pytest.fixture(autouse=True)
    def selected_face_idx(self, height, width, face_vertices_image,
                          face_vertices_z):
        # face_features is not actually used here,
        # but rasterize requires it
        face_features = torch.zeros(face_vertices_z.shape + (1,),
                                    dtype=face_vertices_z.dtype,
                                    device=face_vertices_z.device)
        _, face_idx = kal.render.mesh.rasterize(
            height, width, face_vertices_z,
            face_vertices_image, face_features)
        return face_idx

    @pytest.fixture(autouse=True)
    def gt_soft_mask(self, batch_size, height, width, sigmainv, boxlen, device, dtype):
        # From Kaolin V0.10.0
        return torch.load(os.path.join(
            SPHERE_GT_DIR,
            f'soft_mask_{height}_{width}_{int(sigmainv)}_{boxlen}.pt'
        )).to(device=device, dtype=dtype)[:batch_size]

    @pytest.fixture(autouse=True)
    def gt_close_face_idx(self, batch_size, height, width, sigmainv, boxlen, device, dtype):
        # From Kaolin V0.10.0
        return torch.load(os.path.join(
            SPHERE_GT_DIR,
            f'close_face_idx_{height}_{width}_{int(sigmainv)}_{boxlen}.pt'
        )).to(device=device, dtype=dtype)[:batch_size].long() - 1

    @pytest.fixture(autouse=True)
    def gt_close_face_dist(self, batch_size, height, width, sigmainv, boxlen, device, dtype):
        # From Kaolin V0.10.0
        return torch.load(os.path.join(
            SPHERE_GT_DIR,
            f'close_face_dist_{height}_{width}_{int(sigmainv)}_{boxlen}.pt'
        )).to(device=device, dtype=dtype)[:batch_size]

    @pytest.fixture(autouse=True)
    def gt_close_face_dist_type(self, batch_size, height, width, sigmainv, boxlen, device, dtype):
        # From Kaolin V0.10.0
        return torch.load(os.path.join(
            SPHERE_GT_DIR,
            f'close_face_dist_type_{height}_{width}_{int(sigmainv)}_{boxlen}.pt'
        )).to(device=device, dtype=dtype)[:batch_size]

    @pytest.mark.parametrize('multiplier', [1000, 100])
    @pytest.mark.parametrize('knum', [30, 40])
    def test_C_dibr_soft_mask_forward(
            self, face_vertices_image, selected_face_idx, knum, multiplier,
            sigmainv, boxlen, gt_soft_mask, gt_close_face_idx,
            gt_close_face_dist, gt_close_face_dist_type):
        # This tests the CUDA op directly, so the stored tensors can be
        # checked as well
        face_vertices_image = face_vertices_image * multiplier
        points_min = torch.min(face_vertices_image, dim=-2)[0]
        points_max = torch.max(face_vertices_image, dim=-2)[0]
        face_large_bboxes = torch.cat([
            points_min - boxlen * multiplier,
            points_max + boxlen * multiplier
        ], dim=-1)
        soft_mask, close_face_dist, close_face_idx, close_face_dist_type = \
            kal._C.render.mesh.dibr_soft_mask_forward_cuda(
                face_vertices_image,
                face_large_bboxes,
                selected_face_idx,
                sigmainv,
                knum,
                multiplier
            )

        assert torch.allclose(
            soft_mask, gt_soft_mask, atol=1e-5, rtol=1e-5)
        assert torch.equal(close_face_idx, gt_close_face_idx[..., :knum])
        assert torch.allclose(
            close_face_dist, gt_close_face_dist[..., :knum],
            atol=1e-5, rtol=1e-5)
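        # The closest-distance type (edge vs. corner) can flip under small
        # numerical perturbations, so up to 1% of entries may disagree.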
        mismatch = close_face_dist_type != gt_close_face_dist_type[..., :knum]
        assert torch.sum(mismatch) / mismatch.numel() <= 0.01

    @pytest.mark.parametrize('multiplier', [1000, 100])
    @pytest.mark.parametrize('knum', [30, 40])
    def test_dibr_soft_mask_forward(self, face_vertices_image, selected_face_idx,
                                    sigmainv, boxlen, knum, multiplier, gt_soft_mask):
        soft_mask = kal.render.mesh.dibr_soft_mask(
            face_vertices_image,
            selected_face_idx,
            sigmainv,
            boxlen,
            knum,
            multiplier
        )

        assert torch.allclose(
            soft_mask, gt_soft_mask, atol=1e-5, rtol=1e-5)

    @pytest.fixture(autouse=True)
    def gt_grad_face_vertices_image(self, batch_size, height, width, sigmainv,
                                    boxlen, device, dtype):
        # From Kaolin V0.10.0
        return torch.load(os.path.join(
            SPHERE_GT_DIR,
            f'grad_face_vertices_image_{height}_{width}_{int(sigmainv)}_{boxlen}.pt'
        )).to(device=device, dtype=dtype)[:batch_size]

    @pytest.mark.parametrize('multiplier', [1000, 100, 1])
    @pytest.mark.parametrize('knum', [30, 40])
    def test_dibr_soft_mask_backward(self, face_vertices_image, selected_face_idx,
                                     sigmainv, boxlen, knum, multiplier,
                                     gt_grad_face_vertices_image):
        face_vertices_image = face_vertices_image.detach()
        face_vertices_image.requires_grad = True
        soft_mask = kal.render.mesh.dibr_soft_mask(
            face_vertices_image,
            selected_face_idx,
            sigmainv,
            boxlen,
            knum,
            multiplier
        )
        mask = selected_face_idx != -1
        shifted_mask = torch.nn.functional.pad(
            mask, (0, 5)
        )[..., 5:]
        loss = kal.metrics.render.mask_iou(soft_mask, shifted_mask)
        loss.backward()

        # rtol and atol must be high because numerical differences lead to
        # different distance types
        assert torch.allclose(
            face_vertices_image.grad, gt_grad_face_vertices_image,
            rtol=1e-1, atol=1e-1)


@pytest.mark.parametrize('dtype', [torch.float, torch.double])
@pytest.mark.parametrize("batch_size", [3, 1])
@pytest.mark.parametrize("height,width", [(35, 31)])
@pytest.mark.parametrize("flip", [False, True])
class TestDibrRasterization:
    @pytest.fixture(autouse=True)
    def mesh(self):
        mesh = kal.io.obj.import_mesh(os.path.join(MODEL_DIR, 'model.obj'),
                                      with_materials=True)
        return mesh

    @pytest.fixture(autouse=True)
    def faces(self, mesh, flip):
        out = mesh.faces.cuda()
        if flip:
            out = torch.flip(out, dims=(-1,))
        return out

    @pytest.fixture(autouse=True)
    def camera_pos(self, batch_size, dtype):
        return torch.tensor([[0.5, 0.5, 3.],
                             [2., 2., -2.],
                             [3., 0.5, 0.5]],
                            device='cuda', dtype=dtype)[:batch_size]

    @pytest.fixture(autouse=True)
    def look_at(self, batch_size, dtype):
        return torch.full((batch_size, 3), 0.5, device='cuda',
                          dtype=dtype)

    @pytest.fixture(autouse=True)
    def camera_up(self, batch_size, dtype):
        return torch.tensor([[0., 1., 0.]], device='cuda',
                            dtype=dtype).repeat(batch_size, 1)

    @pytest.fixture(autouse=True)
    def camera_proj(self, dtype):
        return kal.render.camera.generate_perspective_projection(
            fovyangle=math.pi / 4., dtype=dtype).cuda()

    @pytest.fixture(autouse=True)
    def vertices_camera(self, mesh, camera_pos, look_at, camera_up, dtype):
        vertices = mesh.vertices.to('cuda', dtype).unsqueeze(0)
        min_vertices = vertices.min(dim=1, keepdims=True)[0]
        max_vertices = vertices.max(dim=1, keepdims=True)[0]
        vertices = (vertices - min_vertices) / (max_vertices - min_vertices)
        camera_rot, camera_trans = kal.render.camera.generate_rotate_translate_matrices(
            camera_pos, look_at, camera_up)
        return kal.render.camera.rotate_translate_points(
            vertices, camera_rot, camera_trans)

    @pytest.fixture(autouse=True)
    def vertices_image(self, vertices_camera, camera_proj):
        return kal.render.camera.perspective_camera(
            vertices_camera, camera_proj)

    @pytest.fixture(autouse=True)
    def face_vertices_camera(self, vertices_camera, faces):
        return kal.ops.mesh.index_vertices_by_faces(
            vertices_camera, faces)

    @pytest.fixture(autouse=True)
    def face_vertices_z(self, face_vertices_camera):
        return face_vertices_camera[..., -1]

    @pytest.fixture(autouse=True)
    def face_vertices_image(self, vertices_image, faces):
        return kal.ops.mesh.index_vertices_by_faces(
            vertices_image, faces)

    @pytest.fixture(autouse=True)
    def face_normals_z(self, face_vertices_camera):
        return kal.ops.mesh.face_normals(
            face_vertices_camera, unit=True
        )[..., -1]

    @pytest.fixture(autouse=True)
    def face_uvs(self, mesh, batch_size, dtype, flip):
        face_uvs_idx = mesh.face_uvs_idx.cuda()
        if flip:
            face_uvs_idx = torch.flip(face_uvs_idx, dims=(-1,))
        return kal.ops.mesh.index_vertices_by_faces(
            mesh.uvs.unsqueeze(0).to('cuda', dtype),
            face_uvs_idx).repeat(batch_size, 1, 1, 1)

    @pytest.mark.parametrize('sigmainv', [7000, 70])
    @pytest.mark.parametrize('boxlen', [0.02, 0.01])
    @pytest.mark.parametrize('knum', [30, 40])
    @pytest.mark.parametrize('multiplier', [1000, 100])
    @pytest.mark.parametrize('rast_backend', ['cuda', 'nvdiffrast_fwd', 'nvdiffrast'])
    def test_dibr_rasterization(self, height, width, face_vertices_z,
                                face_vertices_image, face_uvs, face_normals_z,
                                sigmainv, boxlen, knum, multiplier, rast_backend):
        if rast_backend in {'nvdiffrast_fwd', 'nvdiffrast'}:
            if os.getenv('KAOLIN_TEST_NVDIFFRAST', '0') == '0':
                pytest.skip('test is ignored as KAOLIN_TEST_NVDIFFRAST is not set')
            if face_vertices_z.dtype == torch.double:
                pytest.skip("nvdiffrast not compatible with double")
        gt_interpolated_features, gt_face_idx = rasterize(
            height, width,
            face_vertices_z,
            face_vertices_image,
            face_uvs,
            face_normals_z >= 0.,
            multiplier,
            backend=rast_backend
        )
        _multiplier = 1000. if multiplier is None else multiplier
        gt_soft_mask = kal.render.mesh.dibr_soft_mask(
            face_vertices_image,
            gt_face_idx,
            sigmainv,
            boxlen,
            knum,
            _multiplier
        )

        interpolated_features, soft_mask, face_idx = kal.render.mesh.dibr_rasterization(
            height, width,
            face_vertices_z,
            face_vertices_image,
            face_uvs,
            face_normals_z,
            sigmainv,
            boxlen,
            knum,
            multiplier,
            rast_backend=rast_backend
        )

        assert torch.equal(interpolated_features, gt_interpolated_features)
        assert torch.equal(soft_mask, gt_soft_mask)
        assert torch.equal(face_idx, gt_face_idx)
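
Taken together, these tests mirror the usual DIB-R silhouette pipeline: rasterize a hard face-index map, derive a differentiable soft mask from it, and optimize a mask loss. A minimal sketch of that loop, using only the `kal.render.mesh` and `kal.metrics.render` calls exercised above (`target_mask`, a ground-truth silhouette, is hypothetical):
```
face_vertices_image.requires_grad_(True)
# Hard rasterization: per-pixel index of the visible face (-1 for background)
_, face_idx = kal.render.mesh.rasterize(
    height, width, face_vertices_z, face_vertices_image, face_features)
# Differentiable silhouette derived from the hard face-index map
soft_mask = kal.render.mesh.dibr_soft_mask(
    face_vertices_image, face_idx, 7000, 0.02, 30, 1000)
loss = kal.metrics.render.mask_iou(soft_mask, target_mask)
loss.backward()  # gradients reach face_vertices_image through the soft mask
```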
592
tests/python/kaolin/render/mesh/test_rasterization.py
Normal file
@@ -0,0 +1,592 @@
# Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import random
import pytest

import numpy as np
import torch
import math
import os

from kaolin.render.camera import perspective_camera, rotate_translate_points
from kaolin.render.mesh import rasterize
from kaolin.render.mesh.deftet import _naive_deftet_sparse_render
import kaolin as kal

ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
MODEL_DIR = os.path.join(ROOT_DIR, os.pardir, os.pardir, os.pardir, os.pardir, 'samples/')

@pytest.mark.parametrize("dtype", [torch.float, torch.double])
@pytest.mark.parametrize("batch_size", [1, 3])
@pytest.mark.parametrize("height,width", [(35, 31)])
@pytest.mark.parametrize("flip", [False, True])
class TestRasterize:
    @pytest.fixture(autouse=True)
    def mesh(self):
        mesh = kal.io.obj.import_mesh(os.path.join(MODEL_DIR, 'model.obj'),
                                      with_materials=True)
        return mesh

    @pytest.fixture(autouse=True)
    def faces(self, mesh, flip):
        out = mesh.faces.cuda()
        if flip:
            out = torch.flip(out, dims=(-1,))
        return out

    @pytest.fixture(autouse=True)
    def camera_pos(self, batch_size, dtype):
        return torch.tensor([[0.5, 0.5, 3.],
                             [2., 2., -2.],
                             [3., 0.5, 0.5]],
                            device='cuda', dtype=dtype)[:batch_size]

    @pytest.fixture(autouse=True)
    def look_at(self, batch_size, dtype):
        return torch.full((batch_size, 3), 0.5, device='cuda',
                          dtype=dtype)

    @pytest.fixture(autouse=True)
    def camera_up(self, batch_size, dtype):
        return torch.tensor([[0., 1., 0.]], device='cuda',
                            dtype=dtype).repeat(batch_size, 1)

    @pytest.fixture(autouse=True)
    def camera_proj(self, dtype):
        return kal.render.camera.generate_perspective_projection(
            fovyangle=math.pi / 4., dtype=dtype).cuda()

    @pytest.fixture(autouse=True)
    def vertices_camera(self, mesh, camera_pos, look_at, camera_up, dtype):
        vertices = mesh.vertices.to('cuda', dtype).unsqueeze(0)
        min_vertices = vertices.min(dim=1, keepdims=True)[0]
        max_vertices = vertices.max(dim=1, keepdims=True)[0]
        vertices = (vertices - min_vertices) / (max_vertices - min_vertices)
        camera_rot, camera_trans = kal.render.camera.generate_rotate_translate_matrices(
            camera_pos, look_at, camera_up)
        return kal.render.camera.rotate_translate_points(
            vertices, camera_rot, camera_trans)

    @pytest.fixture(autouse=True)
    def vertices_image(self, vertices_camera, camera_proj):
        return kal.render.camera.perspective_camera(
            vertices_camera, camera_proj)

    @pytest.fixture(autouse=True)
    def face_vertices_z(self, vertices_camera, faces):
        return kal.ops.mesh.index_vertices_by_faces(
            vertices_camera[:, :, -1:], faces).squeeze(-1)

    @pytest.fixture(autouse=True)
    def valid_faces(self, batch_size, face_vertices_z):
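        # Mark as valid only the faces whose three vertices all lie in the
        # lower half of the z-range, so the `valid_faces` code path actually
        # filters out a meaningful subset of faces.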
        min_z = face_vertices_z.reshape(batch_size, -1).min(dim=1, keepdims=True)[0]
        max_z = face_vertices_z.reshape(batch_size, -1).max(dim=1, keepdims=True)[0]
        middle_z = (min_z + max_z) / 2.
        return torch.all(face_vertices_z < middle_z.unsqueeze(-1), dim=-1)

    @pytest.fixture(autouse=True)
    def face_vertices_image(self, vertices_image, faces):
        return kal.ops.mesh.index_vertices_by_faces(
            vertices_image, faces)

    @pytest.fixture(autouse=True)
    def texture_map(self, mesh, dtype):
        return mesh.materials[0]['map_Kd'].to('cuda', dtype).permute(
            2, 0, 1).unsqueeze(0) / 255.

    @pytest.fixture(autouse=True)
    def face_uvs(self, mesh, batch_size, dtype, flip):
        face_uvs_idx = mesh.face_uvs_idx.cuda()
        if flip:
            face_uvs_idx = torch.flip(face_uvs_idx, dims=(-1,))
        return kal.ops.mesh.index_vertices_by_faces(
            mesh.uvs.unsqueeze(0).to('cuda', dtype),
            face_uvs_idx).repeat(batch_size, 1, 1, 1)

    @pytest.fixture(autouse=True)
    def pixel_coords(self, batch_size, height, width, dtype):
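        # Pixel-center coordinates in NDC: x runs from -1 + 1/width to
        # 1 - 1/width, and y is flipped so that the first row maps to the top.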
        x = (2 * torch.arange(width, device='cuda', dtype=dtype) + 1 - width) / width
        y = (height - 2 * torch.arange(height, device='cuda', dtype=dtype) - 1.) / height
        return torch.stack([
            x.reshape(1, 1, -1).repeat(batch_size, height, 1),
            y.reshape(1, -1, 1).repeat(batch_size, 1, width)
        ], dim=-1).reshape(batch_size, -1, 2)

    @pytest.fixture(autouse=True)
    def render_ranges(self, vertices_camera, height, width):
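        # One depth interval per pixel, covering the whole mesh with a small
        # margin; the naive deftet renderer searches this range for hits.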
        min_z = vertices_camera[:, :, -1].min(dim=1)[0]
        max_z = vertices_camera[:, :, -1].max(dim=1)[0]
        render_range = torch.stack([min_z - 1e-2, max_z + 1e-2], dim=-1)

        return render_range.unsqueeze(1).repeat(1, height * width, 1)

    @pytest.mark.parametrize('with_valid_faces', [False, True])
    def test_cuda_forward(self, batch_size, height, width, pixel_coords,
                          render_ranges, face_vertices_z, face_vertices_image,
                          face_uvs, with_valid_faces, valid_faces):
        kwargs = {}
        if with_valid_faces:
            kwargs['valid_faces'] = valid_faces
        face_attr = face_uvs

        interpolated_features, face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            face_attr, backend='cuda', **kwargs)
        gt_interpolated_features, gt_face_idx = _naive_deftet_sparse_render(
            pixel_coords, render_ranges, face_vertices_z,
            face_vertices_image, face_attr, 1, **kwargs)

        assert torch.equal(face_idx, gt_face_idx.reshape(batch_size, height, width))
        assert torch.allclose(
            interpolated_features,
            gt_interpolated_features.reshape(batch_size, height, width, face_uvs.shape[-1]),
            rtol=1e-5, atol=1e-5
        )

    @pytest.mark.parametrize('with_valid_faces', [False, True])
    def test_cuda_forward_with_list(
            self, batch_size, height, width, pixel_coords,
            render_ranges, face_vertices_z, face_vertices_image,
            face_uvs, with_valid_faces, valid_faces):
        """Test with list of tensors as features"""
        kwargs = {}
        if with_valid_faces:
            kwargs['valid_faces'] = valid_faces
        face_attr = [face_uvs, torch.ones_like(face_uvs[..., 1:])]

        (uvs_map, mask), face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            face_attr, backend='cuda', **kwargs)
        (gt_uvs_map, gt_mask), gt_face_idx = _naive_deftet_sparse_render(
            pixel_coords, render_ranges, face_vertices_z,
            face_vertices_image, face_attr, 1, **kwargs)

        assert torch.equal(face_idx, gt_face_idx.reshape(batch_size, height, width))
        assert torch.allclose(
            uvs_map,
            gt_uvs_map.reshape(batch_size, height, width, face_uvs.shape[-1]),
            rtol=1e-5, atol=1e-5
        )
        assert torch.allclose(
            mask,
            gt_mask.reshape(batch_size, height, width, 1),
            rtol=1e-5, atol=1e-5
        )

    @pytest.mark.parametrize('with_valid_faces', [False, True])
    def test_cuda_backward(self, batch_size, height, width, pixel_coords,
                           render_ranges, face_vertices_z, face_vertices_image,
                           face_uvs, with_valid_faces, valid_faces):
        kwargs = {}
        if with_valid_faces:
            kwargs['valid_faces'] = valid_faces
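        # Duplicate every input as an independent leaf tensor so the CUDA
        # backend and the naive reference renderer accumulate their gradients
        # separately before being compared.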
        face_vertices_z = face_vertices_z.detach()
        face_vertices_z.requires_grad = True
        face_vertices_image = face_vertices_image.detach()
        face_vertices_image.requires_grad = True
        face_uvs = face_uvs.detach()
        face_uvs.requires_grad = True
        pixel_coords2 = pixel_coords.detach()
        pixel_coords2.requires_grad = True
        render_ranges2 = render_ranges.detach()
        render_ranges2.requires_grad = True
        face_vertices_z2 = face_vertices_z.detach()
        face_vertices_z2.requires_grad = True
        face_vertices_image2 = face_vertices_image.detach()
        face_vertices_image2.requires_grad = True
        face_uvs2 = face_uvs.detach()
        face_uvs2.requires_grad = True

        interpolated_features, face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            face_uvs, backend='cuda', **kwargs)
        gt_interpolated_features, gt_face_idx = _naive_deftet_sparse_render(
            pixel_coords2, render_ranges2, face_vertices_z2,
            face_vertices_image2, face_uvs2, 1, **kwargs)
        gt_interpolated_features = gt_interpolated_features.reshape(
            batch_size, height, width, face_uvs.shape[-1])

        grad_out = torch.rand_like(interpolated_features)
        interpolated_features.backward(grad_out)
        gt_interpolated_features.backward(grad_out)

        assert face_vertices_z.grad is None or torch.all(face_vertices_z.grad == 0.)
        assert torch.allclose(face_vertices_image.grad,
                              face_vertices_image2.grad,
                              rtol=1e-3, atol=1e-2)
        assert torch.allclose(face_uvs.grad,
                              face_uvs2.grad,
                              rtol=1e-3, atol=1e-3)

    @pytest.mark.parametrize('with_valid_faces', [False, True])
    def test_cuda_backward_with_list(
            self, batch_size, height, width, pixel_coords,
            render_ranges, face_vertices_z, face_vertices_image,
            face_uvs, with_valid_faces, valid_faces):
        """Test with list of tensors as features"""
        kwargs = {}
        if with_valid_faces:
            kwargs['valid_faces'] = valid_faces
        face_vertices_z = face_vertices_z.detach()
        face_vertices_z.requires_grad = True
        face_vertices_image = face_vertices_image.detach()
        face_vertices_image.requires_grad = True
        face_uvs = face_uvs.detach()
        face_uvs.requires_grad = True
        face_mask = torch.ones_like(face_uvs[..., :1], requires_grad=True)
        pixel_coords2 = pixel_coords.detach()
        pixel_coords2.requires_grad = True
        render_ranges2 = render_ranges.detach()
        render_ranges2.requires_grad = True
        face_vertices_z2 = face_vertices_z.detach()
        face_vertices_z2.requires_grad = True
        face_vertices_image2 = face_vertices_image.detach()
        face_vertices_image2.requires_grad = True
        face_uvs2 = face_uvs.detach()
        face_uvs2.requires_grad = True
        face_mask2 = face_mask.detach()
        face_mask2.requires_grad = True

        interpolated_features, face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            [face_uvs, face_mask], backend='cuda', **kwargs)
        interpolated_features = torch.cat(interpolated_features, dim=-1)

        gt_interpolated_features, gt_face_idx = _naive_deftet_sparse_render(
            pixel_coords2, render_ranges2, face_vertices_z2,
            face_vertices_image2, [face_uvs2, face_mask2], 1, **kwargs)
        gt_interpolated_features = torch.cat([
            feat.reshape(batch_size, height, width, -1) for feat in gt_interpolated_features
        ], dim=-1)

        grad_out = torch.rand_like(gt_interpolated_features)
        interpolated_features.backward(grad_out)
        gt_interpolated_features.backward(grad_out)

        assert face_vertices_z.grad is None or torch.all(face_vertices_z.grad == 0.)
        assert torch.allclose(face_vertices_image.grad,
                              face_vertices_image2.grad,
                              rtol=1e-3, atol=1e-2)
        assert torch.allclose(face_uvs.grad,
                              face_uvs2.grad,
                              rtol=1e-3, atol=1e-3)
        assert torch.allclose(face_mask.grad,
                              face_mask2.grad,
                              rtol=1e-3, atol=1e-3)

    @pytest.mark.parametrize('with_valid_faces', [False, True])
    def test_nvdiffrast_fwd_forward(
            self, batch_size, height, width, pixel_coords,
            render_ranges, face_vertices_z, face_vertices_image,
            face_uvs, with_valid_faces, valid_faces):
        if os.getenv('KAOLIN_TEST_NVDIFFRAST', '0') == '0':
            pytest.skip('test is ignored as KAOLIN_TEST_NVDIFFRAST is not set')
        if face_vertices_image.dtype == torch.double:
            pytest.skip("nvdiffrast not compatible with double")
        kwargs = {}
        if with_valid_faces:
            kwargs['valid_faces'] = valid_faces

        interpolated_features, face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            face_uvs, backend='nvdiffrast_fwd', **kwargs)
        gt_interpolated_features, gt_face_idx = _naive_deftet_sparse_render(
            pixel_coords, render_ranges, face_vertices_z,
            face_vertices_image, face_uvs, 1, **kwargs)
        gt_interpolated_features = gt_interpolated_features.reshape(
            batch_size, height, width, face_uvs.shape[-1])
        gt_face_idx = gt_face_idx.reshape(batch_size, height, width)

        face_idx_same = face_idx == gt_face_idx
        # Numerical differences can lead to a different
        # face being rasterized, so we assume about 98% similarity
        assert torch.sum(face_idx_same) / face_idx.numel() > 0.98

        # Attributes can be quite different if the rasterized face differs
        assert torch.allclose(
            interpolated_features[face_idx_same],
            gt_interpolated_features[face_idx_same],
            rtol=1e-3, atol=1e-3
        )

    @pytest.mark.parametrize('with_valid_faces', [False, True])
    def test_nvdiffrast_fwd_forward_with_list(
            self, batch_size, height, width, pixel_coords,
            render_ranges, face_vertices_z, face_vertices_image,
            face_uvs, with_valid_faces, valid_faces, dtype):
        """Test with list of tensors as features"""
        if os.getenv('KAOLIN_TEST_NVDIFFRAST', '0') == '0':
            pytest.skip('test is ignored as KAOLIN_TEST_NVDIFFRAST is not set')
        if face_vertices_image.dtype == torch.double:
            pytest.skip("nvdiffrast not compatible with double")
        kwargs = {}
        if with_valid_faces:
            kwargs['valid_faces'] = valid_faces
        face_attr = [face_uvs, face_vertices_z.unsqueeze(-1)]

        (uvs_map, depth_map), face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            face_attr, backend='nvdiffrast_fwd', **kwargs)
        (gt_uvs_map, gt_depth_map), gt_face_idx = _naive_deftet_sparse_render(
            pixel_coords, render_ranges, face_vertices_z,
            face_vertices_image, face_attr, 1, **kwargs)
        gt_uvs_map = gt_uvs_map.reshape(batch_size, height, width, face_uvs.shape[-1])
        gt_depth_map = gt_depth_map.reshape(batch_size, height, width, 1)
        gt_face_idx = gt_face_idx.reshape(batch_size, height, width)

        face_idx_same = face_idx == gt_face_idx
        # Numerical differences can lead to a different
        # face being rasterized, so we assume about 98% similarity
        assert torch.sum(face_idx_same) / face_idx.numel() > 0.98
        mask_intersection = (face_idx >= 0) & (gt_face_idx >= 0)

        # On a smooth enough surface the depth maps should match
        # (excluding borders because of numerical differences)
        assert torch.allclose(
            depth_map[mask_intersection],
            gt_depth_map[mask_intersection],
            rtol=1e-3, atol=1e-3
        )
        # Attributes can be quite different if the rasterized face differs
        assert torch.allclose(
            uvs_map[face_idx_same],
            gt_uvs_map[face_idx_same],
            rtol=1e-3, atol=1e-3
        )

    @pytest.mark.parametrize('with_valid_faces', [False, True])
    def test_nvdiffrast_fwd_backward(
            self, batch_size, height, width, pixel_coords,
            render_ranges, face_vertices_z, face_vertices_image,
            face_uvs, with_valid_faces, valid_faces):
        if os.getenv('KAOLIN_TEST_NVDIFFRAST', '0') == '0':
            pytest.skip('test is ignored as KAOLIN_TEST_NVDIFFRAST is not set')
        if face_vertices_image.dtype == torch.double:
            pytest.skip("nvdiffrast not compatible with double")
        kwargs = {}
        if with_valid_faces:
            kwargs['valid_faces'] = valid_faces
        face_vertices_z = face_vertices_z.detach()
        face_vertices_z.requires_grad = True
        face_vertices_image = face_vertices_image.detach()
        face_vertices_image.requires_grad = True
        face_uvs = face_uvs.detach()
        face_uvs.requires_grad = True
        pixel_coords2 = pixel_coords.detach()
        pixel_coords2.requires_grad = True
        render_ranges2 = render_ranges.detach()
        render_ranges2.requires_grad = True
        face_vertices_z2 = face_vertices_z.detach()
        face_vertices_z2.requires_grad = True
        face_vertices_image2 = face_vertices_image.detach()
        face_vertices_image2.requires_grad = True
        face_uvs2 = face_uvs.detach()
        face_uvs2.requires_grad = True

        interpolated_features, face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            face_uvs, backend='nvdiffrast_fwd', **kwargs)
        gt_interpolated_features, gt_face_idx = _naive_deftet_sparse_render(
            pixel_coords2, render_ranges2, face_vertices_z2,
            face_vertices_image2, face_uvs2, 1, **kwargs)
        gt_interpolated_features = gt_interpolated_features.reshape(
            batch_size, height, width, -1)
        gt_face_idx = gt_face_idx.reshape(batch_size, height, width)

        face_idx_diff = face_idx != gt_face_idx

        grad_out = torch.rand_like(gt_interpolated_features)
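        # Zero the upstream gradient wherever the two backends rasterized
        # different faces, so only comparable pixels contribute to the
        # gradient check.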
        grad_out[face_idx_diff] = 0.
        interpolated_features.backward(grad_out)
        gt_interpolated_features.backward(grad_out)

        assert face_vertices_z.grad is None or torch.all(face_vertices_z.grad == 0.)
        assert torch.allclose(face_vertices_image.grad,
                              face_vertices_image2.grad,
                              rtol=5e-2, atol=5e-2)
        assert torch.allclose(face_uvs.grad,
                              face_uvs2.grad,
                              rtol=1e-2, atol=5e-2)

    @pytest.mark.parametrize('with_valid_faces', [False, True])
    def test_nvdiffrast_fwd_backward_with_mask(
            self, batch_size, height, width, pixel_coords,
            render_ranges, face_vertices_z, face_vertices_image,
            face_uvs, with_valid_faces, valid_faces):
        if os.getenv('KAOLIN_TEST_NVDIFFRAST', '0') == '0':
            pytest.skip('test is ignored as KAOLIN_TEST_NVDIFFRAST is not set')
        if face_vertices_image.dtype == torch.double:
            pytest.skip("nvdiffrast not compatible with double")
        kwargs = {}
        if with_valid_faces:
            kwargs['valid_faces'] = valid_faces
        face_vertices_z = face_vertices_z.detach()
        face_vertices_z.requires_grad = True
        face_vertices_image = face_vertices_image.detach()
        face_vertices_image.requires_grad = True
        face_uvs = face_uvs.detach()
        face_uvs.requires_grad = True
        face_mask = torch.ones_like(face_uvs[..., :1], requires_grad=True)
        pixel_coords2 = pixel_coords.detach()
        pixel_coords2.requires_grad = True
        render_ranges2 = render_ranges.detach()
        render_ranges2.requires_grad = True
        face_vertices_z2 = face_vertices_z.detach()
        face_vertices_z2.requires_grad = True
        face_vertices_image2 = face_vertices_image.detach()
        face_vertices_image2.requires_grad = True
        face_uvs2 = face_uvs.detach()
        face_uvs2.requires_grad = True
        face_mask2 = face_mask.detach()
        face_mask2.requires_grad = True

        interpolated_features, face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            [face_uvs, face_mask], backend='nvdiffrast_fwd', **kwargs)
        interpolated_features = torch.cat(interpolated_features, dim=-1)

        gt_interpolated_features, gt_face_idx = _naive_deftet_sparse_render(
            pixel_coords2, render_ranges2, face_vertices_z2,
            face_vertices_image2, [face_uvs2, face_mask2], 1, **kwargs)
        gt_interpolated_features = torch.cat([
            feat.reshape(batch_size, height, width, -1) for feat in gt_interpolated_features
        ], dim=-1)
        gt_face_idx = gt_face_idx.reshape(batch_size, height, width)

        face_idx_diff = face_idx != gt_face_idx

        grad_out = torch.rand_like(gt_interpolated_features)
        grad_out[face_idx_diff] = 0.
        interpolated_features.backward(grad_out)
        gt_interpolated_features.backward(grad_out)

        assert face_vertices_z.grad is None or torch.all(face_vertices_z.grad == 0.)
        assert torch.allclose(face_vertices_image.grad,
                              face_vertices_image2.grad,
                              rtol=5e-2, atol=5e-2)
        assert torch.allclose(face_uvs.grad,
                              face_uvs2.grad,
                              rtol=1e-2, atol=5e-2)
        assert torch.allclose(face_mask.grad,
                              face_mask2.grad,
                              rtol=1e-2, atol=5e-2)

    @pytest.mark.parametrize('with_valid_faces', [False, True])
    def test_nvdiffrast_forward(
            self, batch_size, height, width, face_vertices_z,
            face_vertices_image, face_uvs, with_valid_faces, valid_faces):
        if os.getenv('KAOLIN_TEST_NVDIFFRAST', '0') == '0':
            pytest.skip('test is ignored as KAOLIN_TEST_NVDIFFRAST is not set')
        if face_vertices_image.dtype == torch.double:
            pytest.skip("nvdiffrast not compatible with double")
        kwargs = {}
        if with_valid_faces:
            kwargs['valid_faces'] = valid_faces
        face_attr = face_uvs

        interpolated_features, face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            face_attr, backend='nvdiffrast', **kwargs)
        # To simplify the test we use nvdiffrast_fwd, which is already
        # tested above, as the ground truth
        gt_interpolated_features, gt_face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            face_attr, backend='nvdiffrast_fwd', **kwargs)
        gt_interpolated_features = gt_interpolated_features.reshape(
            batch_size, height, width, face_uvs.shape[-1])
        gt_face_idx = gt_face_idx.reshape(batch_size, height, width)

        assert torch.equal(face_idx, gt_face_idx)
        assert torch.equal(interpolated_features, gt_interpolated_features)

    @pytest.mark.parametrize('with_valid_faces', [False, True])
    def test_nvdiffrast_forward_with_list(
            self, batch_size, height, width, face_vertices_z,
            face_vertices_image, face_uvs, with_valid_faces, valid_faces):
        """Test with list of tensors as features"""
        if os.getenv('KAOLIN_TEST_NVDIFFRAST', '0') == '0':
            pytest.skip('test is ignored as KAOLIN_TEST_NVDIFFRAST is not set')
        if face_vertices_image.dtype == torch.double:
            pytest.skip("nvdiffrast not compatible with double")
        kwargs = {}
        if with_valid_faces:
            kwargs['valid_faces'] = valid_faces
        face_attr = [face_uvs, torch.ones_like(face_uvs[..., :1])]

        (uvs_map, mask), face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            face_attr, backend='nvdiffrast', **kwargs)
        # To simplify the test we use nvdiffrast_fwd, which is already
        # tested above, as the ground truth
        (gt_uvs_map, gt_mask), gt_face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            face_attr, backend='nvdiffrast_fwd', **kwargs)
        gt_uvs_map = gt_uvs_map.reshape(
            batch_size, height, width, face_uvs.shape[-1])
        gt_mask = gt_mask.reshape(batch_size, height, width, 1)
        gt_face_idx = gt_face_idx.reshape(batch_size, height, width)

        assert torch.equal(face_idx, gt_face_idx)
        assert torch.equal(uvs_map, gt_uvs_map)
        assert torch.equal(mask, gt_mask)

    @pytest.mark.parametrize('with_valid_faces', [False, True])
    def test_nvdiffrast_backward(
            self, batch_size, height, width, face_vertices_z,
            face_vertices_image, face_uvs, with_valid_faces, valid_faces):
        if os.getenv('KAOLIN_TEST_NVDIFFRAST', '0') == '0':
            pytest.skip('test is ignored as KAOLIN_TEST_NVDIFFRAST is not set')
        if face_vertices_image.dtype == torch.double:
            pytest.skip("nvdiffrast not compatible with double")
        kwargs = {}
        if with_valid_faces:
            kwargs['valid_faces'] = valid_faces

        face_vertices_z = face_vertices_z.detach()
        face_vertices_z.requires_grad = True
        face_vertices_image = face_vertices_image.detach()
        face_vertices_image.requires_grad = True
        face_uvs = face_uvs.detach()
        face_uvs.requires_grad = True
        face_vertices_z2 = face_vertices_z.detach()
        face_vertices_z2.requires_grad = True
        face_vertices_image2 = face_vertices_image.detach()
        face_vertices_image2.requires_grad = True
        face_uvs2 = face_uvs.detach()
        face_uvs2.requires_grad = True

        interpolated_features, face_idx = rasterize(
            height, width, face_vertices_z, face_vertices_image,
            face_uvs, backend='nvdiffrast', **kwargs)
        gt_interpolated_features, gt_face_idx = rasterize(
            height, width, face_vertices_z2, face_vertices_image2,
            face_uvs2, backend='nvdiffrast_fwd', **kwargs)
        gt_interpolated_features = gt_interpolated_features.reshape(
            batch_size, height, width, -1)
        gt_face_idx = gt_face_idx.reshape(batch_size, height, width)

        grad_out = torch.rand_like(gt_interpolated_features)
        interpolated_features.backward(grad_out)
        gt_interpolated_features.backward(grad_out)

        assert face_vertices_z.grad is None or torch.all(face_vertices_z.grad == 0.)
        assert torch.allclose(face_vertices_image.grad,
                              face_vertices_image2.grad,
                              rtol=5e-2, atol=5e-2)
        assert torch.allclose(face_uvs.grad,
                              face_uvs2.grad,
                              rtol=1e-2, atol=1e-2)
106
tests/python/kaolin/render/mesh/test_utils.py
Normal file
@@ -0,0 +1,106 @@
# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import math
import torch
from kaolin.utils.testing import FLOAT_TYPES, check_tensor
from kaolin.render.mesh.utils import texture_mapping

@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
class TestTextureMapping:

    @pytest.fixture(autouse=True)
    def texture_map_1d(self, device, dtype):
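        # Two single-channel 4x5 texture maps (the second is the first + 100),
        # stacked into a (batch=2, channels=1, height=4, width=5) tensor.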
        texture_map_l1 = torch.tensor([
            [11.0, 12.0, 13.0, 14.0, 15.0],
            [21.0, 22.0, 23.0, 24.0, 25.0],
            [31.0, 32.0, 33.0, 34.0, 35.0],
            [41.0, 42.0, 43.0, 44.0, 45.0]
        ], device=device, dtype=dtype)
        texture_map_l2 = texture_map_l1 + 100
        texture_map = torch.stack((texture_map_l1, texture_map_l2)).unsqueeze(1)
        return texture_map

    @pytest.fixture(autouse=True)
    def texture_map_3d(self, texture_map_1d):
        return torch.cat((texture_map_1d, -texture_map_1d, texture_map_1d), dim=1)

    @pytest.fixture(autouse=True)
    def sparse_coords_batch(self, device, dtype):
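        # UV convention: (0, 0) samples the bottom-left texel and (1, 1) the
        # top-right one; the expected values in the tests below rely on this.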
        texture_coordinates = torch.tensor([
            [0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]
        ], device=device, dtype=dtype)
        texture_coordinates = torch.stack((texture_coordinates,
                                           torch.flip(texture_coordinates, dims=(0,))))
        return texture_coordinates

    @pytest.fixture(autouse=True)
    def dense_coords_batch(self, device, dtype):
        texture_coordinates = torch.tensor([
            [[0.0, 0.0], [0.25, 0.0], [0.5, 0.0], [0.75, 0.0], [1.0, 0.0]],
            [[0.0, 1/8], [0.25, 1/8], [0.5, 1/8], [0.75, 1/8], [1.0, 1/8]]
        ], device=device, dtype=dtype)
        texture_coordinates = torch.stack((texture_coordinates,
                                           torch.flip(texture_coordinates, dims=(0,))))
        return texture_coordinates

    @pytest.mark.parametrize('mode', ['nearest', 'bilinear'])
    def test_sparse_1d_texture_mapping(self, sparse_coords_batch, texture_map_1d, mode):
        interop = texture_mapping(texture_coordinates=sparse_coords_batch,
                                  texture_maps=texture_map_1d, mode=mode)

        if mode == 'nearest':
            expected = torch.tensor([[41, 15, 11, 33], [133, 111, 115, 141]]).unsqueeze(-1)
        elif mode == 'bilinear':
            expected = torch.tensor([[41, 15, 11, 28], [128, 111, 115, 141]]).unsqueeze(-1)
        expected = expected.to(texture_map_1d.device).type(texture_map_1d.dtype)
        assert check_tensor(interop, shape=(2, 4, 1), dtype=texture_map_1d.dtype)
        assert torch.equal(interop, expected)

    @pytest.mark.parametrize('mode', ['nearest', 'bilinear'])
    def test_sparse_3d_texture_mapping(self, sparse_coords_batch, texture_map_3d, mode):
        interop = texture_mapping(texture_coordinates=sparse_coords_batch,
                                  texture_maps=texture_map_3d,
                                  mode=mode)
        if mode == 'nearest':
            expected_d1 = torch.tensor([[41, 15, 11, 33], [133, 111, 115, 141]])
            expected_d2 = -torch.tensor([[41, 15, 11, 33], [133, 111, 115, 141]])
            expected_d3 = torch.tensor([[41, 15, 11, 33], [133, 111, 115, 141]])
            expected = torch.stack([expected_d1, expected_d2, expected_d3], dim=-1)
        elif mode == 'bilinear':
            expected_d1 = torch.tensor([[41, 15, 11, 28], [128, 111, 115, 141]])
            expected_d2 = -torch.tensor([[41, 15, 11, 28], [128, 111, 115, 141]])
            expected_d3 = torch.tensor([[41, 15, 11, 28], [128, 111, 115, 141]])
            expected = torch.stack([expected_d1, expected_d2, expected_d3], dim=-1)
        expected = expected.to(texture_map_3d.device).type(texture_map_3d.dtype)
        assert check_tensor(interop, shape=(2, 4, 3), dtype=texture_map_3d.dtype)
        assert torch.equal(interop, expected)

    @pytest.mark.parametrize('mode', ['nearest', 'bilinear'])
    def test_dense_3d_texture_mapping(self, dense_coords_batch, texture_map_3d,
                                      mode, device, dtype):
        interop = texture_mapping(texture_coordinates=dense_coords_batch,
                                  texture_maps=texture_map_3d, mode=mode)

        if mode == 'nearest':
            expected_base = torch.tensor([41., 42., 43., 44., 45.],
                                         device=device, dtype=dtype)
        elif mode == 'bilinear':
            expected_base = torch.tensor([41., 41.75, 43., 44.25, 45.],
                                         device=device, dtype=dtype)
        expected = torch.stack([expected_base, expected_base + 100], dim=0)
        expected = torch.stack([expected, -expected, expected],
                               dim=-1).reshape(2, 1, -1, 3).repeat(1, 2, 1, 1)

        assert torch.equal(interop, expected)
208
tests/python/kaolin/render/spc/test_rayops.py
Normal file
@@ -0,0 +1,208 @@
# Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch

import kaolin.render.spc as spc_render

class TestRayops:
    @pytest.fixture(autouse=True)
    def feats(self):
        feats = torch.tensor([
            [1,1],[1,1],[1,1],[2,2],[3,3],[5,5]
        ], device='cuda', dtype=torch.float)
        return feats

    @pytest.fixture(autouse=True)
    def feats_big(self):
        feats = torch.rand([10000, 100, 32], device='cuda', dtype=torch.float)
        return feats

    @pytest.fixture(autouse=True)
    def boundaries_big(self):
        boundary = torch.zeros([10000, 100], device='cuda', dtype=torch.bool)
        boundary[:, 0] = True
        return boundary.reshape(-1)

    @pytest.fixture(autouse=True)
    def tau(self):
        tau = torch.tensor([
            [0],[0],[0],[1],[0],[1]
        ], device='cuda', dtype=torch.float)
        return tau

    @pytest.fixture(autouse=True)
    def boundaries(self):
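        # True marks the first entry of each pack: feats splits into packs
        # [0:2), [2:5) and [5:6), which the expected values below rely on.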
        boundary = torch.tensor([1,0,1,0,0,1], device='cuda', dtype=torch.bool)
        return boundary

    def test_mark_pack_boundaries(self):
        ridx = torch.tensor([1,1,1,1,2,2,3,3,3], device='cuda', dtype=torch.int)

        expected_boundary = torch.tensor([1,0,0,0,1,0,1,0,0], device='cuda', dtype=torch.bool)

        output = spc_render.mark_pack_boundaries(ridx)

        assert torch.equal(output, expected_boundary)

    def test_diff(self, feats, boundaries):
        diff = spc_render.diff(feats, boundaries)
        expected = torch.tensor([[0,0], [0,0], [1,1], [1,1], [0,0], [0,0]], device='cuda', dtype=torch.float)
        assert torch.equal(diff, expected)

    def test_sum_reduce(self, feats, boundaries):
        sum_reduce = spc_render.sum_reduce(feats, boundaries)
        expected = torch.tensor([[2,2], [6,6], [5,5]], device='cuda', dtype=torch.float)
        assert torch.equal(sum_reduce, expected)

    def test_sum_reduce_big(self, feats_big, boundaries_big):
        fdim = feats_big.shape[-1]
        sum_reduce = spc_render.sum_reduce(feats_big.reshape(-1, fdim), boundaries_big)
        expected = feats_big.sum(1)
        assert torch.allclose(sum_reduce, expected, atol=1e-5)

    def test_sum_reduce_big_backward(self, feats_big, boundaries_big):
        feats_big.requires_grad = True
        fdim = feats_big.shape[-1]
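        # Clear any gradient left over from a previous backward pass so the
        # two gradient computations below both start from zero.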
        if feats_big.grad is not None:
            feats_big.grad.detach_()
            feats_big.grad.zero_()
        sum_reduce = spc_render.sum_reduce(feats_big.reshape(-1, fdim), boundaries_big)
        loss = sum_reduce.sum()
        loss.backward()
        grad0 = feats_big.grad.clone()

        if feats_big.grad is not None:
            feats_big.grad.detach_()
            feats_big.grad.zero_()
        expected = feats_big.sum(1)
        loss = expected.sum()
        loss.backward()
        grad1 = feats_big.grad.clone()

        assert torch.allclose(grad0, grad1, atol=1e-5)

    def test_cumsum(self, feats, boundaries):
        cumsum = spc_render.cumsum(feats, boundaries)
        expected = torch.tensor([[1,1], [2,2], [1,1], [3,3], [6,6], [5,5]], device='cuda', dtype=torch.float)
        assert torch.equal(cumsum, expected)

    def test_cumsum_big(self, feats_big, boundaries_big):
        fdim = feats_big.shape[-1]
        cumsum = spc_render.cumsum(feats_big.reshape(-1, fdim), boundaries_big)
        expected = torch.cumsum(feats_big, dim=1).reshape(-1, fdim)
        assert torch.allclose(cumsum, expected, atol=1e-5)

    def test_cumsum_big_backward(self, feats_big, boundaries_big):
        feats_big.requires_grad = True
        fdim = feats_big.shape[-1]

        if feats_big.grad is not None:
            feats_big.grad.detach_()
            feats_big.grad.zero_()
        cumsum = spc_render.cumsum(feats_big.reshape(-1, fdim), boundaries_big)
        loss = cumsum.sum()
        loss.backward()
        grad0 = feats_big.grad.clone()

        if feats_big.grad is not None:
            feats_big.grad.detach_()
            feats_big.grad.zero_()
        expected = torch.cumsum(feats_big, dim=1)
        loss = expected.sum()
        loss.backward()
        grad1 = feats_big.grad.clone()

        assert torch.allclose(grad0, grad1, atol=1e-4)

    def test_cumsum_reverse(self, feats, boundaries):
        cumsum = spc_render.cumsum(feats, boundaries, reverse=True)
        expected = torch.tensor([[2,2], [1,1], [6,6], [5,5], [3,3], [5,5]], device='cuda', dtype=torch.float)
        assert torch.equal(cumsum, expected)

    def test_cumsum_exclusive(self, feats, boundaries):
        cumsum = spc_render.cumsum(feats, boundaries, reverse=False, exclusive=True)
        expected = torch.tensor([[0,0], [1,1], [0,0], [1,1], [3,3], [0,0]], device='cuda', dtype=torch.float)
        assert torch.equal(cumsum, expected)

    def test_cumsum_exclusive_reverse(self, feats, boundaries):
        cumsum = spc_render.cumsum(feats, boundaries, reverse=True, exclusive=True)
        expected = torch.tensor([[1,1], [0,0], [5,5], [3,3], [0,0], [0,0]], device='cuda', dtype=torch.float)
        assert torch.equal(cumsum, expected)

    def test_cumprod(self, feats, boundaries):
        cumprod = spc_render.cumprod(feats, boundaries)
        expected = torch.tensor([[1,1], [1,1], [1,1], [2,2], [6,6], [5,5]], device='cuda', dtype=torch.float)
        assert torch.equal(cumprod, expected)

    def test_cumprod_big(self, feats_big, boundaries_big):
        fdim = feats_big.shape[-1]
        cumprod = spc_render.cumprod(feats_big.reshape(-1, fdim), boundaries_big)
        expected = torch.cumprod(feats_big, dim=1).reshape(-1, fdim)
        assert torch.allclose(cumprod, expected, atol=1e-4)

    def test_cumprod_big_backward(self, feats_big, boundaries_big):
        feats_big += 1e-3
        feats_big.requires_grad = True
        fdim = feats_big.shape[-1]

        if feats_big.grad is not None:
            feats_big.grad.detach_()
            feats_big.grad.zero_()
        cumprod = spc_render.cumprod(feats_big.reshape(-1, fdim), boundaries_big)
        loss = cumprod.sum()
        loss.backward()
        grad0 = feats_big.grad.clone()

        if feats_big.grad is not None:
            feats_big.grad.detach_()
            feats_big.grad.zero_()
        expected = torch.cumprod(feats_big, dim=1)
        loss = expected.sum()
        loss.backward()
        grad1 = feats_big.grad.clone()

        assert torch.allclose(grad0, grad1, atol=1e-2)

    def test_cumprod_reverse(self, feats, boundaries):
        cumprod = spc_render.cumprod(feats, boundaries, reverse=True)
        expected = torch.tensor([[1,1], [1,1], [6,6], [6,6], [3,3], [5,5]], device='cuda', dtype=torch.float)
        assert torch.equal(cumprod, expected)

    def test_cumprod_exclusive(self, feats, boundaries):
        cumprod = spc_render.cumprod(feats, boundaries, reverse=False, exclusive=True)
        expected = torch.tensor([[1,1], [1,1], [1,1], [1,1], [2,2], [1,1]], device='cuda', dtype=torch.float)
        assert torch.equal(cumprod, expected)

    def test_cumprod_exclusive_reverse(self, feats, boundaries):
        cumprod = spc_render.cumprod(feats, boundaries, reverse=True, exclusive=True)
        expected = torch.tensor([[1,1], [1,1], [6,6], [3,3], [1,1], [1,1]], device='cuda', dtype=torch.float)
        assert torch.equal(cumprod, expected)

    def test_exponential_integration(self, feats, tau, boundaries):
        integrated_feats, transmittance = spc_render.exponential_integration(feats, tau, boundaries, exclusive=False)
        expected_feats = torch.tensor([[0,0], [0.4651,0.4651], [1.1627, 1.1627]], device='cuda', dtype=torch.float)
        expected_transmittance = torch.tensor([[0.0],[0.0],[0.0],[0.2325],[0.0],[0.2325]], device='cuda', dtype=torch.float)
        assert torch.allclose(integrated_feats, expected_feats, atol=1e-4)
        assert torch.allclose(transmittance, expected_transmittance, atol=1e-4)
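
All of these pack operations share one convention: `boundaries` is a flat boolean tensor in which True marks the first element of a pack. A small self-contained sketch of that layout, using the same `spc_render` API exercised above:
```
import torch
import kaolin.render.spc as spc_render

feats = torch.tensor([[1.], [2.], [3.], [4.]], device='cuda')
# Two packs: entries [0:3) and [3:4)
boundaries = torch.tensor([1, 0, 0, 1], device='cuda', dtype=torch.bool)
print(spc_render.sum_reduce(feats, boundaries))  # [[6.], [4.]]
print(spc_render.cumsum(feats, boundaries))      # [[1.], [3.], [6.], [4.]]
```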
350
tests/python/kaolin/render/spc/test_raytrace.py
Normal file
@@ -0,0 +1,350 @@
# Copyright (c) 2021,22 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch

from kaolin.ops.spc import scan_octrees, generate_points, bits_to_uint8
from kaolin.render.spc import unbatched_raytrace, mark_pack_boundaries

class TestRaytrace:
    @pytest.fixture(autouse=True)
    def octree(self):
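        # Each row is one node's 8-child occupancy bitmask in breadth-first
        # order; the flip presumably matches the bit ordering expected by
        # bits_to_uint8 (an assumption about the encoding, not tested here).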
|
||||
bits_t = torch.tensor([
|
||||
[0, 0, 0, 1, 0, 1, 1, 1],
|
||||
[1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 1, 1],
|
||||
[0, 0, 0, 0, 0, 0, 0, 1], [ 0, 0, 0, 0, 0, 0, 0, 0]],
|
||||
device='cuda', dtype=torch.float)
|
||||
return bits_to_uint8(torch.flip(bits_t, dims=(-1,)))

    @pytest.fixture(autouse=True)
    def length(self, octree):
        return torch.tensor([len(octree)], dtype=torch.int)

    @pytest.fixture(autouse=True)
    def max_level_pyramids_exsum(self, octree, length):
        # scan_octrees returns the tuple (max_level, pyramids, exsum)
        return scan_octrees(octree, length)

    @pytest.fixture(autouse=True)
    def pyramid(self, max_level_pyramids_exsum):
        return max_level_pyramids_exsum[1].squeeze(0)

    @pytest.fixture(autouse=True)
    def exsum(self, max_level_pyramids_exsum):
        return max_level_pyramids_exsum[2]

    @pytest.fixture(autouse=True)
    def point_hierarchy(self, octree, pyramid, exsum):
        return generate_points(octree, pyramid.unsqueeze(0), exsum)

    def _generate_rays_origin(self, height, width, camera_dist):
        """Make simple orthographic ray origins on a height x width grid."""
        camera_dist = torch.tensor(camera_dist, dtype=torch.float, device='cuda')
        camera_dist = camera_dist.repeat(height, width)
        ii, jj = torch.meshgrid(
            torch.arange(height, dtype=torch.float, device='cuda'),
            torch.arange(width, dtype=torch.float, device='cuda'))
        # map pixel indices to pixel-center coordinates in (-1, 1)
        ii = (ii * 2. / height) - (height - 1.) / height
        jj = (jj * 2. / width) - (width - 1.) / width
        return torch.stack([ii, jj, camera_dist], dim=-1).reshape(-1, 3)

    def test_raytrace_positive(self, octree, point_hierarchy, pyramid, exsum):
        height = 4
        width = 4
        direction = torch.tensor([[0., 0., 1.]], dtype=torch.float,
                                 device='cuda').repeat(height * width, 1)
        origin = self._generate_rays_origin(height, width, -3)
        ridx, pidx = unbatched_raytrace(
            octree, point_hierarchy, pyramid, exsum, origin, direction, 2, return_depth=False)

        expected_nuggets = torch.tensor([
            [ 0, 5],
            [ 0, 6],
            [ 0, 13],
            [ 0, 14],
            [ 1, 7],
            [ 1, 8],
            [ 2, 15],
            [ 4, 9],
            [ 4, 10],
            [ 5, 11],
            [ 5, 12]], device='cuda', dtype=torch.int)
        assert torch.equal(ridx, expected_nuggets[..., 0])
        assert torch.equal(pidx, expected_nuggets[..., 1])

    def test_raytrace_negative(self, octree, point_hierarchy, pyramid, exsum):
        height = 4
        width = 4
        direction = torch.tensor([[0., 0., -1.]], dtype=torch.float,
                                 device='cuda').repeat(height * width, 1)
        origin = self._generate_rays_origin(height, width, 3)
        ridx, pidx = unbatched_raytrace(
            octree, point_hierarchy, pyramid, exsum, origin, direction, 2, return_depth=False)

        expected_nuggets = torch.tensor([
            [ 0, 14],
            [ 0, 13],
            [ 0, 6],
            [ 0, 5],
            [ 1, 8],
            [ 1, 7],
            [ 2, 15],
            [ 4, 10],
            [ 4, 9],
            [ 5, 12],
            [ 5, 11]], device='cuda', dtype=torch.int)
        assert torch.equal(ridx, expected_nuggets[..., 0])
        assert torch.equal(pidx, expected_nuggets[..., 1])

    def test_raytrace_none(self, octree, point_hierarchy, pyramid, exsum):
        height = 4
        width = 4
        direction = torch.tensor([[0., 0., 1.]], dtype=torch.float,
                                 device='cuda').repeat(height * width, 1)
        origin = self._generate_rays_origin(height, width, 3)
        # rays start at z = 3 and march away from the grid, so nothing is hit
        ridx, pidx, depth = unbatched_raytrace(
            octree, point_hierarchy, pyramid, exsum, origin, direction, 2,
            return_depth=True, with_exit=True)

        expected_nuggets = torch.zeros((0, 2), device='cuda', dtype=torch.int)
        expected_depths = torch.zeros((0, 2), device='cuda', dtype=torch.float)
        assert torch.equal(ridx, expected_nuggets[..., 0])
        assert torch.equal(pidx, expected_nuggets[..., 1])
        assert torch.equal(depth, expected_depths)

    def test_raytrace_coarser(self, octree, point_hierarchy, pyramid, exsum):
        height = 4
        width = 4
        direction = torch.tensor([[0., 0., 1.]], dtype=torch.float,
                                 device='cuda').repeat(height * width, 1)
        origin = self._generate_rays_origin(height, width, -3)
        # query at level 1, one level coarser than the other tests
        ridx, pidx = unbatched_raytrace(
            octree, point_hierarchy, pyramid, exsum, origin, direction, 1, return_depth=False)

        expected_nuggets = torch.tensor([
            [ 0, 1],
            [ 0, 2],
            [ 1, 1],
            [ 1, 2],
            [ 2, 3],
            [ 3, 3],
            [ 4, 1],
            [ 4, 2],
            [ 5, 1],
            [ 5, 2],
            [ 6, 3],
            [ 7, 3],
            [ 8, 4],
            [ 9, 4],
            [12, 4],
            [13, 4]], device='cuda', dtype=torch.int)
        assert torch.equal(ridx, expected_nuggets[..., 0])
        assert torch.equal(pidx, expected_nuggets[..., 1])

    def test_raytrace_with_depth(self, octree, point_hierarchy, pyramid, exsum):
        height = 4
        width = 4
        direction = torch.tensor([[0., 0., -1.]], dtype=torch.float,
                                 device='cuda').repeat(height * width, 1)
        origin = self._generate_rays_origin(height, width, 3)
        ridx, pidx, depth = unbatched_raytrace(
            octree, point_hierarchy, pyramid, exsum, origin, direction, 2, return_depth=True)

        expected_nuggets = torch.tensor([
            [ 0, 14],
            [ 0, 13],
            [ 0, 6],
            [ 0, 5],
            [ 1, 8],
            [ 1, 7],
            [ 2, 15],
            [ 4, 10],
            [ 4, 9],
            [ 5, 12],
            [ 5, 11]], device='cuda', dtype=torch.int)
        assert torch.equal(ridx, expected_nuggets[..., 0])
        assert torch.equal(pidx, expected_nuggets[..., 1])

        # without with_exit, depth holds only the entry distance of each hit
        expected_depth = torch.tensor([
            [2.0],
            [2.5],
            [3.0],
            [3.5],
            [3.0],
            [3.5],
            [3.5],
            [3.0],
            [3.5],
            [3.0],
            [3.5]], device='cuda', dtype=torch.float)
        assert torch.equal(depth, expected_depth)

    def test_raytrace_with_depth_with_exit(self, octree, point_hierarchy, pyramid, exsum):
        height = 4
        width = 4
        direction = torch.tensor([[0., 0., -1.]], dtype=torch.float,
                                 device='cuda').repeat(height * width, 1)
        origin = self._generate_rays_origin(height, width, 3)
        ridx, pidx, depth = unbatched_raytrace(
            octree, point_hierarchy, pyramid, exsum, origin, direction, 2,
            return_depth=True, with_exit=True)

        expected_nuggets = torch.tensor([
            [ 0, 14],
            [ 0, 13],
            [ 0, 6],
            [ 0, 5],
            [ 1, 8],
            [ 1, 7],
            [ 2, 15],
            [ 4, 10],
            [ 4, 9],
            [ 5, 12],
            [ 5, 11]], device='cuda', dtype=torch.int)
        assert torch.equal(ridx, expected_nuggets[..., 0])
        assert torch.equal(pidx, expected_nuggets[..., 1])

        # with with_exit=True, each row holds the (entry, exit) distances
        expected_depth = torch.tensor([
            [2.0, 2.5],
            [2.5, 3.0],
            [3.0, 3.5],
            [3.5, 4.0],
            [3.0, 3.5],
            [3.5, 4.0],
            [3.5, 4.0],
            [3.0, 3.5],
            [3.5, 4.0],
            [3.0, 3.5],
            [3.5, 4.0]], device='cuda', dtype=torch.float)

        assert torch.equal(depth, expected_depth)

    @pytest.mark.parametrize('return_depth,with_exit', [(False, False), (True, False), (True, True)])
    def test_raytrace_inside(self, octree, point_hierarchy, pyramid, exsum, return_depth, with_exit):
        height = 4
        width = 4
        direction = torch.tensor([[0., 0., -1.]], dtype=torch.float,
                                 device='cuda').repeat(height * width, 1)
        origin = self._generate_rays_origin(height, width, 0.9)
        outputs = unbatched_raytrace(
            octree, point_hierarchy, pyramid, exsum, origin, direction, 2,
            return_depth=return_depth, with_exit=with_exit)

        ridx = outputs[0]
        pidx = outputs[1]

        # rays start at z = 0.9, inside the volume: the voxel containing the
        # origin (pidx 14 in test_raytrace_negative) is not reported
        expected_nuggets = torch.tensor([
            [ 0, 13],
            [ 0, 6],
            [ 0, 5],
            [ 1, 8],
            [ 1, 7],
            [ 2, 15],
            [ 4, 10],
            [ 4, 9],
            [ 5, 12],
            [ 5, 11]], device='cuda', dtype=torch.int)
        assert torch.equal(ridx, expected_nuggets[..., 0])
        assert torch.equal(pidx, expected_nuggets[..., 1])
        if return_depth:
            depth = outputs[2]
            if with_exit:
                expected_depth = torch.tensor([
                    [0.4, 0.9],
                    [0.9, 1.4],
                    [1.4, 1.9],
                    [0.9, 1.4],
                    [1.4, 1.9],
                    [1.4, 1.9],
                    [0.9, 1.4],
                    [1.4, 1.9],
                    [0.9, 1.4],
                    [1.4, 1.9]], device='cuda', dtype=torch.float)
            else:
                expected_depth = torch.tensor([
                    [0.4],
                    [0.9],
                    [1.4],
                    [0.9],
                    [1.4],
                    [1.4],
                    [0.9],
                    [1.4],
                    [0.9],
                    [1.4]], device='cuda', dtype=torch.float)
            assert torch.allclose(depth, expected_depth)

    def test_ambiguous_raytrace(self):
        # TODO(ttakikawa):
        # Since 0.10.0, the behaviour of raytracing exactly between voxels
        # has been changed from no hits at all to hitting all adjacent voxels.
        # This has numerical ramifications, because it may cause instability /
        # error in the estimation of optical thickness in the volume rendering
        # process, among other issues. However, we have found that it doesn't
        # lead to any obvious visual errors, whereas the no-hit case causes
        # speckle noise. We will eventually do a more thorough analysis of the
        # numerical considerations of this behaviour, but for now we choose to
        # prevent obvious visual errors.
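        #
        # Concretely, in the expectations below: ray 0 starts at [0, 0, 3] and
        # travels along -z on the shared faces of all eight level-1 voxels, so
        # it now reports every one of them, while ray 1 runs along the main
        # diagonal and reports the two opposite corner voxels it touches.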

        octree = torch.tensor([255], dtype=torch.uint8, device='cuda')
        length = torch.tensor([1], dtype=torch.int32)
        max_level, pyramids, exsum = scan_octrees(octree, length)
        point_hierarchy = generate_points(octree, pyramids, exsum)
        origin = torch.tensor([
            [0., 0., 3.],
            [3., 3., 3.]], dtype=torch.float, device='cuda')
        direction = torch.tensor([
            [0., 0., -1.],
            [-1. / 3., -1. / 3., -1. / 3.]], dtype=torch.float, device='cuda')
        ridx, pidx, depth = unbatched_raytrace(
            octree, point_hierarchy, pyramids[0], exsum, origin, direction, 1, return_depth=True)
        expected_nuggets = torch.tensor([
            [0, 2],
            [0, 1],
            [0, 4],
            [0, 6],
            [0, 3],
            [0, 5],
            [0, 8],
            [0, 7],
            [1, 8],
            [1, 1]], device='cuda', dtype=torch.int)
        assert torch.equal(ridx, expected_nuggets[..., 0])
        assert torch.equal(pidx, expected_nuggets[..., 1])

    def test_mark_first_positive(self, octree, point_hierarchy, pyramid, exsum):
        height = 4
        width = 4
        direction = torch.tensor([[0., 0., 1.]], dtype=torch.float,
                                 device='cuda').repeat(height * width, 1)
        origin = self._generate_rays_origin(height, width, -3)
        ridx, pidx = unbatched_raytrace(
            octree, point_hierarchy, pyramid, exsum, origin, direction, 2, return_depth=False)
        first_hits = mark_pack_boundaries(ridx)
        # one flag per nugget: True marks the first hit of each ray's pack
        expected_first_hits = torch.tensor([1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0],
                                           device='cuda', dtype=torch.bool)
        assert torch.equal(first_hits, expected_first_hits)

    def test_mark_first_negative(self, octree, point_hierarchy, pyramid, exsum):
        height = 4
        width = 4
        direction = torch.tensor([[0., 0., -1.]], dtype=torch.float,
                                 device='cuda').repeat(height * width, 1)
        origin = self._generate_rays_origin(height, width, 3)
        ridx, pidx = unbatched_raytrace(
            octree, point_hierarchy, pyramid, exsum, origin, direction, 2, return_depth=False)
        first_hits = mark_pack_boundaries(ridx)
        expected_first_hits = torch.tensor([1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0],
                                           device='cuda', dtype=torch.bool)
        assert torch.equal(first_hits, expected_first_hits)

140
tests/python/kaolin/render/test_camera.py
Normal file
@@ -0,0 +1,140 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import random
import pytest

import numpy as np
import torch

from kaolin.render.camera import perspective_camera, generate_perspective_projection, \
    rotate_translate_points, generate_rotate_translate_matrices, \
    generate_transformation_matrix


@pytest.mark.parametrize("device", ["cuda"])
@pytest.mark.parametrize("batch_size", [5])
@pytest.mark.parametrize("height", [256])
@pytest.mark.parametrize("width", [512])
class TestCamera:
    @pytest.fixture(autouse=True)
    def camera_pos(self, batch_size, device):
        # shape: (batch_size, 3)
        return torch.tensor([[0., 0., 4.]], dtype=torch.float,
                            device=device).repeat(batch_size, 1)

    @pytest.fixture(autouse=True)
    def object_pos(self, batch_size, device):
        # shape: (batch_size, 3)
        return torch.tensor([[0., 0., 0.]], dtype=torch.float,
                            device=device).repeat(batch_size, 1)

    @pytest.fixture(autouse=True)
    def camera_up(self, batch_size, device):
        # shape: (batch_size, 3)
        return torch.tensor([[0., 1., 0.]], dtype=torch.float,
                            device=device).repeat(batch_size, 1)

    @pytest.fixture(autouse=True)
    def camera_fovy(self):
        # tan(fov_y / 2) = 1 / 2.5, so fov_y is around 45 degrees
        angle = np.arctan(1.0 / 2.5) * 2
        return angle
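
    # Worked out, for reference:
    #   fov_y = 2 * arctan(1 / 2.5) ~= 43.6 degrees
    #   y-scale = 1 / tan(fov_y / 2) = 2.5
    #   x-scale = y-scale / aspect = 2.5 / (width / height)
    # which matches the [x-scale, y-scale, -1] entries checked in
    # test_camera_proj below.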

    def test_camera_rot(self, batch_size, device, width, height, camera_pos,
                        object_pos, camera_up):
        # shape: (batch_size, 3, 3)
        mtx_rot, _ = generate_rotate_translate_matrices(
            camera_pos, object_pos, camera_up)
        mtx_rot2 = torch.tensor([[[1., 0., 0.],
                                  [0., 1., 0.],
                                  [0., 0., 1.]]],
                                dtype=torch.float,
                                device=device).repeat(batch_size, 1, 1)

        assert torch.allclose(mtx_rot, mtx_rot2, rtol=1e-05, atol=1e-08,
                              equal_nan=False)

    def test_camera_trans(self, batch_size, device, width, height, camera_pos,
                          object_pos, camera_up):
        # shape: (batch_size, 3, 1)
        _, mtx_trans = generate_rotate_translate_matrices(
            camera_pos, object_pos, camera_up)
        mtx_trans2 = torch.tensor([[0., 0., 4.]],
                                  dtype=torch.float,
                                  device=device).repeat(batch_size, 1)
        assert torch.allclose(mtx_trans, mtx_trans2, rtol=1e-05, atol=1e-08,
                              equal_nan=False)

    def test_camera_transform(self, batch_size, device, width, height, camera_pos,
                              object_pos, camera_up):
        mtx_transform = generate_transformation_matrix(
            camera_pos, object_pos, camera_up)
        mtx_transform2 = torch.tensor([[[1., 0., 0.],
                                        [0., 1., 0.],
                                        [0., 0., 1.],
                                        [0., 0., -4.]]],
                                      dtype=torch.float,
                                      device=device).repeat(batch_size, 1, 1)
        assert torch.allclose(mtx_transform, mtx_transform2)

    def test_camera_proj(self, batch_size, height, width, device, camera_fovy):
        # shape: (3, 1)
        # we support arbitrary height and width
        mtx_proj = generate_perspective_projection(camera_fovy,
                                                   ratio=width / height)
        mtx_proj = mtx_proj.to(device)

        mtx_proj2 = torch.tensor([[2.5 / (width / height)], [2.5], [-1]],
                                 dtype=torch.float,
                                 device=device)
        assert torch.allclose(mtx_proj, mtx_proj2, rtol=1e-05, atol=1e-08,
                              equal_nan=False)

    @pytest.fixture(autouse=True)
    def vertices(self, batch_size, device):
        return torch.tensor([[[0., 0., 0.],
                              [1., 1., 0.],
                              [1., 1., 1.],
                              [-2., -3., -4.]]], dtype=torch.float,
                            device=device).repeat(batch_size, 1, 1)

    def test_cmp_transform_rotate_translate(self, batch_size, device, width, height,
                                            camera_pos, object_pos, camera_up, vertices):
        mtx_rot, mtx_trans = generate_rotate_translate_matrices(
            camera_pos, object_pos, camera_up)
        mtx_transform = generate_transformation_matrix(
            camera_pos, object_pos, camera_up)
        vertices_camera = rotate_translate_points(vertices, mtx_rot, mtx_trans)
        padded_vertices = torch.nn.functional.pad(
            vertices, (0, 1), mode='constant', value=1.
        )
        vertices_camera2 = padded_vertices @ mtx_transform
        assert torch.allclose(vertices_camera, vertices_camera2)
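        # In other words (a sketch of the convention assumed here): with
        # rotation R of shape (3, 3) and translation t of shape (3,), the
        # (4, 3) transformation matrix stacks R on top of t, so that
        #   cat([v, 1]) @ cat([R, t[None]]) == v @ R + t
        # which is why both camera-space results above must match.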

242
tests/python/kaolin/rep/test_rep_spc.py
Normal file
@@ -0,0 +1,242 @@
# Copyright (c) 2021 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pytest
import torch
from itertools import combinations, chain

from kaolin.rep import Spc
from kaolin.ops.spc import bits_to_uint8


def _test_func(octrees, lengths, another_arg=None, **kwargs):
    remaining_kwargs = kwargs.keys() - Spc.KEYS
    if len(remaining_kwargs) > 0:
        raise TypeError("_test_func got an unexpected keyword argument "
                        f"{list(remaining_kwargs)[0]}")
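
# _test_func stands in for user code invoked as _test_func(**spc.to_dict(),
# extra=...): keys produced by to_dict() either bind to the named parameters
# or are absorbed by **kwargs, while any key outside Spc.KEYS raises TypeError.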


class TestSpc:
    @pytest.fixture(autouse=True)
    def octrees(self):
        bits_t = torch.tensor([
            # first octree (6 bytes)
            [0, 0, 0, 1, 0, 0, 0, 1],
            [0, 0, 0, 0, 0, 1, 1, 0], [0, 0, 1, 0, 0, 0, 0, 0],
            [1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0],
            # second octree (5 bytes)
            [1, 0, 0, 0, 0, 0, 0, 0],
            [0, 1, 1, 1, 0, 0, 0, 0],
            [0, 0, 1, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1], [0, 1, 0, 1, 0, 1, 0, 1]],
            device='cuda', dtype=torch.float)
        return bits_to_uint8(torch.flip(bits_t, dims=(-1,)))

    @pytest.fixture(autouse=True)
    def lengths(self):
        return torch.tensor([6, 5], dtype=torch.int)

    @pytest.fixture(autouse=True)
    def expected_max_level(self):
        return 3

    @pytest.fixture(autouse=True)
    def expected_pyramids(self):
        return torch.tensor(
            [[[1, 2, 3, 3, 0], [0, 1, 3, 6, 9]],
             [[1, 1, 3, 13, 0], [0, 1, 2, 5, 18]]], dtype=torch.int32)
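        # For each octree i, pyramids[i, 0] holds the number of points per
        # level and pyramids[i, 1] its exclusive prefix sum (per-level
        # offsets): 1, 2, 3, 3 points at levels 0..3 gives 0, 1, 3, 6, 9.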

    @pytest.fixture(autouse=True)
    def expected_exsum(self):
        return torch.tensor(
            [0, 2, 4, 5, 6, 7, 8, 0, 1, 4, 5, 13, 17],
            dtype=torch.int32, device='cuda')

    @pytest.fixture(autouse=True)
    def expected_point_hierarchies(self):
        return torch.tensor([
            [0, 0, 0],
            [0, 0, 0], [1, 0, 0],
            [0, 0, 1], [0, 1, 0], [3, 0, 1],
            [1, 1, 3], [1, 3, 1], [6, 1, 3],

            [0, 0, 0],
            [1, 1, 1],
            [3, 2, 2], [3, 2, 3], [3, 3, 2],
            [7, 4, 5], [6, 4, 6], [6, 4, 7], [6, 5, 6], [6, 5, 7], [7, 4, 6],
            [7, 4, 7], [7, 5, 6], [7, 5, 7], [6, 6, 4], [6, 7, 4],
            [7, 6, 4], [7, 7, 4]
        ], device='cuda', dtype=torch.int16)

    def test_non_init_private_attr(self, octrees, lengths):
        """Check that private placeholder attributes
        are not initialized by the constructor by default"""
        spc = Spc(octrees, lengths)
        assert torch.equal(spc.octrees, octrees)
        assert torch.equal(spc.lengths, lengths)
        assert spc._max_level is None
        assert spc._pyramids is None
        assert spc._exsum is None
        assert spc._point_hierarchies is None

    def test_init_private_attr(self, octrees, lengths, expected_max_level,
                               expected_pyramids, expected_exsum, expected_point_hierarchies):
        """Check that private placeholder attributes
        are initialized if specified at the constructor"""
        spc = Spc(octrees, lengths, expected_max_level, expected_pyramids,
                  expected_exsum, expected_point_hierarchies)
        assert torch.equal(spc.octrees, octrees)
        assert torch.equal(spc.lengths, lengths)
        assert spc._max_level == expected_max_level
        assert torch.equal(spc._pyramids, expected_pyramids)
        assert torch.equal(spc._exsum, expected_exsum)
        assert torch.equal(spc._point_hierarchies, expected_point_hierarchies)

    def test_scan_octrees_properties(self, octrees, lengths, expected_max_level,
                                     expected_pyramids, expected_exsum):
        """Check that properties generated by scan_octrees are accessible"""
        spc = Spc(octrees, lengths)
        assert torch.equal(spc.octrees, octrees)
        assert torch.equal(spc.lengths, lengths)
        assert spc.max_level == expected_max_level
        assert torch.equal(spc.pyramids, expected_pyramids)
        assert torch.equal(spc.exsum, expected_exsum)
        # This is checking that:
        # 1) the data pointer of each property is the same as that of the
        #    corresponding private attribute
        # 2) the private attributes are not recomputed
        assert spc._pyramids.data_ptr() == spc.pyramids.data_ptr()
        assert spc._exsum.data_ptr() == spc.exsum.data_ptr()
        # _point_hierarchies is still not initialized
        assert spc._point_hierarchies is None

    def test_generate_points_properties(self, octrees, lengths,
                                        expected_point_hierarchies):
        """Check that properties generated by generate_points are accessible"""
        spc = Spc(octrees, lengths)
        assert spc._point_hierarchies is None
        assert torch.equal(spc.point_hierarchies, expected_point_hierarchies)
        # Accessing point_hierarchies generated _max_level, _pyramids and
        # _exsum, as they are dependencies
        assert spc._max_level is not None
        assert spc._pyramids is not None
        assert spc._exsum is not None

        # Check that accessing the point_hierarchies property again
        # does not recompute the private attributes
        old_pyramids_data_ptr = spc._pyramids.data_ptr()
        old_exsum_data_ptr = spc._exsum.data_ptr()
        assert spc._point_hierarchies.data_ptr() == spc.point_hierarchies.data_ptr()
        assert old_pyramids_data_ptr == spc._pyramids.data_ptr()
        assert old_exsum_data_ptr == spc._exsum.data_ptr()

    def test_from_list(self, octrees, lengths):
        octrees_list = []
        start_idx = 0
        for length in lengths:
            octrees_list.append(octrees[start_idx:start_idx + length])
            start_idx += length
        spc = Spc.from_list(octrees_list)
        assert torch.equal(spc.octrees, octrees)
        assert torch.equal(spc.lengths, lengths)

    def test_cpu_init(self, octrees, lengths, expected_max_level,
                      expected_pyramids, expected_exsum, expected_point_hierarchies):
        octrees = octrees.cpu()
        expected_exsum = expected_exsum.cpu()
        expected_point_hierarchies = expected_point_hierarchies.cpu()
        spc = Spc(octrees, lengths, expected_max_level, expected_pyramids,
                  expected_exsum, expected_point_hierarchies)
        assert torch.equal(spc.octrees, octrees)
        assert torch.equal(spc.lengths, lengths)
        assert spc.max_level == expected_max_level
        assert torch.equal(spc.pyramids, expected_pyramids)
        assert torch.equal(spc.exsum, expected_exsum)
        assert torch.equal(spc.point_hierarchies, expected_point_hierarchies)

    @pytest.mark.parametrize('using_to', [False, True])
    def test_to_cpu(self, using_to, octrees, lengths, expected_max_level,
                    expected_pyramids, expected_exsum, expected_point_hierarchies):
        spc = Spc(octrees, lengths, expected_max_level, expected_pyramids,
                  expected_exsum, expected_point_hierarchies)
        if using_to:
            spc = spc.to('cpu')
        else:
            spc = spc.cpu()
        assert torch.equal(spc.octrees, octrees.cpu())
        assert torch.equal(spc.lengths, lengths)
        assert spc.max_level == expected_max_level
        assert torch.equal(spc.pyramids, expected_pyramids)
        assert torch.equal(spc.exsum, expected_exsum.cpu())
        assert torch.equal(spc.point_hierarchies, expected_point_hierarchies.cpu())

    @pytest.mark.parametrize('using_to', [False, True])
    def test_to_cuda(self, using_to, octrees, lengths, expected_max_level,
                     expected_pyramids, expected_exsum, expected_point_hierarchies):
        spc = Spc(octrees.cpu(), lengths, expected_max_level, expected_pyramids,
                  expected_exsum.cpu(), expected_point_hierarchies.cpu())
        if using_to:
            spc = spc.to('cuda')
        else:
            spc = spc.cuda()
        assert torch.equal(spc.octrees, octrees)
        assert torch.equal(spc.lengths, lengths)
        assert spc.max_level == expected_max_level
        assert torch.equal(spc.pyramids, expected_pyramids)
        assert torch.equal(spc.exsum, expected_exsum)
        assert torch.equal(spc.point_hierarchies, expected_point_hierarchies)

    def test_to_dict_default(self, octrees, lengths, expected_max_level,
                             expected_pyramids, expected_exsum, expected_point_hierarchies):
        spc = Spc(octrees, lengths)
        d = spc.to_dict()
        assert d.keys() == {'octrees', 'lengths', 'max_level', 'pyramids',
                            'exsum', 'point_hierarchies'}
        assert torch.equal(d['octrees'], octrees)
        assert torch.equal(d['lengths'], lengths)
        assert d['max_level'] == expected_max_level
        assert torch.equal(d['pyramids'], expected_pyramids)
        assert torch.equal(d['exsum'], expected_exsum)
        assert torch.equal(d['point_hierarchies'], expected_point_hierarchies)

    @pytest.mark.parametrize('keys', list(chain(*[combinations(
        ['octrees', 'lengths', 'max_level', 'pyramids', 'exsum', 'point_hierarchies'],
        i) for i in range(1, 7)])))
    def test_to_dict_with_keys(self, keys, octrees, lengths,
                               expected_max_level, expected_pyramids,
                               expected_exsum, expected_point_hierarchies):
        keys = set(keys)
        spc = Spc(octrees, lengths)
        d = spc.to_dict(keys)
        assert d.keys() == keys
        if 'octrees' in keys:
            assert torch.equal(d['octrees'], octrees)
        if 'lengths' in keys:
            assert torch.equal(d['lengths'], lengths)
        if 'max_level' in keys:
            assert d['max_level'] == expected_max_level
        if 'pyramids' in keys:
            assert torch.equal(d['pyramids'], expected_pyramids)
        if 'exsum' in keys:
            assert torch.equal(d['exsum'], expected_exsum)
        if 'point_hierarchies' in keys:
            assert torch.equal(d['point_hierarchies'], expected_point_hierarchies)

    def test_to_dict_kwargs(self, octrees, lengths):
        spc = Spc(octrees, lengths)
        _test_func(**spc.to_dict(), another_arg=1)

    def test_typo_to_dict_kwargs(self, octrees, lengths):
        spc = Spc(octrees, lengths)
        with pytest.raises(TypeError,
                           match="_test_func got an unexpected keyword argument anotherarg"):
            # typo on purpose
            _test_func(**spc.to_dict(), anotherarg=1)

1181
tests/python/kaolin/rep/test_surface_mesh.py
Normal file
603
tests/python/kaolin/utils/test_testing.py
Normal file
@@ -0,0 +1,603 @@
# Copyright (c) 2019,20-21 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy
from collections import namedtuple
import logging
import numpy as np
import pytest
import random
import torch

from kaolin.ops.random import random_tensor
from kaolin.ops.spc.uint8 import bits_to_uint8

from kaolin.utils import testing


logger = logging.getLogger(__name__)

sample_tuple = namedtuple('sample_tuple', ['A', 'B', 'C', 'D'])


class TestCheckTensor:
    @pytest.fixture(autouse=True)
    def shape(self):
        return (4, 4)

    @pytest.fixture(autouse=True)
    def dtype(self):
        return torch.float

    @pytest.fixture(autouse=True)
    def device(self):
        return 'cpu'

    @pytest.fixture(autouse=True)
    def tensor(self, shape, dtype, device):
        return random_tensor(0, 256, shape=shape, dtype=dtype, device=device)

    def test_tensor_success(self, tensor, shape, dtype, device):
        assert testing.check_tensor(tensor, shape, dtype, device)

    @pytest.mark.parametrize("partial_shape", [(4, None), (None, 4)])
    def test_tensor_partial_shape_success(self, tensor, partial_shape, dtype,
                                          device):
        assert testing.check_tensor(tensor, partial_shape, dtype, device)

    def test_tensor_default_success(self, tensor):
        assert testing.check_tensor(tensor)

    @pytest.mark.parametrize("wrong_shape", [(3, 3)])
    def test_tensor_fail1(self, tensor, wrong_shape, dtype, device):
        with pytest.raises(ValueError,
                           match=r"tensor shape is torch.Size\(\[4, 4\]\), should be \(3, 3\)"):
            testing.check_tensor(tensor, wrong_shape, dtype, device)
        assert not testing.check_tensor(tensor, wrong_shape, dtype, device,
                                        throw=False)

    @pytest.mark.parametrize("wrong_dtype", [torch.long])
    def test_tensor_fail2(self, tensor, shape, wrong_dtype, device):
        with pytest.raises(TypeError,
                           match="tensor dtype is torch.float32, should be torch.int64"):
            testing.check_tensor(tensor, shape, wrong_dtype, device)
        assert not testing.check_tensor(tensor, shape, wrong_dtype, device,
                                        throw=False)

    @pytest.mark.parametrize("wrong_device", ['cuda'])
    def test_tensor_fail3(self, tensor, shape, dtype, wrong_device):
        with pytest.raises(TypeError,
                           match="tensor device is cpu, should be cuda"):
            testing.check_tensor(tensor, shape, dtype, wrong_device)
        assert not testing.check_tensor(tensor, shape, dtype, wrong_device,
                                        throw=False)


class TestCheckBatchedTensor:
    @pytest.fixture(autouse=True)
    def shape_per_tensor(self):
        return torch.LongTensor([[1, 1], [2, 2]])

    @pytest.fixture(autouse=True)
    def batch_size(self, shape_per_tensor):
        return shape_per_tensor.shape[0]

    @pytest.fixture(autouse=True)
    def max_shape(self):
        return (4, 4)

    @pytest.fixture(autouse=True)
    def last_dim(self):
        return 2

    @pytest.fixture(autouse=True)
    def dtype(self):
        return torch.float

    @pytest.fixture(autouse=True)
    def device(self):
        return 'cpu'

    @pytest.fixture(autouse=True)
    def padding_value(self):
        return -1.

    @pytest.fixture(autouse=True)
    def total_numel(self, shape_per_tensor):
        return torch.sum(torch.prod(shape_per_tensor, dim=1))
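        # For shape_per_tensor [[1, 1], [2, 2]] this is 1 * 1 + 2 * 2 = 5:
        # a packed tensor stacks all sub-tensor elements along dim 0,
        # keeping only the shared last_dim.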

    @pytest.fixture(autouse=True)
    def packed_tensor(self, total_numel, last_dim, dtype, device):
        return random_tensor(0, 256, shape=(total_numel, last_dim),
                             dtype=dtype, device=device)

    @pytest.fixture(autouse=True)
    def padded_tensor(self, batch_size, max_shape, last_dim, padding_value,
                      dtype, device):
        output = torch.full((batch_size, *max_shape, last_dim),
                            fill_value=padding_value,
                            dtype=dtype, device=device)
        output[0, :1, :1] = 0
        output[1, :2, :2] = 0
        return output
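        # Layout: each sub-tensor fills the top-left shape_per_tensor[i] block
        # of its (4, 4, last_dim) slice; every other entry keeps padding_value.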

    def test_packed_success(self, packed_tensor, total_numel, last_dim, dtype,
                            device):
        assert testing.check_packed_tensor(packed_tensor, total_numel, last_dim,
                                           dtype, device)

    def test_packed_default_success(self, packed_tensor):
        assert testing.check_packed_tensor(packed_tensor)

    @pytest.mark.parametrize("wrong_total_numel", [6])
    def test_packed_fail1(self, packed_tensor, wrong_total_numel, last_dim,
                          dtype, device):
        with pytest.raises(ValueError,
                           match='tensor total number of elements is 5, should be 6'):
            testing.check_packed_tensor(packed_tensor, wrong_total_numel,
                                        last_dim, dtype, device)
        assert not testing.check_packed_tensor(packed_tensor, wrong_total_numel,
                                               last_dim, dtype, device,
                                               throw=False)

    @pytest.mark.parametrize("wrong_last_dim", [3])
    def test_packed_fail2(self, packed_tensor, total_numel, wrong_last_dim,
                          dtype, device):
        with pytest.raises(ValueError,
                           match='tensor last_dim is 2, should be 3'):
            testing.check_packed_tensor(packed_tensor, total_numel,
                                        wrong_last_dim, dtype, device)
        assert not testing.check_packed_tensor(packed_tensor, total_numel,
                                               wrong_last_dim, dtype, device,
                                               throw=False)

    @pytest.mark.parametrize("wrong_dtype", [torch.long])
    def test_packed_fail3(self, packed_tensor, total_numel, last_dim,
                          wrong_dtype, device):
        with pytest.raises(TypeError,
                           match='tensor dtype is torch.float32, should be torch.int64'):
            testing.check_packed_tensor(packed_tensor, total_numel, last_dim,
                                        wrong_dtype, device)
        assert not testing.check_packed_tensor(packed_tensor, total_numel,
                                               last_dim, wrong_dtype, device,
                                               throw=False)

    @pytest.mark.parametrize("wrong_device", ['cuda'])
    def test_packed_fail4(self, packed_tensor, total_numel, last_dim, dtype,
                          wrong_device):
        with pytest.raises(TypeError,
                           match='tensor device is cpu, should be cuda'):
            testing.check_packed_tensor(packed_tensor, total_numel, last_dim,
                                        dtype, wrong_device)
        assert not testing.check_packed_tensor(packed_tensor, total_numel,
                                               last_dim, dtype, wrong_device,
                                               throw=False)

    def test_padded_success(self, padded_tensor, padding_value, shape_per_tensor,
                            batch_size, max_shape, last_dim, dtype, device):
        assert testing.check_padded_tensor(padded_tensor, padding_value,
                                           shape_per_tensor, batch_size,
                                           max_shape, last_dim, dtype, device)

    @pytest.mark.parametrize("partial_max_shape", [(4, None), (None, 4)])
    def test_padded_partial_shape_success(self, padded_tensor, padding_value,
                                          shape_per_tensor, batch_size,
                                          partial_max_shape, last_dim, dtype,
                                          device):
        assert testing.check_padded_tensor(padded_tensor, padding_value,
                                           shape_per_tensor, batch_size,
                                           partial_max_shape, last_dim, dtype,
                                           device)

    def test_padded_default_success(self, padded_tensor):
        assert testing.check_padded_tensor(padded_tensor)

    @pytest.mark.parametrize("wrong_padding_value", [-2])
    def test_padded_fail1(self, padded_tensor, wrong_padding_value, shape_per_tensor,
                          batch_size, max_shape, last_dim, dtype, device):
        with pytest.raises(ValueError,
                           match=r'tensor padding at \(0, 0, 1, 0\) is -1.0, should be -2'):
            testing.check_padded_tensor(padded_tensor, wrong_padding_value,
                                        shape_per_tensor, batch_size, max_shape,
                                        last_dim, dtype, device)
        assert not testing.check_padded_tensor(padded_tensor, wrong_padding_value,
                                               shape_per_tensor, batch_size,
                                               max_shape, last_dim, dtype, device,
                                               throw=False)

    @pytest.mark.parametrize("wrong_shape_per_tensor",
                             [torch.LongTensor([[1, 1], [1, 1]])])
    def test_padded_fail2(self, padded_tensor, padding_value, wrong_shape_per_tensor,
                          batch_size, max_shape, last_dim, dtype, device):
        with pytest.raises(ValueError,
                           match=r'tensor padding at \(1, 0, 1, 0\) is 0.0, should be -1.0'):
            testing.check_padded_tensor(padded_tensor, padding_value,
                                        wrong_shape_per_tensor, batch_size,
                                        max_shape, last_dim, dtype, device)
        assert not testing.check_padded_tensor(padded_tensor, padding_value,
                                               wrong_shape_per_tensor, batch_size,
                                               max_shape, last_dim, dtype, device,
                                               throw=False)

    @pytest.mark.parametrize("wrong_batch_size", [3])
    def test_padded_fail3(self, padded_tensor, padding_value, shape_per_tensor,
                          wrong_batch_size, max_shape, last_dim, dtype, device):
        with pytest.raises(ValueError,
                           match='batch_size is 3, but there are 2 shapes in shape_per_tensor'):
            testing.check_padded_tensor(padded_tensor, padding_value,
                                        shape_per_tensor, wrong_batch_size,
                                        max_shape, last_dim, dtype, device)
        assert not testing.check_padded_tensor(padded_tensor, padding_value,
                                               shape_per_tensor, wrong_batch_size,
                                               max_shape, last_dim, dtype, device,
                                               throw=False)

    @pytest.mark.parametrize("wrong_batch_size", [3])
    def test_padded_fail4(self, padded_tensor, padding_value,
                          wrong_batch_size, max_shape, last_dim, dtype, device):
        with pytest.raises(ValueError,
                           match='tensor batch size is 2, should be 3'):
            testing.check_padded_tensor(padded_tensor, padding_value,
                                        batch_size=wrong_batch_size,
                                        max_shape=max_shape, last_dim=last_dim,
                                        dtype=dtype, device=device)
        assert not testing.check_padded_tensor(padded_tensor, padding_value,
                                               batch_size=wrong_batch_size,
                                               max_shape=max_shape,
                                               last_dim=last_dim, dtype=dtype,
                                               device=device, throw=False)

    @pytest.mark.parametrize("wrong_max_shape", [(4, 4, 4)])
    def test_padded_fail5(self, padded_tensor, padding_value, shape_per_tensor,
                          batch_size, wrong_max_shape, last_dim, dtype, device):
        with pytest.raises(ValueError,
                           match=r'tensor max_shape is torch.Size\(\[4, 4\]\), should be \(4, 4, 4\)'):
            testing.check_padded_tensor(padded_tensor, padding_value,
                                        shape_per_tensor, batch_size,
                                        wrong_max_shape, last_dim, dtype, device)
        assert not testing.check_padded_tensor(padded_tensor, padding_value,
                                               shape_per_tensor, batch_size,
                                               wrong_max_shape, last_dim, dtype,
                                               device, throw=False)

    @pytest.mark.parametrize("wrong_last_dim", [3])
    def test_padded_fail6(self, padded_tensor, padding_value, shape_per_tensor,
                          batch_size, max_shape, wrong_last_dim, dtype, device):
        with pytest.raises(ValueError,
                           match='tensor last_dim is 2, should be 3'):
            testing.check_padded_tensor(padded_tensor, padding_value,
                                        shape_per_tensor, batch_size, max_shape,
                                        wrong_last_dim, dtype, device)
        assert not testing.check_padded_tensor(padded_tensor, padding_value,
                                               shape_per_tensor, batch_size,
                                               max_shape, wrong_last_dim, dtype,
                                               device, throw=False)

    @pytest.mark.parametrize("wrong_dtype", [torch.long])
    def test_padded_fail7(self, padded_tensor, padding_value, shape_per_tensor,
                          batch_size, max_shape, last_dim, wrong_dtype, device):
        with pytest.raises(TypeError,
                           match='tensor dtype is torch.float32, should be torch.int64'):
            testing.check_padded_tensor(padded_tensor, padding_value,
                                        shape_per_tensor, batch_size, max_shape,
                                        last_dim, wrong_dtype, device)
        assert not testing.check_padded_tensor(padded_tensor, padding_value,
                                               shape_per_tensor, batch_size,
                                               max_shape, last_dim, wrong_dtype,
                                               device, throw=False)

    @pytest.mark.parametrize("wrong_device", ['cuda'])
    def test_padded_fail8(self, padded_tensor, padding_value, shape_per_tensor,
                          batch_size, max_shape, last_dim, dtype, wrong_device):
        with pytest.raises(TypeError,
                           match='tensor device is cpu, should be cuda'):
            testing.check_padded_tensor(padded_tensor, padding_value,
                                        shape_per_tensor, batch_size, max_shape,
                                        last_dim, dtype, wrong_device)
        assert not testing.check_padded_tensor(padded_tensor, padding_value,
                                               shape_per_tensor, batch_size,
                                               max_shape, last_dim, dtype,
                                               wrong_device, throw=False)

    def test_padded_fail9(self, padded_tensor, padding_value, batch_size,
                          max_shape, last_dim, dtype, device):
        with pytest.raises(ValueError,
                           match='shape_per_tensor should not be None if padding_value is set'):
            testing.check_padded_tensor(padded_tensor, padding_value,
                                        batch_size=batch_size,
                                        max_shape=max_shape, last_dim=last_dim,
                                        dtype=dtype, device=device)


class TestCheckSpcOctrees:
    @pytest.fixture(autouse=True)
    def device(self):
        return 'cuda'

    @pytest.fixture(autouse=True)
    def octrees(self, device):
        bits_t = torch.flip(torch.tensor(
            [[0, 0, 0, 0, 1, 0, 0, 0],
             [0, 0, 0, 1, 0, 0, 0, 1],
             [0, 0, 0, 0, 0, 0, 1, 1], [1, 1, 1, 1, 0, 0, 0, 0],

             [1, 0, 1, 0, 0, 0, 0, 0],
             [0, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 0, 1, 0],
             [0, 0, 0, 0, 0, 0, 1, 1], [0, 1, 0, 0, 0, 0, 0, 0]
             ], dtype=torch.bool, device=device), dims=(-1,))
        return bits_to_uint8(bits_t)

    @pytest.fixture(autouse=True)
    def lengths(self):
        return torch.tensor([4, 5], dtype=torch.int)

    @pytest.fixture(autouse=True)
    def batch_size(self):
        return 2

    @pytest.fixture(autouse=True)
    def level(self):
        return 3

    def test_spc_success(self, octrees, lengths, batch_size, level, device):
        assert testing.check_spc_octrees(octrees, lengths, batch_size, level, device)

    def test_spc_default_success(self, octrees, lengths):
        assert testing.check_spc_octrees(octrees, lengths)

    @pytest.mark.parametrize('wrong_device', ['cpu'])
    def test_spc_wrong_device(self, octrees, lengths, wrong_device):
        with pytest.raises(ValueError,
                           match='octrees is on cuda:0, should be on cpu'):
            testing.check_spc_octrees(octrees, lengths, device=wrong_device)

    @pytest.mark.parametrize('wrong_lengths',
                             [torch.tensor([3, 5], dtype=torch.int)])
    def test_spc_wrong_lengths(self, octrees, wrong_lengths):
        with pytest.raises(ValueError,
                           match='lengths at 0 is 3, but level 3 ends at length 4'):
            testing.check_spc_octrees(octrees, wrong_lengths)

    @pytest.mark.parametrize('wrong_batch_size', [3])
    def test_spc_wrong_batch_size(self, octrees, lengths, wrong_batch_size):
        with pytest.raises(ValueError,
                           match=r'lengths is of shape torch.Size\(\[2\]\), '
                                 'but batch_size should be 3'):
            testing.check_spc_octrees(octrees, lengths, batch_size=wrong_batch_size)

    @pytest.mark.parametrize('wrong_level', [4])
    def test_spc_wrong_level(self, octrees, lengths, wrong_level):
        with pytest.raises(ValueError,
                           match='octree 0 ends at level 3, should end at 4'):
            testing.check_spc_octrees(octrees, lengths, level=wrong_level)


class TestSeedDecorator:

    @pytest.fixture(autouse=True)
    def get_fix_input(self):
        return torch.tensor([1, 2, 3])

    @pytest.fixture(autouse=True)
    def expected_seed1(self):
        torch_expected = torch.tensor([[0.6614, 0.2669, 0.0617]])
        np_expected = 0.5507979025745755
        random_expected = 978

        return torch_expected, np_expected, random_expected

    @pytest.fixture(autouse=True)
    def expected_seed2(self):
        torch_expected = torch.tensor([[-0.4868, -0.6038, -0.5581]])
        np_expected = 0.010374153885699955
        random_expected = 585

        return torch_expected, np_expected, random_expected

    @testing.with_seed(torch_seed=1, numpy_seed=2, random_seed=3)
    def test_seed1(self, expected_seed1):
        torch_result = torch.randn((1, 3), dtype=torch.float)
        np_result = np.random.random_sample()
        random_result = random.randint(0, 1000)

        torch_expected, np_expected, random_expected = expected_seed1

        assert torch.allclose(torch_result, torch_expected, atol=1e-4, rtol=1e-4)
        assert np_result == np_expected
        assert random_result == random_expected

    @testing.with_seed(torch_seed=5, numpy_seed=10, random_seed=9)
    def test_seed2(self, expected_seed2):
        torch_result = torch.randn((1, 3), dtype=torch.float)
        np_result = np.random.random_sample()
        random_result = random.randint(0, 1000)

        torch_expected, np_expected, random_expected = expected_seed2

        assert torch.allclose(torch_result, torch_expected, atol=1e-4, rtol=1e-4)
        assert np_result == np_expected
        assert random_result == random_expected

    @testing.with_seed(torch_seed=1, numpy_seed=2, random_seed=3)
    def test_nested_decorator(self, expected_seed1, expected_seed2):
        # A nested decorated function must use its own seeds, without
        # disturbing the seeded stream of the enclosing function
        torch_expected1, np_expected1, random_expected1 = expected_seed1
        torch_expected2, np_expected2, random_expected2 = expected_seed2

        @testing.with_seed(torch_seed=5, numpy_seed=10, random_seed=9)
        def subtest_seed():
            torch_result = torch.randn((1, 3), dtype=torch.float)
            np_result = np.random.random_sample()
            random_result = random.randint(0, 1000)

            assert torch.allclose(torch_result, torch_expected2, atol=1e-4, rtol=1e-4)
            assert np_result == np_expected2
            assert random_result == random_expected2

        subtest_seed()

        torch_result = torch.randn((1, 3), dtype=torch.float)
        np_result = np.random.random_sample()
        random_result = random.randint(0, 1000)

        assert torch.allclose(torch_result, torch_expected1, atol=1e-4, rtol=1e-4)
        assert np_result == np_expected1
        assert random_result == random_expected1

    @testing.with_seed(torch_seed=1, numpy_seed=2, random_seed=3)
    def test_with_fixture(self, get_fix_input):
        # Test that the seed decorator works with pytest fixtures.
        fix_input = get_fix_input

        assert torch.equal(fix_input, torch.tensor([1, 2, 3]))

        torch_result = torch.randn((1, 3), dtype=torch.float)
        np_result = np.random.random_sample()
        random_result = random.randint(0, 1000)

        expected_torch = torch.tensor([[0.6614, 0.2669, 0.0617]], dtype=torch.float)
        expected_np = 0.5507979025745755
        expected_random = 978

        assert torch.allclose(torch_result, expected_torch, atol=1e-4, rtol=1e-4)
        assert np_result == expected_np
        assert random_result == expected_random

    @testing.with_seed(torch_seed=1, numpy_seed=2, random_seed=3)
    @pytest.mark.parametrize("device", ["cpu"])
    def test_with_other_decorator(self, device):
        # Test that the seed decorator composes with other decorators

        assert device == "cpu"

        torch_result = torch.randn((1, 3), dtype=torch.float, device=device)
        np_result = np.random.random_sample()
        random_result = random.randint(0, 1000)

        expected_torch = torch.tensor([[0.6614, 0.2669, 0.0617]], dtype=torch.float, device=device)
        expected_np = 0.5507979025745755
        expected_random = 978

        assert torch.allclose(torch_result, expected_torch, atol=1e-4, rtol=1e-4)
        assert np_result == expected_np
        assert random_result == expected_random


class TestTensorInfo:
    @pytest.mark.parametrize('dtype', [torch.uint8, torch.float32])
    @pytest.mark.parametrize('shape', [[1], [10, 30, 40]])
    @pytest.mark.parametrize('print_stats', [True, False])
    @pytest.mark.parametrize('detailed', [True, False])
    def test_torch_tensor(self, dtype, shape, print_stats, detailed):
        t = (torch.rand(shape) * 100).to(dtype)
        tensor_name = 'random_tensor'
        info_str = testing.tensor_info(t, tensor_name, print_stats=print_stats, detailed=detailed)
        logger.debug(info_str)
        assert len(info_str) > len(tensor_name)  # just check that it runs and produces output

    @pytest.mark.parametrize('dtype', [np.uint8, np.float32])
    @pytest.mark.parametrize('shape', [[1], [10, 30, 40]])
    @pytest.mark.parametrize('print_stats', [True, False])
    @pytest.mark.parametrize('detailed', [True, False])
    def test_numpy_array(self, dtype, shape, print_stats, detailed):
        t = (np.random.rand(*shape) * 10).astype(dtype)
        tensor_name = 'random_numpy_array'
        info_str = testing.tensor_info(t, tensor_name, print_stats=print_stats, detailed=detailed)
        logger.debug(info_str)
        assert len(info_str) > len(tensor_name)  # just check that it runs and produces output


class TestContainedTorchEqual:
    def test_true(self):
        elem = [1, 'a', {'b': torch.rand(3, 3), 'c': 0.1}]
        other = copy.deepcopy(elem)
        assert testing.contained_torch_equal(elem, other)

        # Also try on a tuple
        elem = sample_tuple('hello', torch.rand(3, 3), (torch.rand(10, 3) * 10).to(torch.int32), {'a': torch.rand(5)})
        other = copy.deepcopy(elem)
        assert testing.contained_torch_equal(elem, other)

    def test_false(self):
        elem = [1, 'a', {'b': torch.rand(3, 3), 'c': 0.1}]
        other = copy.deepcopy(elem)
        other[2]['b'][1, 1] += 1.
        assert not testing.contained_torch_equal(elem, other)

        # Also try on a tuple
        elem = sample_tuple('hello', torch.rand(3, 3), (torch.rand(10, 3) * 10).to(torch.int32), {'a': torch.rand(5)})
        other = copy.deepcopy(elem)
        other.B[0, 0] += 0.001
        assert not testing.contained_torch_equal(elem, other)

    def test_approximate(self):
        elem = [1, 'a', {'b': torch.rand(3, 3), 'c': 0.1}]
        other = copy.deepcopy(elem)
        eps = 0.0001
        other[2]['b'][1, 1] += eps
        other[2]['c'] += eps
        assert not testing.contained_torch_equal(elem, other)
        assert testing.contained_torch_equal(elem, other, approximate=True, atol=eps * 2)


class TestCheckTensorAttributeShapes:
    @pytest.mark.parametrize("throw", [True, False])
    def test_checks_pass(self, throw):
        container = {'cat': torch.rand((1, 5, 6)), 'dog': torch.rand((5, 5, 6)), 'colors': torch.rand((100, 3))}
        assert testing.check_tensor_attribute_shapes(container, throw=throw, cat=(1, 5, 6), colors=(None, 3))

        container = sample_tuple('Hello', torch.rand((3, 4, 5)), torch.rand((5, 1, 6)), {})
        assert testing.check_tensor_attribute_shapes(container, throw=throw, B=(3, None, 5), C=[5, 1, 6])

    def test_checks_fail(self):
        container = {'cat': torch.rand((1, 5, 6)), 'dog': torch.rand((5, 5, 6)), 'colors': torch.rand((100, 3))}
        with pytest.raises(ValueError):
            testing.check_tensor_attribute_shapes(container, throw=True, cat=(1, 5, 6), colors=(59, 3))
        assert not testing.check_tensor_attribute_shapes(container, throw=False, cat=(1, 50, 6), colors=(59, 3))


class TestPrintDiagnostics:
    def test_print_namedtuple_attributes(self, capsys):
        sample1 = sample_tuple('My Name', [1, 2, 3], torch.zeros((5, 5, 5)), {'a': torch.rand(5)})

        testing.print_namedtuple_attributes(sample1)
        out1, err = capsys.readouterr()
        assert len(out1) > 10

        testing.print_namedtuple_attributes(sample1, detailed=True)
        out1_detailed, err = capsys.readouterr()
        assert len(out1) < len(out1_detailed)

647
tests/python/kaolin/visualize/test_ipython.py
Normal file
@@ -0,0 +1,647 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import copy
import math
import random

import torch
import numpy as np
import pytest

import kaolin
from kaolin.utils.testing import check_allclose


class DummyRenderer:
    def __init__(self, height, width, value, output_dict=False):
        self.height = height
        self.width = width
        self.value = value
        self.render_count = 0
        self.event_count = 0
        self.output_dict = output_dict

    def __call__(self, camera):
        self.render_count += 1
        img = torch.full((self.height, self.width, 3), self.value,
                         device=camera.device, dtype=torch.uint8)
        if self.output_dict:
            return {
                'img': img,
                'a': 1
            }
        else:
            return img
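
# DummyRenderer stands in for the renderer callable the visualizers expect:
# it maps a camera to a (height, width, 3) uint8 image, or to a dict holding
# the image under 'img'; render_count lets tests assert how many redraws ran.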

@pytest.mark.parametrize('height,width', [(16, 16), (32, 32)])
@pytest.mark.parametrize('device', ['cpu'])
@pytest.mark.parametrize('output_dict', [False, True])
class TestVisualizers:

    @pytest.fixture(autouse=True)
    def camera(self, height, width, device):
        return kaolin.render.camera.Camera.from_args(
            eye=(torch.rand((3,)) - 0.5) * 10,
            at=(torch.rand((3,)) - 0.5) * 10,
            up=(torch.rand((3,)) - 0.5) * 10,
            fov=random.uniform(0.1, math.pi - 0.1),
            height=height,
            width=width,
            dtype=torch.float,
            device=device
        )

    @pytest.fixture(autouse=True)
    def renderer(self, height, width, output_dict):
        return DummyRenderer(
            height, width, 0, output_dict
        )

    @pytest.fixture(autouse=True)
    def fast_renderer(self, height, width, output_dict):
        # quarter resolution, white output: distinguishable from `renderer`
        return DummyRenderer(
            int(height / 4), int(width / 4), 255, output_dict
        )

    #TODO(cfujitsang): can't find a way to test max_fps
    @pytest.mark.parametrize('with_fast_renderer', [True, False])
    @pytest.mark.parametrize('world_up_axis', [0, 1])
    @pytest.mark.parametrize('with_focus_at', [True, False])
    @pytest.mark.parametrize('with_sensitivity', [True, False])
    @pytest.mark.parametrize('with_additional_event', [True, False])
    @pytest.mark.parametrize('update_only_on_release', [True, False])
    def test_turntable_visualizer(
            self, height, width, device, camera, renderer, fast_renderer, world_up_axis,
            with_focus_at, with_sensitivity, with_additional_event,
            update_only_on_release, with_fast_renderer):
        kwargs = {}

        if with_focus_at:
            focus_at = (torch.rand((3,), device=camera.device, dtype=camera.dtype) - 0.5) * 10
            kwargs['focus_at'] = focus_at
        else:
            focus_at = torch.zeros((3,), device=camera.device, dtype=camera.dtype)

        if with_sensitivity:
            zoom_sensitivity = 0.01
            forward_sensitivity = 0.01
            rotation_sensitivity = 2.
            translation_sensitivity = 2.
            kwargs['zoom_sensitivity'] = zoom_sensitivity
            kwargs['forward_sensitivity'] = forward_sensitivity
            kwargs['rotation_sensitivity'] = rotation_sensitivity
            kwargs['translation_sensitivity'] = translation_sensitivity
        else:
            zoom_sensitivity = 0.001
            forward_sensitivity = 0.001
            rotation_sensitivity = 1.5
            translation_sensitivity = 1.

        global event_count
        event_count = 0
        if with_additional_event:
            def additional_event_handler(visualizer, event):
                with visualizer.out:
                    if event['type'] == 'mousedown' and event['buttons'] == 3:
                        global event_count
                        event_count += 1
                        return False
                    return True
            kwargs['additional_event_handler'] = additional_event_handler
            kwargs['additional_watched_events'] = []
        if with_fast_renderer:
            kwargs['fast_render'] = fast_renderer

        viz = kaolin.visualize.IpyTurntableVisualizer(
            height,
            width,
            copy.deepcopy(camera),
            renderer,
            world_up_axis=world_up_axis,
            update_only_on_release=update_only_on_release,
            **kwargs
        )
        expected_render_count = 0
        expected_fast_render_count = 0
        def check_counts():
            if with_fast_renderer:
                assert renderer.render_count == expected_render_count
                assert fast_renderer.render_count == expected_fast_render_count
            else:
                assert renderer.render_count == expected_render_count + expected_fast_render_count
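
        # Initial state: focus_at is honored, nothing has rendered yet,
        # and the canvas matches the requested size.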
        check_allclose(viz.focus_at, focus_at)
        check_counts()
        assert viz.canvas.height == height
        assert viz.canvas.width == width

        # Test reorientation at ctor
        # After ctor: camera should not have moved
        check_allclose(viz.camera.cam_pos(), camera.cam_pos(), atol=1e-5, rtol=1e-5)
        signed_world_up = torch.zeros((3,), device=camera.device)
        signed_world_distance = float(camera.cam_up().squeeze()[world_up_axis] >= 0) * 2. - 1.
        signed_world_up[world_up_axis] = signed_world_distance
        assert torch.dot(signed_world_up, viz.camera.cam_up().squeeze()) >= 0, \
            "After ctor: camera up is in the wrong direction"
        assert torch.dot(signed_world_up, viz.camera.cam_right().squeeze()) == 0, \
            "After ctor: camera right is not perpendicular to the world up"

        # After ctor: camera should be looking at focus_at
        expected_cam_forward = torch.nn.functional.normalize(viz.focus_at - camera.cam_pos().squeeze(), dim=-1)
        check_allclose(
            torch.dot(-viz.camera.cam_forward().squeeze(), expected_cam_forward),
            torch.ones((1,), device=camera.device)
        )

        ctor_camera = copy.deepcopy(viz.camera)
        ref_radius = torch.linalg.norm(
            viz.focus_at - ctor_camera.cam_pos().squeeze(),
            dim=-1
        )
        signed_world_right = torch.zeros((3,), device=camera.device)
        signed_world_right[world_up_axis - 1] = signed_world_distance
        signed_world_forward = torch.zeros((3,), device=camera.device)
        signed_world_forward[world_up_axis - 2] = signed_world_distance
        ctor_cam_2d_pos = torch.stack([
            viz.camera.cam_pos().squeeze()[world_up_axis - 1],
            viz.camera.cam_pos().squeeze()[world_up_axis - 2],
        ], dim=0)

        try:
            viz.show()
        except NameError:  # show() uses display(), which is a builtin only in IPython
            pass

        expected_render_count += 1
        check_counts()
        assert torch.equal(ctor_camera.view_matrix(), viz.camera.view_matrix()), \
            "After .show(): camera has moved"
        assert torch.equal(ctor_camera.params, viz.camera.params), \
            "After .show(): camera intrinsics have changed"

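        # Simulate a left-button drag (DOM 'buttons' == 1): press, move to a
        # different pixel, then release.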
        from_x = random.randint(0, width)
        from_y = random.randint(0, height)
        viz._handle_event({'type': 'mousedown', 'relativeX': from_x, 'relativeY': from_y, 'buttons': 1})
        check_counts()
        assert torch.equal(ctor_camera.view_matrix(), viz.camera.view_matrix()), \
            "After mousedown: camera has moved"
        assert torch.equal(ctor_camera.params, viz.camera.params), \
            "After mousedown: camera intrinsics have changed"

        to_x = random.randint(0, width)
        while to_x == from_x:  # resample until it differs from the press position
            to_x = random.randint(0, width)

        to_y = random.randint(0, height)
        while to_y == from_y:
            to_y = random.randint(0, height)

        viz._handle_event({'type': 'mousemove', 'relativeX': to_x, 'relativeY': to_y, 'buttons': 1})
        if not update_only_on_release:
            expected_fast_render_count += 1
        check_counts()
        cur_radius = torch.linalg.norm(
            viz.focus_at - viz.camera.cam_pos().squeeze(),
            dim=-1
        )
        check_allclose(cur_radius, ref_radius)
        cur_focus_at = (
            viz.camera.cam_pos() - viz.camera.cam_forward() * cur_radius
        ).squeeze()
        check_allclose(viz.focus_at, cur_focus_at, atol=1e-5, rtol=1e-5)

        azimuth_diff = rotation_sensitivity * (to_x - from_x) * math.pi / viz.canvas.width
        elevation_diff = rotation_sensitivity * (to_y - from_y) * math.pi / viz.canvas.height

        cur_cam_pos = kaolin.visualize.ipython.rotate_around_axis(
            ctor_camera.cam_pos().squeeze(-1) - focus_at.unsqueeze(0),
            -azimuth_diff,
            signed_world_up.unsqueeze(0)
        )
        cur_cam_pos = kaolin.visualize.ipython.rotate_around_axis(
            cur_cam_pos,
            -elevation_diff,
            viz.camera.cam_right().squeeze(-1),
        ) + focus_at.unsqueeze(0)
        check_allclose(cur_cam_pos, viz.camera.cam_pos().squeeze(-1),
                       atol=1e-4, rtol=1e-4)
        cur_camera = copy.deepcopy(viz.camera)
        viz._handle_event({'type': 'mouseup', 'button': 0, 'buttons': 1,
                           'relativeX': to_x, 'relativeY': to_y,
                           'boundingRectWidth': width, 'boundingRectHeight': height})
        expected_render_count += 1
        check_counts()
        assert torch.equal(cur_camera.view_matrix(), viz.camera.view_matrix()), \
            "After mouseup: camera has moved"
        assert torch.equal(cur_camera.params, viz.camera.params), \
            "After mouseup: camera intrinsics have changed"
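        # A 'deltaY' of 120 corresponds to one standard browser wheel notch;
        # plain wheel zooms (fov change only) while ctrl+wheel moves the camera forward.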
        wheel_amount = 120 * random.randint(1, 10)
        viz._handle_event({'type': 'wheel', 'deltaY': wheel_amount, 'ctrlKey': False})
        expected_render_count += 1
        check_counts()
        assert torch.equal(cur_camera.view_matrix(), viz.camera.view_matrix()), \
            "After unzoom: camera has moved"
        assert viz.camera.fov_x > cur_camera.fov_x, \
            "After unzoom: Didn't unzoom"
        assert viz.camera.fov_x < 180.
        cur_camera = copy.deepcopy(viz.camera)
        viz._handle_event({'type': 'wheel', 'deltaY': -2. * wheel_amount, 'ctrlKey': False})
        expected_render_count += 1
        check_counts()
        assert torch.equal(cur_camera.view_matrix(), viz.camera.view_matrix()), \
            "After zoom: camera has moved"
        assert viz.camera.fov_x < cur_camera.fov_x, \
            "After zoom: Didn't zoom"
        assert viz.camera.fov_x > 0.
        cur_camera = copy.deepcopy(viz.camera)
        viz._handle_event({'type': 'wheel', 'deltaY': -wheel_amount, 'ctrlKey': True})
        expected_render_count += 1
        check_counts()
        assert torch.equal(cur_camera.params, viz.camera.params), \
            "After move forward: camera intrinsics have changed"
        normalized_distance = torch.nn.functional.normalize(
            cur_camera.cam_pos().squeeze() - viz.camera.cam_pos().squeeze(),
            dim=-1
        )
        assert torch.all(torch.abs(torch.cross(
            cur_camera.cam_forward(), viz.camera.cam_forward())) < 1e-2), \
            "After move forward: camera has changed cam_forward"
        assert torch.all(torch.abs(torch.cross(
            cur_camera.cam_up(), viz.camera.cam_up())) < 1e-2), \
            "After move forward: camera has changed cam_up"
        # After move forward: camera should have moved along its forward axis
        check_allclose(normalized_distance, cur_camera.cam_forward().squeeze(),
                       atol=1e-5, rtol=1e-5)
        assert torch.all(torch.sign(focus_at - cur_camera.cam_pos().squeeze()) *
                         torch.sign(focus_at - viz.camera.cam_pos().squeeze()) >= 0.), \
            "After move forward: camera has crossed the focus point"

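        # 'buttons' == 3 is the DOM bitmask for left+right pressed together;
        # only the additional event handler is expected to consume it.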
        assert event_count == 0
        viz._handle_event({'type': 'mousedown', 'buttons': 3, 'relativeX': 0, 'relativeY': 0})
        check_counts()
        if with_additional_event:
            assert event_count == 1
        else:
            assert event_count == 0

    @pytest.mark.parametrize('with_fast_renderer', [True, False])
    @pytest.mark.parametrize('with_world_up', [True, False])
    @pytest.mark.parametrize('with_sensitivity', [True, False])
    @pytest.mark.parametrize('with_additional_event', [True, False])
    @pytest.mark.parametrize('update_only_on_release', [True, False])
    def test_first_person_visualizer(
            self, height, width, device, camera, renderer, fast_renderer,
            with_fast_renderer, with_world_up, with_sensitivity,
            with_additional_event, update_only_on_release):
        kwargs = {}
        if with_fast_renderer:
            kwargs['fast_render'] = fast_renderer
        if with_world_up:
            world_up = torch.nn.functional.normalize(
                torch.rand((3,), device=camera.device, dtype=camera.dtype),
                dim=-1
            )
            kwargs['world_up'] = world_up
        else:
            world_up = camera.cam_up().squeeze()

        if with_sensitivity:
            rotation_sensitivity = 0.1
            translation_sensitivity = 0.1
            key_move_sensitivity = 0.1
            zoom_sensitivity = 0.01
            kwargs['rotation_sensitivity'] = rotation_sensitivity
            kwargs['translation_sensitivity'] = translation_sensitivity
            kwargs['key_move_sensitivity'] = key_move_sensitivity
            kwargs['zoom_sensitivity'] = zoom_sensitivity

            up_key = 'w'
            down_key = 's'
            left_key = 'a'
            right_key = 'd'
            forward_key = 'e'
            backward_key = 'q'
            kwargs['up_key'] = up_key
            kwargs['down_key'] = down_key
            kwargs['left_key'] = left_key
            kwargs['right_key'] = right_key
            kwargs['forward_key'] = forward_key
            kwargs['backward_key'] = backward_key
        else:
            rotation_sensitivity = 0.4
            translation_sensitivity = 1.
            key_move_sensitivity = 0.05
            zoom_sensitivity = 0.001
            up_key = 'i'
            down_key = 'k'
            left_key = 'j'
            right_key = 'l'
            forward_key = 'o'
            backward_key = 'u'

        global event_count
        event_count = 0
        if with_additional_event:
            def additional_event_handler(visualizer, event):
                with visualizer.out:
                    if event['type'] == 'mousedown' and event['buttons'] == 3:
                        global event_count
                        event_count += 1
                        return False
                    return True
            kwargs['additional_event_handler'] = additional_event_handler
            kwargs['additional_watched_events'] = ['mouseenter']

        viz = kaolin.visualize.IpyFirstPersonVisualizer(
            height,
            width,
            copy.deepcopy(camera),
            renderer,
            update_only_on_release=update_only_on_release,
            **kwargs
        )
        expected_render_count = 0
        expected_fast_render_count = 0
        def check_counts():
            if with_fast_renderer:
                assert renderer.render_count == expected_render_count
                assert fast_renderer.render_count == expected_fast_render_count
            else:
                assert renderer.render_count == expected_render_count + expected_fast_render_count
        check_counts()
        assert viz.canvas.height == height
        assert viz.canvas.width == width

        # Test reorientation at ctor
        expected_extrinsics = kaolin.render.camera.CameraExtrinsics.from_lookat(
            eye=camera.cam_pos().squeeze(),
            at=(camera.cam_pos().squeeze() - camera.cam_forward().squeeze()),
            up=world_up,
            device=camera.device,
            dtype=camera.dtype
        )
        check_allclose(expected_extrinsics.view_matrix(), viz.camera.view_matrix(),
                       atol=1e-5, rtol=1e-5)
        ctor_camera = copy.deepcopy(viz.camera)

        try:
            viz.show()
        except NameError:  # show() uses display(), which is a builtin only in IPython
            pass

        expected_render_count += 1
        check_counts()
        assert torch.equal(ctor_camera.view_matrix(), viz.camera.view_matrix()), \
            "After .show(): camera has moved"
        assert torch.equal(ctor_camera.params, viz.camera.params), \
            "After .show(): camera intrinsics have changed"

        from_x = random.randint(0, width)
        from_y = random.randint(0, height)
        viz._handle_event({'type': 'mousedown', 'relativeX': from_x, 'relativeY': from_y, 'buttons': 1})
        check_counts()
        assert torch.equal(ctor_camera.view_matrix(), viz.camera.view_matrix()), \
            "After mousedown: camera has moved"
        assert torch.equal(ctor_camera.params, viz.camera.params), \
            "After mousedown: camera intrinsics have changed"

        to_x = random.randint(0, width)
        while to_x == from_x:  # resample until it differs from the press position
            to_x = random.randint(0, width)

        to_y = random.randint(0, height)
        while to_y == from_y:
            to_y = random.randint(0, height)

        ctor_elevation = viz.elevation

        viz._handle_event({'type': 'mousemove', 'relativeX': to_x, 'relativeY': to_y, 'buttons': 1})
        if not update_only_on_release:
            expected_fast_render_count += 1
        check_counts()

        azimuth_diff = rotation_sensitivity * (to_x - from_x) * math.pi / viz.canvas.width
        elevation_diff = rotation_sensitivity * (to_y - from_y) * math.pi / viz.canvas.height
        _elevation = ctor_elevation + elevation_diff
        if _elevation > math.pi / 2.:
            elevation_diff = math.pi / 2. - ctor_elevation
        if _elevation < -math.pi / 2.:
            elevation_diff = -math.pi / 2. - ctor_elevation
        assert viz.elevation == ctor_elevation + elevation_diff

        cur_cam_forward = kaolin.visualize.ipython.rotate_around_axis(
            ctor_camera.cam_forward().squeeze(-1),
            -azimuth_diff,
            world_up.unsqueeze(0)
        )
        cur_cam_right = kaolin.visualize.ipython.rotate_around_axis(
            ctor_camera.cam_right().squeeze(-1),
            -azimuth_diff,
            world_up.unsqueeze(0)
        )
        cur_cam_up = kaolin.visualize.ipython.rotate_around_axis(
            ctor_camera.cam_up().squeeze(-1),
            -azimuth_diff,
            world_up.unsqueeze(0)
        )

        cur_cam_forward = kaolin.visualize.ipython.rotate_around_axis(
            cur_cam_forward,
            -elevation_diff,
            cur_cam_right,
        )
        cur_cam_up = kaolin.visualize.ipython.rotate_around_axis(
            cur_cam_up,
            -elevation_diff,
            cur_cam_right,
        )

        check_allclose(ctor_camera.cam_pos().squeeze(-1), viz.camera.cam_pos().squeeze(-1),
                       atol=1e-4, rtol=1e-4)
        check_allclose(cur_cam_right, viz.camera.cam_right().squeeze(-1),
                       atol=1e-4, rtol=1e-4)
        check_allclose(cur_cam_forward, viz.camera.cam_forward().squeeze(-1),
                       atol=1e-4, rtol=1e-4)
        check_allclose(cur_cam_up, viz.camera.cam_up().squeeze(-1),
                       atol=1e-4, rtol=1e-4)
        cur_camera = copy.deepcopy(viz.camera)

        viz._handle_event({'type': 'mouseup', 'button': 0,
                           'relativeX': to_x, 'relativeY': to_y,
                           'boundingRectWidth': width, 'boundingRectHeight': height})
        expected_render_count += 1
        check_counts()
        assert torch.equal(cur_camera.view_matrix(), viz.camera.view_matrix()), \
            "After mouseup: camera has moved"
        assert torch.equal(cur_camera.params, viz.camera.params), \
            "After mouseup: camera intrinsics have changed"

        from_x = random.randint(0, width)
        from_y = random.randint(0, height)

        viz._handle_event({
            'type': 'mousedown', 'relativeX': from_x, 'relativeY': from_y, 'buttons': 2
        })
        check_counts()
        assert torch.equal(cur_camera.view_matrix(), viz.camera.view_matrix()), \
            "After mousedown: camera has moved"
        assert torch.equal(cur_camera.params, viz.camera.params), \
            "After mousedown: camera intrinsics have changed"

        to_x = random.randint(0, width)
        while to_x == from_x:
            to_x = random.randint(0, width)

        to_y = random.randint(0, height)
        while to_y == from_y:
            to_y = random.randint(0, height)

        viz._handle_event({
            'type': 'mousemove', 'relativeX': to_x, 'relativeY': to_y, 'buttons': 2
        })
        if not update_only_on_release:
            expected_fast_render_count += 1
        check_counts()

        cur_camera.move_up(translation_sensitivity * (to_y - from_y) / height)
        cur_camera.move_right(-translation_sensitivity * (to_x - from_x) / width)
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'mouseup', 'button': 1,
                           'relativeX': to_x, 'relativeY': to_y,
                           'boundingRectWidth': width, 'boundingRectHeight': height})
        expected_render_count += 1
        check_counts()
        assert torch.equal(cur_camera.view_matrix(), viz.camera.view_matrix()), \
            "After mouseup: camera has moved"
        assert torch.equal(cur_camera.params, viz.camera.params), \
            "After mouseup: camera intrinsics have changed"

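        # Each bound key triggers a fast render on keydown and a full render
        # on keyup, moving the camera by key_move_sensitivity.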
        viz._handle_event({'type': 'keydown', 'key': up_key})
        expected_fast_render_count += 1
        check_counts()
        cur_camera.move_up(key_move_sensitivity)
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'keyup', 'key': up_key})
        expected_render_count += 1
        check_counts()
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'keydown', 'key': down_key})
        expected_fast_render_count += 1
        check_counts()
        cur_camera.move_up(-key_move_sensitivity)
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'keyup', 'key': down_key})
        expected_render_count += 1
        check_counts()
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'keydown', 'key': left_key})
        expected_fast_render_count += 1
        check_counts()
        cur_camera.move_right(-key_move_sensitivity)
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'keyup', 'key': left_key})
        expected_render_count += 1
        check_counts()
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'keydown', 'key': right_key})
        expected_fast_render_count += 1
        check_counts()
        cur_camera.move_right(key_move_sensitivity)
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'keyup', 'key': right_key})
        expected_render_count += 1
        check_counts()
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'keydown', 'key': forward_key})
        expected_fast_render_count += 1
        check_counts()
        cur_camera.move_forward(-key_move_sensitivity)
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'keyup', 'key': forward_key})
        expected_render_count += 1
        check_counts()
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'keydown', 'key': backward_key})
        expected_fast_render_count += 1
        check_counts()
        cur_camera.move_forward(key_move_sensitivity)
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'keyup', 'key': backward_key})
        expected_render_count += 1
        check_counts()
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        # 'x' is not bound to any move, so nothing should render or change
        viz._handle_event({'type': 'keydown', 'key': 'x'})
        check_counts()
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        viz._handle_event({'type': 'keyup', 'key': 'x'})
        check_counts()
        check_allclose(cur_camera.view_matrix(), viz.camera.view_matrix())
        check_allclose(cur_camera.params, viz.camera.params)

        wheel_amount = 120 * random.randint(1, 10)
        viz._handle_event({'type': 'wheel', 'deltaY': wheel_amount})
        expected_render_count += 1
        check_counts()
        assert torch.equal(cur_camera.view_matrix(), viz.camera.view_matrix()), \
            "After unzoom: camera has moved"
        assert viz.camera.fov_x > cur_camera.fov_x, \
            "After unzoom: Didn't unzoom"
        assert viz.camera.fov_x < 180.
        cur_camera = copy.deepcopy(viz.camera)
        viz._handle_event({'type': 'wheel', 'deltaY': -2. * wheel_amount})
        expected_render_count += 1
        check_counts()
        assert torch.equal(cur_camera.view_matrix(), viz.camera.view_matrix()), \
            "After zoom: camera has moved"
        assert viz.camera.fov_x < cur_camera.fov_x, \
            "After zoom: Didn't zoom"
        assert viz.camera.fov_x > 0.

        assert event_count == 0
        viz._handle_event({'type': 'mousedown', 'buttons': 3, 'relativeX': 0, 'relativeY': 0})
        check_counts()
        if with_additional_event:
            assert event_count == 1
        else:
            assert event_count == 0

320
tests/python/kaolin/visualize/test_timelapse.py
Normal file
@@ -0,0 +1,320 @@
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import os
import shutil

import torch
import pytest

from kaolin import io
from kaolin.visualize import timelapse
from kaolin.ops.conversions import trianglemeshes_to_voxelgrids


@pytest.fixture(scope='class')
def out_dir():
    # Create temporary output directory
    out_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '_out')
    os.makedirs(out_dir, exist_ok=True)
    yield out_dir
    shutil.rmtree(out_dir)

@pytest.fixture(scope='class')
def instancer_out_dir():
    # Create temporary output directory
    out_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '_instancer_out')
    os.makedirs(out_dir, exist_ok=True)
    yield out_dir
    shutil.rmtree(out_dir)


@pytest.fixture(scope='module')
def voxelgrid(meshes):
    resolution = 64
    voxelgrid = trianglemeshes_to_voxelgrids(meshes[0].vertices.unsqueeze(0), meshes[0].faces,
                                             resolution)
    return voxelgrid[0].bool()


@pytest.fixture(scope='module')
def pointcloud():
    cur_dir = os.path.dirname(os.path.realpath(__file__))
    pointcloud = io.usd.import_pointcloud(os.path.join(cur_dir, os.pardir, os.pardir,
                                                       os.pardir, 'samples/rocket_pointcloud_GeomPoints.usda'),
                                          '/World/pointcloud').points
    return pointcloud

@pytest.fixture(scope='module')
def pointcloud_color():
    cur_dir = os.path.dirname(os.path.realpath(__file__))
    pointcloud, color, normals = io.usd.import_pointcloud(os.path.join(cur_dir, os.pardir, os.pardir,
                                                                       os.pardir, 'samples/golden/pointcloud_GeomPoints_colors.usda'),
                                                          '/World/pointcloud')
    return pointcloud, color

@pytest.fixture(scope='module')
def meshes():
    cur_dir = os.path.dirname(os.path.realpath(__file__))
    meshes = io.usd.import_meshes(os.path.join(cur_dir, os.pardir, os.pardir,
                                               os.pardir, 'samples/rocket_hetero.usd'),
                                  with_normals=True,
                                  heterogeneous_mesh_handler=io.utils.mesh_handler_naive_triangulate)
    return meshes

@pytest.fixture(scope='class')
def material_values():
    params = {
        'diffuse_color': (0., 1., 0.),
        'roughness_value': 0.1,
        'metallic_value': 1.,
        'specular_color': (1., 0., 0.),
        'is_specular_workflow': True,
    }
    material = io.materials.PBRMaterial(**params)
    yield material


@pytest.fixture(scope='class')
def material_textures():
    params = {
        'diffuse_texture': torch.rand((3, 256, 256)),
        'roughness_texture': torch.rand((1, 256, 256)),
        'metallic_texture': torch.rand((1, 256, 256)),
        'specular_texture': torch.rand((3, 256, 256)),
        'is_specular_workflow': True,
    }
    material = io.materials.PBRMaterial(**params)
    yield material


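# Timelapse writes each checkpoint as a USD time sample, so importing with
# time=<iteration> below should recover what was exported at that step.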
class TestTimelapse:
    def test_add_mesh_batch(self, out_dir, meshes, material_values, material_textures):
        writer = timelapse.Timelapse(out_dir)
        data = {
            0: {
                'vertices_list': [m.vertices for m in meshes],
                'faces_list': [m.faces for m in meshes],
                'uvs_list': [m.uvs for m in meshes],
                'face_uvs_idx_list': [m.face_uvs_idx for m in meshes],
                'face_normals_list': [m.face_normals for m in meshes],
                'materials_list': [{'values': material_values, 'textures': material_textures}]
            },
            10: {
                'vertices_list': [m.vertices / 2. for m in meshes],
                'faces_list': [m.faces for m in meshes],
                'materials_list': [{'values': material_values, 'textures': material_textures}]
            },
        }
        for iteration, params in data.items():
            writer.add_mesh_batch(iteration=iteration, category='test', **params)

        # Check that category directory is created
        assert os.path.exists(os.path.join(out_dir, 'test'))

        # Check that data at each iteration is correct
        texture_dir = os.path.join(out_dir, 'test', 'textures')
        assert os.path.exists(texture_dir)
        for iteration in data.keys():
            filename = os.path.join(out_dir, 'test', 'mesh_0.usd')
            mesh_in = io.usd.import_mesh(filename, time=iteration, with_normals=True)
            # Verify mesh properties
            assert torch.allclose(data[iteration]['vertices_list'][0], mesh_in.vertices)
            assert torch.equal(data[iteration]['faces_list'][0], mesh_in.faces)
            # uvs / normals were only written at iteration 0, so compare against
            # that reference when the current iteration did not provide them
            if not data[iteration].get('face_uvs_idx_list'):
                i = 0
            else:
                i = iteration
            assert torch.allclose(data[i]['uvs_list'][0].view(-1, 2), mesh_in.uvs.view(-1, 2))
            # assert torch.equal(data[i]['face_uvs_idx_list'][0], mesh_in.face_uvs_idx)
            assert torch.allclose(data[i]['face_normals_list'][0], mesh_in.face_normals)

            materials = data[iteration]['materials_list'][0]
            # Verify material textures exist
            for attr in ['diffuse', 'specular', 'roughness', 'metallic']:
                assert os.path.exists(os.path.join(texture_dir, f'mesh_0_textures_{iteration}_{attr}.png'))

            # Verify material properties
            for variant_name, material_data in materials.items():
                mat = io.materials.PBRMaterial().read_from_usd(filename, f'/mesh_0/{variant_name}', time=iteration)
                assert pytest.approx(mat.diffuse_color, 1e-5) == material_data.diffuse_color
                assert pytest.approx(mat.specular_color, 1e-5) == material_data.specular_color
                assert pytest.approx(mat.roughness_value, 1e-5) == material_data.roughness_value
                assert pytest.approx(mat.metallic_value, 1e-5) == material_data.metallic_value

                if material_data.diffuse_texture is not None:
                    assert torch.allclose(mat.diffuse_texture, material_data.diffuse_texture, atol=1e-2)
                    assert torch.allclose(mat.specular_texture, material_data.specular_texture, atol=1e-2)
                    assert torch.allclose(mat.roughness_texture, material_data.roughness_texture, atol=1e-2)
                    assert torch.allclose(mat.metallic_texture, material_data.metallic_texture, atol=1e-2)

    def test_add_voxelgrid_batch(self, out_dir, voxelgrid):
        writer = timelapse.Timelapse(out_dir)

        data = {
            0: {'voxelgrid_list': [voxelgrid]},
            10: {'voxelgrid_list': [voxelgrid * (torch.rand_like(voxelgrid.float()) < 0.5)]},
        }
        for iteration, params in data.items():
            writer.add_voxelgrid_batch(iteration=iteration, category='test', **params)

        # Verify
        filename = os.path.join(out_dir, 'test', 'voxelgrid_0.usd')
        for iteration, params in data.items():
            voxelgrid_in = io.usd.import_voxelgrid(filename, scene_path='/voxelgrid_0', time=iteration)

            assert torch.equal(voxelgrid_in, params['voxelgrid_list'][0])

    def test_add_pointcloud_batch(self, out_dir, pointcloud):
        writer = timelapse.Timelapse(out_dir)

        data = {
            0: {'pointcloud_list': [pointcloud], 'colors': None, 'points_type': 'usd_geom_points'},
            10: {'pointcloud_list': [pointcloud + 100.], 'colors': None, 'points_type': 'usd_geom_points'},
        }
        for iteration, params in data.items():
            writer.add_pointcloud_batch(iteration=iteration, category='test', **params)

        # Verify
        filename = os.path.join(out_dir, 'test', 'pointcloud_0.usd')
        for iteration, params in data.items():
            pointcloud_in = io.usd.import_pointcloud(filename, scene_path='/pointcloud_0', time=iteration)[0]

            assert torch.allclose(pointcloud_in, params['pointcloud_list'][0])

    def test_add_pointcloud_batch_color(self, out_dir, pointcloud_color):
        writer = timelapse.Timelapse(out_dir)

        pointcloud, color = pointcloud_color

        data = {
            0: {'pointcloud_list': [pointcloud], 'colors': [color], 'points_type': 'usd_geom_points'},
            10: {'pointcloud_list': [pointcloud + 100.], 'colors': [color], 'points_type': 'usd_geom_points'},
        }
        for iteration, params in data.items():
            writer.add_pointcloud_batch(iteration=iteration, category='test', **params)

        # Verify
        filename = os.path.join(out_dir, 'test', 'pointcloud_0.usd')
        for iteration, params in data.items():
            pointcloud_in, color_in, normals_in = io.usd.import_pointcloud(filename, scene_path='/pointcloud_0', time=iteration)

            assert torch.allclose(pointcloud_in, params['pointcloud_list'][0])
            assert torch.allclose(color_in, params['colors'][0])

    def test_add_pointcloud_batch_instancer(self, instancer_out_dir, pointcloud):
        writer = timelapse.Timelapse(instancer_out_dir)

        data = {
            0: {'pointcloud_list': [pointcloud], 'colors': None},
            10: {'pointcloud_list': [pointcloud + 100.], 'colors': None},
        }
        for iteration, params in data.items():
            writer.add_pointcloud_batch(iteration=iteration, category='test', **params)

        # Verify
        filename = os.path.join(instancer_out_dir, 'test', 'pointcloud_0.usd')
        for iteration, params in data.items():
            pointcloud_in = io.usd.import_pointcloud(filename, scene_path='/pointcloud_0', time=iteration)[0]

            assert torch.allclose(pointcloud_in, params['pointcloud_list'][0])


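# TimelapseParser indexes an existing timelapse directory; the test below checks
# discovery of categories, picking up live updates, and handling deletions.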
class TestTimelapseParser:
    @pytest.fixture(scope='class')
    def timelapse_sample_dir(self):
        # To regenerate, run:
        # examples/tutorial/visualize_main.py \
        #     --checkpoint_interval=10 --iterations=101 --skip_normalization \
        #     --test_objs=test/samples/rocket.obj,test/samples/model.obj --output_dir=<CLEARED_OUTPUT_DIR>
        cur_dir = os.path.dirname(os.path.realpath(__file__))
        return os.path.join(cur_dir, os.pardir, os.pardir, os.pardir, 'samples',
                            'timelapse', 'notexture')

    @pytest.fixture(scope='class')
    def output_dir2(self):
        # Create temporary output directory
        out_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '_viz_out')
        if os.path.exists(out_dir):
            shutil.rmtree(out_dir)
        yield out_dir
        # shutil.rmtree(out_dir)  # left commented out to keep the output directory for inspection

    def test_parsing(self, timelapse_sample_dir, output_dir2, meshes):
        shutil.copytree(timelapse_sample_dir, output_dir2)

        parser = timelapse.TimelapseParser(output_dir2)
        expected_keys = [('mesh', 'ground_truth', 0),
                         ('mesh', 'ground_truth', 1),
                         ('mesh', 'output', 0),
                         ('mesh', 'output', 1),
                         ('pointcloud', 'input', 0),
                         ('pointcloud', 'input', 1),
                         ('pointcloud', 'output', 0),
                         ('pointcloud', 'output', 1),
                         ('voxelgrid', 'output', 0),
                         ('voxelgrid', 'output', 1)]
        expected_keys.sort()
        assert sorted(parser.filepaths.keys()) == expected_keys
        for k in expected_keys:
            assert os.path.exists(parser.filepaths[k])

        assert parser.num_mesh_categories() == 2
        assert parser.num_pointcloud_categories() == 2
        assert parser.num_voxelgrid_categories() == 1
        assert parser.num_mesh_items() == 4
        assert parser.num_pointcloud_items() == 4
        assert parser.num_voxelgrid_items() == 2

        expected_categories = {
            "mesh": [
                timelapse.TimelapseParser.CategoryInfo(
                    'ground_truth', ids=[0, 1], end_time=0).serializable(),
                timelapse.TimelapseParser.CategoryInfo(
                    'output', ids=[0, 1], end_time=100).serializable()],
            "pointcloud": [
                timelapse.TimelapseParser.CategoryInfo(
                    'input', ids=[0, 1], end_time=0).serializable(),
                timelapse.TimelapseParser.CategoryInfo(
                    'output', ids=[0, 1], end_time=100).serializable()],
            "voxelgrid": [
                timelapse.TimelapseParser.CategoryInfo(
                    'output', ids=[0, 1], end_time=100).serializable()]
        }
        assert set(expected_categories.keys()) == set(parser.dir_info.keys())
        for k, v in expected_categories.items():
            expected = v
            actual = parser.dir_info[k]
            assert len(expected) == len(actual)
            for i in range(len(expected)):
                for ck, cv in expected[i].items():  # Only check expected properties
                    assert (ck in actual[i])
                    assert cv == actual[i][ck]

        # Now we add another iteration
        writer = timelapse.Timelapse(output_dir2)
        writer.add_mesh_batch(iteration=200, category='output',
                              vertices_list=[m.vertices for m in meshes],
                              faces_list=[m.faces for m in meshes])
        assert parser.check_for_updates()
        assert parser.get_category_info('mesh', 'output')['end_time'] == 200

        # Now let's delete a category
        shutil.rmtree(os.path.join(output_dir2, 'output'))
        assert parser.check_for_updates()
        assert parser.num_mesh_categories() == 1
        assert parser.num_pointcloud_categories() == 1
3255
tests/samples/colored_sphere.obj
Normal file
13
tests/samples/colored_sphere.obj.mtl
Normal file
@@ -0,0 +1,13 @@
#
# Wavefront material file
# Converted by Meshlab Group
#

newmtl material_0
Ka 0.200000 0.200000 0.200000
Kd 0.752941 0.752941 0.752941
Ks 1.000000 1.000000 1.000000
Tr 1.000000
illum 2
Ns 0.000000
map_Kd sphere_mtl.png