Test commit

2024-01-16 17:22:21 +08:00
parent 92862c0372
commit 73635fda01
654 changed files with 178015 additions and 2 deletions

Acknowledgements.txt Normal file

@@ -0,0 +1,80 @@
This software includes third-party components under the following licenses:
========================
pyTorchChamferDistance components
https://github.com/chrdiller/pyTorchChamferDistance
MIT License
Copyright (c) 2018 Christian Diller
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
========================
Occupancy Networks components
https://github.com/autonomousvision/occupancy_networks
Copyright 2019 Lars Mescheder, Michael Oechsle, Michael Niemeyer, Andreas Geiger, Sebastian Nowozin
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
========================
Spherical Harmonics and Spherical Gaussians components
https://github.com/TheRealMJP/BakingLab
MIT License
Copyright (c) 2016 MJP
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

CONTRIBUTING.md Normal file

@@ -0,0 +1,272 @@
# Contributing guidelines
Contributions are welcome!
You can send us pull requests to help improve Kaolin. If you are just getting started, GitLab has a [how-to](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html).
Kaolin team members will be assigned to review your pull requests. Once your change passes the review and the continuous integration checks, a Kaolin team member will approve and merge it into the repository.
If you want to contribute, [Gitlab issues](https://gitlab-master.nvidia.com/Toronto_DL_Lab/kaolin-reformat/-/issues) are a good starting point, especially the ones with the label [good first issue](https://gitlab-master.nvidia.com/Toronto_DL_Lab/kaolin-reformat/-/issues?scope=all&utf8=%E2%9C%93&state=opened&label_name[]=good%20first%20issue). If you start working on an issue, leave a comment so other people know that you're working on it; you can also coordinate with others in the issue comment threads.
## Pull Request Checklist
Before sending your pull requests, make sure you have followed this checklist.
1) Read these Guidelines in full.
2) Please take a look at the LICENSE (it's Apache 2.0).
3) Make sure you sign your commits, e.g. use ``git commit -s`` when committing.
4) Check your changes are consistent with the [Standards and Coding Style](CONTRIBUTING.md#standards-and-coding-style).
5) Make sure all unittests finish successfully before sending PR.
6) Send your pull request to the `master` branch.
## Guidelines for Contributing Code
### Running tests
In order to verify that your change does not break anything, a number of checks must
pass: running unit tests, making sure that all example
notebooks and recipes run without error, and building the docs correctly. For Unix-based
systems, we provide a script to execute all of these tests locally:
```
pip install -r tools/ci_requirements.txt
pip install -r tools/doc_requirements.txt
bash tools/linux/run_tests.sh all
```
If you also want to run integration tests, see [tests/integration/](tests/integration/), specifically
[Dash3D tests](tests/integration/experimental/dash3d/README.md).
### Documentation
All new additions to the Kaolin API must be properly documented. Additional information
on documentation is provided in [our guide](docs/README.md).
### Signing your commits
All commits must be signed using ``git commit -s``.
If you forgot to sign previous commits you can amend them as follows:
* ``git commit -s --amend`` for the last commit.
* ``git rebase --signoff`` for all the commits of your pull request.
### Standards and Coding Style
#### General guidelines
* New features must include unit tests which help guarantee correctness in the present and future.
* API changes should be minimal and backward compatible. Any changes that break backward compatibility should be carefully considered and tracked so that they can be included in the release notes.
* New features may not be accepted if the cost of maintenance is too high in comparison to their benefit; they may instead be integrated into contrib subfolders for minimal support and maintenance before eventually being integrated into the core.
#### Writing Tests
All tests should use the [pytest](https://docs.pytest.org/en/latest/) and [pytest-cov](https://pytest-cov.readthedocs.io/en/latest/) frameworks. Tests should be placed in the [tests/python directory](tests/python/), which should follow the directory structure of [kaolin](kaolin/). For example, the
test for `kaolin/io/obj.py` should be placed in `tests/python/kaolin/io/test_obj.py`.
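For illustration, a minimal test file in this style might look as follows; the op under test, the file path, and the tested shapes are hypothetical, not part of the actual Kaolin test suite:
```python
# tests/python/kaolin/ops/test_example.py (hypothetical path and op)
import pytest
import torch


@pytest.mark.parametrize('device', ['cpu', 'cuda'])
@pytest.mark.parametrize('dtype', [torch.float32, torch.float64])
def test_elementwise_add(device, dtype):
    if device == 'cuda' and not torch.cuda.is_available():
        pytest.skip('CUDA is not available')
    lhs = torch.rand(4, 3, device=device, dtype=dtype)
    rhs = torch.rand(4, 3, device=device, dtype=dtype)
    # Replace with the Kaolin op under test.
    output = lhs + rhs
    assert output.shape == (4, 3)
    assert output.dtype == dtype
    assert torch.equal(output, lhs + rhs)
```
Parametrizing over device and dtype, as above, is a cheap way to cover the CPU and CUDA paths of an op in a single test.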
#### License
Include a license at the top of new files.
##### C/C++/CUDA
```cpp
// Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES.
// All rights reserved.
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
```
##### Python
```python
# Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
When non-trivial changes are made, the copyright notice should be updated accordingly. For instance, if the file was originally authored in 2021, a few typos were fixed in 2022, a paragraph or subroutine was added in 2023, and a major rev2.0 was created in 2024, you would in 2024 write:
"Copyright (c) 2021,23-24 NVIDIA CORPORATION & AFFILIATES"
#### Code organization
* [kaolin](kaolin/) - The core Kaolin library, composed of Python modules,
except for code under [csrc](kaolin/csrc) or [experimental](kaolin/experimental).
* [csrc](kaolin/csrc/) - Directory for all the C++ / CUDA implementations of custom ops.
The GPU ops are under the subdirectory [csrc/cuda](kaolin/csrc/cuda)
while the CPU parts are under the subdirectory [csrc/cpu](kaolin/csrc/cpu).
* [io](kaolin/io/) - Module of all the I/O features of Kaolin, such as importing and exporting 3D models.
* [metrics](kaolin/metrics) - Module of all the metrics that can be used as differentiable losses or distances.
* [ops](kaolin/ops/) - Module of all the core operations of Kaolin on different 3D representations.
* [render](kaolin/render/) - Module of all the differentiable rendering modules and advanced implementations.
* [utils](kaolin/utils/) - Module of all the utility features for debugging and testing.
* [visualize](kaolin/visualize/) - Module of all the visualization modules.
* [experimental](kaolin/experimental/) - Contains less thoroughly tested components for early adoption.
* [examples](examples/) - Examples of Kaolin usage.
* [tests](tests/) - Tests for all of Kaolin.
#### C++ coding style
We follow the [Google C++ Style Guide](https://google.github.io/styleguide/cppguide.html), with an exception on naming for functions / methods, where we use **snake_case**.
The file structure should be:
- ``*.cuh`` files are for reusable device functions (using ``static inline __device__`` for definition)
```cpp
// file is kaolin/csrc/ops/add.cuh
#ifndef KAOLIN_OPS_ADD_CUH_
#define KAOLIN_OPS_ADD_CUH_
namespace kaolin {
static inline __device__ float add(float a, float b) {
return a + b;
}
static inline __device__ double add(double a, double b) {
return a + b;
}
} // namespace kaolin
#endif // KAOLIN_OPS_ADD_CUH_
```
- ``*.cpp`` files (except specific files like [bindings.cpp](kaolin/csrc/bindings.cpp)) should be used for defining the functions that will be directly bound to Python. Those functions should only be responsible for checking the inputs' device / memory layout / size, generating the output (if possible), and calling the base function in the ``_cuda.cu`` or ``_cpu.cpp`` file. The kernel launcher should be declared at the beginning of the file or in an included header (if reused).
```cpp
// file is kaolin/csrc/ops/foo.cpp
#include <ATen/ATen.h>
namespace kaolin {
#if WITH_CUDA
void foo_cuda_impl(
at::Tensor lhs,
at::Tensor rhs,
at::Tensor output);
#endif // WITH_CUDA
at::Tensor foo_cuda(
at::Tensor lhs,
at::Tensor rhs) {
at::TensorArg lhs_arg{lhs, "lhs", 1}, rhs_arg{rhs, "rhs", 2};
at::checkSameGPU("foo_cuda", lhs_arg, rhs_arg);
at::checkAllContiguous("foo_cuda", {lhs_arg, rhs_arg});
at::checkSameSize("foo_cuda", lhs_arg, rhs_arg);
at::checkSameType("foo_cuda", lhs_arg, rhs_arg);
at::Tensor output = at::zeros_like(lhs);
#if WITH_CUDA
foo_cuda_impl(lhs, rhs, output);
#else
KAOLIN_NO_CUDA_ERROR(__func__);
#endif // WITH_CUDA
return output;
}
} // namespace kaolin
```
- ``*_cuda.cu`` files are for dispatching given the input types and implementing the operations on GPU, using the Torch C++ API and/or launching a custom CUDA kernel.
```cpp
// file is kaolin/csrc/ops/foo_cuda.cu
#include <ATen/ATen.h>
#include <c10/cuda/CUDAGuard.h>
#include "./add.cuh"
namespace kaolin {
template<typename scalar_t>
__global__
void foo_cuda_kernel(
const scalar_t* __restrict__ lhs,
const scalar_t* __restrict__ rhs,
const int numel,
scalar_t* __restrict__ output) {
for (int i = threadIdx.x + blockIdx.x * blockDim.x;
i < numel; i += blockDim.x * gridDim.x) {
output[i] = add(lhs[i], rhs[i]);
}
}
void foo_cuda_impl(
at::Tensor lhs,
at::Tensor rhs,
at::Tensor output) {
const int threads = 1024;
const int blocks = 64;
AT_DISPATCH_FLOATING_TYPES(lhs.scalar_type(), "foo_cuda", [&] {
const at::cuda::OptionalCUDAGuard device_guard(at::device_of(output));
auto stream = at::cuda::getCurrentCUDAStream();
foo_cuda_kernel<<<blocks, threads, 0, stream>>>(
lhs.data_ptr<scalar_t>(),
rhs.data_ptr<scalar_t>(),
lhs.numel(),
output.data_ptr<scalar_t>());
});
}
} // namespace kaolin
```
- ``*.h`` files are for declaring functions that will be bound to Python; those header files are to be included in [bindings.cpp](kaolin/csrc/bindings.cpp).
```cpp
// file is kaolin/csrc/ops/foo.h
#ifndef KAOLIN_OPS_FOO_H_
#define KAOLIN_OPS_FOO_H_
#include <ATen/ATen.h>
namespace kaolin {
at::Tensor foo_cuda(
at::Tensor lhs,
at::Tensor rhs);
} // namespace kaolin
#endif // KAOLIN_OPS_FOO_H_
```
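Once declared in the header and bound in [bindings.cpp](kaolin/csrc/bindings.cpp), the op can be exercised from Python. A minimal sketch, assuming the compiled extension is exposed as `kaolin._C` with an `ops` submodule and that the binding keeps the C++ name (both assumptions for this example):
```python
# Hypothetical Python-side check of the bound op; the extension path
# (kaolin._C.ops) and binding name (foo_cuda) are assumptions.
# Requires a CUDA build of Kaolin and a CUDA device.
import torch
from kaolin import _C

lhs = torch.rand(1024, device='cuda')
rhs = torch.rand(1024, device='cuda')
output = _C.ops.foo_cuda(lhs, rhs)  # matches the C++ signature above
assert torch.allclose(output, lhs + rhs)
```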
#### Python coding style
We follow the [PEP8 Style Guide](https://www.python.org/dev/peps/pep-0008/) with some exceptions, listed in the [flake8 config file](https://gitlab-master.nvidia.com/Toronto_DL_Lab/kaolin-reformat/.flake8), and generally follow PyTorch naming conventions.
It is enforced using [flake8](https://pypi.org/project/flake8/), with [flake8-bugbear](https://pypi.org/project/flake8-bugbear/), [flake8-comprehensions](https://pypi.org/project/flake8-comprehensions/), [flake8-mypy](https://pypi.org/project/flake8-mypy/) and [flake8-pyi](https://pypi.org/project/flake8-pyi/).
To run flake8, execute ``flake8 --config=.flake8 .`` from the [root of kaolin](https://gitlab-master.nvidia.com/Toronto_DL_Lab/kaolin-reformat).
On top of that, we use prefixes (``packed_``, ``padded_``) to indicate that a module / op is specific to a layout; all ops with the same purpose for different layouts should be in the same file, as sketched below.
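As an illustration of this convention, the function names, signatures, and semantics below are made up for the example and are not the actual Kaolin API:
```python
# Layout-specific variants of the same op living side by side in one file.
import torch

def packed_vertex_means(vertices, first_idx):
    # vertices: (total_num_vertices, 3); first_idx: (num_meshes + 1,)
    # Mean vertex position per mesh in a packed batch.
    return torch.stack([vertices[s:e].mean(dim=0)
                        for s, e in zip(first_idx[:-1], first_idx[1:])])

def padded_vertex_means(vertices, num_vertices):
    # vertices: (num_meshes, max_num_vertices, 3); num_vertices: (num_meshes,)
    # Mean vertex position per mesh in a padded batch, masking the padding.
    mask = (torch.arange(vertices.shape[1], device=vertices.device)
            < num_vertices[:, None])
    return (vertices * mask[..., None]).sum(dim=1) / num_vertices[:, None]
```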
[tests/python/kaolin/](tests/python/kaolin) should follow the same directory structure as [kaolin/](kaolin/). E.g. each module `kaolin/path/to/mymodule.py` should have a corresponding `tests/python/kaolin/path/to/test_mymodule.py`.

COPYRIGHT Normal file

@@ -0,0 +1,26 @@
Copyright (c) 2019-2023 NVIDIA CORPORATION. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
========================================================================
The non-commercial portions of this project, located under `kaolin/non_commercial`
and `tests/python/kaolin/non_commercial`, are licensed under the NSCL.
Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
NVIDIA CORPORATION & AFFILIATES and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION & AFFILIATES is strictly prohibited.

LICENSE Normal file

@@ -0,0 +1,174 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

LICENSE.NSCL Normal file

@@ -0,0 +1,90 @@
Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
NVIDIA Source Code License for Kaolin
=======================================================================
1. Definitions
“Licensor” means any person or entity that distributes its Work.
“Work” means (a) the original work of authorship made available under
this license, which may include software, documentation, or other files,
and (b) any additions to or derivative works thereof that are made
available under this license.
The terms “reproduce,” “reproduction,” “derivative works,” and
“distribution” have the meaning as provided under U.S. copyright law;
provided, however, that for the purposes of this license, derivative works
shall not include works that remain separable from, or merely link
(or bind by name) to the interfaces of, the Work.
Works are “made available” under this license by including in or with
the Work either (a) a copyright notice referencing the applicability of
this license to the Work, or (b) a copy of this license.
2. License Grant
2.1 Copyright Grant. Subject to the terms and conditions of this license,
each Licensor grants to you a perpetual, worldwide, non-exclusive,
royalty-free, copyright license to use, reproduce, prepare derivative
works of, publicly display, publicly perform, sublicense and distribute
its Work and any resulting derivative works in any form.
3. Limitations
3.1 Redistribution. You may reproduce or distribute the Work only if
(a) you do so under this license, (b) you include a complete copy of
this license with your distribution, and (c) you retain without
modification any copyright, patent, trademark, or attribution notices
that are present in the Work.
3.2 Derivative Works. You may specify that additional or different terms
apply to the use, reproduction, and distribution of your derivative
works of the Work (“Your Terms”) only if (a) Your Terms provide that the
use limitation in Section 3.3 applies to your derivative works, and (b)
you identify the specific derivative works that are subject to Your Terms.
Notwithstanding Your Terms, this license (including the redistribution
requirements in Section 3.1) will continue to apply to the Work itself.
3.3 Use Limitation. The Work and any derivative works thereof only may be
used or intended for use non-commercially. Notwithstanding the foregoing,
NVIDIA Corporation and its affiliates may use the Work and any derivative
works commercially. As used herein, “non-commercially” means for research
or evaluation purposes only.
3.4 Patent Claims. If you bring or threaten to bring a patent claim against
any Licensor (including any claim, cross-claim or counterclaim in a lawsuit)
to enforce any patents that you allege are infringed by any Work, then your
rights under this license from such Licensor (including the grant in
Section 2.1) will terminate immediately.
3.5 Trademarks. This license does not grant any rights to use any Licensors
or its affiliates names, logos, or trademarks, except as necessary to
reproduce the notices described in this license.
3.6 Termination. If you violate any term of this license, then your rights
under this license (including the grant in Section 2.1) will terminate
immediately.
4. Disclaimer of Warranty.
THE WORK IS PROVIDED “AS IS” WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR NON-INFRINGEMENT.
YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER THIS LICENSE.
5. Limitation of Liability.
EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL THEORY,
WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE SHALL ANY
LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, INDIRECT, SPECIAL,
INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF OR RELATED TO THIS LICENSE,
THE USE OR INABILITY TO USE THE WORK (INCLUDING BUT NOT LIMITED TO LOSS OF
GOODWILL, BUSINESS INTERRUPTION, LOST PROFITS OR DATA, COMPUTER FAILURE OR
MALFUNCTION, OR ANY OTHER DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
=======================================================================

MANIFEST.in Normal file

@@ -0,0 +1,2 @@
recursive-include kaolin/experimental/dash3d/static *
recursive-include kaolin/experimental/dash3d/templates *

README.md

@@ -1,3 +1,121 @@
# Kaolin: A PyTorch Library for Accelerating 3D Deep Learning Research
<p align="center">
<img src="assets/kaolin.png">
</p>
## Overview
NVIDIA Kaolin library provides a PyTorch API for working with a variety of 3D representations, and includes a growing collection of GPU-optimized operations such as modular differentiable rendering, fast conversions between representations, data loading, 3D checkpoints, a differentiable camera API, differentiable lighting with spherical harmonics and spherical gaussians, a powerful quadtree acceleration structure called Structured Point Clouds, an interactive 3D visualizer for Jupyter notebooks, a convenient batched mesh container, and more. Visit the [Kaolin Library Documentation](https://kaolin.readthedocs.io/en/latest/) to get started!
Note that the Kaolin library is part of the larger [NVIDIA Kaolin effort](https://developer.nvidia.com/kaolin) for 3D deep learning.
## Installation and Getting Started
Starting with v0.12.0, Kaolin supports installation with wheels:
```
# Replace TORCH_VERSION and CUDA_VERSION with your torch / cuda versions
pip install kaolin==0.15.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-{TORCH_VERSION}_cu{CUDA_VERSION}.html
```
For example, to install kaolin 0.15.0 over torch 1.12.1 and cuda 11.3:
```
pip install kaolin==0.15.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-1.12.1_cu113.html
```
## About the Latest Release (0.15.0)
In this version we added a [non-commercial section](https://kaolin.readthedocs.io/en/latest/modules/kaolin.non_commercial.html) under the [NSCL license](LICENSE.NSCL). See the [Licenses](#Licenses) section for more details.
In this new section we implemented [FlexiCubes](https://kaolin.readthedocs.io/en/latest/modules/kaolin.non_commercial.html#kaolin.non_commercial.FlexiCubes), a method to extract meshes from scalar fields. See [the official repository](https://github.com/nv-tlabs/FlexiCubes), which now uses Kaolin's implementation, for more information.
<a href="https://kaolin.readthedocs.io/en/latest/modules/kaolin.non_commercial.html#kaolin.non_commercial.FlexiCubes"><img src="./assets/flexicubes.png" alt="flexicubes" height="250" /></a>
In addition we implemented a [GLTF mesh loader](https://kaolin.readthedocs.io/en/latest/modules/kaolin.io.gltf.html) that can be used to load models from [Objaverse](https://objaverse.allenai.org/objaverse-1.0) and [Objaverse-XL](https://objaverse.allenai.org/).
<a href="https://kaolin.readthedocs.io/en/latest/modules/kaolin.io.gltf.html"><img src="./assets/gltf.png" alt="gltf" height="250" /></a>
Check out our new tutorial:
[**Load and render a GLTF file** interactively in a Jupyter notebook](https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/gltf_viz.ipynb).
In this tutorial we show how to load a glTF file and render it fully differentiably with [nvdiffrast](https://nvlabs.github.io/nvdiffrast/) and [spherical gaussians for diffuse and specular lighting](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.lighting.html), using displacement mapping and other material properties from the glTF file.
<a href="https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/gltf_viz.ipynb"><img src="./assets/avocado.png" alt="gltf notebook" height="250" /></a>
See [change logs](https://github.com/NVIDIAGameWorks/kaolin/releases/tag/v0.15.0) for details.
## Contributing
Please review our [contribution guidelines](CONTRIBUTING.md).
## External Projects using Kaolin
* [NVIDIA Kaolin Wisp](https://github.com/NVIDIAGameWorks/kaolin-wisp):
* Use [Camera API](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.camera.html), [Structured Point Clouds](https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.spc.html) and its [rendering capabilities](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.spc.html)
* [gradSim: Differentiable simulation for system identification and visuomotor control](https://github.com/gradsim/gradsim):
* Use [DIB-R rasterizer](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.mesh.html#kaolin.render.mesh.dibr_rasterization), [obj loader](https://kaolin.readthedocs.io/en/latest/modules/kaolin.io.obj.html#kaolin.io.obj.import_mesh) and [timelapse](https://kaolin.readthedocs.io/en/latest/modules/kaolin.visualize.html#kaolin.visualize.Timelapse)
* [Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer](https://github.com/nv-tlabs/DIB-R-Single-Image-3D-Reconstruction/tree/2cfa689881145c8e0647ae8dd077e55b5a578658):
* Use [Kaolin's DIB-R rasterizer](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.mesh.html#kaolin.render.mesh.dibr_rasterization), [camera functions](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.camera.html) and [Timelapse](https://kaolin.readthedocs.io/en/latest/modules/kaolin.visualize.html#kaolin.visualize.Timelapse) for 3D checkpoints.
* [Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Surfaces](https://github.com/nv-tlabs/nglod):
* Use [SPC](https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.spc.html) conversions and [ray-tracing](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.spc.html#kaolin.render.spc.unbatched_raytrace), yielding 30x memory and 3x training time reduction.
* [Learning Deformable Tetrahedral Meshes for 3D Reconstruction](https://github.com/nv-tlabs/DefTet):
* Use [Kaolin's DefTet volumetric renderer](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.mesh.html#kaolin.render.mesh.deftet_sparse_render), [tetrahedral losses](https://kaolin.readthedocs.io/en/latest/modules/kaolin.metrics.tetmesh.html), [camera_functions](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.camera.html), [mesh operators and conversions](https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.html), [ShapeNet dataset](https://kaolin.readthedocs.io/en/latest/modules/kaolin.io.shapenet.html#kaolin.io.shapenet.ShapeNetV1), [point_to_mesh_distance](https://kaolin.readthedocs.io/en/latest/modules/kaolin.metrics.trianglemesh.html#kaolin.metrics.trianglemesh.point_to_mesh_distance) and [sided_distance](https://kaolin.readthedocs.io/en/latest/modules/kaolin.metrics.pointcloud.html#kaolin.metrics.pointcloud.sided_distance).
* [Text2Mesh](https://github.com/threedle/text2mesh):
* Use [Kaolin's rendering functions](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.mesh.html#), [camera functions](https://kaolin.readthedocs.io/en/latest/modules/kaolin.render.camera.html), and [obj](https://kaolin.readthedocs.io/en/latest/modules/kaolin.io.obj.html#kaolin.io.obj.import_mesh) and [off](https://kaolin.readthedocs.io/en/latest/modules/kaolin.io.off.html#kaolin.io.off.import_mesh) importers.
* [Flexible Isosurface Extraction for Gradient-Based Mesh Optimization (FlexiCubes)](https://github.com/nv-tlabs/FlexiCubes):
* Use the [FlexiCubes class](https://kaolin.readthedocs.io/en/latest/modules/kaolin.non_commercial.html#kaolin.non_commercial.FlexiCubes), [obj loader](https://kaolin.readthedocs.io/en/latest/modules/kaolin.io.obj.html), and [turntable visualizer](https://kaolin.readthedocs.io/en/latest/modules/kaolin.visualize.html#kaolin.visualize.IpyTurntableVisualizer).
## Licenses
Most of Kaolin's repository is under the [Apache v2.0 license](LICENSE), except [kaolin/non_commercial](kaolin/non_commercial/), which is under the [NSCL license](LICENSE.NSCL), restricted to non-commercial usage for research and evaluation purposes. For example, the FlexiCubes method is included under [non_commercial](kaolin/non_commercial/flexicubes/flexicubes.py).
The default `kaolin` import includes the Apache-licensed components:
```
import kaolin
```
The non-commercial components need to be explicitly imported as:
```
import kaolin.non_commercial
```
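As a rough sketch of the non-commercial workflow, the following extracts a sphere mesh from an SDF with FlexiCubes. The method names and call signature follow the official FlexiCubes repository and are assumptions about the Kaolin API, not a verified usage:
```python
# Hedged example: extract a mesh from a scalar field with FlexiCubes.
# construct_voxel_grid / __call__ signatures are assumptions based on
# the nv-tlabs/FlexiCubes repository.
import torch
import kaolin.non_commercial

fc = kaolin.non_commercial.FlexiCubes(device='cuda')
res = 32
# Voxel grid: vertex positions (N, 3) and per-cube vertex indices (M, 8).
grid_verts, cube_idx = fc.construct_voxel_grid(res)
# Scalar field sampled at the grid vertices, here a sphere SDF.
sdf = grid_verts.norm(dim=-1) - 0.35
vertices, faces, l_dev = fc(grid_verts, sdf, cube_idx, res)
```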
## Citation
If you are using Kaolin library for your research, please cite:
```
@misc{KaolinLibrary,
author = {Fuji Tsang, Clement and Shugrina, Maria and Lafleche, Jean Francois and Takikawa, Towaki and Wang, Jiehan and Loop, Charles and Chen, Wenzheng and Jatavallabhula, Krishna Murthy and Smith, Edward and Rozantsev, Artem and Perel, Or and Shen, Tianchang and Gao, Jun and Fidler, Sanja and State, Gavriel and Gorski, Jason and Xiang, Tommy and Li, Jianing and Li, Michael and Lebaredian, Rev},
title = {Kaolin: A Pytorch Library for Accelerating 3D Deep Learning Research},
year = {2022},
howpublished={\url{https://github.com/NVIDIAGameWorks/kaolin}}
}
```
## Contributors
Current Team:
- Technical Lead: Clement Fuji Tsang
- Manager: Maria (Masha) Shugrina
- Charles Loop
- Or Perel
- Alexander Zook
Other Major Contributors:
- Wenzheng Chen
- Sanja Fidler
- Jun Gao
- Jason Gorski
- Jean-Francois Lafleche
- Rev Lebaredian
- Jianing Li
- Michael Li
- Krishna Murthy Jatavallabhula
- Artem Rozantsev
- Tianchang (Frank) Shen
- Edward Smith
- Gavriel State
- Towaki Takikawa
- Jiehan Wang
- Tommy Xiang

BIN assets/avocado.png Normal file (37 KiB)
BIN assets/diffuse.png Normal file (56 KiB)
BIN assets/flexicubes.png Normal file (262 KiB)
BIN assets/gltf.png Normal file (6.6 KiB)
BIN assets/kaolin.png Normal file (1.9 MiB)
BIN assets/optimization.gif Normal file (720 KiB)
BIN assets/spc_tutorial.gif Normal file (1.3 MiB)
BIN assets/specular.gif Normal file (3.7 MiB)
BIN assets/specular.png Normal file (54 KiB)
BIN assets/visualizer.gif Normal file (1.2 MiB)
BIN (file name not shown, 84 KiB)


@@ -0,0 +1,492 @@
#!/usr/bin/env groovy
import groovy.transform.Field
if (gitlabActionType == "MERGE" || gitlabSourceBranch == "master") {
gitlabCommitStatus("launch all builds") {
// Configs for build from pytorch docker images
// (See: https://hub.docker.com/r/pytorch/pytorch/tags)
def ubuntu_from_pytorch_configs = [
[
// python: 3.7
'cudaVer': '11.3', 'cudnnVer': '8', 'torchVer': '1.12.1',
'archsToTest': 'MULTI'
],
[
// python: 3.7
'cudaVer': '11.6', 'cudnnVer': '8', 'torchVer': '1.13.1',
'archsToTest': 'MULTI'
],
[
// python: 3.7
'cudaVer': '12.1', 'cudnnVer': '8', 'torchVer': '2.1.0',
'archsToTest': 'MULTI'
],
[
// python: 3.10
'cudaVer': '11.8', 'cudnnVer': '8', 'torchVer': '2.1.0',
'archsToTest': 'MULTI'
]
]
// Configs for build from NGC pytorch docker images
// (See: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags)
def ubuntu_from_nvcr_configs = [
[
'baseImageTag': '23.10-py3',
'archsToTest': 'MULTI'
],
]
// Configs for build from cuda container
// with custom installation of all the dependencies like PyTorch
// (See: https://hub.docker.com/r/nvidia/cuda/tags)
def ubuntu_from_cuda_configs = [
[
'cudaVer': '11.3.1', 'cudnnVer': '8',
'pythonVer': '3.8', 'torchVer': '1.11.0',
'archsToTest': 'MULTI'
],
[
'cudaVer': '12.1.0', 'cudnnVer': '8',
'pythonVer': '3.10', 'torchVer': '2.1.0',
'archsToTest': 'MULTI'
],
]
// Configs for build for cpu only
// (Use docker image ubuntu:18.04 as a base)
def ubuntu_cpuonly_configs = [
[
'pythonVer': '3.8', 'torchVer': '1.11.0',
],
[
'pythonVer': '3.10', 'torchVer': '2.0.1',
]
]
// Configs for building python wheels
def ubuntu_for_wheels_configs = [
/*
[
'cudaVer': '11.3.1', 'cudnnVer': '8',
'torchVer': '1.12.0', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.6.2', 'cudnnVer': '8',
'torchVer': '1.12.0', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.3.1', 'cudnnVer': '8',
'torchVer': '1.12.1', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.6.2', 'cudnnVer': '8',
'torchVer': '1.12.1', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.6.2', 'cudnnVer': '8',
'torchVer': '1.13.0', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.7.1', 'cudnnVer': '8',
'torchVer': '1.13.0', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.6.2', 'cudnnVer': '8',
'torchVer': '1.13.1', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.7.1', 'cudnnVer': '8',
'torchVer': '1.13.1', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.7.1', 'cudnnVer': '8',
'torchVer': '2.0.0', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.8.0', 'cudnnVer': '8',
'torchVer': '2.0.0', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.7.1', 'cudnnVer': '8',
'torchVer': '2.0.1', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.8.0', 'cudnnVer': '8',
'torchVer': '2.0.1', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.8.0', 'cudnnVer': '8',
'torchVer': '2.1.0', 'archsToTest': 'MULTI'
],
[
'cudaVer': '12.1.0', 'cudnnVer': '8',
'torchVer': '2.1.0', 'archsToTest': 'MULTI'
],
[
'cudaVer': '11.8.0', 'cudnnVer': '8',
'torchVer': '2.1.1', 'archsToTest': 'MULTI'
],
[
'cudaVer': '12.1.0', 'cudnnVer': '8',
'torchVer': '2.1.1', 'archsToTest': 'MULTI'
],
*/
]
def windows_for_wheels_configs = [
/*
[
'cudaVer': '11.3', 'cudnnVer': '8',
'torchVer': '1.12.0', 'archsToTest': ''
],
[
'cudaVer': '11.6', 'cudnnVer': '8',
'torchVer': '1.12.0', 'archsToTest': ''
],
[
'cudaVer': '11.3', 'cudnnVer': '8',
'torchVer': '1.12.1', 'archsToTest': ''
],
[
'cudaVer': '11.6', 'cudnnVer': '8',
'torchVer': '1.12.1', 'archsToTest': ''
],
[
'cudaVer': '11.6', 'cudnnVer': '8',
'torchVer': '1.13.0', 'archsToTest': ''
],
[
'cudaVer': '11.7', 'cudnnVer': '8',
'torchVer': '1.13.0', 'archsToTest': ''
],
[
'cudaVer': '11.6', 'cudnnVer': '8',
'torchVer': '1.13.1', 'archsToTest': ''
],
[
'cudaVer': '11.7', 'cudnnVer': '8',
'torchVer': '1.13.1', 'archsToTest': ''
],
[
'cudaVer': '11.7', 'cudnnVer': '8',
'torchVer': '2.0.0', 'archsToTest': ''
],
[
'cudaVer': '11.8', 'cudnnVer': '8',
'torchVer': '2.0.0', 'archsToTest': ''
],
[
'cudaVer': '11.7', 'cudnnVer': '8',
'torchVer': '2.0.1', 'archsToTest': ''
],
[
'cudaVer': '11.8', 'cudnnVer': '8',
'torchVer': '2.0.1', 'archsToTest': ''
],
[
'cudaVer': '11.8', 'cudnnVer': '8',
'torchVer': '2.1.0', 'archsToTest': ''
],
[
'cudaVer': '12.1', 'cudnnVer': '8',
'torchVer': '2.1.0', 'archsToTest': ''
],
[
'cudaVer': '11.8', 'cudnnVer': '8',
'torchVer': '2.1.1', 'archsToTest': ''
],
[
'cudaVer': '12.1', 'cudnnVer': '8',
'torchVer': '2.1.1', 'archsToTest': ''
]
*/
]
// Configs for build from Windows server docker images
// (See: https://hub.docker.com/_/microsoft-dotnet-framework-sdk)
def windows_from_server_configs = [
// CUDA drivers on test machines are only available for CUDA versions >= 11.0,
// see: https://gitlab-master.nvidia.com/ipp/cloud-infra/blossom/dev/windows-gpu-pods/-/tree/master/ContainerDriverSetup
// Test machines are currently the only option, named 'gpu_tester';
// two machines exist, but only the TITAN RTX will pass tests.
[
'cudaVer': '11.3',
'pythonVer': '3.8', 'torchVer': '1.11.0',
'archsToTest': 'gpu_tester' //'Tesla_V100_PCIE_32GB'
],
[
'cudaVer': '11.8',
'pythonVer': '3.10', 'torchVer': '2.0.1',
'archsToTest': 'gpu_tester' //'Tesla_V100_PCIE_32GB'
]
]
dockerRegistryServer = 'gitlab-master.nvidia.com:5005'
dockerRegistryName = 'toronto_dl_lab/kaolin'
imageBaseTag = "${dockerRegistryServer}/${dockerRegistryName}/kaolin"
// Used for the target docker image tag, as it doesn't support all characters (such as /).
branchRef = gitlabSourceBranch.replaceAll("[^a-zA-Z0-9]", "-")
node {
checkout scm
// Sanity check, in case this script fails to launch all builds and tests.
// Right now we only apply CI on MRs and the master branch.
// To enable the master branch we have to accept all the push requests
// and prune them here.
sh "echo ${gitlabActionType}"
jobMap = [:]
// Jenkins doesn't parse the commit hash from the webhook,
// so we need to get the commit hash from the last commit in the branch,
// ensuring that all the builds and runs are on the same commit.
//
// Note:
// If two commits are pushed to the same branch before the first one
// reaches this line, both runs will use the second commit.
commitHash = sh(script: "git log -1 --pretty=format:%h",
returnStdout: true).trim()
sh "echo ${commitHash}"
if (gitlabActionType == "MERGE" &&
gitlabMergeRequestTitle.contains("[for wheels]")) {
for (config in ubuntu_for_wheels_configs) {
for (pythonVer in ['3.8', '3.9', '3.10']) {
def configName = "custom-wheels-torch${config['torchVer']}-" + \
"cuda${config['cudaVer']}-" +
"cudnn${config['cudnnVer']}-" +
"py${pythonVer}"
jobMap["${configName}"] = prepareUbuntuFromCUDAJob(
configName,
config['cudaVer'],
config['cudnnVer'],
pythonVer,
config['torchVer'],
config['archsToTest'],
true
)
}
}
for (config in windows_for_wheels_configs) {
for (pythonVer in ['3.8', '3.9', '3.10']) {
def cudaVerLabel = config['cudaVer'].split('\\.').join('')
def torchVerLabel = config['torchVer'].split('\\.').join('')
def pythonVerLabel = pythonVer.split('\\.').join('')
def configName = "windows-wheels-cuda${cudaVerLabel}-py${pythonVerLabel}-torch${torchVerLabel}"
jobMap["${configName}"] = prepareWindowsJob(
configName,
config['cudaVer'],
pythonVer,
config['torchVer'],
config['archsToTest'],
true
)
}
}
} else {
// Check if the last commit message has a [with custom] tag
def hasNoCustomInMess = sh(script: "git log -1 | grep '.*\\[with custom\\].*'",
returnStatus: true)
if (gitlabActionType == "MERGE") {
sh "echo ${gitlabMergeRequestTitle}"
}
// We try to build from the cuda docker image if the commit has such a tag
// or CI is applied on master.
if (hasNoCustomInMess == 0 || gitlabSourceBranch == "master" ||
gitlabMergeRequestTitle.contains("[with custom]")) {
for (config in ubuntu_from_cuda_configs) {
def configName = "custom-torch${config['torchVer']}-" + \
"cuda${config['cudaVer']}-" +
"cudnn${config['cudnnVer']}-" +
"py${config['pythonVer']}"
jobMap["${configName}"] = prepareUbuntuFromCUDAJob(
configName,
config['cudaVer'],
config['cudnnVer'],
config['pythonVer'],
config['torchVer'],
config['archsToTest'],
false
)
}
}
for (config in ubuntu_from_pytorch_configs) {
def configName = "pytorch-torch${config['torchVer']}-" + \
"cuda${config['cudaVer']}-cudnn${config['cudnnVer']}"
def baseImageTag = "pytorch/pytorch:${config['torchVer']}-" + \
"cuda${config['cudaVer']}-" + \
"cudnn${config['cudnnVer']}-devel"
jobMap["${configName}"] = prepareUbuntuFromBaseImageJob(
configName,
baseImageTag,
config['archsToTest']
)
}
for (config in ubuntu_from_nvcr_configs) {
def configName = "nvcr-${config['baseImageTag']}"
def baseImageTag = "nvcr.io/nvidia/pytorch:${config['baseImageTag']}"
jobMap["${configName}"] = prepareUbuntuFromBaseImageJob(
configName,
baseImageTag,
config['archsToTest']
)
}
for (config in ubuntu_cpuonly_configs) {
def torchVerLabel = config['torchVer'].split('\\.').join('')
def pythonVerLabel = config['pythonVer'].split('\\.').join('')
def configName = "cpuonly-py${config['pythonVer']}-torch${config['torchVer']}"
jobMap["${configName}"] = prepareUbuntuCPUOnlyJob(
configName,
config['pythonVer'],
config['torchVer']
)
}
for (config in windows_from_server_configs) {
def cudaVerLabel = config['cudaVer'].split('\\.').join('')
def torchVerLabel = config['torchVer'].split('\\.').join('')
def pythonVerLabel = config['pythonVer'].split('\\.').join('')
def configName = "windows-cuda${cudaVerLabel}-py${pythonVerLabel}-torch${torchVerLabel}"
jobMap["${configName}"] = prepareWindowsJob(
configName,
config['cudaVer'],
config['pythonVer'],
config['torchVer'],
config['archsToTest'],
false
)
}
}
stage('Launch builds') {
parallel jobMap
}
}
} // gitlabCommitStatus
} // if (gitlabActionType == "MERGE" || gitlabSourceBranch == "master")
def prepareUbuntuFromBaseImageJob(configName, baseImageTag, archsToTest) {
return {
stage("${configName}") {
// Notify Gitlab about the build and tests it will be running
// so it doesn't mark the build successful before it starts running them,
// and we can also see issues if a build / test is never run.
updateGitlabCommitStatus(name: "build-${configName}", state: "pending")
for (arch in archsToTest.split(';')) {
updateGitlabCommitStatus(name: "test-${configName}-${arch}", state: "pending")
}
build job: "ubuntu_build_template_CI",
parameters: [
string(name: 'configName', value: "${configName}"),
string(name: 'baseImageTag', value: "${baseImageTag}"),
string(name: 'targetImageTag',
value: "${imageBaseTag}:${branchRef}-${BUILD_ID}-${configName}"),
string(name: 'archsToTest', value: "${archsToTest}"),
string(name: 'sourceBranch', value: "${env.gitlabSourceBranch}"),
string(name: 'repoUrl', value: "${scm.userRemoteConfigs[0].url}"),
string(name: 'commitHash', value: "${commitHash}")
],
// This node doesn't need to be held while builds and tests run.
wait: false,
// Success of this script depends only on a successful launch,
// not on successful builds and tests.
propagate: false
}
}
}
def prepareUbuntuFromCUDAJob(configName, cudaVer, cudnnVer, pythonVer, torchVer, archsToTest,
buildWheel) {
return {
stage("${configName}") {
// Notify Gitlab about the build and tests it will be running
// so it doesn't mark the build successful before it starts running them,
// and we can also see issues if a build / test is never run.
updateGitlabCommitStatus(name: "build-${configName}", state: "pending")
for (arch in archsToTest.split(';')) {
updateGitlabCommitStatus(name: "test-${configName}-${arch}", state: "pending")
}
build job: "ubuntu_custom_build_template_CI",
parameters: [
string(name: 'configName', value: "${configName}"),
string(name: 'cudaVer', value: "${cudaVer}"),
string(name: 'cudnnVer', value: "${cudnnVer}"),
string(name: 'pythonVer', value: "${pythonVer}"),
string(name: 'torchVer', value: "${torchVer}"),
string(name: 'targetImageTag',
value: "${imageBaseTag}:${branchRef}-${BUILD_ID}-${configName}"),
string(name: 'sourceBranch', value: "${env.gitlabSourceBranch}"),
string(name: 'repoUrl', value: "${scm.userRemoteConfigs[0].url}" ),
string(name: 'archsToTest', value: "${archsToTest}"),
string(name: 'commitHash', value: "${commitHash}"),
booleanParam(name: 'buildWheel', value: "${buildWheel}")
],
// This node doesn't need to be held while builds and tests run.
wait: false,
// Success of this script depends only on a successful launch,
// not on successful builds and tests.
propagate: false
}
}
}
def prepareUbuntuCPUOnlyJob(configName, pythonVer, torchVer) {
return {
stage("${configName}") {
updateGitlabCommitStatus(name: "build-${configName}", state: "pending")
updateGitlabCommitStatus(name: "test-${configName}", state: "pending")
build job: "ubuntu_cpuonly_template_CI",
parameters: [
string(name: 'configName', value: "${configName}"),
string(name: 'pythonVer', value: "${pythonVer}"),
string(name: 'torchVer', value: "${torchVer}"),
string(name: 'targetImageTag',
value: "${imageBaseTag}:${branchRef}-${BUILD_ID}-${configName}"),
string(name: 'sourceBranch', value: "${env.gitlabSourceBranch}"),
string(name: 'repoUrl', value: "${scm.userRemoteConfigs[0].url}" ),
string(name: 'commitHash', value: "${commitHash}")
],
wait: false,
propagate: false
}
}
}
def prepareWindowsJob(configName, cudaVer, pythonVer, torchVer, archsToTest,
buildWheel) {
return {
stage("${configName}") {
updateGitlabCommitStatus(name: "build-${configName}", state: "pending")
if (buildWheel.toBoolean()) {
updateGitlabCommitStatus(name: "test-${configName}", state: "pending")
} //else {
// for (arch in archsToTest.split(';')) {
// updateGitlabCommitStatus(name: "test-${configName}-${arch}", state: "pending")
// }
//}
build job: "windows_build_template_CI",
parameters: [
string(name: 'configName', value: "${configName}"),
string(name: 'cudaVer', value: "${cudaVer}"),
string(name: 'pythonVer', value: "${pythonVer}"),
string(name: 'torchVer', value: "${torchVer}"),
string(name: 'targetImageTag',
value: "${imageBaseTag}:${branchRef}-${BUILD_ID}-${configName}"),
string(name: 'sourceBranch', value: "${gitlabSourceBranch}"),
string(name: 'repoUrl', value: "${scm.userRemoteConfigs[0].url}" ),
string(name: 'commitHash', value: "${commitHash}"),
string(name: 'archsToTest', value: "${archsToTest}"),
booleanParam(name: 'buildWheel', value: "${buildWheel}")
],
wait: false,
propagate: false
}
}
}


@@ -0,0 +1,117 @@
#!/usr/bin/env groovy
docker_registry_server = targetImageTag.split(':')[0..1].join(':')
// This will be the "RUN" displayed on Blue Ocean
currentBuild.displayName = targetImageTag.split(':')[2]
// This will be the "MESSAGE" displayed on Blue Ocean
currentBuild.description = sourceBranch + ": " + commitHash
gitlabCommitStatus("build-${configName}") {
podTemplate(
cloud:'sc-ipp-blossom-prod',
yaml:'''
apiVersion: v1
kind: Pod
spec:
containers:
- name: docker
image: docker:20.10.23
command:
- sleep
args:
- 1d
env:
- name: DOCKER_HOST
value: tcp://localhost:2375
- name: docker-daemon
image: docker:20.10.23-dind
securityContext:
privileged: true
env:
- name: DOCKER_TLS_CERTDIR
value: ""
resources:
requests:
memory: 32Gi
cpu: 12
limits:
memory: 32Gi
cpu: 12
''') {
node(POD_LABEL) {
container("docker") {
// Give the docker daemon time to initialize.
sleep 10
try {
stage("Checkout") {
checkout([
$class: 'GitSCM',
branches: [[name: "${commitHash}"]],
// We need submodules
extensions: [[
$class: 'SubmoduleOption',
disableSubmodules: false,
parentCredentials: false,
recursiveSubmodules: true,
reference: '',
trackingSubmodules: false
]],
userRemoteConfigs: [[
credentialsId: 'kaolin-gitlab-access-token-as-password',
url: "${repoUrl}"
]]
])
}
docker.withRegistry("https://${docker_registry_server}", 'kaolin-gitlab-access-token-as-password') {
stage("Build") {
targetImage = docker.build(
"${targetImageTag}",
"""--no-cache --network host -f ./tools/linux/Dockerfile.install \
--build-arg BASE_IMAGE=${baseImageTag} \
.
""")
}
stage("Push") {
targetImage.push()
}
}
} catch (e) {
// In case of build failure, we need to update the following tests as we won't run them.
for (arch in archsToTest.split(';')) {
updateGitlabCommitStatus(name: "test-${configName}-${arch}", state: 'canceled')
}
throw e
}
stage("Launch tests") {
jobMap = [:]
for (arch in archsToTest.split(';')) {
jobMap["${arch}"] = prepareUbuntuTestJob(arch)
}
parallel jobMap
}
}
}
}
} // gitlabCommitStatus
def prepareUbuntuTestJob(arch) {
return {
stage("Test ${arch}") {
build job: "ubuntu_test_template_CI",
parameters: [
string(name: 'sourceBranch', value: "${sourceBranch}"),
string(name: 'configName', value: "${configName}"),
string(name: 'imageTag', value: "${targetImageTag}"),
string(name: 'arch', value: "${arch}"),
string(name: 'commitHash', value: "${commitHash}")
],
// This node doesn't need to be held while tests run.
wait: false,
// Success of this script depends only on a successful build
// and launch of tests, not on successful tests.
propagate: false
}
}
}


@@ -0,0 +1,97 @@
#!/usr/bin/env groovy
gitlabCommitStatus("build-${configName}") {
docker_registry_server = targetImageTag.split(':')[0..1].join(':')
currentBuild.displayName = targetImageTag.split(':')[2]
currentBuild.description = sourceBranch + ": " + commitHash
podTemplate(
cloud:'sc-ipp-blossom-prod',
yaml:'''
apiVersion: v1
kind: Pod
spec:
containers:
- name: docker
image: docker:20.10.23
command:
- sleep
args:
- 1d
env:
- name: DOCKER_HOST
value: tcp://localhost:2375
- name: docker-daemon
image: docker:20.10.23-dind
securityContext:
privileged: true
env:
- name: DOCKER_TLS_CERTDIR
value: ""
resources:
requests:
memory: 32Gi
cpu: 12
limits:
memory: 32Gi
cpu: 12
''') {
node(POD_LABEL) {
container("docker") {
// Give the docker-daemon some time to initialize.
sleep 10
try {
stage("Checkout") {
checkout([
$class: 'GitSCM',
branches: [[name: "${commitHash}"]],
extensions: [[
$class: 'SubmoduleOption',
disableSubmodules: false,
parentCredentials: false,
recursiveSubmodules: true,
reference: '',
trackingSubmodules: false
]],
userRemoteConfigs: [[
credentialsId: 'kaolin-gitlab-access-token-as-password',
url: "${repoUrl}"
]]
])
}
stage("Build") {
def baseImage = docker.build(
"${targetImageTag}-base",
"""--no-cache --network host -f ./tools/linux/Dockerfile.base_cpuonly \
--build-arg PYTHON_VERSION=${pythonVer} \
--build-arg PYTORCH_VERSION=${torchVer} \
.
""")
targetImage = docker.build(
"${targetImageTag}",
"""--no-cache --network host -f ./tools/linux/Dockerfile.install \
--build-arg BASE_IMAGE=${targetImageTag}-base \
--build-arg FORCE_CUDA=0 \
.
""")
}
} catch (e) {
updateGitlabCommitStatus(name: "test-${configName}", state: 'canceled')
throw e
}
gitlabCommitStatus("test-${configName}") {
stage("Test") {
targetImage.inside {
// Don't know why but it doesn't work from /kaolin with docker plugin
sh 'cd /tmp && python -c "import kaolin"'
}
}
}
}
}
}
} // gitlabCommitStatus


@@ -0,0 +1,167 @@
#!/usr/bin/env groovy
docker_registry_server = targetImageTag.split(':')[0..1].join(':')
// This will be the "RUN" displayed on Blue Ocean
currentBuild.displayName = targetImageTag.split(':')[2]
// This will be the "MESSAGE" displayed on Blue Ocean
currentBuild.description = sourceBranch + ": " + commitHash
podTemplate(
cloud:'sc-ipp-blossom-prod',
//volumes: [persistentVolumeClaim(mountPath: '/mount_binaries', claimName: 'kaolin-pvc', readOnly: false)],
yaml:'''
apiVersion: v1
kind: Pod
spec:
volumes:
- name: pvc-mount
persistentVolumeClaim:
claimName: 'kaolin-pvc'
containers:
- name: docker
image: docker:19.03.1
command:
- sleep
args:
- 1d
env:
- name: DOCKER_HOST
value: tcp://localhost:2375
volumeMounts:
- mountPath: /mount_binaries
name: pvc-mount
- name: docker-daemon
image: docker:19.03.1-dind
securityContext:
privileged: true
env:
- name: DOCKER_TLS_CERTDIR
value: ""
resources:
requests:
memory: 32Gi
cpu: 12
limits:
memory: 32Gi
cpu: 12
volumeMounts:
- mountPath: /mount_binaries
name: pvc-mount
''') {
node(POD_LABEL) {
container("docker") {
try {
gitlabCommitStatus("build-${configName}") {
stage("Checkout") {
checkout([
$class: 'GitSCM',
branches: [[name: "${commitHash}"]],
extensions: [[
$class: 'SubmoduleOption',
disableSubmodules: false,
parentCredentials: false,
recursiveSubmodules: true,
reference: '',
trackingSubmodules: false
]],
userRemoteConfigs: [[
credentialsId: 'kaolin-gitlab-access-token-as-password',
url: "${repoUrl}"
]]
])
}
docker.withRegistry("https://${docker_registry_server}", 'kaolin-gitlab-access-token-as-password') {
stage("Build") {
baseImage = docker.build(
"${targetImageTag}-base",
"""--no-cache --network host -f ./tools/linux/Dockerfile.base \
--build-arg CUDA_VERSION=${cudaVer} \
--build-arg CUDNN_VERSION=${cudnnVer} \
--build-arg PYTHON_VERSION=${pythonVer} \
--build-arg PYTORCH_VERSION=${torchVer} \
.
""")
targetImage = docker.build(
"${targetImageTag}",
"""--no-cache --network host -f ./tools/linux/Dockerfile.install \
--build-arg BASE_IMAGE=${targetImageTag}-base \
.
""")
}
if (buildWheel.toBoolean()) {
stage("Build wheel") {
cudaTag = cudaVer.split('\\.')[0..<2].join('')
targetImage.inside() {
sh """
ls .
python setup.py bdist_wheel --dist-dir .
"""
}
pythonVerTag = pythonVer.split('\\.').join('')
Integer MinorVal = pythonVer.split('\\.')[1].toInteger()
pythonVerAbiTag = (MinorVal < 8) ? pythonVerTag + 'm' : pythonVerTag
kaolinVer = sh(script: "cat ./version.txt", returnStdout: true).trim()
baseWheelName = "kaolin-${kaolinVer}-cp${pythonVerTag}-cp${pythonVerAbiTag}"
wheelName = "${baseWheelName}-linux_x86_64.whl"
}
stage("Reinstall from wheel") {
targetImage = docker.build(
"${targetImageTag}",
"""--no-cache --network host -f ./tools/linux/Dockerfile.install_wheel \
--build-arg BASE_IMAGE=${targetImageTag}-base \
--build-arg WHEEL_NAME=${wheelName} \
.
""")
}
stage("Push wheel to volume") {
sh """
mkdir -p /mount_binaries/whl/torch-${torchVer}_cu${cudaTag}
cp ./${wheelName} /mount_binaries/whl/torch-${torchVer}_cu${cudaTag}/${wheelName}
"""
}
stage("Push wheel to artifact") {
archiveArtifacts artifacts: "${wheelName}"
}
}
stage("Push") {
targetImage.push()
}
}
}
} catch (e) {
// In case of build failure, we need to update the following tests as we won't run them.
for (arch in archsToTest.split(';')) {
updateGitlabCommitStatus(name: "test-${configName}-${arch}", state: 'canceled')
}
throw e
}
stage("Launch tests") {
jobMap = [:]
for (arch in archsToTest.split(';')) {
jobMap["${arch}"] = prepareUbuntuTestJob(arch)
}
parallel jobMap
}
}
}
}
def prepareUbuntuTestJob(arch) {
return {
stage("Test ${arch}") {
build job: "ubuntu_test_template_CI",
parameters: [
string(name: 'sourceBranch', value: "${sourceBranch}"),
string(name: 'configName', value: "${configName}"),
string(name: 'imageTag', value: "${targetImageTag}"),
string(name: 'arch', value: "${arch}"),
string(name: 'commitHash', value: "${commitHash}")
],
// This node doesn't need to be held while tests run.
wait: false,
// Success of this script depends only on successful build
// and launch of tests, not successful tests.
propagate: false
}
}
}


@@ -0,0 +1,293 @@
#!/usr/bin/env groovy
docker_registry_server = imageTag.split(':')[0..1].join(':')
currentBuild.displayName = imageTag.split(':')[2] + "-${arch}"
currentBuild.description = sourceBranch + ": " + commitHash
if (arch == "MULTI") {
gpu_list = """
- "A100_PCIE_40GB"
- "A100_PCIE_80GB"
- "A100_80GB_PCIE"
- "A100_PCIE_100GB"
- "A10"
- "A30"
- "A40"
- "GA100-E4720-HBM2"
- "GA100-E4720-DVT"
- "GA104-300-PG142-SKU10-TS3"
- "GeForce_RTX_4090"
- "H100_80GB_HBM3"
- "GH100_P1010_PCIE_CR"
- "Tesla_V100_PCIE_32GB"
"""
} else {
gpu_list = """
- "${arch}"
"""
}
node_blacklist = """
- "a4u8g-0031.ipp1u1.colossus"
"""
gitlabCommitStatus("test-${configName}-${arch}") {
podTemplate(
cloud:'sc-ipp-blossom-prod',
yaml:"""
apiVersion: v1
kind: Pod
spec:
volumes:
- name: pvc-mount
persistentVolumeClaim:
claimName: 'kaolin-pvc'
containers:
- name: docker
image: ${imageTag}
command:
- cat
resources:
requests:
nvidia.com/gpu: 1
limits:
nvidia.com/gpu: 1
tty: true
volumeMounts:
- mountPath: /mnt
name: pvc-mount
imagePullSecrets:
- name: gitlabcred
nodeSelector:
kubernetes.io/os: linux
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: "nvidia.com/gpu_type"
operator: "In"
values:${gpu_list}
- key: "kubernetes.io/hostname"
operator: "NotIn"
values:${node_blacklist}
- key: "nvidia.com/driver_version"
operator: "NotIn"
values:
- "545.23"
""") {
node(POD_LABEL) {
container("docker") {
timeout(time: 60, unit: 'MINUTES') {
stage("Install deps") {
sh 'pip install -r /kaolin/tools/ci_requirements.txt'
sh 'apt update && apt install -y unzip && unzip /kaolin/examples/samples/rendered_clock.zip -d /kaolin/examples/samples/'
}
def build_passed = true
try {
stage('Disp info') {
sh 'nvidia-smi'
sh 'python --version'
sh 'lscpu'
sh 'free -h --si'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("Pytest") {
sh '''
export KAOLIN_TEST_NVDIFFRAST=1
export KAOLIN_TEST_SHAPENETV1_PATH=/mnt/data/ci_shapenetv1
export KAOLIN_TEST_SHAPENETV2_PATH=/mnt/data/ci_shapenetv2
export KAOLIN_TEST_MODELNET_PATH=/mnt/data/ModelNet
export KAOLIN_TEST_SHREC16_PATH=/mnt/data/ci_shrec16
pytest --durations=50 --import-mode=importlib -rs --cov=/kaolin/kaolin \
--log-disable=PIL.PngImagePlugin \
--log-disable=PIL.TiffImagePlugin \
--log-disable=kaolin.rep.surface_mesh \
/kaolin/tests/python/kaolin
'''
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("Dash3D") {
sh '''
pytest -s --cov=/kaolin/kaolin /kaolin/tests/integration/
'''
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
if (arch == "TITAN_RTX") {
stage("Doc examples") {
// when installed from a wheel, the /kaolin/kaolin source tree doesn't exist
sh '''
if [ -d "/kaolin/kaolin" ]; then
pytest --doctest-modules --ignore=/kaolin/kaolin/experimental /kaolin/kaolin
fi
'''
}
}
} catch(e) {
build_passed = false
echo e.toString()
}
// TUTORIALS
try {
stage("BBox Tutorial") {
sh 'cd /kaolin/examples/tutorial && ipython bbox_tutorial.ipynb'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("Camera and Rasterization Tutorial") {
sh 'cd /kaolin/examples/tutorial && ipython camera_and_rasterization.ipynb'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("DIB-R Tutorial") {
sh 'cd /kaolin/examples/tutorial && ipython dibr_tutorial.ipynb'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("Diffuse lighting Tutorial") {
sh 'cd /kaolin/examples/tutorial && ipython diffuse_lighting.ipynb'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("DMTet Tutorial") {
sh 'cd /kaolin/examples/tutorial && ipython dmtet_tutorial.ipynb'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("GLTF Visualizer") {
sh 'cd /kaolin/examples/tutorial && ipython gltf_viz.ipynb'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("Interactive Visualizer") {
sh 'cd /kaolin/examples/tutorial && ipython interactive_visualizer.ipynb'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("Spherical Gaussian lighting Tutorial") {
sh 'cd /kaolin/examples/tutorial && ipython sg_specular_lighting.ipynb'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("Understanding SPCs Tutorial") {
sh 'cd /kaolin/examples/tutorial && ipython understanding_spcs_tutorial.ipynb'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("Working with meshes Tutorial") {
sh 'cd /kaolin/examples/tutorial && ipython working_with_meshes.ipynb'
}
} catch(e) {
build_passed = false
echo e.toString()
}
// RECIPES
try {
stage("SPC from Pointcloud Recipe") {
sh 'cd /kaolin/examples/recipes/dataload/ && python spc_from_pointcloud.py'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("SPC Basics Recipe") {
sh 'cd /kaolin/examples/recipes/spc/ && python spc_basics.py'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("Occupancy Sampling Recipe") {
sh 'cd /kaolin/examples/recipes/preprocess/ && python occupancy_sampling.py'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("Fast Mesh Sampling Recipe") {
sh 'cd /kaolin/examples/recipes/preprocess/ && python fast_mesh_sampling.py --shapenet-dir=/mnt/data/ci_shapenetv2/'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("SPC Dual Octree Recipe") {
sh 'cd /kaolin/examples/recipes/spc/ && python spc_dual_octree.py'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("SPC Trilinear Interpolation Recipe") {
sh 'cd /kaolin/examples/recipes/spc/ && python spc_trilinear_interp.py'
}
} catch(e) {
build_passed = false
echo e.toString()
}
try {
stage("SPC Convolution 3D Recipe") {
sh 'cd /kaolin/examples/recipes/spc/ && python spc_conv3d_example.py'
}
} catch(e) {
build_passed = false
echo e.toString()
}
if (build_passed) {
currentBuild.result = "SUCCESS"
} else {
currentBuild.result = "FAILURE"
error "Build failed. See logs..."
}
}
}
}
}
} // gitlabCommitStatus

View File

@@ -0,0 +1,191 @@
#!/usr/bin/env groovy
// Map from CUDA version to URL to obtain windows installer
def cuda_version_url = [
// CUDA drivers on test machines only available for CUDA version >= 11.0
// see: https://gitlab-master.nvidia.com/ipp/cloud-infra/blossom/dev/windows-gpu-pods/-/tree/master/ContainerDriverSetup
// test machines currently are the only option, named 'gpu_tester'
// two machines exist, only the TITAN RTX will pass tests
'11.1': 'http://developer.download.nvidia.com/compute/cuda/11.1.1/local_installers/cuda_11.1.1_456.81_win10.exe',
'11.3': 'https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda_11.3.1_465.89_win10.exe',
'11.5': 'https://developer.download.nvidia.com/compute/cuda/11.5.2/local_installers/cuda_11.5.2_496.13_windows.exe',
'11.6': 'https://developer.download.nvidia.com/compute/cuda/11.6.2/local_installers/cuda_11.6.2_511.65_windows.exe',
'11.7': 'https://developer.download.nvidia.com/compute/cuda/11.7.1/local_installers/cuda_11.7.1_516.94_windows.exe',
'11.8': 'https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_522.06_windows.exe',
'12.1': 'https://developer.download.nvidia.com/compute/cuda/12.1.0/local_installers/cuda_12.1.0_531.14_windows.exe'
]
docker_registry_server = targetImageTag.split(':')[0..1].join(':')
// This will be the "RUN" displayed on Blue Ocean
currentBuild.displayName = targetImageTag.split(':')[2]
// This will be the "MESSAGE" displayed on Blue Ocean
currentBuild.description = sourceBranch + ": " + commitHash
gitlabCommitStatus("build-${configName}") {
podTemplate(
cloud:'sc-ipp-blossom-116',
envVars:[envVar(key:"JENKINS_URL", value:"${env.JENKINS_URL}")],
yaml:'''
apiVersion: v1
kind: Pod
spec:
volumes:
- name: pvc-mount
persistentVolumeClaim:
claimName: 'kaolin-pvc'
containers:
- name: jnlp
image: urm.nvidia.com/sw-ipp-blossom-sre-docker-local/jnlp-agent:jdk11-windows
env:
- name: JENKINS_AGENT_WORKDIR
value: C:/Jenkins/agent
- name: DOCKER_HOST
value: "win-docker-proxy.blossom-system.svc.cluster.local"
- name: DOCKER_TLS_CERTDIR
value: ""
volumeMounts:
- mountPath: c:/mnt
name: pvc-mount
resources:
requests:
memory: 32Gi
limits:
memory: 32Gi
imagePullSecrets:
- name: gitlabcred
nodeSelector:
kubernetes.io/os: windows
''')
{
node(POD_LABEL) {
try {
timeout(time: 300, unit: 'MINUTES') {
stage("Checkout") {
checkout([
$class: 'GitSCM',
branches: [[name: "${commitHash}"]],
// We need submodules
extensions: [[
$class: 'SubmoduleOption',
disableSubmodules: false,
parentCredentials: false,
recursiveSubmodules: true,
reference: '',
trackingSubmodules: false
]],
userRemoteConfigs: [[
credentialsId: 'kaolin-gitlab-access-token-as-password',
url: "${repoUrl}"
]]
])
}
docker.withRegistry("https://${docker_registry_server}", 'kaolin-gitlab-access-token-as-password') {
stage("Build base") {
cudaUrl = cuda_version_url[cudaVer]
baseImage = docker.build(
"${targetImageTag}-base",
"""-m 32g --no-cache -f ./tools/windows/Dockerfile.base \
--build-arg CUDA_VERSION=${cudaVer} \
--build-arg CUDA_URL=${cudaUrl} \
--build-arg PYTHON_VERSION=${pythonVer} \
--build-arg PYTORCH_VERSION=${torchVer} \
.
""")
}
if (buildWheel.toBoolean()) {
stage("Build with wheel") {
targetImage = docker.build(
"${targetImageTag}",
"""-m 32g --no-cache -f ./tools/windows/Dockerfile.install_wheel \
--build-arg BASE_IMAGE=${targetImageTag}-base \
.
"""
)
}
} else {
stage("Build") {
targetImage = docker.build(
"${targetImageTag}",
"""-m 32g --no-cache -f ./tools/windows/Dockerfile.install \
--build-arg BASE_IMAGE=${targetImageTag}-base \
.
""")
}
}
stage("Push") {
targetImage.push()
}
}
}
} catch (e) {
// In case of build failure, we need to update the following tests as we won't run them.
if (buildWheel.toBoolean()) {
updateGitlabCommitStatus(name: "test-${configName}", state: 'canceled')
} else {
for (arch in archsToTest.split(';')) {
updateGitlabCommitStatus(name: "test-${configName}-${arch}", state: 'canceled')
}
}
throw e
}
stage("Launch tests") {
jobMap = [:]
if (buildWheel.toBoolean()) {
jobMap["test"] = prepareWindowsWheelTestJob()
} //else {
// for (arch in archsToTest.split(';')) {
// jobMap["${arch}"] = prepareWindowsTestJob(arch)
// }
parallel jobMap
}
}
}
} // gitlabCommitStatus
/*
def prepareWindowsTestJob(arch) {
return {
stage("Test ${arch}") {
build job: "windows_test_template_CI",
parameters: [
string(name: 'sourceBranch', value: "${sourceBranch}"),
string(name: 'configName', value: "${configName}"),
string(name: 'imageTag', value: "${targetImageTag}"),
string(name: 'arch', value: "${arch}"),
string(name: 'commitHash', value: "${commitHash}"),
],
// This node doesn't need to be held while tests run.
wait: false,
// Success of this script depends only on successful build
// and launch of tests, not successful tests.
propagate: false
}
}
}
*/
def prepareWindowsWheelTestJob() {
return {
stage("Test") {
build job: "windows_wheels_template_CI",
parameters: [
string(name: 'sourceBranch', value: "${sourceBranch}"),
string(name: 'configName', value: "${configName}"),
string(name: 'imageTag', value: "${targetImageTag}"),
string(name: 'commitHash', value: "${commitHash}"),
string(name: 'torchVer', value: "${torchVer}"),
string(name: 'cudaVer', value: "${cudaVer}"),
],
// This node doesn't need to be held while tests run.
wait: false,
// Success of this script depends only on successful build
// and launch of tests, not successful tests.
propagate: false
}
}
}


@@ -0,0 +1,297 @@
#!/usr/bin/env groovy
docker_registry_server = imageTag.split(':')[0..1].join(':')
currentBuild.displayName = imageTag.split(':')[2] + "-${arch}"
currentBuild.description = sourceBranch + ": " + commitHash
// to manage image secrets:
// 1) log into docker
// docker login gitlab-master.nvidia.com:5005
// 2) create secret
// kubectl create secret docker-registry test-secret -n kaolin --docker-server=gitlab-master.nvidia.com:5005 --docker-username azook --docker-password XXX
// 3) add to service account
// https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account
// kubectl patch kaolin-sa default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
// 4) add to pod template
gitlabCommitStatus("test-${configName}-${arch}") {
podTemplate(cloud:'sc-ipp-blossom-prod',
slaveConnectTimeout: 4000,
yaml: """
apiVersion: v1
kind: Pod
spec:
volumes:
- name: pvc-mount
persistentVolumeClaim:
claimName: 'kaolin-pvc'
containers:
- name: jnlp
image: jenkins/jnlp-agent:latest-windows
env:
- name: JENKINS_AGENT_WORKDIR
value: C:/Jenkins/agent
- name: windows
image: ${imageTag}
resources:
limits:
nvidia.com/gpu: 1
restartPolicy: Never
backoffLimit: 4
tty: true
volumeMounts:
- mountPath: c:/mnt
name: pvc-mount
imagePullSecrets:
- name: gitlabcred
nodeSelector:
kubernetes.io/os: windows
nvidia.com/node_type: ${arch}
"""
)
{
node(POD_LABEL) {
container("windows") {
if (testWheel.toBoolean()) {
stage('Test') {
powershell '''
python -c "import kaolin; print(kaolin.__version__)"
python -c "import torch; print(torch.__version__)"
'''
}
stage('Move wheels') {
// cudaTag must be computed in Groovy, not inside the powershell block.
def cudaTag = cudaVer.split('\\.')[0..<2].join('')
// Destination path is assumed; it mirrors the wheel layout used by the Linux pipeline.
powershell "mv /tmp/kaolin-*.whl /tmp/mount_binaries/tmp/torch-${torchVer}+cu${cudaTag}"
}
} else {
stage('Enable cuda') {
powershell '''
$Env:driver_store=$(ls $($($(Get-WmiObject Win32_VideoController).InstalledDisplayDrivers | sort -Unique).ToString().Split(',')| sort -Unique).ToString().Replace("\\DriverStore\\", "\\HostDriverStore\\")).Directory.FullName
cp "$Env:driver_store\\nvcuda64.dll" C:\\Windows\\System32\\nvcuda.dll
cp "$Env:driver_store\\nvapi64.dll" C:\\Windows\\System32\\nvapi64.dll
'''
}
stage("Check cuda") {
powershell '''
dir c:\\
dir c:\\kaolin
dir c:\\data
'''
powershell '''
c:\\data\\deviceQuery.exe
c:\\data\\bandwidthTest.exe
'''
}
stage("Check mount") {
catchError(stageResult: "failure") {
powershell '''
dir c:\\
dir c:\\mnt
'''
}
}
stage("Fix paging memory") {
// addresses this error on Windows with pytorch consuming too much paging memory: https://stackoverflow.com/a/69489193
powershell '''
python c:\\data\\fixNvPe.py --input=C:\\Users\\Administrator\\miniconda3\\Lib\\site-packages\\torch\\lib\\*.dll
'''
}
stage("Prepare data") {
powershell '''
python --version
Expand-Archive c:\\kaolin\\examples\\samples\\rendered_clock.zip c:\\kaolin\\examples\\samples
'''
}
stage("DIB-R Tutorial") {
catchError(stageResult: "failure") {
powershell '''
cd c:\\kaolin\\examples\\tutorial
ipython dibr_tutorial.ipynb
'''
}
}
stage("DMTet Tutorial") {
catchError(stageResult: "failure") {
powershell '''
cd c:\\kaolin\\examples\\tutorial
ipython dmtet_tutorial.ipynb
'''
}
}
stage("Understanding SPCs Tutorial") {
catchError(stageResult: "failure") {
powershell '''
cd c:\\kaolin\\examples\\tutorial
ipython understanding_spcs_tutorial.ipynb --matplotlib
'''
}
}
stage("Diffuse lighting Tutorial") {
catchError(stageResult: "failure") {
powershell '''
cd c:\\kaolin\\examples\\tutorial
ipython diffuse_lighting.ipynb
'''
}
}
stage("Spherical Gaussian lighting Tutorial") {
catchError(stageResult: "failure") {
powershell '''
cd c:\\kaolin\\examples\\tutorial
ipython sg_specular_lighting.ipynb
'''
}
}
// requires nvdiffrast. not currently supported on Windows
// stage("Camera and Rasterization Tutorial") {
// catchError(stageResult: "failure") {
// powershell '''
// cd c:\\kaolin\\examples\\tutorial
// ipython camera_and_rasterization.ipynb
// '''
// }
// }
stage("SPC from Pointcloud Recipe") {
catchError(stageResult: "failure") {
powershell '''
cd c:\\kaolin\\examples\\recipes\\dataload
python spc_from_pointcloud.py
'''
}
}
stage("SPC Basics Recipe") {
catchError(stageResult: "failure") {
powershell '''
cd c:\\kaolin\\examples\\recipes\\spc
python spc_basics.py
'''
}
}
stage("Occupancy Sampling Recipe") {
catchError(stageResult: "failure") {
powershell '''
cd c:\\kaolin\\examples\\recipes\\preprocess
python occupancy_sampling.py
'''
}
}
stage("Fast Mesh Sampling Recipe") {
catchError(stageResult: "failure") {
powershell '''
cd c:\\kaolin\\examples\\recipes\\preprocess
python fast_mesh_sampling.py --shapenet-dir=c:/mnt/data/ci_shapenetv2/
'''
}
}
stage("SPC Dual Octree Recipe") {
catchError(stageResult: "failure") {
powershell '''
cd c:\\kaolin\\examples\\recipes\\spc
python spc_dual_octree.py
'''
}
}
stage("SPC Trilinear Interpolation Recipe") {
catchError(stageResult: "failure") {
powershell '''
cd c:\\kaolin\\examples\\recipes\\spc
python spc_trilinear_interp.py
'''
}
}
stage("SPC Convolution 3D Recipe") {
catchError(stageResult: "failure") {
powershell '''
cd c:\\kaolin\\examples\\recipes\\spc
python spc_conv3d_example.py
'''
}
}
stage("Run pytest - io") {
catchError(stageResult: "failure") {
timeout(time: 5, unit: "MINUTES") {
powershell '''
$env:CI = "true"
pytest -s /kaolin/tests/python/kaolin/io
'''
}
}
}
stage("Run pytest - metrics") {
catchError(stageResult: "failure") {
timeout(time: 5, unit: "MINUTES") {
powershell '''
$env:CI = "true"
pytest -s /kaolin/tests/python/kaolin/metrics
'''
}
}
}
stage("Run pytest - ops") {
catchError(stageResult: "failure") {
timeout(time: 50, unit: "MINUTES") {
powershell '''
$env:CI = "true"
pytest -s /kaolin/tests/python/kaolin/ops
'''
}
}
}
stage("Run pytest - render") {
catchError(stageResult: "failure") {
timeout(time: 50, unit: "MINUTES") {
powershell '''
$env:CI = "true"
$env:KAOLIN_TEST_NVDIFFRAST = "0"
pytest -s /kaolin/tests/python/kaolin/render
'''
}
}
}
stage("Run pytest - rep") {
catchError(stageResult: "failure") {
timeout(time: 5, unit: "MINUTES") {
powershell '''
$env:CI = "true"
pytest -s /kaolin/tests/python/kaolin/rep
'''
}
}
}
stage("Run pytest - utils") {
catchError(stageResult: "failure") {
timeout(time: 5, unit: "MINUTES") {
powershell '''
$env:CI = "true"
pytest -s /kaolin/tests/python/kaolin/utils
'''
}
}
}
stage("Run pytest - visualize") {
catchError(stageResult: "failure") {
timeout(time: 5, unit: "MINUTES") {
powershell '''
$env:CI = "true"
pytest -s /kaolin/tests/python/kaolin/visualize
'''
}
}
}
stage("Update build status") {
// update build result gitlab status
// catchError only updates the pipeline
if (currentBuild.getCurrentResult() == "FAILURE") {
error "Build failed. See logs..."
}
}
}
}
}
}
} // gitlabCommitStatus


@@ -0,0 +1,84 @@
#!/usr/bin/env groovy
docker_registry_server = imageTag.split(':')[0..1].join(':')
currentBuild.displayName = imageTag.split(':')[2]
currentBuild.description = sourceBranch + ": " + commitHash
// to manage image secrets:
// 1) log into docker
// docker login gitlab-master.nvidia.com:5005
// 2) create secret
// kubectl create secret docker-registry test-secret -n kaolin --docker-server=gitlab-master.nvidia.com:5005 --docker-username azook --docker-password XXX
// 3) add to service account
// https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account
// kubectl patch kaolin-sa default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
// 4) add to pod template
gitlabCommitStatus("test-${configName}") {
podTemplate(cloud:'sc-ipp-blossom-116',
slaveConnectTimeout: 4000,
yaml: """
apiVersion: v1
kind: Pod
spec:
volumes:
- name: pvc-mount
persistentVolumeClaim:
claimName: 'kaolin-pvc'
containers:
- name: jnlp
image: urm.nvidia.com/sw-ipp-blossom-sre-docker-local/jnlp-agent:jdk11-windows
env:
- name: JENKINS_AGENT_WORKDIR
value: C:/Jenkins/agent
- name: windows
image: ${imageTag}
restartPolicy: Never
backoffLimit: 4
tty: true
volumeMounts:
- mountPath: c:/mnt
name: pvc-mount
imagePullSecrets:
- name: gitlabcred
nodeSelector:
kubernetes.io/os: windows
"""
)
{
node(POD_LABEL) {
container("windows") {
stage("Basic test") {
powershell '''
python --version
python -c "import kaolin; print(kaolin.__version__)"
python -c "import torch; print(torch.__version__)"
'''
}
if (currentBuild.getCurrentResult() != "FAILURE") {
stage("Push wheels on volume") {
def cudaTag = cudaVer.split('\\.')[0..<2].join('')
withEnv(["cudaTag=$cudaTag"]) {
powershell '''
New-Item -Path /mnt/whl/torch-"$env:torchVer"_cu"$env:cudaTag" -ItemType "directory" -Force
'''
powershell '''
cp /kaolin/kaolin-*.whl /mnt/whl/torch-"$env:torchVer"_cu"$env:cudaTag"/
'''
}
}
stage("Push wheels on artifacts") {
// archiveArtifacts only takes relative paths, and setting the working directory doesn't work in jenkins,
// so we copy from /kaolin to the current dir.
powershell '''
cp /kaolin/kaolin-*.whl .
'''
archiveArtifacts artifacts: "kaolin-*.whl"
}
}
}
}
}
} // gitlabCommitStatus

docs/Makefile Normal file

@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

docs/README.md Normal file

@@ -0,0 +1,152 @@
# Documenting
This guide is for developers who write API documentation. To build the documentation, first run
`pip install -r tools/doc_requirements.txt` to install the documentation dependencies.
Then run:
* ```make html``` on Linux
* ```make.bat html``` on Windows
## Documenting Python API
The best way to document our Python API is to do so directly in the code. That way it's always extracted from a location
where it's closest to the actual code and most likely to be correct.
Instead of using the older and more cumbersome reStructuredText docstring specification, we have adopted the more
streamlined [Google Python Style Docstring][#5] format. This is how you would document an API function in Python:
```python
def answer_question(question):
"""This function can answer some questions.
It currently only answers a limited set of questions so don't expect it to know everything.
Args:
question (str): The question passed to the function, trailing question mark is not necessary and
casing is not important.
Returns:
answer (str): The answer to the question or ``None`` if it doesn't know the answer.
"""
if question.lower().startswith("what is the answer to life, universe, and everything"):
return str(42)
else:
return None
```
After running the documentation generation system we will get this as the output:
![Answer documentation](img/answer.png)
One note:
The high-level structure is essentially in four parts:
* A one-liner describing the function (without details or corner cases)
* A paragraph that gives more detail on the function behavior (if necessary)
* An `Args:` section (if the function takes arguments, note that `self` is not considered an argument)
* A `Returns:` section (if the function can return something other than `None`)
We want to draw your attention to the following:
Indentation is key when writing docstrings. The documentation system is clever enough to remove uniform indentation.
That is, as long as all the lines have the same amount of padding, that padding will be ignored and not passed on to the reStructuredText processor. Fortunately clang-format leaves this funky formatting alone, respecting the raw string qualifier.
Let's now turn our attention to how we document modules and their attributes. We should of course only document
modules that are part of our API (not internal helper modules) and only public attributes. Below is a detailed example:
```python
"""Example of Google style docstrings for module.
This module demonstrates documentation as specified by the `Google Python
Style Guide`_. Docstrings may extend over multiple lines. Sections are created
with a section header and a colon followed by a block of indented text.
Example:
Examples can be given using either the ``Example`` or ``Examples``
sections. Sections support any reStructuredText formatting, including
literal blocks::
$ python example.py
Section breaks are created by resuming unindented text. Section breaks
are also implicitly created anytime a new section starts.
Attributes:
module_level_variable1 (torch.Tensor): Module level variables may be documented in
either the ``Attributes`` section of the module docstring, or in an
inline docstring immediately following the variable.
Either form is acceptable, but the two should not be mixed. Choose
one convention to document module level variables and be consistent
with it.
module_level_variable3 (int, optional): An optional variable.
Todo:
* For module TODOs if you want them
* These can be useful if you want to communicate any shortcomings in the module we plan to address
.. _Google Python Style Guide:
http://google.github.io/styleguide/pyguide.html
"""
import torch
import numpy as np
module_level_variable1 = torch.tensor([12345])
module_level_variable2 = np.array([12345])
"""np.ndarray: Module level variable documented inline. This approach may be preferable since it keeps the documentation closer to the code and the default
assignment is shown. A downside is that the variable will get alphabetically sorted among functions in the module
so won't have the same cohesion as the approach above."""
module_level_variable3 = None
```
This is what the documentation would look like:
![Module documentation](img/module.png)
As we have mentioned we should not mix the `Attributes:` style of documentation with inline documentation of attributes.
Notice how `module_level_variable3` appears in a separate block from all the other attributes that were documented. It
is even after the TODO section. Choose one approach for your module and stick to it. There are valid reasons to pick
one style above the other but don't cross the streams!
For instructions on how to document classes, exceptions, etc please consult the [Sphinx Napoleon Extension Guide][#7].
### Adding New Python Modules
When adding a new python binding module to the core of kaolin, a basic .rst will be automatically generated in docs/modules/ when running ```make html``` (the automatic generation is not supported yet on Windows).
The resulting .rst file will look like:
```
.. _moduleName:
<moduleName>
============
.. automodule:: <moduleName>
:members:
:undoc-members:
:show-inheritance:
```
If you want the .rst to not be generated, you must add the corresponding Python path to [this list][#8].
To add more content, such as an introduction, the .rst file has to be modified following reStructuredText syntax.
[#1]: https://www.python.org/dev/peps/pep-0257/
[#2]: https://www.python.org/dev/peps/pep-0287/
[#3]: https://devguide.python.org/documenting/
[#4]: https://docs.python.org/3/library/typing.html
[#5]: http://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings
[#6]: https://pybind11.readthedocs.io/en/stable/basics.html?highlight=py%3A%3Aarg#keyword-arguments
[#7]: https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html
[#8]: https://github.com/NVIDIAGameWorks/kaolin/tree/master/docs/kaolin_ext.py#L21

docs/_templates/layout.html vendored Normal file

@@ -0,0 +1,76 @@
{% extends "!layout.html" %}
{% block footer %} {{ super() }}
<style>
:root {
--nvidia-color: #76B900;
--dark-green: #008564;
}
a, a:visited, a:active {
color: var(--nvidia-color);
}
/* Sidebar header (and topbar for mobile) */
.wy-side-nav-search, .wy-nav-top {
background-color: var(--nvidia-color);
}
/* Sidebar */
.wy-menu-vertical header, .wy-menu-vertical p.caption {
color: #DFDFDF;
}
.rst-content .note, .rst-content .seealso, .rst-content .wy-alert-info.admonition, .rst-content .wy-alert-info.admonition-todo, .rst-content .wy-alert-info.attention, .rst-content .wy-alert-info.caution, .rst-content .wy-alert-info.danger, .rst-content .wy-alert-info.error, .rst-content .wy-alert-info.hint, .rst-content .wy-alert-info.important, .rst-content .wy-alert-info.tip, .rst-content .wy-alert-info.warning, .wy-alert.wy-alert-info {
background: #eaefe0;
}
.rst-content .note .admonition-title, .rst-content .note .wy-alert-title, .rst-content .seealso .admonition-title, .rst-content .seealso .wy-alert-title, .rst-content .wy-alert-info.admonition-todo .admonition-title, .rst-content .wy-alert-info.admonition-todo .wy-alert-title, .rst-content .wy-alert-info.admonition .admonition-title, .rst-content .wy-alert-info.admonition .wy-alert-title, .rst-content .wy-alert-info.attention .admonition-title, .rst-content .wy-alert-info.attention .wy-alert-title, .rst-content .wy-alert-info.caution .admonition-title, .rst-content .wy-alert-info.caution .wy-alert-title, .rst-content .wy-alert-info.danger .admonition-title, .rst-content .wy-alert-info.danger .wy-alert-title, .rst-content .wy-alert-info.error .admonition-title, .rst-content .wy-alert-info.error .wy-alert-title, .rst-content .wy-alert-info.hint .admonition-title, .rst-content .wy-alert-info.hint .wy-alert-title, .rst-content .wy-alert-info.important .admonition-title, .rst-content .wy-alert-info.important .wy-alert-title, .rst-content .wy-alert-info.tip .admonition-title, .rst-content .wy-alert-info.tip .wy-alert-title, .rst-content .wy-alert-info.warning .admonition-title, .rst-content .wy-alert-info.warning .wy-alert-title, .rst-content .wy-alert.wy-alert-info .admonition-title, .wy-alert.wy-alert-info .rst-content .admonition-title, .wy-alert.wy-alert-info .wy-alert-title {
background: #b8d27c;
}
html.writer-html4 .rst-content dl:not(.docutils) dl:not(.field-list)>dt, html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) dl:not(.field-list):not(.simple)>dt.sig {
background-color: #eaefe0;
border-left: 3px solid var(--nvidia-color);
}
html.writer-html4 .rst-content dl:not(.docutils)>dt, html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple)>dt.sig {
background: #eaefe0;
border-top: 3px solid var(--nvidia-color);
}
html.writer-html4 .rst-content dl:not(.docutils)>dt, html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple)>dt {
color: var(--dark-green);
}
.icon, .version, a.icon.icon-home {
color: white;
}
table.center-align-center-col td {
text-align: center
}
.rubric, p.rubric {
margin-bottom: 15px;
font-weight: 700;
font-size: 120%;
color: var(--dark-green);
border-bottom: 1px solid var(--dark-green);
}
</style>
{% endblock %}
{% block menu %}
<p class="caption">
<span class="caption-text">Getting Started:</span>
</p>
<ul>
<li class="toctree-l1"><a href="{{ pathto('index') }}">Welcome</a></li>
<li class="toctree-l1"><a href="{{ pathto('notes/installation') }}">Installation</a></li>
<li class="toctree-l1"><a href="{{ pathto('notes/overview') }}">API Overview</a></li>
</ul>
{{super()}}
{% endblock %}

docs/conf.py Normal file

@@ -0,0 +1,78 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(1, os.path.abspath('.'))
# -- Project information -----------------------------------------------------
project = 'Kaolin'
copyright = '2020, NVIDIA'
author = 'NVIDIA'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.napoleon',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
'kaolin_ext'
]
todo_include_todos = True
autodoc_typehints = "description"
intersphinx_mapping = {
'python': ("https://docs.python.org/3", None),
'numpy': ('https://numpy.org/doc/stable/', None),
'PyTorch': ('https://pytorch.org/docs/master/', None),
}
master_doc = 'index'
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', '*.so']
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
import sphinx_rtd_theme
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
html_theme = "sphinx_rtd_theme"
html_theme_options = {
'collapse_navigation': True
}
# html_theme = 'alabaster'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = []

BIN docs/img/answer.png Normal file (38 KiB)
BIN docs/img/clock.gif Normal file (2.9 MiB)
BIN docs/img/dash3d_viz.jpg Normal file (120 KiB)
BIN docs/img/flexicubes.png Normal file (262 KiB)
BIN docs/img/koala.jpg Normal file (626 KiB)
BIN docs/img/mesh_to_spc.png Normal file (20 KiB)
BIN docs/img/module.png Normal file (96 KiB)
BIN docs/img/octants.png Normal file (65 KiB)
BIN docs/img/octree.png Normal file (140 KiB)
BIN docs/img/ov_viz.jpg Normal file (663 KiB)
BIN docs/img/spcTeapot.png Normal file (108 KiB)
BIN docs/img/spc_points.png Normal file (56 KiB)
docs/index.rst Normal file

@@ -0,0 +1,35 @@
Welcome to Kaolin Library Documentation
=======================================
.. image:: ../assets/kaolin.png
`NVIDIA Kaolin library <https://github.com/NVIDIAGameWorks/kaolin>`_ provides a PyTorch API for working with a variety of 3D representations and includes a growing collection of GPU-optimized operations, such as modular differentiable rendering, fast conversions between representations, data loading, 3D checkpoints, a differentiable camera API, differentiable lighting with spherical harmonics and spherical gaussians, a powerful octree acceleration structure called Structured Point Clouds, an interactive 3D visualizer for jupyter notebooks, a convenient batched mesh container and more.
See :ref:`Installation <installation>`, :ref:`API Overview <overview>` and :ref:`Tutorials <tutorial_index>` to get started!
Note that Kaolin library is part of the larger `NVIDIA Kaolin effort <https://developer.nvidia.com/kaolin>`_ for 3D deep learning.
.. toctree::
:titlesonly:
:maxdepth: 1
:caption: Tutorials:
notes/tutorial_index
notes/checkpoints
notes/diff_render
notes/spc_summary
notes/differentiable_camera
.. toctree::
:titlesonly:
:maxdepth: 1
:caption: API Reference:
modules/kaolin.ops
modules/kaolin.metrics
modules/kaolin.io
modules/kaolin.render
modules/kaolin.rep
modules/kaolin.utils
modules/kaolin.visualize
modules/kaolin.non_commercial

docs/kaolin_ext.py Normal file

@@ -0,0 +1,92 @@
# Copyright (c) 2019-2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
KAOLIN_ROOT = os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir)
def run_apidoc(_):
# This runs sphinx-apidoc, which automatically generates
# a .rst file for each python file in kaolin.
# It won't overwrite existing .rst files,
# like kaolin.ops.rst where we added an introduction.
from sphinx.ext import apidoc
# These files are excluded from parsing,
# such as files whose functions are forwarded to the parent namespace.
EXCLUDE_PATHS = [
str(os.path.join(KAOLIN_ROOT, path)) for path in [
"setup.py",
"**.so",
"kaolin/version.py",
"kaolin/experimental/",
"kaolin/version.txt",
"kaolin/io/usd/utils.py",
"kaolin/io/usd/mesh.py",
"kaolin/io/usd/voxelgrid.py",
"kaolin/io/usd/pointcloud.py",
"kaolin/ops/conversions/pointcloud.py",
"kaolin/ops/conversions/sdf.py",
"kaolin/ops/conversions/trianglemesh.py",
"kaolin/ops/conversions/voxelgrid.py",
"kaolin/ops/conversions/tetmesh.py",
"kaolin/ops/mesh/check_sign.py",
"kaolin/ops/mesh/mesh.py",
"kaolin/ops/mesh/tetmesh.py",
"kaolin/ops/mesh/trianglemesh.py",
"kaolin/ops/spc/spc.py",
"kaolin/ops/spc/convolution.py",
"kaolin/ops/spc/points.py",
"kaolin/ops/spc/uint8.py",
"kaolin/render/lighting/sg.py",
"kaolin/render/lighting/sh.py",
"kaolin/render/mesh/deftet.py",
"kaolin/render/mesh/dibr.py",
"kaolin/render/mesh/rasterization.py",
"kaolin/render/mesh/utils.py",
"kaolin/render/spc/raytrace.py",
"kaolin/rep/spc.py",
"kaolin/visualize/timelapse.py",
"kaolin/visualize/ipython.py",
"kaolin/framework/*",
"kaolin/render/camera/camera.py",
"kaolin/render/camera/coordinates.py",
"kaolin/render/camera/extrinsics_backends.py",
"kaolin/render/camera/extrinsics.py",
"kaolin/render/camera/intrinsics_ortho.py",
"kaolin/render/camera/intrinsics_pinhole.py",
"kaolin/render/camera/intrinsics.py",
"kaolin/render/camera/legacy.py",
"kaolin/non_commercial/flexicubes/",
"kaolin/non_commercial/flexicubes/flexicubes.py",
"kaolin/non_commercial/flexicubes/tables.py"
]
]
DOCS_MODULE_PATH = os.path.join(KAOLIN_ROOT, "docs", "modules")
argv = [
"-eT",
"-d", "2",
"--templatedir",
DOCS_MODULE_PATH,
"-o", DOCS_MODULE_PATH,
os.path.join(KAOLIN_ROOT, "kaolin"),
*EXCLUDE_PATHS
]
apidoc.main(argv)
os.remove(os.path.join(DOCS_MODULE_PATH, 'kaolin.rst'))
def setup(app):
app.connect("builder-inited", run_apidoc)

docs/make.bat Normal file

@@ -0,0 +1,49 @@
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
set KAOLIN_ROOT=%~dp0..
echo "%KAOLIN_ROOT%\docs\modules\"
REM not to be used by end users
set EXCLUDE_PATHS="%KAOLIN_ROOT%\kaolin\packed\ %KAOLIN_ROOT%\kaolin\padded\ %KAOLIN_ROOT%\kaolin\unbatched\ \kaolin\*_cuda.*"
REM Those files are unused since we already have index.rst and conf.py
set EXCLUDE_GEN_RST=%KAOLIN_ROOT%\docs\modules\setup.rst %KAOLIN_ROOT%\docs\modules\kaolin.rst %KAOLIN_ROOT%\docs\modules\kaolin.version.rst
sphinx-apidoc -eT -d 2 --templatedir=%KAOLIN_ROOT%\docs\modules\ -o %KAOLIN_ROOT%\docs\modules\ %KAOLIN_ROOT% %EXCLUDE_PATHS%
echo %EXCLUDE_GEN_RST%
del %EXCLUDE_GEN_RST%
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
:end
popd


@@ -0,0 +1,32 @@
.. _kaolin.io.materials:
kaolin.io.materials
===================
.. currentmodule:: kaolin.io.materials
API
---
Functions
---------
.. automodule:: kaolin.io.materials
:members:
:exclude-members:
MaterialError,
MaterialLoadError,
MaterialFileError,
MaterialNotFoundError
Exceptions
----------
.. autoclass:: MaterialError
.. autoclass:: MaterialLoadError
.. autoclass:: MaterialFileError
.. autoclass:: MaterialNotFoundError


@@ -0,0 +1,33 @@
.. _kaolin.io.obj:
kaolin.io.obj
=============
.. currentmodule:: kaolin.io.obj
API
---
Functions
---------
.. automodule:: kaolin.io.obj
:members:
:exclude-members:
return_type,
ignore_error_handler,
skip_error_handler,
default_error_handler,
create_missing_materials_error_handler,
MaterialError,
MaterialLoadError,
MaterialFileError,
MaterialNotFoundError
Error Handler
-------------
.. autofunction:: ignore_error_handler
.. autofunction:: skip_error_handler
.. autofunction:: default_error_handler
.. autofunction:: create_missing_materials_error_handler


@@ -0,0 +1,26 @@
.. _kaolin.io:
kaolin.io
=========
IO directory contains all the functionalities to interact with data files.
:ref:`obj module<kaolin.io.obj>` and :ref:`usd module <kaolin.io.usd>` contain importers for .obj files and importers / exporters for .usd(a) files,
:ref:`dataset module<kaolin.io.dataset>` contains helper features for caching data and preprocessing whole datasets,
and :ref:`materials module<kaolin.io.materials>` contains the Materials definitions that should be used throughout Kaolin.
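For example, importing a single mesh from disk is a one-liner (a minimal sketch; ``model.obj`` is a placeholder path):

.. code-block:: python

    import kaolin

    # Returns a named tuple with vertices, faces and (optionally) material info.
    mesh = kaolin.io.obj.import_mesh('model.obj')
    print(mesh.vertices.shape)  # (num_vertices, 3)
    print(mesh.faces.shape)     # (num_faces, 3)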
.. toctree::
:maxdepth: 2
:titlesonly:
kaolin.io.dataset
kaolin.io.materials
kaolin.io.gltf
kaolin.io.obj
kaolin.io.off
kaolin.io.render
kaolin.io.shapenet
kaolin.io.usd
kaolin.io.modelnet
kaolin.io.shrec
kaolin.io.utils


@@ -0,0 +1,10 @@
kaolin.io.shapenet
==================
API
---
.. automodule:: kaolin.io.shapenet
:members:
:undoc-members:
:show-inheritance:


@@ -0,0 +1,38 @@
.. _kaolin.io.usd:
kaolin.io.usd
=============
Universal Scene Description
---------------------------
Universal Scene Description (USD) is an open-source 3D scene description file format developed by Pixar and designed to be versatile, extensible and interchangeable between different 3D tools.
Single models and animations as well as large organized scenes composed of any number of assets can be defined in USD, making it suitable for organizing entire datasets into interpretable
subsets based on tags, class or other metadata labels.
Kaolin includes base I/O operations for USD and also leverages this format to export 3D checkpoints. Use kaolin.io.usd to read and write USD files (try :code:`tutorials/usd_kitchenset.py`),
and :code:`kaolin.visualize.Timelapse` to export 3D checkpoints (try :code:`tutorials/visualize_main.py`).
As a first step to familiarizing yourself with USD, we suggest following this `tutorial <https://developer.nvidia.com/usd>`_.
More tutorials and documentation can be found `here <https://graphics.pixar.com/usd/docs/Introduction-to-USD.html>`_.
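As a minimal sketch of the base I/O operations (file paths and scene paths below are placeholders):

.. code-block:: python

    import kaolin

    # Read a mesh from a USD stage, then write it back to a new file.
    mesh = kaolin.io.usd.import_mesh('./scene.usd', scene_path='/World/Mesh')
    kaolin.io.usd.export_mesh('./out.usd', scene_path='/World/Mesh',
                              vertices=mesh.vertices, faces=mesh.faces)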
Viewing USD Files
~~~~~~~~~~~~~~~~~
USD files can be visualized with realtime pathtracing using the `Omniverse Kaolin App <https://docs.omniverse.nvidia.com/app_kaolin/app_kaolin/user_manual.html#training-visualizer>`_.
Alternatively, you may use Pixar's USDView which can be obtained by visiting
`https://developer.nvidia.com/usd <https://developer.nvidia.com/usd>`_ and selecting the
corresponding platform under *USD Pre-Built Libraries and Tools*.
API
---
Functions
---------
.. automodule:: kaolin.io.usd
:members:
:exclude-members:
mesh_return_type


@@ -0,0 +1,20 @@
.. _kaolin.metrics:
kaolin.metrics
==============
Metrics are differentiable operators that can be used to compute loss or accuracy.
We currently provide an IoU for voxelgrids, sided-distance-based metrics such as chamfer distance and
point_to_mesh_distance, and other simple regularizations such as uniform_laplacian_smoothing.
For tetrahedral meshes, we support the equivolume and AMIPS losses.
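For instance, the pointcloud metrics operate on batched point clouds (a minimal sketch):

.. code-block:: python

    import torch
    from kaolin.metrics import pointcloud

    p1 = torch.rand(2, 1000, 3, device='cuda')  # batch of 2 point clouds
    p2 = torch.rand(2, 800, 3, device='cuda')   # point counts may differ

    # Chamfer distance between corresponding clouds, one value per batch element.
    loss = pointcloud.chamfer_distance(p1, p2)  # shape: (2,)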
.. toctree::
:maxdepth: 2
:titlesonly:
kaolin.metrics.pointcloud
kaolin.metrics.render
kaolin.metrics.trianglemesh
kaolin.metrics.voxelgrid
kaolin.metrics.tetmesh


@@ -0,0 +1,12 @@
.. _kaolin.metrics.tetmesh:
kaolin.metrics.tetmesh
======================
API
---
.. automodule:: kaolin.metrics.tetmesh
:members:
:undoc-members:
:show-inheritance:


@@ -0,0 +1,23 @@
.. _kaolin.non_commercial:
kaolin.non\_commercial
======================
License
-------
This submodule contains features under `NSCL license <https://github.com/NVIDIAGameWorks/kaolin/blob/master/LICENSE.NSCL>`_ restricted to non commercial usage for research and evaluation purposes.
API
---
.. autoclass:: kaolin.non_commercial.FlexiCubes
:members:
:special-members: __call__
.. automodule:: kaolin.non_commercial
:members:
:undoc-members:
:exclude-members: FlexiCubes
:show-inheritance:


@@ -0,0 +1,83 @@
.. _kaolin.ops.batch:
kaolin.ops.batch
================
.. _batching:
Batching
--------
Batching data in 3D can be tricky due to heterogeneous sizes.
For instance, point clouds can have different numbers of points, which means we can't always just concatenate the tensors on a batch axis.
Kaolin supports different batching strategies:
.. _exact:
Exact
~~~~~
Exact batching is the logical representation for homogeneous data.
For instance, if you sample the same number of points from a batch of meshes, you would just have a single tensor of shape :math:`(\text{batch_size}, \text{number_of_points}, 3)`.
.. _padded:
Padded
~~~~~~
Heterogeneous tensors are padded to identical dimensions with a constant value so that they can be concatenated on a batch axis. This is similar to padding for the batching of image data of different shapes.
.. note::
The last dimension must always be of the size of the element, e.g. 3 for 3D points (element of point clouds) or 1 for a grayscale pixel (element of grayscale textures).
For instance, for two textures :math:`T_0` and :math:`T_1` of shape :math:`(32, 32, 3)` and :math:`(64, 16, 3)`
the batched tensor will be of shape :math:`(2, max(32, 64), max(32, 16), 3) = (2, 64, 32, 3)` and the padding value will be :math:`0`. :math:`T_0` will be padded on the 1st axis by :math:`32` while :math:`T_1` will be padded on the 2nd axis by :math:`16`.
You can also enforce a specific maximum shape (if you want a fixed memory consumption, or to use optimizations like cuDNN algorithm selection).
For instance, you can force :math:`T_0` and :math:`T_1` to be batched with a maximum shape of :math:`(128, 128)`: the batched tensor will be of shape :math:`(2, 128, 128, 3)`, :math:`T_0` will be padded on the 1st and 2nd axes by :math:`96`, and :math:`T_1` will be padded on the 1st axis by :math:`64` and on the 2nd axis by :math:`112`.
For more information on how to do padded batching, check :func:`kaolin.ops.batch.list_to_padded`.
Related attributes:
...................
.. _padded_shape_per_tensor:
* :attr:`shape_per_tensor`: 2D :class:`torch.LongTensor` stores the shape of each sub-tensor except the last dimension in the padded tensor. E.g., in the example above :attr:`shape_per_tensor` would be ``torch.LongTensor([[32, 32], [64, 16]])``. Refer to :func:`kaolin.ops.batch.get_shape_per_tensor` for more information.
.. _packed:
Packed
~~~~~~
Heterogeneous tensors are reshaped to 2D :math:`(-1, \text{last_dimension})` and concatenated on the first axis. This is similar to packed sentences in NLP.
.. note::
The last dimension must always be of the size of the element, e.g. 3 for 3D points (element of point clouds) or 1 for a grayscale pixel (element of grayscale textures).
For instance, for two textures :math:`T_0` and :math:`T_1` of shape :math:`(32, 32, 3)` and :math:`(64, 16, 3)`
The batched tensor will be of shape :math:`(32 * 32 + 64 * 16, 3)`. :math:`T_0` will be reshaped to :math:`(32 * 32, 3)` and :math:`T_1` will be reshaped :math:`(64 * 16, 3)`, before being concatenated on the first axis.
For more information on how to do packed batching, check :func:`kaolin.ops.batch.list_to_packed`.
Related attributes:
...................
.. _packed_shape_per_tensor:
* :attr:`shape_per_tensor`: 2D :class:`torch.LongTensor` stores the shape of each sub-tensor except the last dimension in the packed tensor. E.g., in the example above :attr:`shape_per_tensor` would be ``torch.LongTensor([[32, 32], [64, 16]])``. Refer to :func:`kaolin.ops.batch.get_shape_per_tensor` for more information.
.. _packed_first_idx:
* :attr:`first_idx`: 1D :class:`torch.LongTensor` stores the first index of each subtensor on the first axis of the packed tensor, plus the last index + 1. E.g., in the example above :attr:`first_idx` would be ``torch.LongTensor([0, 1024, 2048])``. This attribute is used for delimiting each subtensor within the packed tensor, for instance to slice or index. Refer to :func:`kaolin.ops.batch.get_first_idx` for more information.
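To make the textures example concrete, here is what the two batching strategies produce in plain PyTorch (a sketch of the layout; use :func:`kaolin.ops.batch.list_to_padded` and :func:`kaolin.ops.batch.list_to_packed` in practice):

.. code-block:: python

    import torch

    t0 = torch.rand(32, 32, 3)
    t1 = torch.rand(64, 16, 3)

    # Padded: pad both tensors to (64, 32, 3), then stack -> (2, 64, 32, 3).
    padded = torch.stack([
        torch.nn.functional.pad(t0, (0, 0, 0, 0, 0, 32)),
        torch.nn.functional.pad(t1, (0, 0, 0, 16, 0, 0)),
    ])

    # Packed: flatten all but the last dim, then concatenate -> (32*32 + 64*16, 3).
    packed = torch.cat([t0.reshape(-1, 3), t1.reshape(-1, 3)])

    shape_per_tensor = torch.LongTensor([[32, 32], [64, 16]])
    first_idx = torch.LongTensor([0, 1024, 2048])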
API
---
.. automodule:: kaolin.ops.batch
:platform: Windows-x86_64, Linux-x86_64
:members:
:undoc-members:
:show-inheritance:


@@ -0,0 +1,12 @@
.. _kaolin.ops.conversions:
kaolin.ops.conversions
======================
API
---
.. automodule:: kaolin.ops.conversions
:members:
:undoc-members:
:show-inheritance:


@@ -0,0 +1,38 @@
.. _kaolin.ops.mesh:
kaolin.ops.mesh
***********************
A mesh is a 3D object representation consisting of a collection of vertices and polygons.
Triangular meshes
==================
Triangular meshes comprise a set of triangles that are connected by their common edges or corners. In Kaolin, they are usually represented as a set of two tensors:
* ``vertices``: A :class:`torch.Tensor`, of shape :math:`(\text{batch_size}, \text{num_vertices}, 3)`, contains the vertices coordinates.
* ``faces``: A :class:`torch.LongTensor`, of shape :math:`(\text{batch_size}, \text{num_faces}, 3)`, contains the mesh topology, by listing the vertices index for each face.
Both tensors can be combined using :func:`kaolin.ops.mesh.index_vertices_by_faces`, to form ``face_vertices``, of shape :math:`(\text{batch_size}, \text{num_faces}, 3, 3)`, listing the vertices coordinates for each face.
Tetrahedral meshes
==================
A tetrahedron or triangular pyramid is a polyhedron composed of four triangular faces, six straight edges, and four vertex corners. Tetrahedral meshes inside Kaolin are composed of two tensors:
* ``vertices``: A :class:`torch.Tensor`, of shape :math:`(\text{batch_size}, \text{num_vertices}, 3)`, contains the vertices coordinates.
* ``tet``: A :class:`torch.LongTensor`, of shape :math:`(\text{batch_size}, \text{num_tet}, 4)`, contains the tetrahedral mesh topology, by listing the vertices index for each tetrahedron.
Both tensors can be combined, to form ``tet_vertices``, of shape :math:`(\text{batch_size}, \text{num_tet}, 4, 3)`, listing the vertices coordinates for each tetrahedron.
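As a sketch of what these combinations do (the helper :func:`kaolin.ops.mesh.index_vertices_by_faces` performs this indexing for you; topology is assumed shared across the batch):

.. code-block:: python

    import torch

    vertices = torch.rand(2, 100, 3)         # (batch_size, num_vertices, 3)
    faces = torch.randint(0, 100, (500, 3))  # (num_faces, 3)

    # Gather the coordinates of the three vertices of every face.
    face_vertices = vertices[:, faces]       # (batch_size, num_faces, 3, 3)

    # The tetrahedral case is analogous, with 4 vertex indices per element.
    tet = torch.randint(0, 100, (300, 4))    # (num_tet, 4)
    tet_vertices = vertices[:, tet]          # (batch_size, num_tet, 4, 3)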
API
---
.. automodule:: kaolin.ops.mesh
:members:
:undoc-members:
:show-inheritance:

View File

@@ -0,0 +1,23 @@
.. _kaolin.ops:
kaolin.ops
==========
Operators are primitive processing functions for batched 3D models (:ref:`meshes<kaolin.ops.mesh>`, :ref:`voxelgrids<kaolin.ops.voxelgrid>` and point clouds).
Tensor batching operators are in :ref:`kaolin.ops.batch`, and conversions of 3D models between different representations are in :ref:`kaolin.ops.conversions`.
.. toctree::
:maxdepth: 2
:titlesonly:
kaolin.ops.batch
kaolin.ops.coords
kaolin.ops.conversions
kaolin.ops.pointcloud
kaolin.ops.gcn
kaolin.ops.mesh
kaolin.ops.random
kaolin.ops.reduction
kaolin.ops.spc
kaolin.ops.voxelgrid

View File

@@ -0,0 +1,12 @@
.. _kaolin.ops.spc:
kaolin.ops.spc
##############
API
---
.. automodule:: kaolin.ops.spc
:members:
:undoc-members:
:show-inheritance:

View File

@@ -0,0 +1,15 @@
:orphan:
.. _kaolin.render.camera.Camera:
kaolin.render.camera.Camera
===========================
API
---
.. autoclass:: kaolin.render.camera.Camera
:members:
:undoc-members:
:show-inheritance:

View File

@@ -0,0 +1,14 @@
:orphan:
.. _kaolin.render.camera.CameraExtrinsics:
kaolin.render.camera.CameraExtrinsics
=====================================
API
---
.. autoclass:: kaolin.render.camera.CameraExtrinsics
:members:
:undoc-members:
:show-inheritance:

View File

@@ -0,0 +1,14 @@
:orphan:
.. _kaolin.render.camera.CameraIntrinsics:
kaolin.render.camera.CameraIntrinsics
=====================================
API
---
.. autoclass:: kaolin.render.camera.CameraIntrinsics
:members:
:undoc-members:
:show-inheritance:

View File

@@ -0,0 +1,14 @@
:orphan:
.. _kaolin.render.camera.ExtrinsicsRep:
kaolin.render.camera.ExtrinsicsRep
==================================
API
---
.. autoclass:: kaolin.render.camera.ExtrinsicsRep
:members:
:undoc-members:
:show-inheritance:

View File

@@ -0,0 +1,14 @@
:orphan:
.. _kaolin.render.camera.OrthographicIntrinsics:
kaolin.render.camera.OrthographicIntrinsics
===========================================
API
---
.. autoclass:: kaolin.render.camera.OrthographicIntrinsics
:members:
:undoc-members:
:show-inheritance:

View File

@@ -0,0 +1,14 @@
:orphan:
.. _kaolin.render.camera.PinholeIntrinsics:
kaolin.render.camera.PinholeIntrinsics
======================================
API
---
.. autoclass:: kaolin.render.camera.PinholeIntrinsics
:members:
:undoc-members:
:show-inheritance:

View File

@@ -0,0 +1,36 @@
.. _kaolin.render.camera:
kaolin.render.camera
====================
Kaolin provides an extensive camera API. For an overview, see the :ref:`Camera class docs <kaolin.render.camera.Camera>`.
API
---
Classes
^^^^^^^
* :ref:`Camera <kaolin.render.camera.Camera>`
* :ref:`CameraExtrinsics <kaolin.render.camera.CameraExtrinsics>`
* :ref:`CameraIntrinsics <kaolin.render.camera.CameraIntrinsics>`
* :ref:`PinholeIntrinsics <kaolin.render.camera.PinholeIntrinsics>`
* :ref:`OrthographicIntrinsics <kaolin.render.camera.OrthographicIntrinsics>`
* :ref:`ExtrinsicsRep <kaolin.render.camera.ExtrinsicsRep>`
Functions
^^^^^^^^^
.. automodule:: kaolin.render.camera
:members:
:exclude-members:
Camera,
CameraExtrinsics,
CameraIntrinsics,
PinholeIntrinsics,
OrthographicIntrinsics,
ExtrinsicsRep
:undoc-members:
:show-inheritance:

View File

@@ -0,0 +1,13 @@
.. _kaolin.render:
kaolin.render
=============
.. toctree::
:maxdepth: 2
:titlesonly:
kaolin.render.camera
kaolin.render.lighting
kaolin.render.mesh
kaolin.render.spc

View File

@@ -0,0 +1,12 @@
.. _kaolin.render.spc:
kaolin.render.spc
=================
API
---
.. automodule:: kaolin.render.spc
:members:
:undoc-members:
:show-inheritance:

View File

@@ -0,0 +1,27 @@
.. _kaolin.rep:
kaolin.rep
==========
This module includes higher-level Kaolin classes ("representations").
API
---
Classes
^^^^^^^
* :ref:`SurfaceMesh <kaolin.rep.SurfaceMesh>`
* :ref:`Spc <kaolin.rep.Spc>`
Other
^^^^^^^^^
.. automodule:: kaolin.rep
:members:
:exclude-members:
SurfaceMesh,
Spc
:undoc-members:
:show-inheritance:

View File

@@ -0,0 +1,14 @@
:orphan:
.. _kaolin.rep.Spc:
kaolin.rep.Spc
===========================
API
---
.. autoclass:: kaolin.rep.Spc
:members:
:undoc-members:
:show-inheritance:

View File

@@ -0,0 +1,146 @@
:orphan:
.. _kaolin.rep.SurfaceMesh:
SurfaceMesh
===========================
Tutorial
--------
For a walk-through of :class:`kaolin.rep.SurfaceMesh` features,
see `working_with_meshes.ipynb <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/working_with_meshes.ipynb>`_.
API
---
* :ref:`Overview <rubric mesh overview>`
* :ref:`Supported Attributes <rubric mesh attributes>`
* :ref:`Batching <rubric mesh batching>`
* :ref:`Attribute Access and Auto-Computability <rubric mesh attribute access>`
* :ref:`Inspecting and Copying <rubric mesh inspecting>`
* :ref:`Tensor Operations <rubric mesh tensor ops>`
.. autoclass:: kaolin.rep.SurfaceMesh
:members:
:undoc-members:
:member-order: bysource
:exclude-members: Batching, attribute_info_string, set_batching, to_batched, getattr_batched, cat,
vertices, face_vertices, normals, face_normals, vertex_normals, uvs, face_uvs, faces, face_normals_idx, face_uvs_idx,
material_assignments, materials, cuda, cpu, to, float_tensors_to, detach, get_attributes, has_attribute, has_or_can_compute_attribute,
probably_can_compute_attribute, get_attribute, get_or_compute_attribute, check_sanity, to_string, as_dict, describe_attribute,
unset_attributes_return_none, allow_auto_compute, batching, convert_attribute_batching
.. _rubric mesh batching:
.. rubric:: Supported Batching Strategies
``SurfaceMesh`` can be instantiated with any of the following batching
strategies, and supports conversions between batching strategies. Current
batching strategy of a ``mesh`` object can be read from ``mesh.batching`` or
by running ``print(mesh)``.
For example::
mesh = kaolin.io.obj.load_mesh(path)
print(mesh)
mesh.to_batched()
print(mesh)
.. autoclass:: kaolin.rep.SurfaceMesh.Batching
:members:
.. automethod:: attribute_info_string
.. automethod:: check_sanity
.. automethod:: set_batching
.. automethod:: to_batched
.. automethod:: getattr_batched
.. automethod:: cat
.. automethod:: convert_attribute_batching
.. _rubric mesh attribute access:
.. rubric:: Attribute Access
By default, ``SurfaceMesh`` will attempt to auto-compute missing attributes
on access. These attributes will be cached, unless their ancestors have
``requires_grad == True``. This behavior of the ``mesh`` object can be changed
at construction time (``allow_auto_compute=False``) or by setting
``mesh.allow_auto_compute`` later. In addition to this convenience API,
explicit methods for attribute access are also supported.
For example, using **convenience API**::
# Caching is enabled by default
mesh = kaolin.io.obj.load_mesh(path, with_normals=False)
print(mesh)
print(mesh.has_attribute('face_normals')) # False
fnorm = mesh.face_normals # Auto-computed
print(mesh.has_attribute('face_normals')) # True (cached)
# Caching is disabled when gradients need to flow
mesh = kaolin.io.obj.load_mesh(path, with_normals=False)
mesh.vertices.requires_grad = True # causes caching to be off
print(mesh.has_attribute('face_normals')) # False
fnorm = mesh.face_normals # Auto-computed
print(mesh.has_attribute('face_normals')) # False (caching disabled)
For example, using **explicit API**::
mesh = kaolin.io.obj.load_mesh(path, with_normals=False)
print(mesh.has_attribute('face_normals')) # False
fnorm = mesh.get_or_compute_attribute('face_normals', should_cache=False)
print(mesh.has_attribute('face_normals')) # False
.. automethod:: get_attributes
.. automethod:: has_attribute
.. automethod:: has_or_can_compute_attribute
.. automethod:: probably_can_compute_attribute
.. automethod:: get_attribute
.. automethod:: get_or_compute_attribute
.. _rubric mesh inspecting:
.. rubric:: Inspecting and Copying Meshes
To make it easier to work with, ``SurfaceMesh`` supports detailed print
statements, as well as ``len()``, ``copy()``, ``deepcopy()`` and can be converted
to a dictionary.
Supported operations::
import copy
mesh_copy = copy.copy(mesh)
mesh_copy = copy.deepcopy(mesh)
batch_size = len(mesh)
# Print default attributes
print(mesh)
# Print more detailed attributes
print(mesh.to_string(detailed=True, print_stats=True))
# Print specific attribute
print(mesh.describe_attribute('vertices'))
.. automethod:: to_string
.. automethod:: describe_attribute
.. automethod:: as_dict
.. _rubric mesh tensor ops:
.. rubric:: Tensor Operations
Convenience operations for device and type conversions of some or all member
tensors.
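For example (a minimal sketch, assuming these methods return the converted mesh, and that ``float_tensors_to`` takes the target floating-point dtype)::

    mesh = mesh.cuda()                          # move all member tensors to GPU
    mesh = mesh.float_tensors_to(torch.half)    # convert floating-point tensors only
    mesh = mesh.detach()                        # drop gradient tracking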
.. automethod:: cuda
.. automethod:: cpu
.. automethod:: to
.. automethod:: float_tensors_to
.. automethod:: detach
.. rubric:: Other

View File

@@ -0,0 +1,10 @@
.. _kaolin.utils:
kaolin.utils
============
.. toctree::
:maxdepth: 2
:titlesonly:
kaolin.utils.testing

View File

@@ -0,0 +1,14 @@
.. _kaolin.visualize:
kaolin.visualize
================
API
---
.. automodule:: kaolin.visualize
:members:
:inherited-members:
:undoc-members:
:show-inheritance:

14
docs/modules/module.rst_t Normal file
View File

@@ -0,0 +1,14 @@
{%- if show_headings -%}
.. _{{ basename }}:
{{ basename | e | heading }}
{%- endif %}
API
---
.. automodule:: {{ qualname }}
{%- for option in automodule_options %}
:{{ option }}:
{%- endfor %}

View File

@@ -0,0 +1,34 @@
{%- macro automodule(modname, options) -%}
API
---
.. automodule:: {{ modname }}
{%- for option in options %}
:{{ option }}:
{%- endfor %}
{%- endmacro %}
{%- macro unfoldtree(docnames) -%}
{%- for docname in docnames %}
{{ docname }}
{%- endfor %}
{%- endmacro -%}
{%- if is_namespace %}
{{- [pkgname, "namespace"] | join(" ") | e | heading }}
{% else -%}
.. _{{ pkgname }}:
{{ pkgname | e | heading }}
{%- endif -%}
{%- if subpackages or submodules %}
.. toctree::
:maxdepth: {{ maxdepth }}
:titlesonly:
{% endif -%}
{{ unfoldtree(subpackages + submodules) }}
{% if not is_namespace -%}
{{ automodule(pkgname, automodule_options) }}
{% endif %}

8
docs/modules/toc.rst_t Normal file
View File

@@ -0,0 +1,8 @@
{{ header | heading }}
.. toctree::
:maxdepth: {{ maxdepth }}
{% for docname in docnames %}
{{ docname }}
{%- endfor %}

149
docs/notes/checkpoints.rst Normal file
View File

@@ -0,0 +1,149 @@
.. _3d_viz:
3D Checkpoint Visualization
===========================
.. image:: ../img/koala.jpg
Visualizing 3D inputs and outputs of your model during training is an
essential diagnostic tool. Kaolin provides a :ref:`simple API to checkpoint<writing checkpoints>` **batches of meshes, pointclouds and voxelgrids**, as well as **colors and
textures**, saving them in :ref:`the USD format<file format>`. These checkpoints can then be visualized locally using :ref:`Kaolin Omniverse App<ov app>` or by launching :ref:`Kaolin Dash3D<dash 3d>` on the commandline, allowing remote visualization through a web browser.
.. _writing checkpoints:
Writing Checkpoints:
--------------------
In a common scenario, model performance is visualized for a
small evaluation batch. Bootstrap 3D checkpoints in your python training
code by configuring a :class:`~kaolin.visualize.Timelapse` object::
import kaolin
timelapse = kaolin.visualize.Timelapse(viz_log_dir)
The ``viz_log_dir`` is the directory where checkpoints will be saved. Timelapse will create files and subdirectories under this path, so providing
a dedicated ``viz_log_dir`` separate from your other logs and configs will help keep things clean. The :class:`~kaolin.visualize.Timelapse` API supports point clouds,
voxel grids and meshes, as well as colors and textures.
Saving Fixed Data
^^^^^^^^^^^^^^^^^
To save any iteration-independent data,
call ``timelapse`` before your training loop
without providing an ``iteration`` parameter, e.g.::
timelapse.add_mesh_batch(category='ground_truth',
faces_list=face_list,
vertices_list=gt_vert_list)
timelapse.add_pointcloud_batch(category='input',
pointcloud_list=input_pt_clouds)
The ``category`` identifies the meaning of the data. In this toy example,
the model learns to turn the ``'input'`` pointcloud into the ``'output'`` mesh. Both the ``'ground_truth'`` mesh and the ``'input'`` pointcloud batches are only saved once for easy visual comparison.
Saving Time-varying Data
^^^^^^^^^^^^^^^^^^^^^^^^
To checkpoint time-varying data during training, simply call :meth:`~kaolin.visualize.Timelapse.add_mesh_batch`, :meth:`~kaolin.visualize.Timelapse.add_pointcloud_batch` or :meth:`~kaolin.visualize.Timelapse.add_voxelgrid_batch`, for example::
if iteration % checkpoint_interval == 0:
timelapse.add_mesh_batch(category='output',
iteration=iteration,
faces_list=face_list,
vertices_list=out_vert_list)
.. Tip::
For any data type, only time-varying data needs to be saved at every iteration. E.g., if your output mesh topology is fixed, save ``faces_list`` only once, and then call ``add_mesh_batch`` with only the predicted ``vertices_list``. This will cut down your checkpoint size, as sketched below.
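A minimal sketch of this pattern, reusing the variable names from the examples above::

    # Fixed topology: save faces only once, before the training loop.
    timelapse.add_mesh_batch(category='output', faces_list=face_list)

    # Then checkpoint only the time-varying vertices.
    if iteration % checkpoint_interval == 0:
        timelapse.add_mesh_batch(category='output',
                                 iteration=iteration,
                                 vertices_list=out_vert_list)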
Saving Colors and Appearance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We are working on adding support for colors and semantic ids to
point cloud and voxel grid checkpoints. The mesh API supports multiple time-varying materials
by specifying a :class:`kaolin.io.PBRMaterial`. For an example
of using materials, see
`test_timelapse.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/tests/python/kaolin/visualize/test_timelapse.py>`_.
Sample Code
^^^^^^^^^^^
We provide a `script <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/visualize_main.py>`_ that writes mock checkpoints, which can be run as follows::
python examples/tutorial/visualize_main.py \
--test_objs=path/to/object1.obj,path/to/object2.obj \
--output_dir=path/to/logdir
In addition, see :ref:`diff_render` tutorial.
.. _file format:
Understanding the File Format:
------------------------------
Kaolin :class:`~kaolin.visualize.Timelapse` writes checkpoints using the Universal Scene Description (USD) file format (`Documentation <https://graphics.pixar.com/usd/docs/index.html>`_), developed with wide support for visual-effects use cases, including time-varying data. This allows reducing redundancy in the written
data across time.
After checkpointing with :class:`~kaolin.visualize.Timelapse`, the input ``viz_log_dir`` will contain
a similar file structure::
ground_truth/mesh_0.usd
ground_truth/mesh_1.usd
ground_truth/mesh_...
ground_truth/textures
input/pointcloud_0.usd
input/pointcloud_1.usd
input/pointcloud_...
output/mesh_0.usd
output/mesh_1.usd
output/mesh_...
output/pointcloud_0.usd
output/pointcloud_1.usd
output/pointcloud_...
output/textures
Here, the root folder names correspond to the ``category`` parameter
provided to :class:`~kaolin.visualize.Timelapse` functions. Each element
of the batch of every type is saved in its own numbered ``.usd`` file. Each USD file can be viewed on its
own using any USD viewer, such as `NVIDIA Omniverse View <https://www.nvidia.com/en-us/omniverse/apps/view/>`_, or the whole log directory can be visualized
using the tools below.
.. Caution::
Timelapse is designed to only save one visualization batch for every category and type. Saving multiple batches without interleaving the data can be accomplished by creating custom categories.
.. _ov app:
Visualizing with Kaolin Omniverse App:
--------------------------------------
.. image:: ../img/ov_viz.jpg
USD checkpoints can be visualized using a dedicated Omniverse Kaolin App `Training Visualizer <https://docs.omniverse.nvidia.com/app_kaolin/app_kaolin/user_manual.html#training-visualizer>`_.
This extension provides full-featured support and high-fidelity rendering
of all data types and materials that can be exported using :class:`~kaolin.visualize.Timelapse`, and allows creating custom visualization layouts and viewing meshes in multiple time-varying materials. `Download NVIDIA Omniverse <https://www.nvidia.com/en-us/omniverse/>`_ to get started!
.. _dash 3d:
Visualizing with Kaolin Dash3D:
-------------------------------
.. image:: ../img/dash3d_viz.jpg
Omniverse app requires local access to a GPU and to the saved checkpoints, which is not always possible.
We are also developing a lightweight ``kaolin-dash3d`` visualizer,
which allows visualizing local and remote checkpoints without specialized
hardware or applications. This tool is bundled with the latest
builds as a command-line utility.
To start Dash3D on the machine that stores the checkpoints, run::
kaolin-dash3d --logdir=$TIMELAPSE_DIR --port=8080
The ``logdir`` is the directory :class:`kaolin.visualize.Timelapse` was configured with. This command will launch a web server that will stream
geometry to web clients. To connect, simply visit ``http://ip.of.machine:8080`` (or `localhost:8080 <http://localhost:8080/>`_ if connecting locally or with ssh port forwarding).
Try it now:
^^^^^^^^^^^^^
See Dash3D in action by running it on our test samples and visiting `localhost:8080 <http://localhost:8080/>`_::
kaolin-dash3d --logdir=$KAOLIN_ROOT/tests/samples/timelapse/notexture/ --port=8080
.. Caution:: Dash3D is still an experimental feature under active development. It only supports **triangle meshes** and **pointclouds** and cannot yet visualize colors, ids or textures. The web client has been tested most extensively on `Google Chrome <https://www.google.com/chrome/>`_. We welcome your early feedback on our `github <https://github.com/NVIDIAGameWorks/kaolin/issues>`_!

View File

@@ -0,0 +1,13 @@
.. _diff_render:
Differentiable Rendering
========================
.. image:: ../img/clock.gif
Differentiable rendering can be used to optimize the underlying 3D properties, like geometry and lighting, by backpropagating gradients from the loss in the image space. We provide an end-to-end tutorial for using the :mod:`kaolin.render.mesh` API in a Jupyter notebook:
`examples/tutorial/dibr_tutorial.ipynb <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/dibr_tutorial.ipynb>`_
In addition to the rendering API, the tutorial uses Omniverse Kaolin App `Data Generator <https://docs.omniverse.nvidia.com/app_kaolin/app_kaolin/user_manual.html#data-generator>`_ to create training data, :class:`kaolin.visualize.Timelapse` to write checkpoints, and
Omniverse Kaolin App `Training Visualizer <https://docs.omniverse.nvidia.com/app_kaolin/app_kaolin/user_manual.html#training-visualizer>`_ to visualize them.

View File

@@ -0,0 +1,237 @@
Differentiable Camera
*********************
.. _differentiable_camera:
Camera class
============
.. _camera_class:
:class:`kaolin.render.camera.Camera` is a one-stop class for all camera-related differentiable / non-differentiable transformations.
Camera objects are represented by *batched* instances of 2 submodules:
- :ref:`CameraExtrinsics <camera_extrinsics_class>`: The extrinsics properties of the camera (position, orientation).
These are usually embedded in the view matrix, used to transform vertices from world space to camera space.
- :ref:`CameraIntrinsics <camera_intrinsics_class>`: The intrinsics properties of the lens
(such as field of view / focal length in the case of pinhole cameras).
Intrinsics parameters vary between different lens types,
and therefore multiple CameraIntrinsics subclasses exist
to support different types of cameras: pinhole / perspective, orthographic, fisheye, and so forth.
For pinhole and orthographic lenses, the intrinsics are embedded in a projection matrix.
The intrinsics module can be used to transform vertices from camera space to Normalized Device Coordinates.
.. note::
To avoid tedious invocation of camera functions through
``camera.extrinsics.someop()`` and ``camera.intrinsics.someop()``, kaolin overrides the ``__getattr__``
function to forward any function calls of ``camera.someop()`` to
the appropriate extrinsics / intrinsics submodule.
The entire pipeline of transformations can be summarized as (ignoring homogeneous coordinates)::
    World Space                              Camera View Space
    V  ---CameraExtrinsics.transform()--->  V'  ---CameraIntrinsics.transform()---
    Shape~(B, 3)     (view matrix)          Shape~(B, 3)                         |
                                                                                 |
                                            (linear lens: projection matrix     |
                                             + homogeneous -> 3D)               |
                                                                                 V
                                            Normalized Device Coordinates (NDC)
                                            Shape~(B, 3)
When using view / projection matrices, conversion to homogeneous coordinates is required.
Alternatively, the `transform()` function takes care of such projections under the hood when needed.
How to apply transformations with kaolin's Camera:
1. Linear camera types, such as the commonly used pinhole camera,
support the :func:`view_projection_matrix()` method.
The returned matrix can be used to transform vertices through pytorch's matrix multiplication, or even be
passed to shaders as a uniform.
2. All cameras are guaranteed to support a general :func:`transform()` function
which maps coordinates from world space to Normalized Device Coordinate (NDC) space.
For some lens types which perform non-linear transformations,
the :func:`view_projection_matrix()` is undefined,
and the camera transformation must be applied through
a dedicated function. For linear cameras,
:func:`transform()` may use matrices under the hood.
3. Camera parameters may also be queried directly.
This is useful when implementing camera-parameter-aware code such as ray tracers (see the sketch below).
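A minimal sketch combining the three options above (constructor arguments follow the camera recipes; exact input/output shapes of :func:`transform()` should be verified against the API docs)::

    import math
    import torch
    from kaolin.render.camera import Camera

    camera = Camera.from_args(
        eye=torch.tensor([0.0, 0.0, 4.0]),
        at=torch.tensor([0.0, 0.0, 0.0]),
        up=torch.tensor([0.0, 1.0, 0.0]),
        fov=30 * math.pi / 180,   # in radians
        width=800, height=800)

    vertices = torch.rand((100, 3))                # some world-space points

    # 1. Linear cameras expose an explicit matrix.
    view_proj = camera.view_projection_matrix()    # (num_cameras, 4, 4)

    # 2. All cameras support transform(): world space -> NDC.
    ndc = camera.transform(vertices)

    # 3. Parameters can be queried directly, e.g. for ray generators.
    print(camera.cam_pos())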
How to control kaolin's Camera:
- :class:`CameraExtrinsics`: is packed with useful methods for controlling the camera position and orientation:
:func:`translate() <CameraExtrinsics.translate()>`,
:func:`rotate() <CameraExtrinsics.rotate()>`,
:func:`move_forward() <CameraExtrinsics.move_forward()>`,
:func:`move_up() <CameraExtrinsics.move_up()>`,
:func:`move_right() <CameraExtrinsics.move_right()>`,
:func:`cam_pos() <CameraExtrinsics.cam_pos()>`,
:func:`cam_up() <CameraExtrinsics.cam_up()>`,
:func:`cam_forward() <CameraExtrinsics.cam_forward()>`.
- :class:`CameraIntrinsics`: exposes a lens :func:`zoom() <CameraIntrinsics.zoom()>`
operation. The exact functionality depends on the camera type.
How to optimize the Camera parameters:
- Both :class:`CameraExtrinsics`: and :class:`CameraIntrinsics` maintain
:class:`torch.Tensor` buffers of parameters which support pytorch differentiable operations.
- Setting ``camera.requires_grad_(True)`` will turn on the optimization mode.
- The :func:`gradient_mask` function can be used to mask out gradients of specific Camera parameters (a short optimization sketch follows).
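Continuing the sketch above, a minimal optimization step (``target_ndc`` is a hypothetical target tensor, not part of kaolin)::

    camera.requires_grad_(True)              # turn on optimization mode
    ndc = camera.transform(vertices)         # differentiable transformation
    loss = ((ndc - target_ndc) ** 2).mean()
    loss.backward()                          # gradients flow into the camera parameters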
.. note::
:class:`CameraExtrinsics` supports multiple representations of camera parameters
(see: :func:`switch_backend <CameraExtrinsics.switch_backend()>`).
Specific representations are better suited for optimization
(e.g., they maintain an orthogonal view matrix).
Kaolin will automatically switch to such a representation when gradient flow is enabled.
For non-differentiable uses, the default representation may provide better
speed and numerical accuracy.
Other useful camera properties:
- Cameras follow PyTorch conventions in part, and support arbitrary ``dtype`` and ``device`` types through the
:func:`to()`, :func:`cpu()`, :func:`cuda()`, :func:`half()`, :func:`float()`, :func:`double()`
methods and :func:`dtype`, :func:`device` properties.
- :class:`CameraExtrinsics`: and :class:`CameraIntrinsics`: individually support the :func:`requires_grad`
property.
- Cameras implement :func:`torch.allclose` for comparing camera parameters under controlled numerical accuracy.
The ``==`` operator is reserved for comparison by reference.
- Cameras support batching, either through construction, or through the :func:`cat()` method.
.. note::
Since kaolin's cameras are batched, the view/projection matrices are of shapes :math:`(\text{num_cameras}, 4, 4)`,
and some operations, such as :func:`transform()` may return values as shapes of :math:`(\text{num_cameras}, \text{num_vectors}, 3)`.
Concluding remarks on coordinate systems and other confusing conventions:
- kaolin's Cameras assume column major matrices, for example, the inverse view matrix (cam2world) is defined as:
.. math::
\begin{bmatrix}
r1 & u1 & f1 & px \\
r2 & u2 & f2 & py \\
r3 & u3 & f3 & pz \\
0 & 0 & 0 & 1
\end{bmatrix}
This sometimes causes confusion as the view matrix (world2cam) uses a transposed 3x3 submatrix component,
which despite this transposition is still column major (observed through the last `t` column):
.. math::
\begin{bmatrix}
r1 & r2 & r3 & tx \\
u1 & u2 & u3 & ty \\
f1 & f2 & f3 & tz \\
0 & 0 & 0 & 1
\end{bmatrix}
- kaolin's cameras do not assume any specific coordinate system for the camera axes. By default, the
right handed cartesian coordinate system is used. Other coordinate systems are supported through
:func:`change_coordinate_system() <CameraExtrinsics.change_coordinate_system()>`
and the ``coordinates.py`` module::
         Y
         ^
         |
         |---------> X
        /
       Z

- kaolin's NDC space is assumed to be left handed (depth goes inwards to the screen).
  The default range of values is [-1, 1].
CameraExtrinsics class
======================
.. _camera_extrinsics_class:
:class:`kaolin.render.camera.CameraExtrinsics` holds the extrinsics parameters of a camera: position and orientation in space.
This class maintains the view matrix of camera, used to transform points from world coordinates
to camera / eye / view space coordinates.
The view matrix maintained by this class is column-major, and can be described by the 4x4 block matrix:
.. math::
\begin{bmatrix}
R & t \\
0 & 1
\end{bmatrix}
where **R** is a 3x3 rotation matrix and **t** is a 3x1 translation vector for the orientation and position
respectively.
This class is batched and may hold information from multiple cameras.
:class:`CameraExtrinsics` relies on a dynamic representation backend to manage the tradeoff between various choices
such as speed, or support for differentiable rigid transformations.
Parameters are stored as a single tensor of shape :math:`(\text{num_cameras}, K)`,
where K is a representation specific number of parameters.
Transformations and matrices returned by this class support differentiable torch operations,
which in turn may update the extrinsic parameters of the camera::
                                convert_to_mat
    Backend                      ------------->       Extrinsics
    Representation R             <-------------       View Matrix M
    Shape (num_cameras, K)       convert_from_mat     Shape (num_cameras, 4, 4)
.. note::
Unless specified manually with :func:`switch_backend`,
kaolin will choose the optimal representation backend depending on the status of ``requires_grad``.
.. note::
Users should be aware of, but not concerned about, the conversion from internal representations to view matrices.
kaolin performs these conversions where and if needed.
Supported backends:
- **"matrix_se3"**\: A flattened view matrix representation, containing the full information of
special Euclidean transformations (translations and rotations).
This representation is quickly converted to a view matrix, but differentiable ops may cause
the view matrix to learn an incorrect, non-orthogonal transformation.
- **"matrix_6dof_rotation"**\: A compact representation with 6 degrees of freedom, ensuring the view matrix
remains orthogonal under optimizations. The conversion to matrix requires a single Gram-Schmidt step.
.. seealso::
`On the Continuity of Rotation Representations in Neural Networks, Zhou et al. 2019
<https://arxiv.org/abs/1812.07035>`_
Unless stated explicitly, the definition of the camera coordinate system used by this class is up to the
choice of the user.
Practitioners should be mindful of conventions when pairing the view matrix managed by this class with a projection
matrix.
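For example, to pin a representation manually before optimization (a sketch; the backend names are the strings listed above)::

    # Prefer the orthogonality-preserving representation for gradient-based updates.
    camera.extrinsics.switch_backend('matrix_6dof_rotation')
    camera.requires_grad_(True)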
CameraIntrinsics class
======================
.. _camera_intrinsics_class:
:class:`kaolin.render.camera.CameraIntrinsics` holds the intrinsics parameters of a camera:
how it should project from camera space to normalized screen / clip space.
The intrinsics are determined by the camera type, meaning parameters may differ according to the lens structure.
Typical computer graphics systems commonly assume the intrinsics of a pinhole camera (see: :class:`PinholeIntrinsics` class).
One implication is that some camera types do not use a linear projection (e.g., a fisheye lens).
There are therefore numerous ways to use CameraIntrinsics subclasses:
1. Access intrinsics parameters directly.
This may typically benefit use cases such as ray generators.
2. The :func:`transform()` method is supported by all CameraIntrinsics subclasses,
for both linear and non-linear transformations, to project vectors from camera space to normalized screen space.
This method is implemented using differentiable PyTorch operations.
3. Certain CameraIntrinsics subclasses which perform linear projections, may expose the transformation matrix
via dedicated methods.
For example, :class:`PinholeIntrinsics` exposes a :func:`projection_matrix()` method.
This may typically be useful for rasterization-based rendering pipelines (e.g., OpenGL vertex shaders).
This class is batched and may hold information from multiple cameras.
Parameters are stored as a single tensor of shape :math:`(\text{num_cameras}, K)` where K is the number of
intrinsic parameters.
Currently there are two subclasses of intrinsics: :class:`kaolin.render.camera.OrthographicIntrinsics` and
:class:`kaolin.render.camera.PinholeIntrinsics`; a short usage sketch follows.
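A minimal sketch of these access patterns for a pinhole ``camera`` with camera-space points ``p`` (the ``focal_x`` attribute is assumed from the pinhole parameterization)::

    focal = camera.intrinsics.focal_x              # 1. direct parameter access
    ndc = camera.intrinsics.transform(p)           # 2. generic projection to NDC
    proj = camera.intrinsics.projection_matrix()   # 3. linear lenses only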
API Documentation:
------------------
* Check all the camera classes and functions at the :ref:`API documentation<kaolin.render.camera>`.

158
docs/notes/installation.rst Normal file
View File

@@ -0,0 +1,158 @@
:orphan:
.. _installation:
Installation
============
Most functions in Kaolin use PyTorch with custom high-performance code in C++ and CUDA. For this reason,
full Kaolin functionality is only available for systems with an NVIDIA GPU, supporting CUDA. While it is possible to install
Kaolin on other systems, only a fraction of operations will be available for a CPU-only install.
Requirements
------------
* Linux, Windows, or macOS (CPU-only)
* Python >= 3.8, <= 3.10
* `CUDA <https://developer.nvidia.com/cuda-toolkit>`_ >= 10.0 (with ``nvcc`` installed). See the `CUDA Toolkit Archive <https://developer.nvidia.com/cuda-toolkit-archive>`_ to install an older version.
* torch >= 1.8, <= 2.1.1
Quick Start (Linux, Windows)
----------------------------
| Make sure any of the supported CUDA and torch versions below are pre-installed.
| The latest version of Kaolin can be installed with pip:
.. code-block:: bash
$ pip install kaolin==0.15.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-{TORCH_VER}_cu{CUDA_VER}.html
.. Note::
Replace *TORCH_VER* and *CUDA_VER* with any of the compatible options below.
.. rst-class:: center-align-center-col
+------------------+-----------+-----------+-----------+-----------+-----------+
| **torch / CUDA** | **cu113** | **cu116** | **cu117** | **cu118** | **cu121** |
+==================+===========+===========+===========+===========+===========+
| **torch-2.1.1** | | | | ✓ | ✓ |
+------------------+-----------+-----------+-----------+-----------+-----------+
| **torch-2.1.0** | | | | ✓ | ✓ |
+------------------+-----------+-----------+-----------+-----------+-----------+
| **torch-2.0.1** | | | ✓ | ✓ | |
+------------------+-----------+-----------+-----------+-----------+-----------+
| **torch-2.0.0** | | | ✓ | ✓ | |
+------------------+-----------+-----------+-----------+-----------+-----------+
| **torch-1.13.1** | | ✓ | ✓ | | |
+------------------+-----------+-----------+-----------+-----------+-----------+
| **torch-1.13.0** | | ✓ | ✓ | | |
+------------------+-----------+-----------+-----------+-----------+-----------+
| **torch-1.12.1** | ✓ | ✓ | | | |
+------------------+-----------+-----------+-----------+-----------+-----------+
| **torch-1.12.0** | ✓ | ✓ | | | |
+------------------+-----------+-----------+-----------+-----------+-----------+
For example, to install kaolin for torch 1.12.1 and CUDA 11.3:
.. code-block:: bash
$ pip install kaolin==0.15.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-1.12.1_cu113.html
You can check https://nvidia-kaolin.s3.us-east-2.amazonaws.com/index.html to see all the wheels available.
Installation from source
------------------------
.. Note::
We recommend installing Kaolin into a virtual environment. For instance to create a new environment with `Anaconda <https://www.anaconda.com/>`_:
.. code-block:: bash
$ conda create --name kaolin python=3.8
$ conda activate kaolin
1. Clone Repository
^^^^^^^^^^^^^^^^^^^
Clone and optionally check out an `official release <https://github.com/NVIDIAGameWorks/kaolin/tags>`_:
.. code-block:: bash
$ git clone --recursive https://github.com/NVIDIAGameWorks/kaolin
$ cd kaolin
$ git checkout v0.15.0 # optional
2. Install dependencies
^^^^^^^^^^^^^^^^^^^^^^^
You can install the dependencies running:
.. code-block:: bash
$ pip install -r tools/build_requirements.txt -r tools/viz_requirements.txt -r tools/requirements.txt
3. Test CUDA
^^^^^^^^^^^^
You can verify that CUDA is properly installed at the desired version with nvcc by running the following:
.. code-block:: bash
$ nvidia-smi
$ nvcc --version
4. Install PyTorch
^^^^^^^^^^^^^^^^^^
Follow `official instructions <https://pytorch.org>`_ to install PyTorch of a supported version.
Kaolin may be able to work with other PyTorch versions, but we only explicitly test within the version range 1.10.0 to 2.1.1.
See below for overriding PyTorch version check during install.
Here is how to install the latest PyTorch version supported by Kaolin for CUDA 11.8:
.. code-block:: bash
$ pip install torch==2.1.1 --extra-index-url https://download.pytorch.org/whl/cu118
5. Optional Environment Variables
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* If trying Kaolin with an unsupported PyTorch version, set: ``export IGNORE_TORCH_VER=1``
* If using heterogeneous GPU setup, set the architectures for which to compile the CUDA code, e.g.: ``export TORCH_CUDA_ARCH_LIST="7.0 7.5"``
* In some setups, there may be a conflict between cub available with cuda install > 11 and ``third_party/cub`` that kaolin includes as a submodule. If conflict occurs or cub is not found, set ``CUB_HOME`` to the cuda one, e.g. typically on Linux: ``export CUB_HOME=/usr/local/cuda-*/include/``
6. Install Kaolin
^^^^^^^^^^^^^^^^^
.. code-block:: bash
$ python setup.py develop
.. Note::
Kaolin can be installed without GPU, however, CPU support is limited and many CUDA-only functions will be missing.
Testing your installation
-------------------------
Run a quick test of your installation and version:
.. code-block:: bash
$ python -c "import kaolin; print(kaolin.__version__)"
Running tests
^^^^^^^^^^^^^
For an exhaustive check, install testing dependencies and run tests as follows:
.. code-block:: bash
$ pip install -r tools/ci_requirements.txt
$ export CI='true' # on Linux
$ set CI='true' # on Windows
$ pytest --import-mode=importlib -s tests/python/
.. Note::
These tests rely on CUDA operations and will fail if you installed on CPU only, where not all functionality is available.

98
docs/notes/overview.rst Normal file
View File

@@ -0,0 +1,98 @@
:orphan:
.. _overview:
API Overview
============
Below is a summary of Kaolin functionality. Refer to :ref:`tutorial_index` for specific use cases, examples
and recipes that use these building blocks.
Operators for 3D Data:
^^^^^^^^^^^^^^^^^^^^^^
:ref:`kaolin/ops<kaolin.ops>` contains operators, efficient processing functions for batched 3D models and tensors. We provide conversions between 3D representations, primitives for batching heterogeneous data, and efficient mainstream functions on meshes and voxelgrids.
.. toctree::
:maxdepth: 2
../modules/kaolin.ops
I/O:
^^^^
:ref:`kaolin/io<kaolin.io>` contains functionality to interact with files.
We provide importers and exporters for popular formats such as .obj and .usd, as well as utility functions and classes to preprocess and cache datasets with specific transforms.
.. toctree::
:maxdepth: 2
../modules/kaolin.io
Metrics:
^^^^^^^^
:ref:`kaolin/metrics<kaolin.metrics>` contains functions to compute distances and losses, such as point_to_mesh distance, Chamfer distance, IoU, or Laplacian smoothing.
.. toctree::
:maxdepth: 2
../modules/kaolin.metrics
Differentiable Rendering:
^^^^^^^^^^^^^^^^^^^^^^^^^
:ref:`kaolin/render<kaolin.render>` provides functions related to differentiable rendering, such as DIB-R rasterization, application of camera projection / translation / rotation, lighting, and textures.
.. toctree::
:maxdepth: 2
../modules/kaolin.render
3D Checkpoints and Visualization:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
:ref:`kaolin/visualize<kaolin.visualize>` contains utilities for writing 3D checkpoints for visualization. Currently we provide a Timelapse exporter that can be quickly picked up by the `Omniverse Kaolin App <https://docs.omniverse.nvidia.com/app_kaolin/app_kaolin/user_manual.html#training-visualizer>`_.
.. toctree::
:maxdepth: 2
../modules/kaolin.visualize
Utilities:
^^^^^^^^^^
:ref:`kaolin/utils<kaolin.utils>` contains utility functions to help the development of applications and research scripts. We provide functions to display and check information about tensors, and features to fix random seeds.
.. toctree::
:maxdepth: 2
../modules/kaolin.utils
Non Commercial
^^^^^^^^^^^^^^
:ref:`kaolin/non_commercial<kaolin.non_commercial>` contains features under the `NSCL license <https://github.com/NVIDIAGameWorks/kaolin/blob/master/LICENSE.NSCL>`_, restricted to non-commercial usage for research and evaluation purposes.
.. toctree::
:maxdepth: 2
../modules/kaolin.non_commercial
Licenses
========
Most of Kaolin's repository is under the `Apache v2.0 license <https://github.com/NVIDIAGameWorks/kaolin/blob/master/LICENSE>`_, except :ref:`kaolin/non_commercial<kaolin.non_commercial>`, which is under the `NSCL license <https://github.com/NVIDIAGameWorks/kaolin/blob/master/LICENSE.NSCL>`_, restricted to non-commercial usage for research and evaluation purposes. For example, the FlexiCubes method is included under :ref:`non_commercial<kaolin.non_commercial>`.
Default `kaolin` import includes Apache-licensed components:
.. code-block:: python
import kaolin
The non-commercial components need to be explicitly imported as:
.. code-block:: python
import kaolin.non_commercial

279
docs/notes/spc_summary.rst Normal file
View File

@@ -0,0 +1,279 @@
Structured Point Clouds (SPCs)
******************************
.. _spc:
Structured Point Clouds (SPC) is a sparse, octree-based representation that is useful for organizing and
compressing geometrically sparse 3D information.
SPCs are also known as sparse voxelgrids, quantized point clouds, and voxelized point clouds.
.. image:: ../img/mesh_to_spc.png
Kaolin supports a number of operations to work with SPCs,
including efficient ray-tracing and convolutions.
The SPC data structure is very general. In the SPC data structure, octrees provide a way to store and efficiently retrieve coordinates of points at different levels of the octree hierarchy. It is also possible to associate features with these coordinates using point ordering in memory. Below we detail the low-level representations that comprise SPCs and allow corresponding efficient operations. We also provide a :ref:`convenience container<kaolin.rep>` for these low-level attributes.
Some of the conventions are also defined in `Neural Geometric Level of Detail: Real-time Rendering with
Implicit 3D Surfaces <https://nv-tlabs.github.io/nglod/>`_ which uses SPC as an internal representation.
.. warning::
Structured Point Clouds internal layout and structure is still experimental and may be modified in the future.
Octree
======
.. _spc_octree:
Core to SPC is the `octree <https://en.wikipedia.org/wiki/Octree>`_, a tree data
structure where each node has up to 8 children.
We use this structure for recursive three-dimensional space partitioning,
i.e., each node represents a :math:`(2, 2, 2)` partitioning of its 3D space.
The octree then contains the information necessary to find the sparse coordinates.
In SPC, a batch of octrees is represented as a tensor of bytes. Each bit in the byte array ``octrees`` represents
the binary occupancy of an octree node, sorted in `Morton Order <https://en.wikipedia.org/wiki/Z-order_curve>`_.
The Morton order is a type of space-filling curve which gives a deterministic ordering of
integer coordinates on a 3D grid. That is, for a given non-negative 1D integer coordinate, there exists a
bijective mapping to 3D integer coordinates.
Since a byte is a collection of 8 bits, a single byte ``octrees[i]``
represents an octree node where each bit indicates the binary occupancy of a child node / partition, as
depicted below:
.. image:: ../img/octants.png
:width: 600
For each octree, the nodes / bytes follow breadth-first-search order (with Morton order for
the children), and the octree bytes are then :ref:`packed` to form ``octrees``. This ordering
allows efficient tree access without having to explicitly store indirection pointers.
.. figure:: ../img/octree.png
:scale: 30 %
:alt: An octree 3D partitioning
Credit: https://en.wikipedia.org/wiki/Octree
The binary occupancy values in the bits of ``octrees`` implicitly encode position data due to the bijective
mapping from Morton codes to 3D integer coordinates. However, to provide users a more straightforward
interface to work with these octrees, SPC provides auxiliary information such as
``points``, which is a :ref:`packed` tensor of 3D coordinates. Refer to the :ref:`spc_attributes` section
for more details; a small usage sketch follows.
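For example, a single-level octree with two occupied octants can be built and scanned with the ops referenced on this page (a minimal sketch):

>>> import torch
>>> import kaolin
>>> # bits 0 and 7 of a single byte mark two occupied children
>>> octrees = torch.tensor([0b10000001], dtype=torch.uint8, device='cuda')
>>> lengths = torch.tensor([1], dtype=torch.int32)
>>> max_level, pyramids, exsum = kaolin.ops.spc.scan_octrees(octrees, lengths)
>>> # 3D integer coordinates of the occupied nodes, in Morton order
>>> point_hierarchies = kaolin.ops.spc.generate_points(octrees, pyramids, exsum)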
Currently SPCs are primarily used to represent 3D surfaces,
and so all the leaves are at the same ``level`` (depth).
This allows very efficient processing on GPU, with custom CUDA kernels, for ray-tracing and convolution.
The structure contains finer details as you go deeper into the tree.
Below are levels 0 through 8 of an SPC teapot model:
.. image:: ../img/spcTeapot.png
Additional Feature Data
=======================
The nodes of the ``octrees`` can contain information beyond just the 3D coordinates of the nodes,
such as RGB color, normals, feature maps, or even differentiable activation maps processed by a
convolution.
We follow a `Structure of Arrays <https://en.wikipedia.org/wiki/AoS_and_SoA>`_ approach to store
additional data for maximum user extensibility.
Currently the features would be tensors of shape :math:`(\text{num_nodes}, \text{feature_dim})`
with ``num_nodes`` being the number of nodes at a specific ``level`` of the ``octrees``,
and ``feature_dim`` the dimension of the feature set (for instance 3 for RGB color).
Users can freely define their own feature data to be stored alongside SPC.
Conversions
===========
Structured point clouds can be derived from multiple sources.
We can construct ``octrees``
from unstructured point cloud data, from sparse voxelgrids
or from the level set of an implicit function :math:`f(x, y, z)`.
.. _spc_attributes:
Related attributes
==================
.. note::
If you just want to use structured point clouds without going through the low-level details, take a look at :ref:`the high level classes <kaolin.rep>`.
.. _spc_lengths:
``lengths:``
------------
Since ``octrees`` uses :ref:`packed` batching, we need ``lengths``, a 1D tensor of size ``batch_size`` that contains the size of each individual octree. Note that ``lengths.sum()`` should equal the size of ``octrees``. You can use :func:`kaolin.ops.batch.list_to_packed` to pack octrees and generate ``lengths``.
.. _spc_pyramids:
``pyramids:``
-------------
:class:`torch.IntTensor` of shape :math:`(\text{batch_size}, 2, \text{max_level} + 2)`. Contains layout information for each octree: ``pyramids[:, 0]`` represents the number of points in each level of the ``octrees``, and ``pyramids[:, 1]`` represents the starting index of each level of the octree.
.. _spc_exsum:
``exsum:``
----------
:class:`torch.IntTensor` of shape :math:`(\text{octrees_num_bytes} + \text{batch_size})` is the exclusive sum of the bit counts of each ``octrees`` byte.
.. note::
To generate ``pyramids`` and ``exsum`` see :func:`kaolin.ops.spc.scan_octrees`
.. _spc_points:
``point_hierarchies:``
----------------------
:class:`torch.ShortTensor` of shape :math:`(\text{num_nodes}, 3)` corresponds to the sparse coordinates at all levels. We refer to this :ref:`packed` tensor as the **structured point hierarchies**.
The image below shows an analogous 2D example.
.. image:: ../img/spc_points.png
:width: 400
The corresponding ``point_hierarchies`` would be:
>>> torch.ShortTensor([[0, 0], [1, 1],
...                     [1, 0], [2, 2],
...                     [2, 1], [3, 1], [5, 5]])
.. note::
To generate ``point_hierarchies``, see :func:`kaolin.ops.spc.generate_points`
.. note::
The tensors ``pyramids``, ``exsum`` and ``point_hierarchies`` are used by many Structured Point Cloud functions; avoiding their recomputation will improve performance.
Convolutions
============
We provide several sparse convolution layers for structured point clouds.
Convolutions are characterized by the size of the input and output channels,
an array of ``kernel_vectors``, and possibly the number of levels to ``jump``, i.e.,
the difference in input and output levels.
.. _kernel-text:
An example of how to create a :math:`3 \times 3 \times 3` kernel follows:
>>> vectors = []
>>> for i in range(-1, 2):
...     for j in range(-1, 2):
...         for k in range(-1, 2):
...             vectors.append([i, j, k])
>>> Kvec = torch.tensor(vectors, dtype=torch.short, device=device)
>>> Kvec
tensor([[-1, -1, -1],
        [-1, -1,  0],
        [-1, -1,  1],
        ...,
        [ 1,  1, -1],
        [ 1,  1,  0],
        [ 1,  1,  1]], device='cuda:0', dtype=torch.int16)
.. _neighborhood-text:
The kernel vectors determine the shape of the convolution kernel.
Each kernel vector is added to the position of a point to determine
the coordinates of points whose corresponding input data is needed for the operation.
We formalize this notion using the following neighbor function:
.. math::
n(i,k) = \text{ID}\left(P_i+\overrightarrow{K}_k\right)
that returns the index of the point within the same level found by adding
kernel vector :math:`\overrightarrow{K}_k` to point :math:`P_i`.
Given the sparse nature of SPC data, it may be the case that no such point exists. In such cases, :math:`n(i,k)`
will return an invalid value, and data accesses will be treated like zero padding.
Transposed convolutions are defined by the transposed neighbor function
.. math::
n^T(i,k) = \text{ID}\left(P_i-\overrightarrow{K}_k\right)
The value **jump** is used to indicate the difference in levels between the input features
and the output features. For convolutions, this is the number of levels to downsample; while
for transposed convolutions, **jump** is the number of levels to upsample. The value of **jump** must
be positive, and may not go beyond the highest level of the octree.
Examples
--------
You can create octrees from sparse feature_grids
(of shape :math:`(\text{batch_size}, \text{feature_dim}, \text{height}, \text{width}, \text{depth})`):
>>> octrees, lengths, features = kaolin.ops.spc.feature_grids_to_spc(features_grids)
or from a point cloud (of shape :math:`(\text{num_points}, 3)`):
>>> qpc = kaolin.ops.spc.quantize_points(pc, level)
>>> octree = kaolin.ops.spc.unbatched_points_to_octree(qpc, level)
To apply convolutions, you can use either the functional or the ``torch.nn.Module`` version, analogous to ``torch.nn.functional.conv3d`` and ``torch.nn.Conv3d``:
>>> max_level, pyramids, exsum = kaolin.ops.spc.scan_octrees(octrees, lengths)
>>> point_hierarchies = kaolin.ops.spc.generate_points(octrees, pyramids, exsum)
>>> kernel_vectors = torch.tensor([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
...                                 [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]],
...                                dtype=torch.short, device='cuda')
>>> conv = kaolin.ops.spc.Conv3d(in_channels, out_channels, kernel_vectors, jump=1, bias=True).cuda()
>>> # With functional
>>> out_features, out_level = kaolin.ops.spc.conv3d(octrees, point_hierarchies, level, pyramids,
... exsum, coalescent_features, weight,
... kernel_vectors, jump, bias)
>>> # With nn.Module and container class
>>> input_spc = kaolin.rep.Spc(octrees, lengths)
>>> out_features, out_level = kaolin.ops.spc.conv_transpose3d(
... **input_spc.to_dict(), input=out_features, level=level,
... weight=weight, kernel_vectors=kernel_vectors, jump=jump, bias=bias)
To apply ray tracing, we currently only support the non-batched version; for instance, here with RGB values as per-point features:
>>> max_level, pyramids, exsum = kaolin.ops.spc.scan_octrees(
...     octree, torch.tensor([len(octree)], dtype=torch.int32, device='cuda'))
>>> point_hierarchy = kaolin.ops.spc.generate_points(octree, pyramids, exsum)
>>> ridx, pidx, depth = kaolin.render.spc.unbatched_raytrace(octree, point_hierarchy, pyramids[0], exsum,
... origin, direction, max_level)
>>> first_hits_mask = kaolin.render.spc.mark_pack_boundaries(ridx)
>>> first_hits_point = pidx[first_hits_mask]
>>> first_hits_rgb = rgb[first_hits_point - pyramids[max_level - 2]]
Going further with SPC:
=======================
Examples:
----------------------------
See our Jupyter notebook for a walk-through of SPC features:
`examples/tutorial/understanding_spcs_tutorial.ipynb <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/understanding_spcs_tutorial.ipynb>`_
And also our recipes for simple examples of how to use SPC:
* `spc_basics.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/spc/spc_basics.py>`_: showing attributes of an SPC object
* `spc_dual_octree.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/spc/spc_dual_octree.py>`_: computing and explaining the dual of an SPC octree
* `spc_trilinear_interp.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/spc/spc_trilinear_interp.py>`_: computing trilinear interpolation of a point cloud on an SPC
SPC Documentation:
------------------
Functions useful for working with SPCs are available in the following modules:
* :ref:`kaolin.ops.spc<kaolin.ops.spc>` - general explanation and operations
* :ref:`kaolin.render.spc<kaolin.render.spc>` - rendering utilities
* :class:`kaolin.rep.Spc` - high-level wrapper

View File

@@ -0,0 +1,101 @@
.. _tutorial_index:
Tutorial Index
==============
Kaolin provides tutorials as IPython notebooks, docs pages and simple scripts. Note that the links
point to master.
Detailed Tutorials
------------------
* `Camera and Rasterization <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/camera_and_rasterization.ipynb>`_: Rasterize ShapeNet mesh with nvdiffrast and camera:
* Load ShapeNet mesh
* Preprocess mesh and materials
* Create a camera with ``from_args()`` general constructor
* Render a mesh with multiple materials with nvdiffrast
* Move camera and see the resulting rendering
* `Optimizing Diffuse Lighting <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/diffuse_lighting.ipynb>`_: Optimize lighting parameters with spherical gaussians and spherical harmonics:
* Load an obj mesh with normals and materials
* Rasterize the diffuse and specular albedo
* Render and optimize diffuse lighting:
* Spherical harmonics
* Spherical gaussian with inner product implementation
* Spherical gaussian with fitted approximation
* `Optimize Diffuse and Specular Lighting with Spherical Gaussians <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/sg_specular_lighting.ipynb>`_:
* Load an obj mesh with normals and materials
* Generate view rays from camera
* Rasterize the diffuse and specular albedo
* Render and optimize diffuse and specular lighting with spherical gaussians
* `Working with Surface Meshes <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/working_with_meshes.ipynb>`_:
* loading and constructing :class:`kaolin.rep.SurfaceMesh` objects
* batching of meshes
* auto-computing common attributes (like ``face_normals``)
* `Deep Marching Tetrahedra <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/dmtet_tutorial.ipynb>`_: reconstructs a tetrahedral mesh from point clouds with `DMTet <https://nv-tlabs.github.io/DMTet/>`_, covering:
* generating data with Omniverse Kaolin App
* loading point clouds from a ``.usd`` file
* chamfer distance as a loss function
* differentiable marching tetrahedra
* using Timelapse API for 3D checkpoints
* visualizing 3D results of training
* `Understanding Structured Point Clouds (SPCs) <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/understanding_spcs_tutorial.ipynb>`_: walks through SPC features, covering:
* under-the-hood explanation of SPC, why it's useful and key ops
* loading a mesh
* sampling a point cloud
* converting a point cloud to SPC
* setting up camera
* rendering SPC with ray tracing
* storing features in an SPC
* `Differentiable Rendering <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/dibr_tutorial.ipynb>`_: optimizes a triangular mesh from images using `DIB-R <https://github.com/nv-tlabs/DIB-R-Single-Image-3D-Reconstruction>`_ renderer, covering:
* generating data with Omniverse Kaolin App, and loading this synthetic data
* loading a mesh
* computing mesh laplacian
* DIB-R rasterization
* differentiable texture mapping
* computing mask intersection-over-union loss (IOU)
* using Timelapse API for 3D checkpoints
* visualizing 3D results of training
* `Fitting a 3D Bounding Box <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/bbox_tutorial.ipynb>`_: fits a 3D bounding box around an object in images using `DIB-R <https://github.com/nv-tlabs/DIB-R-Single-Image-3D-Reconstruction>`_ renderer, covering:
* generating data with Omniverse Kaolin App, and loading this synthetic data
* loading a mesh
* DIB-R rasterization
* computing mask intersection-over-union loss (IOU)
* :ref:`3d_viz`: explains saving 3D checkpoints and visualizing them, covering:
* using Timelapse API for writing 3D checkpoints
* understanding output file format
* visualizing 3D checkpoints using Omniverse Kaolin App
* visualizing 3D checkpoints using bundled ``kaolin-dash3d`` commandline utility
* `Reconstructing Point Cloud with DMTet <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/dmtet_tutorial.ipynb>`_: Trains an SDF estimator to reconstruct a mesh from a point cloud covering:
* using point clouds data generated with Omniverse Kaolin App
* loading point clouds from a USD file
* defining losses and regularizer for a mesh with point cloud ground truth
* applying marching tetrahedra
* using Timelapse API for 3D checkpoints
* visualizing 3D checkpoints using ``kaolin-dash3d``
Simple Recipes
--------------
* I/O and Data Processing:
* `usd_kitchenset.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/usd_kitchenset.py>`_: loading multiple meshes from a ``.usd`` file and saving
* `spc_from_pointcloud.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/dataload/spc_from_pointcloud.py>`_: converting a point cloud to SPC object
* `occupancy_sampling.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/preprocess/occupancy_sampling.py>`_: computing occupancy function of points in a mesh using ``check_sign``
* `spc_basics.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/spc/spc_basics.py>`_: showing attributes of an SPC object
* `spc_dual_octree.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/spc/spc_dual_octree.py>`_: computing and explaining the dual of an SPC octree
* `spc_trilinear_interp.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/spc/spc_trilinear_interp.py>`_: computing trilinear interpolation of a point cloud on an SPC
* Visualization:
* `visualize_main.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/visualize_main.py>`_: using Timelapse API to write mock 3D checkpoints
* `fast_mesh_sampling.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/preprocess/fast_mesh_sampling.py>`_: using CachedDataset to preprocess a ShapeNet dataset so that point clouds can be sampled efficiently at runtime
* Camera:
* `cameras_differentiable.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/camera/cameras_differentiable.py>`_: optimize a camera position
* `camera_transforms.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/camera/camera_transforms.py>`_: using :func:`Camera.transform()` function
* `camera_ray_tracing.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/camera/camera_ray_tracing.py>`_: how to design a ray generating function using :class:`Camera` objects
* `camera_properties.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/camera/camera_properties.py>`_: exposing some of the camera attributes and properties
* `camera_opengl_shaders.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/camera/camera_opengl_shaders.py>`_: Using the camera with glumpy
* `camera_movement.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/camera/camera_movement.py>`_: Manipulating a camera position and zoom
* `camera_init_simple.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/camera/camera_init_simple.py>`_: Making Camera objects with the flexible :func:`Camera.from_args()` constructor
* `camera_init_explicit.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/camera/camera_init_explicit.py>`_: Making :class:`CameraIntrinsics` and :class:`CameraExtrinsics` with all the different constructors available
* `camera_coordinate_systems.py <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/camera/camera_coordinate_systems.py>`_: Changing coordinate system in a :class:`Camera` object


@@ -0,0 +1,4 @@
numpy<1.27.0,>=1.19.5
scipy==1.10.1
-f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
torch==1.8.2+cpu


@@ -0,0 +1,18 @@
# Quick Start: Kaolin Recipes
<hr>
For a quick start with Kaolin, see the example snippets included below. <br>
In-depth guides are available in the [tutorials](https://kaolin.readthedocs.io/en/latest/notes/tutorial_index.html) section.
## Data
### Converting Data
<hr>
* [Point cloud to SPC](https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/dataload/spc_from_pointcloud.py)
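
A minimal sketch of this conversion (assuming a CUDA device is available; the toy `points` tensor is illustrative):

```python
import torch
import kaolin

# A toy point cloud; coordinates are expected to be normalized to the range [-1, 1]
points = torch.rand(1000, 3, device='cuda') * 2.0 - 1.0
# Quantize into a Structured Point Cloud (a compressed octree) with 4 levels of detail
spc = kaolin.ops.conversions.pointcloud.unbatched_pointcloud_to_spc(pointcloud=points, level=4)
print(spc)
```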
## 3D Formats
### SPC / Octree based Ops
<hr>
* [SPC: Basic Usage](https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/recipes/spc/spc_basics.py)


@@ -0,0 +1,25 @@
# ==============================================================================================================
# The following snippet demonstrates how to change the coordinate system of the camera.
# ==============================================================================================================
import torch
import numpy as np
from kaolin.render.camera import Camera, blender_coords
device = 'cuda'
camera = Camera.from_args(
eye=torch.tensor([4.0, 4.0, 4.0]),
at=torch.tensor([0.0, 0.0, 0.0]),
up=torch.tensor([0.0, 1.0, 0.0]),
fov=30 * np.pi / 180, # In radians
width=800, height=800,
device=device
)
print(camera.basis_change_matrix)
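# blender_coords() describes Blender's world convention (right-handed, Z axis up);
# change_coordinate_system() composes this basis change into the camera extrinsics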
camera.change_coordinate_system(blender_coords())
print(camera.basis_change_matrix)
camera.reset_coordinate_system()
print(camera.basis_change_matrix)


@@ -0,0 +1,88 @@
# ==============================================================================================================
# The following snippet demonstrates how to initialize instances of kaolin's pinhole / ortho cameras
# explicitly.
# Also review `camera_init_simple` which greatly simplifies the construction methods shown here.
# ==============================================================================================================
import math
import torch
from kaolin.render.camera import Camera, CameraExtrinsics, PinholeIntrinsics, OrthographicIntrinsics
#################################################################
# Camera 1: from eye, at, up and focal length (Perspective) #
#################################################################
# Build the camera extrinsics object from lookat
eye = torch.tensor([0.0, 0.0, -1.0], device='cuda') # Camera positioned here in world coords
at = torch.tensor([0.0, 0.0, 0.0], device='cuda') # Camera observing this world point
up = torch.tensor([0.0, 1.0, 0.0], device='cuda') # Camera up direction vector
extrinsics = CameraExtrinsics.from_lookat(eye, at, up)
# Build a pinhole camera's intrinsics. This time we use focal length (other useful args: focal_y, x0, y0)
intrinsics = PinholeIntrinsics.from_focal(width=800, height=600, focal_x=1.0, device='cuda')
# Combine extrinsics and intrinsics to obtain the full camera object
camera_1 = Camera(extrinsics=extrinsics, intrinsics=intrinsics)
print('--- Camera 1 ---')
print(camera_1)
########################################################################
# Camera 2: from camera position, orientation and fov (Perspective) #
########################################################################
# Build the camera extrinsics object from lookat
cam_pos = torch.tensor([0.0, 0.0, -1.0], device='cuda')
cam_dir = torch.tensor([[1.0, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.0, 1.0]], device='cuda') # 3x3 orientation within the world
extrinsics = CameraExtrinsics.from_camera_pose(cam_pos=cam_pos, cam_dir=cam_dir)
# Use pinhole camera intrinsics, construct using field-of-view (other useful args: camera_fov_direction, x0, y0)
intrinsics = PinholeIntrinsics.from_fov(width=800, height=600, fov=math.radians(45.0), device='cuda')
camera_2 = Camera(extrinsics=extrinsics, intrinsics=intrinsics)
print('--- Camera 2 ---')
print(camera_2)
####################################################################
# Camera 3: camera view matrix, (Orthographic) #
####################################################################
# Build the camera extrinsics object from a 4x4 world-to-camera view matrix
world2cam = torch.tensor([[1.0, 0.0, 0.0, 0.5],
[0.0, 1.0, 0.0, 0.5],
[0.0, 0.0, 1.0, 0.5],
[0.0, 0.0, 0.0, 1.0]], device='cuda') # 4x4 world-to-camera (view) matrix
extrinsics = CameraExtrinsics.from_view_matrix(view_matrix=world2cam)
# Use orthographic camera intrinsics, constructed from the view frustum dimensions
intrinsics = OrthographicIntrinsics.from_frustum(width=800, height=600, near=-800, far=800,
fov_distance=1.0, device='cuda')
camera_3 = Camera(extrinsics=extrinsics, intrinsics=intrinsics)
print('--- Camera 3 ---')
print(camera_3)
##########################################################
# Camera 4: Combining cameras #
##########################################################
# Cameras must share the same intrinsics type and non-parameter fields such as width, height, near, far
# (currently kaolin does not validate this)
camera_4 = Camera.cat((camera_1, camera_2))
print('--- Camera 4 ---')
print(camera_4)
##########################################################
# Camera 5: constructing a batch of cameras together #
##########################################################
# Extrinsics are created using batched tensors. The intrinsics are automatically broadcasted.
camera_5 = Camera.from_args(
eye=torch.tensor([[4.0, 4.0, 4.0], [4.0, 4.0, 4.0]]),
at=torch.tensor([[0.0, 0.0, 0.0], [4.0, 4.0, 4.0]]),
up=torch.tensor([[0.0, 1.0, 0.0], [4.0, 4.0, 4.0]]),
width=800, height=600, focal_x=300.0
)
print('--- Camera 5 ---')
print(camera_5)


@@ -0,0 +1,65 @@
# ==============================================================================================================
# The following snippet demonstrates how to initialize instances of kaolin's pinhole / ortho cameras.
# ==============================================================================================================
import torch
import numpy as np
from kaolin.render.camera import Camera
device = 'cuda'
perspective_camera_1 = Camera.from_args(
eye=torch.tensor([4.0, 4.0, 4.0]),
at=torch.tensor([0.0, 0.0, 0.0]),
up=torch.tensor([0.0, 1.0, 0.0]),
fov=30 * np.pi / 180, # In radians
x0=0.0, y0=0.0,
width=800, height=800,
near=1e-2, far=1e2,
dtype=torch.float64,
device=device
)
print('--- Perspective Camera 1 ---')
print(perspective_camera_1)
perspective_camera_2 = Camera.from_args(
eye=torch.tensor([4.0, 4.0, 4.0]),
at=torch.tensor([0.0, 0.0, 0.0]),
up=torch.tensor([0.0, 1.0, 0.0]),
fov=30 * np.pi / 180, # In radians
width=800, height=800,
device=device
)
print('--- Perspective Camera 2 ---')
print(perspective_camera_2)
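# from_args() infers the intrinsics type from its arguments:
# passing `fov` above selected pinhole intrinsics, while `fov_distance` below selects orthographic ones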
ortho_camera_1 = Camera.from_args(
eye=torch.tensor([4.0, 4.0, 4.0]),
at=torch.tensor([0.0, 0.0, 0.0]),
up=torch.tensor([0.0, 1.0, 0.0]),
width=800, height=800,
near=-800, far=800,
fov_distance=1.0,
dtype=torch.float64,
device=device
)
print('--- Orthographic Camera 1 ---')
print(ortho_camera_1)
ortho_camera_2 = Camera.from_args(
view_matrix=torch.tensor([[1.0, 0.0, 0.0, 0.5],
[0.0, 1.0, 0.0, 0.5],
[0.0, 0.0, 1.0, 0.5],
[0.0, 0.0, 0.0, 1.0]]),
width=800, height=800,
dtype=torch.float64,
device=device
)
print('--- Orthographic Camera 2 ---')
print(ortho_camera_2)


@@ -0,0 +1,27 @@
# ==============================================================================================================
# The following snippet demonstrates how to manipulate kaolin's camera.
# ==============================================================================================================
import torch
from kaolin.render.camera import Camera
camera = Camera.from_args(
eye=torch.tensor([0.0, 0.0, -1.0]),
at=torch.tensor([0.0, 0.0, 0.0]),
up=torch.tensor([0.0, 1.0, 0.0]),
width=800, height=600,
fov=1.0,
device='cuda'
)
# Extrinsic rigid transformations managed by CameraExtrinsics
camera.move_forward(amount=10.0) # Translate forward in world coordinates (this is wisp's mouse zoom)
camera.move_right(amount=-5.0) # Translate left in world coordinates
camera.move_up(amount=5.0) # Translate up in world coordinates
camera.rotate(yaw=0.1, pitch=0.02, roll=1.0) # Rotate the camera
# Intrinsic lens transformations managed by CameraIntrinsics
# Zoom in to decrease the field of view. For orthographic projection the internal implementation differs,
# as there is no actual fov or depth concept (hence a synthetic "fov distance" parameter is used; see the projection matrix)
camera.zoom(amount=0.5)


@@ -0,0 +1,57 @@
# ==============================================================================================================
# The following snippet demonstrates how to use the camera for generating a view-projection matrix
# as used in opengl shaders.
# ==============================================================================================================
import torch
import numpy as np
from kaolin.render.camera import Camera
# !!! This example is not a complete runnable program -- it is kept minimal to show the integration !!!
# !!! between the OpenGL shader and the camera matrix !!!
try:
from glumpy import gloo
except ImportError:
# Fallback stub so the snippet can still be executed without glumpy installed
class DummyGloo(object):
def Program(self, vertex, fragment):
# see: https://glumpy.readthedocs.io/en/latest/api/gloo-shader.html#glumpy.gloo.Program
return dict([])
gloo = DummyGloo()
device = 'cuda'
camera = Camera.from_args(
eye=torch.tensor([4.0, 4.0, 4.0]),
at=torch.tensor([0.0, 0.0, 0.0]),
up=torch.tensor([0.0, 1.0, 0.0]),
fov=30 * np.pi / 180, # In radians
x0=0.0, y0=0.0,
width=800, height=800,
near=1e-2, far=1e2,
dtype=torch.float64,
device=device
)
vertex = """
uniform mat4 u_viewprojection;
attribute vec3 position;
attribute vec4 color;
varying vec4 v_color;
void main()
{
v_color = color;
gl_Position = u_viewprojection * vec4(position, 1.0f);
} """
fragment = """
varying vec4 v_color;
void main()
{
gl_FragColor = v_color;
} """
# Compile GL program
gl_program = gloo.Program(vertex, fragment)
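# Note the transpose below: kaolin matrices are row-major torch tensors, while OpenGL uniforms expect column-major layout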
gl_program["u_viewprojection"] = camera.view_projection_matrix()[0].cpu().numpy().T


@@ -0,0 +1,47 @@
# ==============================================================================================================
# The following snippet demonstrates various camera properties
# ==============================================================================================================
import torch
import numpy as np
from kaolin.render.camera import Camera
device = 'cuda'
camera = Camera.from_args(
eye=torch.tensor([4.0, 4.0, 4.0]),
at=torch.tensor([0.0, 0.0, 0.0]),
up=torch.tensor([0.0, 1.0, 0.0]),
fov=30 * np.pi / 180, # In radians
width=800, height=800,
dtype=torch.float32,
device=device
)
print(camera.width)
print(camera.height)
print(camera.lens_type)
print(camera.device)
camera = camera.cpu()
print(camera.device)
# Create a batched camera and view single element
camera = Camera.cat((camera, camera))
print(camera)
camera = camera[0]
print(camera)
print(camera.dtype)
camera = camera.half()
print(camera.dtype)
camera = camera.double()
print(camera.dtype)
camera = camera.float()
print(camera.dtype)
print(camera.extrinsics.requires_grad)
print(camera.intrinsics.requires_grad)
print(camera.to('cuda', torch.float64))


@@ -0,0 +1,71 @@
# ==============================================================================================================
# The following snippet demonstrates how to use the camera for implementing a ray-generation function
# for ray based applications.
# ==============================================================================================================
import torch
import numpy as np
from typing import Tuple
from kaolin.render.camera import Camera, CameraFOV
def generate_pixel_grid(res_x=None, res_y=None, device='cuda'):
h_coords = torch.arange(res_y, device=device) # One coordinate per pixel row
w_coords = torch.arange(res_x, device=device) # One coordinate per pixel column
pixel_y, pixel_x = torch.meshgrid(h_coords, w_coords)
pixel_x = pixel_x + 0.5
pixel_y = pixel_y + 0.5
return pixel_y, pixel_x
def generate_perspective_rays(camera: Camera, pixel_grid: Tuple[torch.Tensor, torch.Tensor]):
# pixel_grid should remain immutable (new tensors are implicitly created here)
pixel_y, pixel_x = pixel_grid
pixel_x = pixel_x.to(camera.device, camera.dtype)
pixel_y = pixel_y.to(camera.device, camera.dtype)
# Account for principal point offset from canvas center
pixel_x = pixel_x - camera.x0
pixel_y = pixel_y + camera.y0
# Normalize pixel coordinates to the range [-1, 1]; both tensors have shape (res_y, res_x)
pixel_x = 2 * (pixel_x / camera.width) - 1.0
pixel_y = 2 * (pixel_y / camera.height) - 1.0
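# In camera space, rays originate at the camera origin and pass through the image plane at z = -1
# (kaolin cameras assume a right-handed system looking down the negative z axis)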
ray_dir = torch.stack((pixel_x * camera.tan_half_fov(CameraFOV.HORIZONTAL),
-pixel_y * camera.tan_half_fov(CameraFOV.VERTICAL),
-torch.ones_like(pixel_x)), dim=-1)
ray_dir = ray_dir.reshape(-1, 3) # Flatten grid rays to 1D array
ray_orig = torch.zeros_like(ray_dir)
# Transform from camera to world coordinates
ray_orig, ray_dir = camera.extrinsics.inv_transform_rays(ray_orig, ray_dir)
ray_dir /= torch.linalg.norm(ray_dir, dim=-1, keepdim=True)
ray_orig, ray_dir = ray_orig[0], ray_dir[0] # Assume a single camera
return ray_orig, ray_dir, camera.near, camera.far
camera = Camera.from_args(
eye=torch.tensor([4.0, 4.0, 4.0]),
at=torch.tensor([0.0, 0.0, 0.0]),
up=torch.tensor([0.0, 1.0, 0.0]),
fov=30 * np.pi / 180, # In radians
x0=0.0, y0=0.0,
width=800, height=800,
near=1e-2, far=1e2,
dtype=torch.float64,
device='cuda'
)
pixel_grid = generate_pixel_grid(200, 200)
ray_orig, ray_dir, near, far = generate_perspective_rays(camera, pixel_grid)
print('Ray origins:')
print(ray_orig)
print('Ray directions:')
print(ray_dir)
print('Near clipping plane:')
print(near)
print('Far clipping plane:')
print(far)


@@ -0,0 +1,59 @@
# ==============================================================================================================
# The following snippet demonstrates how to use the camera transform directly on vectors
# ==============================================================================================================
import torch
import numpy as np
from kaolin.render.camera import Camera
device = 'cuda'
camera = Camera.from_args(
eye=torch.tensor([4.0, 4.0, 4.0]),
at=torch.tensor([0.0, 0.0, 0.0]),
up=torch.tensor([0.0, 1.0, 0.0]),
fov=30 * np.pi / 180, # In radians
width=800, height=800,
dtype=torch.float32,
device=device
)
print('View projection matrix')
print(camera.view_projection_matrix())
print('View matrix: world2cam')
print(camera.view_matrix())
print('Inv View matrix: cam2world')
print(camera.inv_view_matrix())
print('Projection matrix')
print(camera.projection_matrix())
vectors = torch.randn(10, 3).to(camera.device, camera.dtype) # Create a batch of points
# For ortho and perspective: this is equivalent to multiplying camera.projection_matrix() @ vectors
# and then dividing by the w coordinate (perspective division)
print(camera.transform(vectors))
# For ray tracing we have camera.inv_transform_rays which performs multiplication with inv_view_matrix()
# (just for the extrinsics part)
# Can also access properties directly:
# --
# View matrix components (camera space)
print(camera.R)
print(camera.t)
# Camera axes and position in world coordinates
print(camera.cam_pos())
print(camera.cam_right())
print(camera.cam_up())
print(camera.cam_forward())
print(camera.focal_x)
print(camera.focal_y)
print(camera.x0)
print(camera.y0)


@@ -0,0 +1,65 @@
# ====================================================================================================================
# The following snippet demonstrates how cameras can be used for optimizing specific extrinsic / intrinsic parameters
# ====================================================================================================================
import torch
import torch.optim as optim
from kaolin.render.camera import Camera
# Create simple perspective camera
cam = Camera.from_args(
eye=torch.tensor([4.0, 4.0, 4.0]),
at=torch.tensor([0.0, 0.0, 0.0]),
up=torch.tensor([0.0, 1.0, 0.0]),
width=800, height=600, focal_x=300.0
)
# When requires_grad is on, the camera will automatically switch to a differentiation-friendly backend
# (implicitly calling cam.switch_backend('matrix_6dof_rotation') )
cam.requires_grad_(True)
# Constrain the camera to optimize only the fov and camera position (it cannot rotate)
ext_mask, int_mask = cam.gradient_mask('t', 'focal_x', 'focal_y')
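# gradient_mask returns boolean masks over the extrinsics / intrinsics parameter tensors;
# zeroing the masked-out gradients in the hooks below effectively freezes those parameters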
ext_params, int_params = cam.parameters()
ext_params.register_hook(lambda grad: grad * ext_mask.float())
grad_scale = 1e5 # Used to move the projection matrix elements faster
int_params.register_hook(lambda grad: grad * int_mask.float() * grad_scale)
# Make the target camera a bit noisy
# (the camera can't simply be copied here after requires_grad is set, as a camera.detach() op is still missing)
target = Camera.from_args(
eye=torch.tensor([4.0, 4.0, 4.0]),
at=torch.tensor([0.0, 0.0, 0.0]),
up=torch.tensor([0.0, 1.0, 0.0]),
width=800, height=600, focal_x=300.0
)
target.t = target.t + torch.randn_like(target.t)
target.focal_x = target.focal_x + torch.randn_like(target.focal_x)
target.focal_y = target.focal_y + torch.randn_like(target.focal_y)
target_mat = target.view_projection_matrix()
# Save for later so we have some comparison of what changed
initial_view = cam.view_matrix().detach().clone()
initial_proj = cam.projection_matrix().detach().clone()
# Train a few steps
optimizer = optim.SGD(cam.parameters(), lr=0.1, momentum=0.9)
for idx in range(10):
view_proj = cam.view_projection_matrix()
optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(target_mat, view_proj)
loss.backward()
optimizer.step()
print(f'Iteration {idx}:')
print(f'Loss: {loss.item()}')
print(f'Extrinsics: {cam.extrinsics.parameters()}')
print(f'Intrinsics: {cam.intrinsics.parameters()}')
# Projection matrix grads are much smaller, as they're scaled by the view-frustum dimensions
print(f'View matrix before: {initial_view}')
print(f'View matrix after: {cam.view_matrix()}')
print(f'Projection matrix before: {initial_proj}')
print(f'Projection matrix after: {cam.projection_matrix()}')
print('Did the camera change?')
changed = not (torch.allclose(cam.view_matrix(), initial_view) and torch.allclose(cam.projection_matrix(), initial_proj))
print(changed)


@@ -0,0 +1,52 @@
# ==============================================================================================================
# The following snippet demonstrates how to build kaolin's compressed octree,
# "Structured Point Cloud (SPC)", from raw point cloud data.
# ==============================================================================================================
# See also:
#
# - Tutorial: Understanding Structured Point Clouds (SPCs)
# https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/understanding_spcs_tutorial.ipynb
#
# - Documentation: Structured Point Clouds
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.spc.html?highlight=spc#kaolin-ops-spc
# ==============================================================================================================
import torch
import kaolin
# Create some point data with features
# Point coordinates are expected to be normalized to the range [-1, 1].
points = torch.tensor([
[-1.0, -1.0, -1.0],
[-0.9, -0.95, -1.0],
[1.0, 0.0, 0.0],
[0.0, -0.1, 0.3],
[1.0, 1.0, 1.0]
], device='cuda')
features = torch.tensor([
[0.1, 1.1, 2.1],
[0.2, 1.2, 2.2],
[0.3, 1.3, 2.3],
[0.4, 1.4, 2.4],
[0.5, 1.5, 2.5],
], device='cuda')
# The Structured Point Cloud will use 3 levels of detail
level = 3
# In kaolin, operations are batched by default
# Here, in contrast, we use a single point cloud and therefore invoke an unbatched conversion function.
# For more information about batched operations, see:
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.batch.html#kaolin-ops-batch
spc = kaolin.ops.conversions.pointcloud.unbatched_pointcloud_to_spc(pointcloud=points,
level=level,
features=features)
# SPC is an object which keeps track of the various octree components
print(spc)
print(f'SPC keeps track of the following cells in {level} levels of detail (parents + leaves):\n'
f' {spc.point_hierarchies}\n')
# Note that the point cloud coordinates are quantized to integer coordinates.
# During conversion, when points fall within the same cell, their features are averaged
print(f'Features for leaf cells:\n {spc.features}')


@@ -0,0 +1,140 @@
# ==============================================================================================================
# The following snippet shows how to use kaolin to preprocess a ShapeNet dataset
# so that point clouds can be quickly sampled from the meshes at runtime
# ==============================================================================================================
# See also:
# - Documentation: ShapeNet dataset
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.io.shapenet.html#kaolin.io.shapenet.ShapeNetV2
# - Documentation: CachedDataset
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.io.dataset.html#kaolin.io.dataset.CachedDataset
# - Documentation: Mesh Ops:
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.mesh.html
# - Documentation: Obj loading:
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.io.obj.html
# ==============================================================================================================
import argparse
import os
import torch
import kaolin as kal
parser = argparse.ArgumentParser(description='')
parser.add_argument('--shapenet-dir', type=str, default=os.getenv('KAOLIN_TEST_SHAPENETV2_PATH'),
help='Path to shapenet (v2)')
parser.add_argument('--cache-dir', type=str, default='/tmp/dir',
help='Path where output of the dataset is cached')
parser.add_argument('--num-samples', type=int, default=10,
help='Number of points to sample on the mesh')
parser.add_argument('--cache-at-runtime', action='store_true',
help='run the preprocessing lazily')
parser.add_argument('--num-workers', type=int, default=0,
help='Number of workers during preprocessing (not used with --cache-at-runtime)')
args = parser.parse_args()
def preprocessing_transform(inputs):
"""This the transform used in shapenet dataset __getitem__.
Three tasks are done:
1) Get the areas of each faces, so it can be used to sample points
2) Get a proper list of RGB diffuse map
3) Get the material associated to each face
"""
mesh = inputs['mesh']
vertices = mesh.vertices.unsqueeze(0)
faces = mesh.faces
# Some materials don't contain an RGB texture map, so we treat the single diffuse value
# as a one-pixel texture map of shape (1, 3, 1, 1).
# We apply a modulo 1 on the UVs because ShapeNet follows GL_REPEAT behavior (see: https://open.gl/textures)
uvs = torch.nn.functional.pad(mesh.uvs.unsqueeze(0) % 1, (0, 0, 0, 1)) * 2. - 1.
uvs[:, :, 1] = -uvs[:, :, 1]
face_uvs_idx = mesh.face_uvs_idx
face_material_idx = mesh.material_assignments
materials = [m['map_Kd'].permute(2, 0, 1).unsqueeze(0).float() / 255. if 'map_Kd' in m else
m['Kd'].reshape(1, 3, 1, 1)
for m in mesh.materials]
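# Faces without UV coordinates are marked with index -1: remap them to 0 so indexing succeeds,
# then zero out the resulting face UVs just below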
mask = face_uvs_idx == -1
face_uvs_idx[mask] = 0
face_uvs = kal.ops.mesh.index_vertices_by_faces(
uvs, face_uvs_idx
)
face_uvs[:, mask] = 0.
outputs = {
'vertices': vertices,
'faces': faces,
'face_areas': kal.ops.mesh.face_areas(vertices, faces),
'face_uvs': face_uvs,
'materials': materials,
'face_material_idx': face_material_idx,
'name': inputs['name']
}
return outputs
class SamplePointsTransform(object):
def __init__(self, num_samples):
self.num_samples = num_samples
def __call__(self, inputs):
coords, face_idx, feature_uvs = kal.ops.mesh.sample_points(
inputs['vertices'],
inputs['faces'],
num_samples=self.num_samples,
areas=inputs['face_areas'],
face_features=inputs['face_uvs']
)
coords = coords.squeeze(0)
face_idx = face_idx.squeeze(0)
feature_uvs = feature_uvs.squeeze(0)
# Interpolate the RGB values from the texture map
point_materials_idx = inputs['face_material_idx'][face_idx]
all_point_colors = torch.zeros((self.num_samples, 3))
for i, material in enumerate(inputs['materials']):
mask = point_materials_idx == i
point_color = torch.nn.functional.grid_sample(
material,
feature_uvs[mask].reshape(1, 1, -1, 2),
mode='bilinear',
align_corners=False,
padding_mode='border')
all_point_colors[mask] = point_color[0, :, 0, :].permute(1, 0)
outputs = {
'coords': coords,
'face_idx': face_idx,
'colors': all_point_colors,
'name': inputs['name']
}
return outputs
# Make ShapeNet dataset with preprocessing transform
ds = kal.io.shapenet.ShapeNetV2(root=args.shapenet_dir,
categories=['dishwasher'],
train=True,
split=0.1,
with_materials=True,
output_dict=True,
transform=preprocessing_transform)
# Cache the result of the preprocessing transform
# and apply the sampling at runtime
pc_ds = kal.io.dataset.CachedDataset(ds,
cache_dir=args.cache_dir,
save_on_disk=True,
num_workers=args.num_workers,
transform=SamplePointsTransform(args.num_samples),
cache_at_runtime=args.cache_at_runtime,
force_overwrite=True)
for data in pc_ds:
print("coords:\n", data['coords'])
print("face_idx:\n", data['face_idx'])
print("colors:\n", data['colors'])
print("name:\n", data['name'])


@@ -0,0 +1,48 @@
# ==============================================================================================================
# The following snippet shows how to use kaolin to test sampled values of an occupancy function
# against a watertight mesh.
# ==============================================================================================================
# See also:
# - Documentation: Triangular meshes
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.mesh.html#triangular-meshes
# ==============================================================================================================
import os
import torch
import kaolin
FILE_DIR = os.path.dirname(os.path.abspath(__file__))
mesh_path = os.path.join(FILE_DIR, os.pardir, os.pardir, "samples", "sphere.obj") # Path to some .obj file with textures
num_samples = 100000 # Number of sample points
# 1. Load a watertight mesh from obj file
mesh = kaolin.io.obj.import_mesh(mesh_path)
print(f'Loaded mesh with {len(mesh.vertices)} vertices and {len(mesh.faces)} faces.')
# 2. Preprocess mesh:
# Move tensors to CUDA device
vertices = mesh.vertices.cuda()
faces = mesh.faces.cuda()
# Kaolin assumes an exact batch format; we convert from (V, 3) to (1, V, 3), where 1 is the batch size
vertices = vertices.unsqueeze(0)
# 3. Sample random points uniformly in space, from the bounding box of the mesh + 10% margin
min_bound, _ = vertices.min(dim=1)
max_bound, _ = vertices.max(dim=1)
margin = (max_bound - min_bound) * 0.1
max_bound += margin
min_bound -= margin
occupancy_coords = (max_bound - min_bound) * torch.rand(1, num_samples, 3, device='cuda') + min_bound
# 4. Calculate occupancy value
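# check_sign returns a boolean tensor of shape (batch, num_samples): True marks points inside the watertight mesh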
occupancy_value = kaolin.ops.mesh.check_sign(vertices, faces, occupancy_coords)
# Unbatch to obtain tensors of shape (num_samples, 3) and (num_samples,)
occupancy_coords = occupancy_coords.squeeze(0)
occupancy_value = occupancy_value.squeeze(0)
percent_in_mesh = torch.count_nonzero(occupancy_value) / len(occupancy_value)
print(f'Sampled a tensor of {occupancy_coords.shape[0]} points with {occupancy_coords.shape[1]}D coordinates, uniformly in space.')
print(f'{percent_in_mesh * 100.0:.3f}% of the sampled points are inside the mesh volume.')


@@ -0,0 +1,54 @@
# ==============================================================================================================
# The following snippet demonstrates the basic usage of kaolin's compressed octree,
# termed "Structured Point Cloud (SPC)".
# Note this is a low level structure: practitioners are encouraged to visit the references below.
# ==============================================================================================================
# See also:
#
# - Code: kaolin.ops.spc.SPC
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.rep.html?highlight=SPC#kaolin.rep.Spc
#
# - Tutorial: Understanding Structured Point Clouds (SPCs)
# https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/understanding_spcs_tutorial.ipynb
#
# - Documentation: Structured Point Clouds
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.spc.html?highlight=spc#kaolin-ops-spc
# ==============================================================================================================
import torch
import kaolin
# Construct SPC from some points data. Point coordinates are expected to be normalized to the range [-1, 1].
points = torch.tensor([[-1.0, -1.0, -1.0], [-0.9, -0.95, -1.0], [1.0, 1.0, 1.0]], device='cuda')
# In kaolin, operations are batched by default
# Here, in contrast, we use a single point cloud and therefore invoke an unbatched conversion function.
# The Structured Point Cloud will be using 3 levels of detail
spc = kaolin.ops.conversions.pointcloud.unbatched_pointcloud_to_spc(pointcloud=points, level=3)
# SPC is a batched object, and most of its fields are packed.
# (see: https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.batch.html#kaolin-ops-batch )
# spc.lengths defines the boundaries between the different batched SPC instances the same object holds.
# Here we keep a single-entry batch, which has 8 octree non-leaf cells.
print(f'spc.batch_size: {spc.batch_size}')
print(f'spc.lengths (cells per batch entry): {spc.lengths}')
# SPC is hierarchical and keeps information for every level of detail from 0 to 3.
# spc.point_hierarchies keeps the sparse, zero indexed coordinates of each occupied cell, per level.
print(f'SPC keeps track of a total of {spc.point_hierarchies.shape[0]} parent + leaf cells:')
# To separate the boundaries, the spc.pyramids field is used.
# This field is not packed, unlike most other SPC fields.
pyramid_of_first_entry_in_batch = spc.pyramids[0]
cells_per_level = pyramid_of_first_entry_in_batch[0]
cumulative_cells_per_level = pyramid_of_first_entry_in_batch[1]
for i, lvl_cells in enumerate(cells_per_level[:-1]):
print(f'LOD #{i} has {lvl_cells} cells.')
# The spc.octrees field keeps track of the fundamental occupancy information of each cell in the octree.
print('The occupancy of each octant parent cell, in Morton / Z-curve order is:')
print(['{0:08b}'.format(octree_byte) for octree_byte in spc.octrees])
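# Each octree byte corresponds to one parent cell; each set bit marks an occupied child octant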
# Since SPCs are low level objects, they require bookkeeping of multiple fields.
# For ease of use, these fields are collected and tracked within a single class: kaolin.ops.spc.SPC
# See references at the header for elaborate information on how to use this object.


@@ -0,0 +1,113 @@
# ==============================================================================================================
# The following code demonstrates the usage of kaolin's "Structured Point Cloud (SPC)" 3d convolution
# functionality. Note that this sample does NOT demonstrate how to use Kaolin's Pytorch 3d convolution layers.
# Rather, 3d convolutions are used to 'filter' color data useful for level-of-detail management during
# rendering. This can be thought of as the 3d analog of generating a 2d mipmap.
#
# Note this is a low level interface: practitioners are encouraged to visit the references below.
# ==============================================================================================================
# See also:
#
# - Code: kaolin.ops.spc.SPC
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.rep.html?highlight=SPC#kaolin.rep.Spc
#
# - Tutorial: Understanding Structured Point Clouds (SPCs)
# https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/understanding_spcs_tutorial.ipynb
#
# - Documentation: Structured Point Clouds
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.spc.html?highlight=spc#kaolin-ops-spc
# ==============================================================================================================
import torch
import kaolin
# The following function applies a series of SPC convolutions to encode the entire hierarchy into a single tensor.
# Each step applies a convolution on the "highest" level of the SPC with some averaging kernel.
# Therefore, each step locally averages the "colored point hierarchy", where each "colored point"
# corresponds to a point in the SPC point hierarchy.
# For a description of the inputs 'octree', 'point_hierarchy', 'level', 'pyramids', and 'exsum', as well as a
# detailed description of the mathematics of SPC convolutions, see:
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.spc.html?highlight=SPC#kaolin.ops.spc.Conv3d
# The input 'colors' is a PyTorch tensor containing color features corresponding to some 'level' of the hierarchy.
def encode(colors, octree, point_hierarchy, pyramids, exsum, level):
# SPC convolutions are characterized by a set of 'kernel vectors' and corresponding 'weights'.
# kernel_vectors is the "kernel support" -
# a listing of 3D coordinates where the weights of the convolution are non-null,
# in this case a simple dense 2x2x2 grid.
kernel_vectors = torch.tensor([[0,0,0],[0,0,1],[0,1,0],[0,1,1],
[1,0,0],[1,0,1],[1,1,0],[1,1,1]],
dtype=torch.short, device='cuda')
# The weights specify how the input colors 'under' the kernel are mapped to an output color,
# in this case a simple average.
weights = torch.diag(torch.tensor([0.125, 0.125, 0.125, 0.125],
dtype=torch.float32, device='cuda')) # Tensor of (4, 4)
weights = weights.repeat(8,1,1).contiguous() # Tensor of (8, 4, 4)
# Storage for the output color hierarchy is allocated. This includes points at the bottom of the hierarchy,
# as well as intermediate SPC levels (which may store different features)
color_hierarchy = torch.empty((pyramids[0,1,level+1],4), dtype=torch.float32, device='cuda')
# Copy the input colors into the highest level of color_hierarchy. pyramids is used here to select all leaf
# points at the bottom of the hierarchy and set them to some pre-sampled random color. Points at intermediate
# levels are left empty.
color_hierarchy[pyramids[0,1,level]:pyramids[0,1,level+1]] = colors[:]
# Performs the 3d convolutions in a bottom up fashion to 'filter' colors from the previous level
for l in range(level,0,-1):
# Apply the 3d convolution. Note that jump=1 means the input and output differ by 1 level.
# This is analogous to a stride of 2 in grid-based convolutions
colors, ll = kaolin.ops.spc.conv3d(octree,
point_hierarchy,
l,
pyramids,
exsum,
colors,
weights,
kernel_vectors,
jump=1)
# Copy the output colors into the color hierarchy
color_hierarchy[pyramids[0,1,ll]:pyramids[0,1,l]] = colors[:]
print(f"At level {l}, output feature shape is:\n{colors.shape}")
# Normalize the colors.
color_hierarchy /= color_hierarchy[:,3:]
# Normalization is needed here due to the sparse nature of SPCs. When a point under a kernel is not
# present in the point hierarchy, the corresponding data is treated as zeros. Normalization is equivalent
# to having the filter weights sum to one. This may not always be desirable, e.g. alpha blending.
return color_hierarchy
# Highest level of SPC
level = 3
# Construct a fully occupied Structured Point Cloud with N levels of detail
# See https://kaolin.readthedocs.io/en/latest/modules/kaolin.rep.html?highlight=SPC#kaolin.rep.Spc
spc = kaolin.rep.Spc.make_dense(level, device='cuda')
# In kaolin, operations are batched by default, the spc object above contains a single item batch, hence [0]
num_points_last_lod = spc.num_points(level)[0]
# Create tensor of random colors for all points in the highest level of detail
colors = torch.rand((num_points_last_lod, 4), dtype=torch.float32, device='cuda')
# Set 4th color channel to one for subsequent color normalization
colors[:,3] = 1
print(f'Input SPC features: {colors.shape}')
# Encode color hierarchy by invoking a series of convolutions, until we end up with a single tensor.
color_hierarchy = encode(colors=colors,
octree=spc.octrees,
point_hierarchy=spc.point_hierarchies,
pyramids=spc.pyramids,
exsum=spc.exsum,
level=level)
# Print root node color
print('Final encoded value (average of averages):')
print(color_hierarchy[0])
# This will be the average of averages, over the entire spc hierarchy. Since the initial random colors
# came from a uniform distribution, this should approach [0.5, 0.5, 0.5, 1.0] as 'level' increases


@@ -0,0 +1,73 @@
# ==============================================================================================================
# The following snippet demonstrates the basic usage of kaolin's dual octree, an octree which keeps features
# at the 8 corners of each cell (the primary octree keeps a single feature at each cell center).
# The implementation is realized through kaolin's "Structured Point Cloud (SPC)".
# Note this is a low level structure: practitioners are encouraged to visit the references below.
# ==============================================================================================================
# See also:
#
# - Code: kaolin.ops.spc.SPC
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.rep.html?highlight=SPC#kaolin.rep.Spc
#
# - Tutorial: Understanding Structured Point Clouds (SPCs)
# https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/understanding_spcs_tutorial.ipynb
#
# - Documentation: Structured Point Clouds
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.spc.html?highlight=spc#kaolin-ops-spc
# ==============================================================================================================
import torch
import kaolin
# Construct SPC from some points data. Point coordinates are expected to be normalized to the range [-1, 1].
# To keep the example readable, by default we set the SPC level to 1: root + 8 cells
# (note that with a single LOD, only 2 cells should be occupied due to quantization)
level = 1
points = torch.tensor([[-1.0, -1.0, -1.0], [-0.9, -0.95, -1.0], [1.0, 1.0, 1.0]], device='cuda')
spc = kaolin.ops.conversions.pointcloud.unbatched_pointcloud_to_spc(pointcloud=points, level=level)
# Construct the dual octree with an unbatched operation, each cell is now converted to 8 corners
# More info about batched / packed tensors at:
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.batch.html#kaolin-ops-batch
pyramid = spc.pyramids[0] # The pyramids field is batched, we select the singleton entry, #0
point_hierarchy = spc.point_hierarchies # point_hierarchies is a packed tensor, so no need to unbatch
point_hierarchy_dual, pyramid_dual = kaolin.ops.spc.unbatched_make_dual(point_hierarchy=point_hierarchy,
pyramid=pyramid)
# Let's compare the primary and dual octrees.
# The function 'unbatched_get_level_points' yields a tensor which lists all points / sparse cell coordinates occupied
# at a certain level.
# [Primary octree] [Dual octree]
# . . . . . . . . X . . .X. . . X
# | . X . X | . | . . | .
# | . . . . . . . . ===> | X . . X . . . X
# | | . X . | X . X | . . | .
# | | . . . . . . . . | | X . . .X. . . X
# | | | | | | X | | |
# . .|. . | . . . | ===> X .|. . X . . X |
# .| X |. X . | .| |. . X
# . . | . . . . . | X . | . X . . X |
# . | X . X . | . | . . |
# . . . . . . . . X . . X . . . X
#
primary_lod0 = kaolin.ops.spc.unbatched_get_level_points(point_hierarchy, pyramid, level=0)
primary_lod1 = kaolin.ops.spc.unbatched_get_level_points(point_hierarchy, pyramid, level=1)
dual_lod0 = kaolin.ops.spc.unbatched_get_level_points(point_hierarchy_dual, pyramid_dual, level=0)
dual_lod1 = kaolin.ops.spc.unbatched_get_level_points(point_hierarchy_dual, pyramid_dual, level=1)
print(f'Primary octree: Level 0 (root cells): \n{primary_lod0}')
print(f'Dual octree: Level 0 (root corners): \n{dual_lod0}')
print(f'Primary octree: Level 1 (cells): \n{primary_lod1}')
print(f'Dual octree: Level 1 (corners): \n{dual_lod1}')
# kaolin allows for interchangeable usage of the primary and dual octrees.
# First we have to create a mapping between them:
trinkets, _ = kaolin.ops.spc.unbatched_make_trinkets(point_hierarchy, pyramid, point_hierarchy_dual, pyramid_dual)
# trinkets are indirection pointers (in practice, indices) from the nodes of the primary octree
# to the nodes of the dual octree. The nodes of the dual octree represent the corners of the voxels
# defined by the primary octree.
print(f'point_hierarchy is of shape {point_hierarchy.shape}')
print(f'point_hierarchy_dual is of shape {point_hierarchy_dual.shape}')
print(f'trinkets is of shape {trinkets.shape}')
print(f'Trinket indices are multilevel: {trinkets}')
# See also spc_trilinear_interp.py for a practical application which uses the dual octree & trinkets


@@ -0,0 +1,69 @@
# ==============================================================================================================
# The following snippet demonstrates the basic usage of kaolin's dual octree, an octree which keeps features
# at the 8 corners of each cell (the primary octree keeps a single feature at each cell center).
# In this example we sample an interpolated value according to the 8 corners of a cell.
# The implementation is realized through kaolin's "Structured Point Cloud (SPC)".
# Note this is a low level structure: practitioners are encouraged to visit the references below.
# ==============================================================================================================
# See also:
#
# - Code: kaolin.ops.spc.SPC
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.rep.html?highlight=SPC#kaolin.rep.Spc
#
# - Tutorial: Understanding Structured Point Clouds (SPCs)
# https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/understanding_spcs_tutorial.ipynb
#
# - Documentation: Structured Point Clouds
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.spc.html?highlight=spc#kaolin-ops-spc
# ==============================================================================================================
import torch
import kaolin
# Construct SPC from some points data. Point coordinates are expected to be normalized to the range [-1, 1].
# To keep the example readable, by default we set the SPC level to 1: root + 8 cells
# (note that with a single LOD, only 2 cells should be occupied due to quantization)
level = 1
points = torch.tensor([[-1.0, -1.0, -1.0], [-0.9, -0.95, -1.0], [1.0, 1.0, 1.0]], device='cuda')
spc = kaolin.ops.conversions.pointcloud.unbatched_pointcloud_to_spc(pointcloud=points, level=level)
# Construct the dual octree with an unbatched operation, each cell is now converted to 8 corners
# More info about batched / packed tensors at:
# https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.batch.html#kaolin-ops-batch
pyramid = spc.pyramids[0] # The pyramids field is batched, we select the singleton entry, #0
point_hierarchy = spc.point_hierarchies # point_hierarchies is a packed tensor, so no need to unbatch
point_hierarchy_dual, pyramid_dual = kaolin.ops.spc.unbatched_make_dual(point_hierarchy=point_hierarchy,
pyramid=pyramid)
# kaolin allows for interchangeable usage of the primary and dual octrees via the "trinkets" mapping
# trinkets are indirection pointers (in practice, indices) from the nodes of the primary octree
# to the nodes of the dual octree. The nodes of the dual octree represent the corners of the voxels
# defined by the primary octree.
trinkets, _ = kaolin.ops.spc.unbatched_make_trinkets(point_hierarchy, pyramid, point_hierarchy_dual, pyramid_dual)
# We'll now apply the dual octree and trinkets to perform trilinear interpolation.
# First we'll generate some features for the corners.
# The first dimension of pyramid / pyramid_dual specifies how many unique points exist per level.
# For the pyramid_dual, this means how many "unique corners" are in place (as neighboring cells may share corners!)
num_of_corners_at_last_lod = pyramid_dual[0, level]
feature_dims = 32
feats = torch.rand([num_of_corners_at_last_lod, feature_dims], device='cuda')
# Create some query coordinate with normalized values in the range [-1, 1], here we pick (0.5, 0.5, 0.5).
# We'll also modify the dimensions of the query tensor to match the interpolation function api:
# batch dimension refers to the unique number of spc cells we're querying.
# samples_count refers to the number of interpolations we perform per cell.
query_coord = points.new_tensor((0.5, 0.5, 0.5)).unsqueeze(0) # Tensor of (batch, 3), in this case batch=1
sampled_query_coords = query_coord.unsqueeze(1) # Tensor of (batch, samples_count, 3), in this case samples_count=1
# unbatched_query converts from normalized coordinates to the index of the cell containing this point.
# The query_index can be used to pick the point from point_hierarchy
query_index = kaolin.ops.spc.unbatched_query(spc.octrees, spc.exsum, query_coord, level, with_parents=False)
# The unbatched_interpolate_trilinear function uses the query coordinates to perform trilinear interpolation.
# Here, unbatched specifies this function supports only a single SPC at a time.
# Per single SPC, we may interpolate a batch of coordinates and samples
interpolated = kaolin.ops.spc.unbatched_interpolate_trilinear(coords=sampled_query_coords,
pidx=query_index.int(),
point_hierarchy=point_hierarchy,
trinkets=trinkets, feats=feats, level=level)
print(f'Interpolated a tensor of shape {interpolated.shape} with values: {interpolated}')
