Neural image representations offer the possibility of high fidelity, compact storage, and resolution-independent accuracy, providing an attractive alternative to traditional pixel- and grid-based representations. However, coordinate neural networks fail to capture discontinuities present in the image and tend to blur across them; we aim to address this challenge. In many cases, such as rendered images, vector graphics, diffusion curves, or solutions to partial differential equations, the locations of the discontinuities are known. We take those locations as input, represented as linear, quadratic, or cubic Bézier curves, and construct a feature field that is discontinuous across these locations and smooth everywhere else. Finally, we use a shallow multi-layer perceptron to decode the features into the signal value. To construct the feature field, we develop a new data structure based on a curved triangular mesh, with features stored on the vertices and on a subset of the edges that are marked as discontinuous. We show that our method can be used to compress a 100,000^2-pixel rendered image into a 25 MB file; can serve as a new diffusion-curve solver, either by combining it with Monte-Carlo-based methods or by supervising it directly with the diffusion-curve energy; or can be used to compress 2D physics simulation data.
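The abstract compresses the mechanism into one paragraph; the toy sketch below (plain NumPy; every name, dimension, and weight is a hypothetical stand-in, not the authors' code) illustrates only the core storage idea: when an edge of the triangulation is marked discontinuous, each adjacent triangle keeps its own copy of the edge features, so barycentric interpolation, and hence the decoded signal, jumps exactly across that edge while staying smooth inside each triangle. It assumes straight edges and a one-hidden-layer decoder with untrained random weights, whereas the paper uses curved (Bézier) edges and trained features and decoder.

    import numpy as np

    rng = np.random.default_rng(0)
    F = 8  # feature dimension (illustrative choice, not from the paper)

    # Two triangles sharing the diagonal edge between vertices 1 and 2.
    # On a continuous edge the triangles would share one feature vector per
    # endpoint; marking the edge discontinuous gives each side a private copy.
    verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    tris = np.array([[0, 1, 2], [1, 3, 2]])  # shared edge: vertices 1 and 2

    base = rng.normal(size=(4, F))
    corner_feats = base[tris].copy()          # (triangle, corner, feature)
    corner_feats[1, 0] = rng.normal(size=F)   # triangle 1's private copy of v1
    corner_feats[1, 2] = rng.normal(size=F)   # triangle 1's private copy of v2

    def barycentric(p, a, b, c):
        T = np.column_stack([b - a, c - a])
        u, v = np.linalg.solve(T, p - a)
        return np.array([1.0 - u - v, u, v])

    # Shallow decoder: one hidden layer, random (untrained) weights.
    W1, b1 = rng.normal(size=(F, 16)), np.zeros(16)
    W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

    def decode(p, tri_idx):
        a, b, c = verts[tris[tri_idx]]
        w = barycentric(p, a, b, c)
        feat = w @ corner_feats[tri_idx]      # smooth inside each triangle
        return np.maximum(feat @ W1 + b1, 0.0) @ W2 + b2

    # Points straddling the shared edge x + y = 1 decode to different values:
    # the field is discontinuous exactly across the marked edge.
    print(decode(np.array([0.49, 0.49]), 0))
    print(decode(np.array([0.51, 0.51]), 1))

The two printed values differ because triangle 1 interpolates its own edge-feature copies; removing the two overrides makes the field continuous again, which is the distinction the abstract's "subset of the edges that are marked as discontinuous" refers to.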
@article{Belhe:2022:DiscontinuityAwareNeuralFields,
  author  = {Yash Belhe and Micha\"{e}l Gharbi and Matthew Fisher and Iliyan Georgiev and Ravi Ramamoorthi and Tzu-Mao Li},
  title   = {Discontinuity-aware 2D neural fields},
  journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia)},
  year    = {2022},
  volume  = {41},
  number  = {6},
  doi     = {10.1145/3550454.3555484}
}