SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks by Fuchs et al. (2020)
Josh McClellan - UMD
Tuesday, November 8, 2022, 5:00-6:00 pm
Abstract

Paper: SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks by Fuchs et al. (2020)

Paper URL: https://proceedings.neurips.cc//paper/2020/hash/15231a7ce4ba789d13b722cc5c955834-Abstract.html

Paper Abstract: "We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point-clouds, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the data input. A positive corollary of equivariance is increased weight-tying within the model. The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds with varying number of points, while guaranteeing SE(3)-equivariance for robustness. We evaluate our model on a toy N-body particle simulation dataset, showcasing the robustness of the predictions under rotations of the input. We further achieve competitive performance on two real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms a strong, non-equivariant attention baseline and an equivariant model without attention."
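
To make the central property concrete, here is a minimal NumPy sketch (not the paper's code) of what SE(3)-equivariance means for a function f on a 3D point cloud: applying a rotation R and translation t to the input transforms the output in exactly the same way. The function f below is a hypothetical stand-in, chosen only because it happens to be rigid-motion equivariant.

    import numpy as np

    def f(points):
        # Hypothetical equivariant map: shift each point toward the centroid.
        # (Any rigid-motion-equivariant function works for this illustration.)
        centroid = points.mean(axis=0, keepdims=True)
        return 0.5 * (points + centroid)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(10, 3))          # a toy 3D point cloud

    # A random rotation R (via QR decomposition) and translation t
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    R = Q * np.sign(np.linalg.det(Q))     # force det(R) = +1, so R is in SO(3)
    t = rng.normal(size=(1, 3))

    lhs = f(x @ R.T + t)                  # transform the input, then apply f
    rhs = f(x) @ R.T + t                  # apply f, then transform the output
    print(np.allclose(lhs, rhs))          # True: f is SE(3)-equivariant

The identity checked here, f(xR^T + t) = f(x)R^T + t, is the property the SE(3)-Transformer guarantees by construction for its attention layers, rather than hoping the network learns it from augmented data.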

For more information and our full schedule, see our website (https://leesharma.com/physics-ai-reading-group/)


This talk is organized by Lee Sharma