CNNs and Equivariance


CNNs and Equivariance - Part 1/2 - Fabian Fuchs

In this post, we start by introducing the concept of equivariance, from both a practical and a mathematical point of view.

Convolutional networks & translation equivariance - Maurice Weiler

The goal of the current post is to clarify the mutual relation between equivariance and the convolutional network design.

Deep Learning – Equivariance and Invariance

So how are these achieved in CNNs? Basically, we have convolutional layers that are supposed to be shift equivariant and pooling layers that are approximately ...
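The claim in this snippet can be checked numerically: the cross-correlation that deep-learning "convolution" layers compute commutes with translation. A minimal NumPy/SciPy sketch (toy arrays of my own choosing; circular "wrap" boundary handling is assumed, which makes the identity exact rather than approximate at the borders):

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))   # toy input "image"
w = rng.standard_normal((3, 3))   # toy convolution filter

def conv(img):
    # circular cross-correlation: exactly translation equivariant
    return correlate2d(img, w, mode="same", boundary="wrap")

def shift(img):
    # translate the image by (2, 3) pixels with wrap-around
    return np.roll(img, shift=(2, 3), axis=(0, 1))

# Equivariance: convolving a shifted image == shifting the convolved image.
assert np.allclose(conv(shift(x)), shift(conv(x)))
```

With zero-padded boundaries the identity holds only away from the image border, which is one reason "approximately" appears so often in these discussions.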

Translation Invariance & Equivariance in Convolutional Neural ...

In the context of convolutional neural networks, translation equivariance implies that even when the position of an object in an image is ...

All you should know about translation equivariance/invariance in CNN

Translation invariance means that a CNN is able to recognise an object in an image regardless of its location or translation within the image.
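The invariance described in this snippet arises when a global pooling step is placed after the equivariant convolution: the pooled response no longer depends on where the content sits. A toy sketch (my own illustrative values; circular boundaries assumed so the invariance is exact):

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 8))   # toy image
w = rng.standard_normal((3, 3))   # toy filter

def score(img):
    # convolutional feature map (circular boundary) ...
    fmap = correlate2d(img, w, mode="same", boundary="wrap")
    # ... followed by global max pooling
    return fmap.max()

# Invariance: the pooled score is identical wherever the content sits.
shifted = np.roll(x, shift=(5, 1), axis=(0, 1))
assert np.isclose(score(x), score(shifted))
```

Equivariant feature extraction plus a pooling stage is the standard recipe by which CNNs trade position information for recognition robustness.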

Equivariance vs Invariance in Convolutional Neural Networks

Scalar and convolution operators tend to be equivariant, max/min or range are more invariant, and subsampling/pooling can somewhat link those behaviors.

What's the difference between shift equivariance and translation ...

Translation and shift are the same thing here. As far as invariance vs. equivariance, there is often a combination of both at play in CNNs.

[D] Why does convolution lead to translation equivariance? - Reddit

It's really not that CNNs have translational equivariance, unless they are a fully convolutional semantic segmentation architecture. The ...

Equivariant neural networks - what, why and how? | Maurice Weiler

This post is the first in a series on equivariant deep learning and coordinate independent CNNs. The goal of the current post is to give a first introduction ...

Group CNNs - Equivariance Part 2/2 - Fabian Fuchs

In this post, we'll see how filling in the details of this outline leads to Group Equivariant Convolutional Networks, as developed by Taco Cohen and Max ...

Geometric Deep Learning: Group Equivariant Convolutional Networks

In words, it means that first transforming x and then mapping it is equivalent to first mapping x and then transforming it. So the idea is to find a group, ...
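The "transform then map equals map then transform" relation can be made concrete for the cyclic rotation group C4: a lifting layer correlates the input with all four 90° rotations of one filter, and rotating the input then rotates each feature map while cyclically permuting the rotation channel. A minimal NumPy/SciPy sketch (an illustration of the idea, not Cohen & Welling's implementation):

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(2)
x = rng.standard_normal((7, 7))   # toy square image
w = rng.standard_normal((3, 3))   # one learnable filter

def lift(img):
    # C4 lifting correlation: one output plane per filter rotation
    return np.stack([correlate2d(img, np.rot90(w, k), mode="valid")
                     for k in range(4)])

y = lift(x)                 # shape (4, 5, 5): spatial maps over 4 rotations
y_rot = lift(np.rot90(x))   # lifting applied to the rotated input

# Equivariance: rotating the input rotates every plane spatially and
# shifts the rotation axis by one step (a "roll" over the group dimension).
expected = np.roll(np.stack([np.rot90(p) for p in y]), 1, axis=0)
assert np.allclose(y_rot, expected)
```

The group action on the lifted feature map (spatial rotation plus a cyclic shift over the rotation index) is exactly the regular representation of C4 that the G-CNN papers describe.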

Translation Invariance and Equivariance in Computer Vision

The translation equivariance is obtained by means of the convolutional layers. In fact, if the input image is translated to the right by a ...

[2111.14157] Implicit Equivariance in Convolutional Networks - arXiv

We propose Implicitly Equivariant Networks (IEN) which induce equivariance in the different layers of a standard CNN model by optimizing a multi-objective loss ...

Naturally Occurring Equivariance in Neural Networks - Distill.pub

We sometimes call this phenomenon “equivariance,” since it means that switching the neurons is equivalent to transforming the input.

A General Theory of Equivariant CNNs on Homogeneous Spaces

We present a general theory of Group equivariant Convolutional Neural Networks (G-CNNs) on homogeneous spaces such as Euclidean space and the sphere.

[1602.07576] Group Equivariant Convolutional Networks - arXiv

We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity.

Lecture 4: Equivariant CNNs I (Euclidean Spaces) - Maurice Weiler

Video recording of the First Italian School on Geometric Deep Learning held in Pescara in July 2022.

Gauge Equivariant Convolutional Networks and the Icosahedral CNN

We implement gauge equivariant CNNs for signals defined on the surface of the icosahedron, which provides a reasonable approximation of the sphere.

rotational equivariance in Convolutional Neural Network?

The rotational equivariance can be achieved by rotating the input image for training. Do I really need to do that? How big is the rotation degree?
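For context on why rotation augmentation (or a group-equivariant architecture) is needed at all: an ordinary convolution with a generic filter is not rotation equivariant, which the following toy check makes explicit (illustrative values of my own, not from the thread):

```python
import numpy as np
from scipy.signal import correlate2d

x = np.arange(9.0).reshape(3, 3)          # toy image
w = np.array([[1.0, 0.0], [0.0, 0.0]])    # asymmetric filter

left = correlate2d(np.rot90(x), w, mode="valid")   # convolve the rotated input
right = np.rot90(correlate2d(x, w, mode="valid"))  # rotate the convolved output

# A single fixed filter does not commute with rotation:
assert not np.allclose(left, right)
```

Translation is baked into the convolution itself; rotation is not, so it must come either from augmented training data or from an explicitly rotation-equivariant design such as a G-CNN.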

Implementing rotation equivariance: Group-equivariant CNN ... - Posit

The idea is to train two convnets, a “normal” CNN and a group-equivariant one, on the usual MNIST training set. Then, both are evaluated on an ...