The reason is purely optimization.

If the compiler can assume that every loop without side effects terminates, it does not have to prove termination.

If non-terminating loops were allowed, the compiler could perform certain optimizations only when it could prove termination. That is impossible in general, so it would turn into a pattern-recognition game. And for what benefit?

**The underlying issue is that non-termination is a kind of side effect in itself.** Any observable effect that is certain to happen after the loop is observed if and only if the loop terminates, even though the loop itself has no effects.
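A minimal sketch of that point (the function name is mine): the loop body does nothing observable, yet whether `done` is printed depends entirely on whether the loop terminates.

```c
#include <stdio.h>

/* The loop body has no side effects, yet the puts() after it is
 * observed if and only if the loop terminates, so the loop's
 * (non-)termination is itself observable. */
int print_after_loop(unsigned x) {
    while (x != 0)   /* no observable effects inside the loop */
        x /= 2;      /* halving reaches 0, so this terminates */
    puts("done");    /* reached iff the loop terminates */
    return 0;
}
```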

Of course, exactly the same reasoning applies to `if (expr) return;`. The compiler is not allowed to move anything that comes after the `if` before it, unless it can prove `expr` is false. But `if` is the fundamental control-flow mechanism, while non-terminating loops are not (IMHO).
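To make the parallel concrete, here is a hypothetical sketch (names are mine): the store sits after the early return, so the compiler may not hoist it above the `if` unless it can prove `expr` is false — the same constraint a possibly non-terminating loop would impose.

```c
/* The store to *flag is observable only when expr is false, so it
 * must not be moved above the `if` — exactly the reasoning that a
 * possibly non-terminating loop would force on the compiler. */
int guarded_store(int expr, int *flag) {
    if (expr)
        return 0;  /* early exit: the store below must not happen */
    *flag = 1;     /* observable only on the fall-through path */
    return 1;
}
```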

Take the following code.

```
int collatz_conjecture(int i){
    while(i != 1){
        if ((i % 2) == 0)
            i /= 2;
        else
            i = i*3 + 1;
    }
    return i;
}

int main(){
    collatz_conjecture(10);
    return 5;
}
```

With `-O3`, gcc compiles it to:

```
collatz_conjecture(int):
mov eax, 1
ret
main:
mov eax, 5
ret
```

So, did the compiler prove the Collatz conjecture in order to determine that it should return `1` for all numbers? Of course not; that is just one of the optimizations allowed by the termination assumption (and where UB can happen). The only way the loop can terminate is with `i == 1`, therefore the compiler may assume `i == 1` after the loop and use that for further optimizations: the function always returns `1` and can thus be reduced to that constant.
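In other words, the assembly above corresponds to the source the optimizer is entitled to pretend it saw (a sketch, not gcc's actual internal form):

```c
/* The loop exits only when i == 1, and non-termination would be UB,
 * so the optimizer may treat the result as unconditionally 1. */
int collatz_conjecture(int i) {
    (void)i;   /* the argument no longer matters */
    return 1;
}
```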

A more useful example is interleaved copying. If you have

```
loop A
loop B
```

the compiler is allowed to interleave them even without knowing whether `A` terminates. Many vectorization optimizations rely on this assumption.
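A sketch of why termination matters for interleaving (function names are mine): if loop A were allowed to run forever, the writes of loop B must never happen, and fusing the two loops would wrongly make them happen. Under the termination assumption, the fused form is equivalent.

```c
/* Two independent copy loops, written separately. */
void copy_separate(int *a, const int *b, int *c, const int *d, int n) {
    for (int i = 0; i < n; i++) a[i] = b[i];   /* loop A */
    for (int i = 0; i < n; i++) c[i] = d[i];   /* loop B */
}

/* The interleaved (fused) form the compiler may produce — valid only
 * because loop A is assumed to terminate; otherwise the writes to c
 * would become observable in executions where they never happened. */
void copy_fused(int *a, const int *b, int *c, const int *d, int n) {
    for (int i = 0; i < n; i++) {
        a[i] = b[i];
        c[i] = d[i];
    }
}
```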

Similarly, reordering some independent after-loop operations before the loop assumes that the loop will terminate.
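For instance (a hypothetical sketch, names are mine): the store below is independent of the loop, so the compiler may hoist it above the loop — but only because the loop is assumed to terminate; otherwise the store would become observable in executions where it previously never happened.

```c
/* The store to *done does not depend on the loop, so the compiler
 * may move it before the loop — legal only under the assumption
 * that the loop terminates. */
int sum_then_signal(const int *a, int n, int *done) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    *done = 1;   /* independent of the loop; may be hoisted */
    return s;
}
```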