Flow of Control in Programming Languages

In this article I will talk about common flow-of-control statements as well as conditional expressions.
There may be mistakes in this article, so please write me an email if you notice any.

Control Flow Basics

At the assembly-language level, control flow is usually realized through comparisons, which set flags in a flag register; the content of those flags is then used to decide whether or not to jump to an address other than that of the next instruction.

Expressions compute values; statements execute code. Functional programming languages like Haskell rely mostly on conditionals in the form of expressions, while imperative and object-oriented languages compute values using flow-of-control statements.

So, by using conditionals in the form of expressions, together with recursion, we tell the compiler the rules for computing the value. This gives the language more freedom to decide how the value is actually computed, which makes improvements and optimizations easier. But (in practice, not in theory) we lose some of the optimizations the programmer could have come up with in a more imperative language.
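As a small illustration of this style, the following Java sketch computes a sum using only a conditional expression and recursion; the code states the rules of the computation rather than a sequence of steps (sumTo is a made-up name for this example):

```java
public class Sum {
    // Computes 0 + 1 + ... + n using a conditional expression and recursion
    // instead of an explicit loop: the rules are stated, not the step order.
    static int sumTo(int n) {
        return n <= 0 ? 0 : n + sumTo(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(sumTo(10)); // prints 55
    }
}
```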

Explicit Flow Control Statements (imperative)

Branching
			if(condition){
				//statements
			}
		
			if(condition){
				//statements
			}else{
				//statements
			}
		
			switch (primitive_value){
				case const1:
					//statements
					break;
				case const2:
					//statements
					break;
				default:
					//statements
					break;
			}
		
Loops
			while(condition){
				//statements
			}
		
			do{
				//statements
			}while(condition);
		
			for(element in collection){
				//statements
			}
		
			for(int i=0;i<n;i++){
				//statements
			}
		

loop in Rust

			loop {
				//statements run in infinite loop
				//until some terminating statement is reached
			}
		
Loop Statement in Dragon
			//i can be any expression that evaluates to an integer
			loop i {
				//statements run exactly i times
			}
		

The property of interest is that, instead of evaluating the condition on each loop iteration, it is evaluated just once. The tradeoff is that it has to be an integer value.
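In a C-style language, this count-based loop can be approximated by evaluating the count expression once into a local variable and counting down; a minimal Java sketch (runTimes is a hypothetical helper, not part of any API):

```java
public class LoopN {
    // Approximates a "loop i" construct: the count expression is evaluated
    // exactly once, and the body then runs exactly that many times.
    static int runTimes(int count) {
        int executed = 0;
        for (int remaining = count; remaining > 0; remaining--) {
            executed++; // loop body
        }
        return executed;
    }

    public static void main(String[] args) {
        System.out.println(runTimes(5)); // prints 5
    }
}
```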

Conditionals in Expressions
(Control Flow is mostly up to the compiler)

Ternary Operator / Conditional Expression
			x = if condition then value1 else value2
		
			x = (condition)?value1:value2;
		
Pattern matching (Haskell)
			subr :: Integer -> [Char]
			subr x
			  | x >= 2    = "hi"
			  | otherwise = "hii"
		
With higher-order functions
			subr2 x = map (\y -> y+1) [1,2,3,x]
		

Tradeoffs

When computing a value, it makes sense to use conditionals as part of expressions, together with higher-order functions. It is dumb to waste time with the low-level details of a for-loop when mapping the values of an array to their successors, or when summing the values of an array. In such cases one would use an implicit, does-not-concern-me flow of control: 'map' and 'reduce'.
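The successor-and-sum case mentioned above can be written in Java's stream API without any loop counter at all:

```java
import java.util.stream.IntStream;

public class MapReduce {
    // Maps each value to its successor, then reduces by summation;
    // no loop counter or iteration order appears in the code.
    static int sumOfSuccessors(int[] xs) {
        return IntStream.of(xs).map(y -> y + 1).sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSuccessors(new int[]{1, 2, 3, 4})); // prints 14
    }
}
```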

In other cases, when low-level details are important, such as when using dynamic programming or similar approaches, it can be beneficial to use explicit flow of control, as the order in which things happen is clearer to the reader.

In cases where the order of execution does not matter, I consider it a mistake to specify this order. I would go as far as recommending that anyone implementing for-each or higher-level constructs such as map explicitly state that no order is guaranteed, maybe even flip a coin on which end of the collection to start from. This matters because it lets us use the properties of functional programming (pure functions) to parallelize execution without explicit use of threads.
In my opinion, Haskell and Java are making a mistake by guaranteeing an order of evaluation/computation in some higher-order functions and pattern matching. Example: Haskell Guards Order of Evaluation

Automatic multithreading of pure functions in higher-order functions

This idea has been around for some time. Basically, there could be a function mt_map, standing for multithreaded map, which behaves like map except that it uses a new thread for each invocation of the subroutine passed to it. This would be helpful in batch processing. Java already has such a feature with parallel streams.
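A short sketch of this using Java's real parallel stream API: the pure function is mapped over the collection, and the runtime decides how to split the work across threads. For an ordered source, collecting still preserves element order.

```java
import java.util.List;
import java.util.stream.Collectors;

public class ParallelMap {
    // Maps a pure function over a list in parallel; the runtime
    // (a fork/join pool) chooses how to partition the work.
    static List<Integer> successors(List<Integer> xs) {
        return xs.parallelStream()
                 .map(y -> y + 1)
                 .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(successors(List.of(1, 2, 3, 4))); // prints [2, 3, 4, 5]
    }
}
```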

Control Flow with try-catch

In the past, and with some stubborn programmers even today, errors which occurred during execution were checked for manually, for example with errno in C (see the errno man page).
The benefit of checking for errors manually is that you can skip the check. For example, when learning C, you don't have to check whether your malloc(100); was successful or not. You know that it very likely was, as you only wrote a simple program.
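The same return-code pattern can be sketched in Java (parseAge and the -1 sentinel are made up for this example): the function signals failure through a special value, and nothing forces the caller to check it before use.

```java
public class ManualCheck {
    // Return-code style error handling: failure is signaled with a
    // sentinel value (-1), and callers are free to ignore it.
    static int parseAge(String s) {
        try {
            int age = Integer.parseInt(s);
            return age >= 0 ? age : -1; // -1 means "invalid input"
        } catch (NumberFormatException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        int age = parseAge("42");
        // Nothing forces us to compare age against -1 before using it.
        System.out.println(age + 1); // prints 43
    }
}
```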

For bigger programs, and when working in teams, it is nice to have a built-in mechanism in the language that enforces proper error handling. For example, consider this small program (in pseudocode):

				try{
					int x=subroutine_may_throw();
				}catch(Exception e){
					//it would be great if we were unable to access x anymore
					//as it may not be initialized
				}
				//also we should be unable to access it here.
			
Compare that to a language in which exceptions/errors are handled by return codes:
				int x=subroutine_may_throw();
				
				//error handling code here may or may not exist
				
				//usage of x here
			
try-catch statements also compose well when the language allows passing exceptions through subroutine calls. Example:
				public static void main(String[] args){
					try{
						int x = subr();
					}catch(Exception e){
						//handle the error here
					}
				}
				
				static int subr() throws Exception{
					return subr2();
				}
				
				static int subr2() throws Exception{
					throw new Exception();
				}
			
In this case, subr does not need to know about the specific exception, or how to handle it or return it to its caller.
When writing larger programs in a language without such try-catch, one has two choices (maybe even more that I don't know about): handle every error right at the call site, or manually pass error codes up through every return value.

Control Flow with Generics

When you have two objects which both implement an interface, a subroutine receiving a parameter of the interface type may (in some languages) have to look up at runtime which concrete type it is dealing with, and call the appropriate method for that type.
This can (and should) be circumvented by using monomorphization, a technique that fills in the concrete type at every call site, so that using generics carries no runtime cost. This is done in Rust (see Monomorphization in Rust).
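The runtime-lookup situation can be sketched in Java (Shape, Square, and Circle are made-up names): calls through an interface reference are resolved dynamically by the JVM, whereas Rust's monomorphized generics resolve the target at compile time.

```java
public class Dispatch {
    interface Shape { double area(); }

    static class Square implements Shape {
        final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    static class Circle implements Shape {
        final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    // Dynamic dispatch: the JVM decides at runtime which area()
    // implementation to invoke, based on the concrete type behind
    // the Shape reference.
    static double describe(Shape shape) {
        return shape.area();
    }

    public static void main(String[] args) {
        System.out.println(describe(new Square(3))); // prints 9.0
    }
}
```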

Possible Future Expansions of this Article