The environment is the world surrounding the agent, excluding the agent itself. It is important to understand the nature of the environment when solving a problem using artificial intelligence. For example, when programming a chess bot, the environment is the chessboard; when building a room-cleaning robot, the environment is the room.

Each environment has its own properties, and agents should be designed so that they can perceive the environment's state through sensors and act on it through actuators. In this guide, we will go through all the types of environments with real-life examples.
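The sensor/actuator loop described above can be sketched in a few lines. This is a minimal, illustrative sketch, not a real robotics API; the function names and the `dirt_here` state field are assumptions made up for the example.

```python
# A minimal sketch of the agent-environment loop: the agent reads the
# environment's state through a sensor (percept) and responds through an
# actuator (action). All names here are illustrative, not a real API.

def sense(environment):
    """Sensor: return the agent's percept of the environment."""
    return environment["dirt_here"]

def act(percept):
    """Actuator decision: a trivial reflex rule for a room-cleaning robot."""
    return "suck" if percept else "move"

environment = {"dirt_here": True}
percept = sense(environment)
action = act(percept)
print(action)  # a dirty square triggers the "suck" action
```

Everything the robot knows about the room arrives through `sense`, and everything it does to the room goes through `act`; the environment types below describe how hard each side of that loop is.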

Fully Observable vs Partially Observable

In a fully observable environment, the agent has access to the complete state of the environment at any given time; no part of the environment is hidden from the agent.

Real-life Example: While driving a car on the road (Environment), the driver (Agent) can see the road conditions, signboards, and pedestrians at any given time and drive accordingly. So the road is a fully observable environment for the driver.

In a partially observable environment, the agent does not have access to the complete state of the environment at a given time.

Real-life Example: Card games are a classic example of a partially observable environment: a player cannot see the cards in the opponent's hand. Why only partially observable? Because the rest of the environment, e.g. the player's own hand and the cards on the table, is visible to the player (Agent).
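The card-game example can be made concrete by separating the full state from what the player observes. This is an illustrative sketch with made-up field names, assuming the player can see how many cards the opponent holds but not which ones.

```python
# Sketch: why a card game is only partially observable. The full state
# includes the opponent's hand, but the observation handed to the player
# masks it. All names are illustrative.

full_state = {
    "my_hand": ["QH", "7S"],
    "opponent_hand": ["AC", "KD"],   # hidden from the player
    "table": ["3D"],
}

def observe(state):
    """Return only the part of the state the player can see."""
    return {
        "my_hand": state["my_hand"],
        "table": state["table"],
        # the count is visible, the contents are not
        "opponent_card_count": len(state["opponent_hand"]),
    }

observation = observe(full_state)
print("opponent_hand" in observation)  # False: hidden information
```

An agent in this environment must act on `observation`, not `full_state`; the gap between the two is exactly what "partially observable" means.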

Deterministic vs Stochastic

A deterministic environment is one where the next state is completely determined by the current state and the agent's action, so there is no uncertainty in the environment.

Real-life Example: A traffic signal is a deterministic environment: the next signal in the cycle is known to a pedestrian (Agent).

A stochastic environment is the opposite of a deterministic environment: the next state cannot be fully predicted by the agent, so randomness exists in the environment.

Real-life Example: A radio station is a stochastic environment where the listener does not know what the next song will be; playing soccer is also stochastic.
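The difference shows up in the transition function. Below is an illustrative sketch using the two examples above: the signal cycle and the playlist contents are assumptions made up for the demo.

```python
import random

# Sketch: a deterministic vs a stochastic transition from the current state.

# Traffic signal: the next state is fully determined by the current one.
SIGNAL_CYCLE = {"green": "yellow", "yellow": "red", "red": "green"}

def deterministic_step(state):
    return SIGNAL_CYCLE[state]

# Radio station: the next song is unpredictable to the listener.
def stochastic_step(playlist):
    return random.choice(playlist)

print(deterministic_step("green"))   # always "yellow", every single run
print(stochastic_step(["song A", "song B", "song C"]))  # varies run to run
```

Calling `deterministic_step` twice with the same state always returns the same next state; `stochastic_step` may not, and that is the whole distinction.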

Episodic vs Sequential

An episodic environment is one where each episode is independent of the others: the action taken in one episode has nothing to do with the next.

Real-life Example: A support bot (Agent) answers one question, then another, and so on. Each question-answer pair is a single, independent episode.

A sequential environment is one where the next state depends on the current action, so the agent's current action can change all future states of the environment.

Real-life Example: Playing tennis is a perfect example: every shot a player makes changes the state of the rally that the opponent, and the player, must respond to next.
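The contrast can be sketched in code: an episodic agent decides from the current input alone, while a sequential environment carries state forward between actions. The FAQ entries and the board abstraction below are invented for illustration.

```python
# Sketch: episodic vs sequential.

def episodic_answer(question):
    """Support bot: each answer depends only on the current question,
    never on questions answered earlier."""
    faq = {"hours?": "9-5", "price?": "$10"}
    return faq.get(question, "unknown")

class SequentialGame:
    """Board game sketch: every move changes all future states."""
    def __init__(self):
        self.moves = []

    def play(self, square):
        self.moves.append(square)  # history persists into future states
        return self.moves

print(episodic_answer("hours?"))  # "9-5", regardless of past questions
game = SequentialGame()
game.play(0)
print(game.play(4))               # [0, 4]: the earlier move still matters
```

The bot needs no memory between episodes; the game is unplayable without it.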

Static vs Dynamic 

A static environment does not change while the agent is perceiving it.

Real-life Example: A room (Environment) being cleaned by a vacuum-cleaner robot (Agent) is an example of a static environment: the room stays the same while it is being cleaned.

A dynamic environment can change while the agent is perceiving it, so the agent must keep observing the environment while taking actions.

Real-life Example: Soccer is a dynamic environment: the players' positions keep changing throughout the game, so a player kicks the ball based on the current positions of the opposing team.

Discrete vs Continuous

A discrete environment consists of a finite number of states, and the agent has a finite number of actions.

Real-life Example: In tic-tac-toe, the choices of a move (Action) are finite, on a finite number of squares on the board (Environment).

A continuous environment, in contrast, can have an infinite number of states, so the possible actions are also infinite.

Real-life Example: In a basketball game, the positions of the players (Environment) change continuously, and a shot (Action) towards the basket can take any angle and speed, so there are infinitely many possibilities.
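In code, the distinction is that a discrete action set can be enumerated while a continuous one can only be sampled. This is an illustrative sketch; the board encoding and the angle/speed ranges are assumptions chosen for the example.

```python
import random

# Sketch: discrete vs continuous action spaces.

def discrete_actions(board):
    """Tic-tac-toe: the legal moves are a finite list (the empty squares)."""
    return [i for i, square in enumerate(board) if square is None]

def continuous_action():
    """Basketball shot parameterised by angle (degrees) and speed (m/s):
    uncountably many possibilities, so we can only sample one."""
    return (random.uniform(0.0, 90.0), random.uniform(0.0, 15.0))

board = ["X", None, "O", None, None, "X", None, None, None]
print(len(discrete_actions(board)))  # 6 empty squares -> 6 possible moves
angle, speed = continuous_action()
print(0.0 <= angle <= 90.0)          # True: sampled from the range
```

An agent in a discrete environment can, in principle, try every action; in a continuous one it cannot, which is why the two cases often call for different algorithms.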

Single Agent vs Multi-Agent

A single-agent environment is one explored by a single agent: all actions in the environment are performed by that one agent.

Real-life Example: Playing tennis against a wall is a single-agent environment: there is only one player.

If two or more agents take actions in the environment, it is known as a multi-agent environment.

Real-life Example: Playing a soccer match is a multi-agent environment.

Conclusion

There are six main groups of environments, and an environment can belong to several groups at once. Below are more real-life examples categorized into these environment groups.


| Example             | Fully vs Partially Observable | Deterministic vs Stochastic | Episodic vs Sequential | Static vs Dynamic | Discrete vs Continuous | Single vs Multi-Agent |
|---------------------|-------------------------------|-----------------------------|------------------------|-------------------|------------------------|-----------------------|
| Brushing Your Teeth | Fully                         | Stochastic                  | Sequential             | Static            | Continuous             | Single                |
| Playing Chess       | Fully                         | Deterministic               | Sequential             | Static            | Discrete               | Multi-Agent           |
| Playing Cards       | Partially                     | Stochastic                  | Sequential             | Static            | Discrete               | Multi-Agent           |
| Autonomous Vehicles | Fully                         | Stochastic                  | Sequential             | Dynamic           | Continuous             | Multi-Agent           |
| Order in Restaurant | Fully                         | Deterministic               | Episodic               | Static            | Discrete               | Single                |