If you want to develop and test Reinforcement Learning (RL) algorithms, a Python library makes the job much easier. In this guide, we’ll introduce you to OpenAI Gymnasium, the standard library for RL experimentation. It integrates easily with Python and is popular in both research and industry. After completing this guide, you’ll run your first classic RL environment, CartPole-v1, using OpenAI Gymnasium and Python.
OpenAI Gymnasium provides a wide range of pre-built environments that help you understand and apply reinforcement learning concepts.
You can start with simple control problems, move toward more dynamic robotic simulations, and finally experiment with complex physics and multi-agent scenarios.
OpenAI Gymnasium is the successor of OpenAI Gym. Gym was first released in 2016 to provide a standard set of RL environments.
Below, you’ll find a few project examples categorized by difficulty level to help you choose where to begin.
Beginner projects you can try with OpenAI Gymnasium
- CartPole-v1: keep a pole balanced on a mobile cart.
- MountainCar-v0: move a car to the top of a hill.
- Acrobot-v1: balance a two-segment robotic arm.
- Taxi-v3: transport a passenger to their destination in a simplified city.
- Pendulum-v1: control a pendulum to keep it vertical.
Intermediate projects
- Atari Games (e.g. ALE/Breakout-v5): train an agent to play Atari games using Deep Q-Networks (DQN).
- LunarLander-v2: land a lunar module safely on the ground.
- BipedalWalker-v3: control a two-legged robot to walk on varied terrain.
- FetchReach-v1: manipulate a robotic arm to reach a target object.
- Minigrid: navigate an agent in a simplified 2D environment with obstacles and objects.
Advanced projects
- Humanoid-v2: control a humanoid robot to walk or run.
- ControlGym: complex industrial environments for testing RL in process control.
- Multi-agent environments: multiple agents that collaborate or compete in complex scenarios.
- PyBullet environments: realistic physics simulations for controlling robots in 3D.
- Autonomous Driving: train agents for autonomous navigation in simulated traffic environments.
Compatibility
Gymnasium is compatible with Windows, macOS, and Linux, and the installation process is straightforward.
Why use OpenAI Gymnasium?
- Standardization and interoperability: Gymnasium makes it easy to compare and evaluate RL algorithms through a unified interface for diverse environments
- Support for diverse environments: the library includes environments for classic control (e.g. CartPole), Atari games, robotic simulations, and more, covering a wide range of RL applications
- Compatibility with popular libraries: Gymnasium can be integrated with libraries such as Stable Baselines3 and RLlib, which simplifies implementing and testing RL algorithms
- Extensibility: researchers and developers can create their own customized environments
- Active support and community: as an open-source project, it benefits from an active community that contributes to the development and improvement of the library
Installing OpenAI Gymnasium
Prerequisites
Before installing OpenAI Gymnasium, make sure your system meets the following requirements:
- Operating system: Windows, macOS, or Linux (this guide was tested on Windows 11)
- Anaconda (recommended): Provides an isolated environment for Python and simplifies package management. You can download it from https://www.anaconda.com/download
Now that we’ve reviewed the prerequisites, let’s proceed with the installation of OpenAI Gymnasium.
In the following steps, you’ll learn how to set up a Python environment using Anaconda and install the Gymnasium library on Windows 11.
1. Open an Anaconda Prompt terminal
2. Run the following commands one by one
# Step 1: search all channels configured in Anaconda for all available versions of Python 3.11
conda search python=3.11
# Step 2: create a new conda environment named "gymenv" with Python 3.11.
# Info: at some point you will be asked 'Proceed ([y]/n)?'; type y and press Enter
conda create -n gymenv -c conda-forge python=3.11
# Step 3: activate the environment
conda activate gymenv
# Step 4: check the Python version. It should be Python 3.11
python --version
# Step 5: upgrade the packaging tools so future package installations are fast and error-free
pip install --upgrade pip setuptools wheel
# Step 6: install OpenAI Gymnasium in the environment
pip install gymnasium
# Step 7 (optional): install a subset of simple "toy problem" environments:
# CartPole-v1, MountainCar-v0, Acrobot-v1, Pendulum-v1, and CartPole-v0
pip install "gymnasium[classic_control]"
# Step 8 (optional): install the Atari subset:
# Breakout, Pong, SpaceInvaders, Seaquest, MsPacman, Enduro, and many more
pip install "gymnasium[atari]"
Testing the Installation
We can verify that Gymnasium was installed correctly by following these steps:
# Step 1: open an Anaconda Prompt terminal
# Step 2: activate the environment
conda activate gymenv
# Step 3: start Python by typing
python
# Step 4: in the Python shell, import Gymnasium
import gymnasium as gym
# If no error appears, the installation was successful!
We can now start creating and running our first RL environments with Gymnasium.
Running the First Application

Now that Gymnasium is installed, it’s time to run the first RL application.
We will use a simple CartPole environment. Here, the agent learns to balance a pole on a moving cart.
The steps below will guide you through activating your environment, navigating to your script folder, and running the test script to see Gymnasium in action.
Create the Python file cartpole_test.py and add the code below:
import gymnasium as gym

env = gym.make("CartPole-v1", render_mode="human")
for episode in range(50):  # run 50 episodes
    obs, info = env.reset()  # reset() returns (observation, info)
    done = False
    while not done:
        action = env.action_space.sample()  # pick a random action
        obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
input("Simulation finished. Press Enter to exit...")
env.close()
Run the RL environment cartpole_test.py:
# Step 1: open an Anaconda Prompt terminal
# Step 2: activate the environment
conda activate gymenv
# Step 3: navigate to the folder where you saved the script
# Step 4: run the script
python cartpole_test.py
Installing and Using TensorBoard
TensorBoard is a visualization tool that helps us monitor our RL training process. We can check the episode rewards, convergence trends, and performance over time.
To install it, make sure you’re inside the environment where your RL project is installed.
# Step 1: open an Anaconda Prompt terminal
# Step 2: activate the environment
conda activate gymenv
# Step 3: navigate to your project folder
# Step 4: install TensorBoard
pip install tensorboard
# Step 5: verify the installation
tensorboard --version
Once TensorBoard is installed, you can use it to visualize the training process.
# Step 1: open an Anaconda Prompt terminal
# Step 2: activate the environment
conda activate gymenv
# Step 3: navigate to the folder that contains your training logs
# Step 4: launch TensorBoard to visualize them
tensorboard --logdir logs
TensorBoard will start a local web server. Open the following address in your browser:
http://localhost:6006
Once your training script writes event files to the logs directory, you’ll see a live dashboard showing your episode rewards and convergence curves.
Wrapping Up
In this guide, we’ve learned how to:
- understand the purpose and benefits of OpenAI Gymnasium as a standard library for Reinforcement Learning (RL)
- install and configure a Python environment using Anaconda to run Gymnasium
- install Gymnasium and optional environment subsets
- test the installation to ensure Gymnasium is working correctly
- run our first RL application with CartPole-v1




