There are various types of AWS EC2 instances available. The instance type determines the hardware of the host computer used for your instance. Each instance type offers different compute, memory, and storage capabilities, and is grouped in an instance family based on these capabilities. This article will help you select an instance type based on the requirements of the application or software that you plan to run on your instance.
Amazon EC2 dedicates some resources of the host computer, such as CPU, memory, and instance storage, to a particular instance. Amazon EC2 shares other resources of the host computer, such as the network and the disk subsystem, among other instances. If each instance on a host computer tries to use as much of one of these shared resources as possible, each receives an equal share of that resource. However, when a resource is underused, an instance can consume a higher share of that resource while it's available.
Each instance type provides higher or lower minimum performance from a shared resource. For example, instance types with high I/O performance have a larger allocation of shared resources. Allocating a larger share of shared resources also reduces the variance of I/O performance. For most applications, moderate I/O performance is more than enough. However, for applications that require greater or more consistent I/O performance, consider an instance type with higher I/O performance.
Instance types are named based on their family, generation, processor family, additional capabilities, and size.
- The first position of the instance type name indicates the instance family, for example, t.
- The second position indicates the instance generation, for example, 5.
- The third position indicates the processor family, for example, i.
- The remaining letters before the period indicate additional capabilities.
- After the period (.) is the instance size, such as small, 24xlarge, or metal for bare metal instances.
The following letters indicate the instance family:
- C – Compute optimized
- D – Dense storage
- F – FPGA
- G – Graphics intensive
- Hpc – High-performance computing
- I – Storage optimized
- Inf – AWS Inferentia
- M – General purpose
- Mac – macOS
- P – GPU accelerated
- R – Memory optimized
- T – Burstable performance
- Trn – AWS Trainium
- U – High memory
- VT – Video transcoding
- X – Memory intensive
The following letters indicate the processor family:
- a – AMD processors
- g – AWS Graviton processors
- i – Intel processors
The following letters indicate additional capabilities:
- d – Instance store volumes
- n – Network and EBS optimized
- e – Extra storage or memory
- z – High performance
- flex – Flex instance
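Taken together, these naming rules can be applied mechanically. The following is a minimal sketch in Python that splits an instance type name into the positional components described above; the regular expression and field names are illustrative assumptions, not an official AWS parser.

```python
import re

# Illustrative pattern following the naming positions described above.
# This is a rough sketch, not an official AWS parser; names such as
# u-6tb1.metal (high memory) do not follow the simple pattern and are
# rejected here.
PATTERN = re.compile(
    r"^(?P<family>[a-z]+?)"       # family, e.g. "m", "c", "hpc", "inf"
    r"(?P<generation>\d+)"        # generation, e.g. "7"
    r"(?P<attributes>[a-z-]*)"    # processor family and capabilities, e.g. "gd"
    r"\.(?P<size>[a-z0-9]+)$"     # size after the period, e.g. "xlarge"
)

def parse_instance_type(name: str) -> dict:
    """Split an instance type name like 'm7gd.xlarge' into its parts."""
    match = PATTERN.match(name)
    if match is None:
        raise ValueError(f"unrecognized instance type name: {name}")
    return match.groupdict()

print(parse_instance_type("m7gd.xlarge"))
# {'family': 'm', 'generation': '7', 'attributes': 'gd', 'size': 'xlarge'}
```

For example, `m7gd.xlarge` parses as general purpose family (m), 7th generation, AWS Graviton processors (g) with instance store volumes (d), in the xlarge size.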
At the time of writing, the following current generation instance types were generally available:
- General purpose: M6a, M6g, M6gd, M6i, M6id, M6idn, M6in, M7a, M7g, M7gd, M7i, M7i-flex, T4g
- Compute optimized: C6a, C6g, C6gd, C6gn, C6i, C6id, C6in, C7g, C7gd, C7gn, Hpc6a, Hpc7a, Hpc7g
- Memory optimized: Hpc6id, R6a, R6g, R6gd, R6i, R6id, R6idn, R6in, R7g, R7gd, R7iz, X2gd, X2idn, X2iedn
- Storage optimized: I4g, I4i, Im4gn, Is4gen
- Accelerated computing: G5g, Inf2, P5, Trn1, Trn1n
Amazon EC2 instances that run on Intel processors may include the following features. Not every instance type supports every one of these processor features.
*Intel AES New Instructions (AES-NI)* — The Intel AES-NI encryption instruction set improves upon the original Advanced Encryption Standard (AES) algorithm to provide faster data protection and greater security. All current generation EC2 instances support this processor feature.
*Intel Advanced Vector Extensions (Intel AVX, Intel AVX2, and Intel AVX-512)* — Intel AVX and Intel AVX2 are 256-bit instruction set extensions, and Intel AVX-512 is a 512-bit instruction set extension, designed for floating point (FP) intensive applications. Intel AVX instructions improve performance for applications such as image and audio/video processing, scientific simulations, financial analytics, and 3D modeling and analysis. These features are only available on instances launched with HVM AMIs.
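On a running Linux instance, one way to see which of these instruction set extensions the vCPU exposes is to read the feature flags from /proc/cpuinfo. The following is a minimal sketch, assuming a Linux guest; the flag names (such as aes and avx512f) follow the kernel's conventions, and this is a rough illustration rather than an authoritative capability check.

```python
from pathlib import Path

def cpu_flags() -> set:
    """Return the set of CPU feature flags from /proc/cpuinfo.

    Linux/x86-specific; returns an empty set on platforms where the
    file or the "flags" line is absent. Illustrative only.
    """
    cpuinfo = Path("/proc/cpuinfo")
    if not cpuinfo.exists():
        return set()
    for line in cpuinfo.read_text().splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("AES-NI supported:", "aes" in flags)
print("AVX-512F supported:", "avx512f" in flags)
```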
*Intel Turbo Boost Technology* — Processors with Intel Turbo Boost Technology automatically run cores faster than the base operating frequency.
*Intel Deep Learning Boost (Intel DL Boost)* — Accelerates AI deep learning use cases. The 2nd Gen Intel Xeon Scalable processors extend Intel AVX-512 with a new Vector Neural Network Instruction (VNNI/INT8) that significantly increases deep learning inference performance over previous generation Intel Xeon Scalable processors (with FP32) for image recognition/segmentation, object detection, speech recognition, language translation, recommendation systems, reinforcement learning, and more. VNNI may not be compatible with all Linux distributions.
Note: There is some ambiguity in the naming conventions for 64-bit CPUs. Chip manufacturer Advanced Micro Devices (AMD) introduced the first commercially successful 64-bit architecture based on the Intel x86 instruction set. Consequently, the architecture is widely referred to as AMD64 regardless of the chip manufacturer.
The virtualization type of your instance is determined by the AMI that you use to launch it. Current generation instance types support hardware virtual machine (HVM) AMIs only. Some previous generation instance types support paravirtual (PV) AMIs, and some AWS Regions support PV instances. For more information, see Linux AMI virtualization types.