This week is AWS re:Invent 2020, Amazon’s annual conference where it announces new products, services, and updates. We discuss the most interesting news and what it means for the cloud computing industry.
AWS Proton, IaC for containers and serverless
Managing many containers has always been a challenge for microservices deployments, with separate job definitions for individual endpoints or services. Doing all of this for hundreds or thousands of services, complete with the right continuous integration and continuous deployment pipelines, is difficult for any team.
AWS Proton tries to improve on that by offering a “fully managed application deployment service”.
AWS already has CloudFormation, an IaC solution for all AWS services, but Proton is built specifically for containers and serverless deployments.
Proton is completely free to use, but you will of course pay for all AWS resources your applications run on.
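Proton was announced in preview, so the final CLI may well differ, but deploying a service against a platform team’s template would look roughly like the sketch below. The template name and spec file here are hypothetical, purely for illustration.

```shell
# Sketch only: Proton is in preview, so commands and flags may change.
# A platform team publishes a service template; a developer then deploys
# against it with a small spec file describing their service.
# "fargate-http-service" and "my-api-spec.yaml" are hypothetical names.
aws proton create-service \
    --name my-api \
    --template-name fargate-http-service \
    --template-major-version 1 \
    --spec file://my-api-spec.yaml
```

The appeal is the separation of concerns: the platform team owns the template (infrastructure, CI/CD), while application teams only supply a spec.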
Faster EBS storage
EBS has been upgraded. The older general-purpose gp2 volume type has been succeeded by gp3, which is up to four times faster. These still top out at 16 TB and offer a few milliseconds of latency, but now deliver 4x higher throughput, at up to 1,000 MB/s per volume. Not only that, they are 20% cheaper per GB than existing gp2 volumes.
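A key difference with gp3 is that performance is decoupled from size, so you provision IOPS and throughput explicitly. A minimal sketch (availability zone and volume ID are placeholders):

```shell
# Create a 100 GB gp3 volume with the baseline 3,000 IOPS and 125 MB/s.
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --volume-type gp3 \
    --size 100 \
    --iops 3000 \
    --throughput 125

# Existing gp2 volumes can be migrated in place, without downtime
# (the volume ID below is a placeholder).
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp3
```

With gp2, extra performance often meant over-provisioning capacity; with gp3 you dial each knob independently.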
The high end
io2 volumes now have io2 Block Express, which essentially raises the maximum IOPS you can provision. These will of course be a lot more expensive, but the focus here is maximum performance, not price/performance.
New EC2 instances
EC2 has received a few new instance types. One of the most unusual is EC2 Mac Instances, which is exactly what it sounds like: a Mac mini in the cloud. Its purpose is to make it easy to provision and rent Mac-based environments for developers. There is only one instance type, mac1.metal, which comes with 12 logical cores and 32 GB of RAM.
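Mac instances run on EC2 Dedicated Hosts, so launching one is a two-step process: allocate a host, then launch onto it. A rough sketch (the AMI ID is a placeholder):

```shell
# Mac instances run on Dedicated Hosts: allocate a mac1.metal host first...
aws ec2 allocate-hosts \
    --instance-type mac1.metal \
    --availability-zone us-east-1a \
    --quantity 1

# ...then launch a macOS AMI onto host tenancy (AMI ID is a placeholder).
aws ec2 run-instances \
    --instance-type mac1.metal \
    --image-id ami-0123456789abcdef0 \
    --placement Tenancy=host
```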
C6g, M6g and R6g instances are all based on AWS’s ARM-based Graviton2 processor and support 100 Gbps networking. They are advertised to deliver up to “40% better price performance” than comparable x86 instances, albeit on specific workloads. Regardless, AWS’s custom silicon is showing great promise, and Graviton2 competes well with x86 processors.
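Running on Graviton2 mostly just means picking an arm64 AMI. AWS publishes the latest Amazon Linux 2 arm64 image as a public Systems Manager parameter, so a launch can look like:

```shell
# Look up the latest Amazon Linux 2 arm64 AMI via the public SSM parameter...
AMI=$(aws ssm get-parameters \
    --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-arm64-gp2 \
    --query 'Parameters[0].Value' --output text)

# ...and launch it on a Graviton2-based m6g instance.
aws ec2 run-instances --instance-type m6g.large --image-id "$AMI"
```

For interpreted workloads (Python, Node.js) the switch is often transparent; compiled software needs arm64 builds.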
The new D3 series delivers the highest local storage capacity in the cloud, with faster disk throughput and up to 336 TB of space at an 80% lower cost per TB of storage compared to D2 instances.
R5b is a new variant of AWS’s memory-optimized R5 family with higher EBS bandwidth. Not much else is new, but since block storage is often the bottleneck for write-heavy database workloads, this will be a great upgrade for many people.
G4dn is a GPU instance designed to provide the best price performance for graphics and machine learning workloads. These instances are powered by up to 8 NVIDIA T4 GPUs, 96 vCPUs, 100 Gbps networking, and 1.8 TB of local NVMe-based SSD storage.
Finally, they announced M5zn instances, which are pretty straightforward, except they offer high clock speeds of up to 4.5 GHz and 100 Gbps networking.
Run ECS and EKS on your infrastructure
AWS usually likes to sell you its own computing power, but occasionally it lets customers run its services on their own hardware.
AWS ECS Anywhere and EKS Anywhere do just that, allowing you to run ECS and EKS on your own managed servers. You can launch and configure ECS tasks to run on your hardware, provided your servers run the AWS ECS agent and are configured to connect to your AWS account.
No pricing information so far, but since it runs on your own hardware it’s probably free or at least much cheaper to run.
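Full details weren’t public at announcement, so the following is only an educated sketch of the likely flow, based on how the ECS agent registers today: an AWS Systems Manager activation lets the on-prem server authenticate, the agent is installed on the server, and then tasks are targeted at external capacity. The role and resource names below are hypothetical.

```shell
# Sketch only: ECS Anywhere was announced without full details, so treat
# these steps as an educated guess, not a definitive procedure.

# 1. Create a Systems Manager activation so the on-prem server can
#    authenticate to your AWS account (the role name is hypothetical).
aws ssm create-activation --iam-role ecsAnywhereRole

# 2. On the server itself: install the ECS agent and register it using
#    the activation ID and code from step 1.

# 3. Run tasks against the externally registered capacity
#    (cluster and task definition names are hypothetical).
aws ecs run-task \
    --cluster my-cluster \
    --launch-type EXTERNAL \
    --task-definition my-task
```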
Amazon Aurora updates
AWS has announced two updates to Amazon Aurora, their fully managed MySQL- and PostgreSQL-compatible database-as-a-service.
The first is Babelfish, a new translation layer that provides Microsoft SQL Server compatibility for Aurora PostgreSQL. As a result, Aurora now essentially supports SQL Server applications with a few tweaks, and while migrating may still require some code changes, it won’t be a major rewrite.
The other is Aurora Serverless v2, an upgrade to the existing serverless configuration that should make it easier to run a massive auto-scaling cluster that can handle hundreds of thousands of transactions in a fraction of a second. Rather than doubling capacity every time a workload needs to scale, capacity is adjusted in fine-grained increments. It also supports Multi-AZ deployments, global databases, and read replicas.
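Aurora Serverless v2 launched in preview, so the exact API may change; a plausible sketch of creating a cluster with a fine-grained capacity range, assuming a scaling-configuration parameter along the lines of v1’s (all names below are illustrative):

```shell
# Sketch only: Aurora Serverless v2 is in preview, so parameter names
# may differ at GA. The idea: set a min/max capacity range and let
# Aurora scale within it in fine-grained increments.
aws rds create-db-cluster \
    --db-cluster-identifier my-serverless-cluster \
    --engine aurora-mysql \
    --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=16 \
    --master-username admin \
    --master-user-password 'example-password'
```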