Welcome back to AWS ComSum Quarterly (AWSCQ).
Another issue brings another great Guest Editor. This time we’re very lucky to have Luciano Mammino.
Luciano is a Senior Architect at a lovely cloud consulting firm called fourTheorem, an AWS Serverless Hero, the original author of Middy.js, a podcaster at awsbites.com, and one of the authors of the book Node.js Design Patterns.
You can catch Luciano at the AWS Community Summit in Manchester later this month. More on that at the end of the issue.
Over to you Luciano.
The case for Serverless Rust on AWS
Hello folks! As someone who has been deeply involved with serverless and someone who is passionate about programming languages, I am really excited to see the recent developments in the Rust/Serverless space and I am eager to tell you more about it!
What is Rust and why is it appealing?
Ok, basics first! What is Rust? Unless you have been living under a rock for the last few years, you have probably heard of the Rust programming language. Rust is a relatively new programming language that is getting a lot of traction in the developer ecosystem. In fact, it has been voted the most-loved language in the Stack Overflow Developer Survey. Not for just one lonely year, but for 7 years in a row! There must be something to it, right?!
In my opinion, what sets Rust apart is that even though it is a low-level language, the community has done a tremendous job at making it accessible and suitable for a large variety of problems. You can certainly write software for embedded devices or an entirely new operating system (if you are into that kind of thing), but you can also write web services, desktop applications, and even web frontends.
On top of that, by virtue of being a relatively new language, Rust stands on the shoulders of giants, taking inspiration from other well-established languages such as C++, OCaml, Haskell, Ruby, and JavaScript. It's fair to say that Rust tries to take the best from these languages while avoiding some of their pitfalls.
Last but not least, Rust innovates in the field of concurrency and memory safety, eliminating entire classes of bugs at the compiler level. Thanks to some clever language-level constructs, the compiler can catch common mistakes in memory management and thread safety before your code ever runs.
My favourite features of the language are the Option and Result types. These types make it very obvious when you might be missing data or when you might have an error. They force you to think hard about edge cases and how you should handle them, which, in my experience, is something that can help reduce defects from the very first iteration of your code.
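To make this concrete, here's a small self-contained sketch (the `find_user` and `parse_port` functions are made up purely for illustration):

```rust
// A lookup that may find nothing returns Option; a parse that may fail returns Result.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("Luciano"),
        _ => None, // no user: the caller *must* handle this case
    }
}

fn parse_port(raw: &str) -> Result<u16, std::num::ParseIntError> {
    raw.parse::<u16>() // failure is part of the signature, not a runtime surprise
}

fn main() {
    // The compiler forces us to handle both branches explicitly:
    // there is no way to "forget" the None or Err case.
    match find_user(42) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => println!("bad port: {e}"),
    }
}
```

Compare this with null references or unchecked exceptions: here the possibility of absence or failure is spelled out in the type signature, so the "what if it's missing?" conversation happens at compile time.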
Why Rust is good for serverless
I hope I got you excited, or at least a bit curious! But hold on, there's more!
One thing I did not mention is that Rust strictly follows a design principle that originated in the C++ world called zero-cost abstractions. This means that language features add no runtime cost: high-level functionality is implemented in such a way that it is as performant as if you had implemented it yourself with low-level programming. This is the reason why Rust does not have a garbage collector, and one of the reasons why it produces fast and memory-efficient programs.
Yes, I said fast and memory-efficient! If you have ever looked into Lambda pricing your excitement must have already reached the next level!
In short, with Lambda, you pay a unit price based on the amount of memory you reserve for the Lambda execution. That unit price gets multiplied by the number of milliseconds needed to complete the execution.
In simple terms, we could say "Lambda execution cost = memory * time".
So if we use a language that can give us memory-efficient and fast binaries, guess what, it might be a good bang for the buck, especially if you have a use case where you are running a Lambda function hundreds or thousands of times per minute…
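As a sketch, assuming a purely illustrative unit price (not an official AWS figure), the "memory * time" model could be expressed like this:

```rust
// Lambda bills GB-seconds: (memory in GB) * (duration in seconds) * unit price.
// The unit price below is a made-up illustrative number, not an AWS quote.
fn invocation_cost(memory_mb: f64, duration_ms: f64, price_per_gb_s: f64) -> f64 {
    (memory_mb / 1024.0) * (duration_ms / 1000.0) * price_per_gb_s
}

fn main() {
    let price = 0.0000166667; // hypothetical $/GB-second

    // A lean, fast function (the kind of profile a Rust binary tends to have)...
    let lean = invocation_cost(128.0, 50.0, price);
    // ...versus a heavier, slower function doing the same work.
    let heavy = invocation_cost(512.0, 200.0, price);

    println!("lean:  ${lean:.10} per invocation");
    println!("heavy: ${heavy:.10} per invocation");
}
```

With both memory and duration smaller, the per-invocation cost shrinks multiplicatively, and at thousands of invocations per minute that difference compounds quickly.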
Maxime David has published a fantastic benchmark that showcases that Rust lambdas generally have the best cold start times, which is another great characteristic, especially if you are writing user-facing lambdas, like backend logic for API Gateway.
But optimizing for cost is a tricky business and there are many dimensions you can use to tackle this problem.
One interesting Lambda trick is that if you give your function more memory, you also get more vCPUs. If you have an inherently CPU-intensive parallel problem, you could take advantage of more vCPUs and then write high-performance multi-threaded code. And, if you are afraid of writing multi-threaded code (as I generally am), well, rest assured that Rust has your back. "Fearless concurrency" is the term that gets often thrown around when you hear people talking about multi-threading and Rust.
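Here's a minimal sketch of that idea using only the standard library: each thread takes ownership of its own chunk of the data, so the compiler can prove there are no data races (the function and numbers are made up for illustration):

```rust
use std::thread;

// Split a CPU-heavy job across threads. Each thread owns its chunk outright,
// so the borrow checker guarantees no two threads touch the same data.
// `workers` must be >= 1.
fn parallel_sum_of_squares(data: Vec<u64>, workers: usize) -> u64 {
    let chunk_size = ((data.len() + workers - 1) / workers).max(1);
    let handles: Vec<_> = data
        .chunks(chunk_size)
        .map(|chunk| {
            let owned: Vec<u64> = chunk.to_vec(); // move ownership into the thread
            thread::spawn(move || owned.iter().map(|x| x * x).sum::<u64>())
        })
        .collect();
    // Join all threads and combine the partial results.
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    let data: Vec<u64> = (1..=1000).collect();
    println!("sum of squares: {}", parallel_sum_of_squares(data, 4));
}
```

If you tried to share mutable state between those threads without proper synchronisation, the code simply would not compile: that's the "fearless" part.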
But now you are probably thinking "If I pay a higher unit price for more memory, how come increasing memory can help me to save money?". Very valid question, my friend! I told you cost saving is mad science, didn't I?
But if you like mad science or simply enjoy playing with numbers and statistics, you will surely love AWS Lambda Power Tuning by Alex Casalboni. This open-source tool allows you to profile your lambda against different memory configurations and it can help you to find the sweet spot between invocation time and cost. It might sound counter-intuitive, but sometimes if you bump the memory up, because this also gives you more vCPUs and network bandwidth, your invocation might run much faster. Even if you pay a higher unit price per millisecond, you can come out saving money this way! And well, even if you don’t care about saving money on your Lambda bill, this might allow you to improve the latency (and therefore the user experience) of your client-facing APIs.
If you are still not convinced that Rust is good for Serverless, I have written an extensive article with additional points that are not related only to cost-saving.
The current state of Serverless Rust on AWS
OK, now what? You have somehow decided that it's time to write your first Lambda function in Rust… where do we even start?
This is actually quite funny, because if we go to the AWS web console and try to create a new Lambda function and then we go to select the runtime…
.NET, Go, Java, Node.js, Python, Ruby… Where the heck is Rust?! Have I been lying to you this whole time?!
I swear, I haven't!
You see, the fact is that in order to make a Rust-based lambda function, we need to use a custom runtime.
By the way, if you have done any Lambda in Go, you might want to know that the Go managed runtime has been deprecated, which means AWS has decided to follow the same custom-runtime strategy for Go too. I have my own theories on why that's the case, but you'll have to ask me in person. If you are attending the next ComSum in Manchester, I'll be there!
So, a custom runtime… wasn't that supposed to be only a thing for those who wanted to make the first page of Hacker News by publishing fun (and probably useless) runtimes like Cobol, Fortran, or BrainF**k?
Apparently not!
Writing a custom runtime is not that hard and it can actually be a fun exercise that will make you understand the inner workings of Lambda much better. But, I agree with what you are thinking right now: "Who has time for that?!"
Probably AWS agrees too, since they provide a wonderful open-source Rust Runtime for AWS Lambda.
The idea of this project is that the runtime and your Lambda handler function code will be compiled together into the same binary. So, you import the runtime as a library into your Rust project, copy-paste some boilerplate code, write your handler, compile, deploy and you are off to the Lambda races!
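Putting that together, a minimal handler might look something like this (a sketch assuming the `lambda_runtime`, `serde`, and `tokio` crates; the `Request` and `Response` shapes are made up for illustration):

```rust
use lambda_runtime::{run, service_fn, Error, LambdaEvent};
use serde::{Deserialize, Serialize};

// Hypothetical event and response shapes, just for illustration.
#[derive(Deserialize)]
struct Request {
    name: String,
}

#[derive(Serialize)]
struct Response {
    message: String,
}

// Your business logic: an async function that receives the deserialized event.
async fn handler(event: LambdaEvent<Request>) -> Result<Response, Error> {
    Ok(Response {
        message: format!("Hello, {}!", event.payload.name),
    })
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // The runtime's event loop is compiled into the same binary as the handler.
    run(service_fn(handler)).await
}
```

Note how `main` wires the handler into the runtime's event loop: there is no separate runtime process, just your one binary polling the Lambda Runtime API.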
Actually, things can be even simpler if you use Cargo Lambda, a Cargo extension that makes it easy to scaffold, simulate, build, and deploy Lambda Functions using the custom Rust runtime. If you haven't heard of Cargo, it is the default package manager and build tool for Rust. Similar to what npm is to Node.js or pip is to Python.
Both the Rust Runtime and Cargo Lambda are mainly maintained by David Calavera. So thanks, David, for all your efforts!
Another thing we have to mention here is the AWS SDK for Rust. It's quite a feature-complete SDK and it supports pretty much everything that the other more established SDKs support. But, there's a big "but" coming…
It's still in developer preview!
… and this is something that might put off some people and organisations. And understandably so.
I haven't used it much myself, but I haven't found any problems with it so far! The docs and the user experience are quite good, and if you have used other SDKs, this one should feel quite familiar.
Let's just hope AWS moves it to a more stable release line soon. I'll keep my fingers crossed for a re:Invent announcement.
How do I write my first Lambda in Rust?
Once you have installed Cargo Lambda, the next thing to do is to run `cargo lambda new` in your terminal. This command will start a guided procedure that will help you to bootstrap all the necessary code for your new Rust-based Lambda function. You can select an HTTP event (e.g. if you want to write a Lambda to integrate with API Gateway) or choose from a large selection of other predefined events (e.g. S3, SQS, Kinesis, etc.).
This process will also pull in all the necessary dependencies, including the AWS Runtime for Rust, so you can then focus on writing the business logic that will go into the Lambda handler function. This makes for a very good developer experience which won't be too different from what you might have seen coming from runtimes like Python or Node.js.
Once you have your handler code written, you can easily test it locally by using the commands `cargo lambda watch` (which spins up a live-reloading dev server that simulates the Lambda environment) and `cargo lambda invoke` (which allows you to send events to your Lambda running in the local simulator).
If you are happy with the results, it's time to ship! You can use the command `cargo lambda deploy` to compile the lambda and publish it in your AWS account. The best part is that you can easily cross-compile for ARM, which is generally a bit faster and cheaper, so why not take advantage of that?
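The whole workflow described above boils down to a handful of commands (the function name and sample payload here are made up for illustration):

```shell
# Scaffold a new function (starts the guided wizard)
cargo lambda new my-rust-fn

# Local dev loop: live-reloading Lambda simulator...
cargo lambda watch

# ...and, in another terminal, send it a test event
cargo lambda invoke my-rust-fn --data-ascii '{"name": "world"}'

# Cross-compile a release build for ARM (Graviton) and ship it
cargo lambda build --release --arm64
cargo lambda deploy
```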
Infrastructure as Code with Rust Lambdas
OK, Cargo Lambda is cool, but what about infrastructure as code? Serious cloud engineers don't just publish Lambda functions in isolation; they are generally part of a stack containing multiple resources (S3 buckets, DynamoDB tables, IAM policies, you name it).
The good news is that Cargo Lambda integrates quite well with AWS SAM (Serverless Application Model), even though the feature is still in beta.
So, you can write IaC for your serverless projects using SAM and just tell it to rely on Cargo Lambda for building and packaging the lambda before the deployment happens. I have used this for most of my Rust Lambda projects and haven't found any issues so far. Another thing that I expect will go out of beta soon!
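For reference, here's roughly what the relevant SAM template fragment looks like (a sketch: the resource name is made up, and the `rust-cargolambda` build method currently requires running `sam build` with the beta features flag):

```yaml
Resources:
  MyRustFunction:
    Type: AWS::Serverless::Function
    Metadata:
      BuildMethod: rust-cargolambda   # delegates the build to Cargo Lambda (beta)
    Properties:
      CodeUri: .
      Handler: bootstrap              # custom runtimes always use "bootstrap"
      Runtime: provided.al2
      Architectures:
        - arm64
```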
If you are into AWS CDK (Cloud Development Kit), there's a dedicated Cargo Lambda integration with it, but I haven't had a chance to try it yet.
Finally, there's also a Serverless Framework Rust plugin that could be quite useful, but it's something else I haven't tried yet.
I recently delivered a talk at the Dublin Rust Meetup showcasing the full process of creating, testing, and deploying a Lambda in Rust. If you are curious, recordings and slides are available. I hope you'll find the talk interesting. Or, if you are sick of my content, I just discovered that Maxime David has done a similar talk at the Rust Linz meetup and I loved it, much better than mine, for sure!
Using Rust outside Lambda
Once you learn Rust, you might find that it's not just a language that is great for performance, memory safety, and writing less-buggy code. Thanks to its modern toolchain and very active ecosystem, it's a language that can also give you and your team great productivity benefits.
So yes, it's something that you might want to use to build other solutions outside the Lambda realm.
For instance, I have built a web server using the Axum web framework and deployed it to production as a Fargate container. It was a relatively simple application, but it was really easy to build it with Axum. I ended up using MySQL as a database and the sqlx library to have queries validated at compile-time (which is both mindblowing and super useful).
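For a flavour of what this looks like, here is a minimal Axum "hello world" sketch (assuming the `axum` and `tokio` crates; the API shown is the axum 0.6-era `Server` style, so check the docs for your version):

```rust
use axum::{routing::get, Router};
use std::net::SocketAddr;

#[tokio::main]
async fn main() {
    // One route, one async handler: that's a complete web service.
    let app = Router::new().route("/", get(|| async { "Hello from Axum!" }));

    let addr = SocketAddr::from(([0, 0, 0, 0], 3000));
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}
```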
If you want to see what the Rust landscape is like when it comes to building web servers, you can check out arewewebyet.org.
When it comes to building Rust-based containers, your mileage may vary, and there are lots of possible approaches. I spent some time figuring out how to make the container as small as possible for fast deployments, and statically linked binaries seem to be a great solution.
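One common pattern is a multi-stage Docker build that compiles a statically linked musl binary and copies it into an empty `scratch` image (a sketch: image tags, paths, and the binary name are illustrative, and depending on your dependencies you may need extra packages such as `musl-tools` in the builder stage):

```dockerfile
# Stage 1: compile a fully static binary against musl libc
FROM rust:1.72 AS builder
RUN rustup target add x86_64-unknown-linux-musl
WORKDIR /app
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl

# Stage 2: ship only the binary, nothing else
FROM scratch
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/my-app /my-app
ENTRYPOINT ["/my-app"]
```

Because the final image contains a single static binary, it can weigh just a few megabytes, which makes pulls and deployments noticeably faster.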
Where can I learn more about Rust?
Now, one thing that is fair to say is that Rust has a bit of a learning curve, so you might want to rely on some good resources to get started with it and build your confidence in the language.
Two years ago I wrote an extensive blog post that lists many free and paid resources that I used to learn Rust. I think it's still quite relevant.
But if you want a shortcut path and you like books, what I would suggest is the following:
Read the official Rust Programming Language book by Carol Nichols and Steve Klabnik. The web version is FREE! It's a great introduction to all the concepts and features of Rust and it contains tons of code examples.
Then pick either Zero to Production in Rust by Luca Palmieri or Rust in Action by Tim McNamara. Both books are masterpieces and they give you a very practical and fun perspective on how to build real projects with Rust.
Closing notes
I hope you enjoyed this post even though I have to admit I might have come across as a Rust fan-boy! I probably am, but I like to pretend that I am not!
Nonetheless, I hope I have inspired you to give this new technology a go and I look forward to hearing your opinions on it!
Feel free to connect with me on any of the links you'll find at linktr.ee/loige, I am generally very friendly and happy to have nerdy tech chats (not necessarily involving Rust)!
Cheers!
PS: Huge thanks to Michael Twomey, Maxime David, Alex Casalboni, and Guilherme Dalla Rosa for kindly reviewing this post.
A huge thank you to Luciano for putting this edition on AWSCQ together.
If you enjoyed this newsletter, then you can catch him speaking live at this September's AWS Community Summit in Manchester (28/09/23).
Agenda and tickets on the button below.
And that’s all folks!
We’ll be back with another issue and Guest editor in the coming weeks.
Before you go, be sure to give our sponsors a click. AWSCQ and our live and digital events are all made possible by their support.