Microsoft continues to do interesting open source work through its Deis Labs team. Here you’ll find important Kubernetes tools like Helm and CNAB, as well as a set of intriguing WebAssembly projects. Having a group like Deis is important for Microsoft. It allows Azure to experiment with new cloud-native technologies without committing to launch, while providing it with a useful point of contact with standards bodies and open source foundations.
Deis’ work with WebAssembly is particularly interesting. It’s clear that Microsoft is concerned about the limitations of containers as the lowest level of cloud-native application development. Significant overhead makes containers impractical for many edge applications, especially where small devices come into play. Containers are also an issue in larger-scale distributed applications where you want container-style isolation but don’t need to run a complete operating system, for example, when using serverless models outside of traditional serverless infrastructures.
Extending distributed computing with WebAssembly
While WebAssembly is best known as a browser-hosted technology, the WebAssembly System Interface (WASI) allows it to stand alone, supporting technologies like Deis’ Krustlets, which are WASI applications managed by Kubernetes. WASI is also at the heart of one of Deis’ newer experiments, the WebAssembly Gateway Interface (WAGI).
As an aside, I spent much of the mid-1990s writing CGI-based software to run one of the first content-driven ISPs before moving on to build more complex e-commerce systems. My last big CGI project was a set of Perl scripts that ran AOL UK’s web card system. Intended to last only one holiday season, its templated CGI code ran for several years and added plenty of seasonal events to its library. So I have a certain personal interest in seeing the concept’s return!
Deis has taken the idea of CGI’s HTTP handlers and applied it to WASI, building a server implementation on top of the runtime. This gets around some key architectural limitations in WASI, particularly its lack of a networking layer and its underlying single-threaded nature, which make it hard to use WASI as it currently stands as a microservice platform.
The old CGI model offers an answer, providing a way of linking scripts to a server and loading them as needed. They could be written in any language, using environment variables and query parameters to manage state. The result was a flexible tool that took the web beyond its basic content delivery model into a full-fledged application platform. You’d probably look back at the sites and services we built in the 1990s as primitive, but they pushed the envelope of what could be done at the time.
WAGI does much the same with WASI, turning it into a platform that can dynamically load WASM code and providing a framework for calling, loading, and executing modules. HTTP headers are passed to modules as environment variables and query parameters as command line options, with request payloads delivered via stdin and responses written to stdout. Using these simple, well-understood methods to work with WASM simplifies interface design and development, providing the necessary tools for WASM to work as a lightweight server.
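That mapping is easy to sketch. The Rust program below shows the shape of a WAGI module from the module’s point of view; the CGI-style variable name QUERY_STRING is an assumption based on WAGI’s CGI heritage, and the response framing (headers, blank line, body) follows the CGI convention the article describes.

```rust
use std::env;
use std::io::{self, Read};

/// Build a CGI-style response: a content type header, a blank line,
/// then the body. WAGI turns this stdout stream into an HTTP response.
fn respond(query: &str, body: &str) -> String {
    format!(
        "Content-Type: text/plain\n\nquery: {}\nbody bytes: {}\n",
        query,
        body.len()
    )
}

fn main() {
    // Request metadata arrives as environment variables (the CGI-style
    // name QUERY_STRING is an assumption here, not confirmed by the text).
    let query = env::var("QUERY_STRING").unwrap_or_default();

    // The request payload, if any, arrives on stdin.
    let mut body = String::new();
    io::stdin().read_to_string(&mut body).ok();

    // The response goes to stdout; WAGI handles the rest.
    print!("{}", respond(&query, &body));
}
```

Because the module only touches stdin, stdout, and environment variables, the same source compiles unchanged as a native binary for local testing or as wasm32-wasi for deployment.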
Deis has made some decisions that make WAGI different from CGI, with a focus on security and controlling access to and from the host system. The intent here is to prevent malicious code from running, as WAGI modules are distributed as binaries and may come from third parties. Modules don’t get full file-system access; without explicit grants, they can’t make outbound network connections. They can only access environment variables passed into the WASI environment, and they’re unable to call additional executables.
Configuring a WAGI server
Getting started with WAGI is fairly simple. You can download a binary of the server from its GitHub releases page or build your own from source using the Rust toolchain. Currently WAGI is best run on Linux, though Windows builds are available and are being tested by Deis. As WAGI is a Rust application, you also have the option of running it directly from source rather than building a binary.
Once built and installed, you can start to experiment with WAGI. It’s a command line tool, so you need to learn its flags. One useful feature is support for another Deis project, Bindle. This is a way of aggregating all the objects needed to run an application, as an alternative to its modules configuration file. Bindle is a flexible and powerful tool, but like WAGI, it’s very much under development and you may prefer to configure WAGI with a more traditional config file.
You will need to configure the host name and port used, along with the IP address used to listen for requests. Other configuration elements include the directory used to cache binary WASM modules and any common environment variables: for example, secrets needed to access remote services.
Most of your configuration is in WAGI’s modules.toml file. This defines the routes used to access modules, which give users URLs to access your code. You next provide a local file system reference to a module, so WAGI can load it when called. At the same time, the configuration file can define the entry point of a module, letting you call a specific function. This allows one module to contain multiple features, each defined in a separate function. Deis is allowing for expansion here, as there’s a reserved entry for repositories, suggesting that in the future you may be able to dynamically load modules as they’re updated in central repositories. There’s already support for using OCI registries to load modules, if you want to make WAGI the endpoint of a continuous integration and continuous delivery (CI/CD) pipeline.
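Putting those pieces together, a minimal modules.toml might look something like the sketch below. The routes, file paths, entrypoint name, and registry reference are all illustrative, and the exact key names may have changed as WAGI develops, so check the project’s documentation before relying on them.

```toml
# Each [[module]] table maps an HTTP route to a compiled WASM module.
[[module]]
route = "/hello"
module = "/var/wagi/modules/hello.wasm"

# An optional entrypoint calls a named function instead of the default,
# letting one module serve several routes with separate functions.
[[module]]
route = "/goodbye"
module = "/var/wagi/modules/hello.wasm"
entrypoint = "goodbye"

# Modules can also be pulled from an OCI registry, making WAGI a
# possible endpoint for a CI/CD pipeline.
[[module]]
route = "/from-registry"
module = "oci://registry.example.com/wagi/hello:v1"
```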
Modules can even be configured with explicit access to specific directories, allowing them to work directly with server resources. This can provide access to resources needed to generate content, such as images. Any required environment variables are passed via the command line as part of the WAGI call. Environment variables that need to be shared across modules can be stored in an additional configuration file and loaded at run time.
Writing your first WAGI module
Writing modules is as easy as configuring them. You’re not limited to any one language. As long as it compiles to wasm32-wasi it can be used with WAGI. There’s very little complexity, and no need for specialized libraries as everything is managed via standard I/O operations. All you need to do is read stdin and print output to stdout. If you remember how to write CGI applications, you can write WAGI modules. The only real constraint is ensuring that you format outputs so they can be delivered as HTTP responses. That means delivering a content type header and a blank line before adding your content.
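A first module can be as small as a few lines. This Rust sketch does nothing but satisfy the one real constraint described above: a content type header and a blank line before the content.

```rust
/// The only hard requirement on a WAGI module's output: a content type
/// header followed by a blank line, exactly as in CGI; everything after
/// that blank line is the response body.
fn hello() -> String {
    "Content-Type: text/html\n\n<h1>Hello from WAGI</h1>\n".to_string()
}

fn main() {
    print!("{}", hello());
}
```

Built with the wasm32-wasi target the article mentions (for example, `cargo build --target wasm32-wasi`), the resulting .wasm file just needs a route entry in modules.toml to become a working endpoint.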
The result is a familiar, practical way to extend web servers, adding dynamic content where needed and providing a way for edge devices to deliver formatted content. That last point is perhaps the most important. WAGI is a lightweight way of delivering web content, and formatted HTML is an effective way to manage outputs in a way that can be parsed by any client. It’s possible to imagine a WAGI endpoint on an edge device that delivers formatted data in response to a query from a management application or a highly distributed WAGI-based application that uses as little cloud resource as possible, keeping costs to a minimum.
Having a relatively simple way of building and deploying code to edge devices is important. All your device firmware needs to implement is a WAGI server and a WASI runtime; modules can be loaded over any IP connection from an OCI registry and cached on your device, updating as new releases are delivered. Future support for package description technologies like Bindle should simplify the process even further, with a common description of resources and content that can be loaded on start-up.
Copyright © 2021 IDG Communications, Inc.