The next release of the .NET platform is around the corner. The new annual cadence of releases makes .NET 6 the first long-term support release of the unified .NET. That makes it a more important event than most as it’s the first one that organizations can really trust to use as the foundation of their development strategy.
Microsoft recently published its first release candidate, one of two that will come with a “go live” license that ensures Microsoft support for production applications. This is always a big step. At this point, what had been used for prototypes and other tests gets used at scale, revealing any remaining bugs. That’s why Microsoft makes support available: it lets the .NET team see the edge cases it couldn’t reach in its own tests.
Should you begin to switch to .NET 6? Certainly, if you’re already using .NET 5 you should find migrating relatively easy, with the new release adding plenty of new features. Migrations from the .NET Framework remain harder, though .NET 6 adds more compatibility features. Still, there’s enough of a difference between the two platforms that migrating code will be a significant project. Even so, it’s probably a good time to start any migrations, with support readily available with a long-term release as a target.
You can download the current runtime and installers from Microsoft now, with development tool support in the latest preview builds of Visual Studio 2022 (which should launch alongside .NET 6 at .NET Conf in November). A version of Visual Studio 2022 for macOS with .NET 6 support is currently in private preview.
Taking off the covers of a new .NET compiler
For most of us, the languages we use are our main touchpoint for .NET, and like previous releases, .NET 6 brings new versions of its main tools. However, the most important parts of a new version are under the hood in the tools that take our code and run it on target hardware, from pocketable ARM devices to massive multicore cloud x64 systems.
A lot of the work in .NET 6 has gone into its compilers, optimizing code and using profile-guided optimization internally to produce optimized runtime libraries. It’s perhaps best thought of as a teaser for .NET 7: your code gets the benefit of static profile-guided optimization (PGO) in the runtime libraries it calls, but you can’t yet apply static PGO to your own code. At the same time, there’s an opt-in dynamic PGO tool built into the .NET JIT compiler. It’s a good way of improving the performance of running code, but any optimizations are lost between runs.
Dynamic PGO takes advantage of .NET 6’s support for tiered compilation. This uses a preliminary Tier 0 compilation pass to quickly build unoptimized, instrumented code. Once this is running, the runtime can see which methods are used the most, and those hot methods can then be recompiled in a Tier 1 pass, building on the profile data gathered by the Tier 0 code. The resulting code will have a larger memory footprint, but it’ll also be significantly faster. Microsoft’s documentation gives examples running at more than twice the speed of code that doesn’t use dynamic PGO.
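Because dynamic PGO is opt-in in .NET 6, it has to be switched on before your app launches. A minimal sketch, assuming the documented runtime knob names: set the `DOTNET_TieredPGO=1` environment variable, or add the equivalent property to the app’s `runtimeconfig.json`:

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.Runtime.TieredPGO": true
    }
  }
}
```

Either form tells the JIT to instrument Tier 0 code and feed the collected profile into Tier 1 recompilation; because the profile lives only in the running process, the optimization data disappears when the process exits.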
These features are all part of the new Crossgen2 compiler. It’s a significant stand-alone tool that works with the .NET JIT to deliver code that can run anywhere there’s a supported .NET runtime, allowing your code to be built in one environment and then delivered to another. Crossgen2 produces ready-to-run code, compiling entire assemblies in advance of running them. Although that’s inefficient, it’s a start for future compiler versions. Currently it targets older instruction sets that will still run on newer hardware, even though they’ve been superseded by newer instructions that take better advantage of modern cloud-scale processors. It’s perhaps best to think of Crossgen2 as a first pass at a new way of building and delivering code, one that mixes JIT and ready-to-run compilation, but one where we won’t see the full benefit until .NET 7 ships in 2022.
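If you want to try ready-to-run compilation today, the .NET 6 SDK drives Crossgen2 through the normal publish pipeline. A minimal sketch of the project-file opt-in (the runtime identifier used at publish time is just an example):

```xml
<!-- In the project's .csproj: ask the SDK to precompile assemblies
     ahead of time; the .NET 6 SDK uses Crossgen2 for this step -->
<PropertyGroup>
  <PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>
```

Publishing with something like `dotnet publish -c Release -r linux-x64` then produces ready-to-run images for that target, trading a larger binary for faster startup, with the JIT still available for code paths that benefit from runtime optimization.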
Changing network stacks
One of the more important changes in .NET’s networking is support for HTTP/3 and the QUIC protocol. This will improve support for secure HTTP connections, with Transport Layer Security (TLS) built in and User Datagram Protocol (UDP) as the transport, avoiding the head-of-line blocking that affects TCP-based connections. It also makes it easier for connections to roam between wired, wireless, and cellular networks, as QUIC (Quick UDP Internet Connection) is independent of the underlying connection address. That makes roaming a lot easier: long transactions like downloads can continue even if the underlying connection from device to internet changes.
Support for the underlying QUIC protocol is important for other reasons. The migration from the .NET Framework to .NET Core that began with .NET 5 has left some key .NET components by the wayside, including WCF, the Windows Communication Foundation. The WCF APIs were used to build service-oriented applications in .NET, a model that’s increasingly important with the move to cloud-native applications. Microsoft recommends moving to gRPC as a way of implementing service endpoints, and it’s working on an implementation that’s based on HTTP/3. This approach makes a lot of sense for mobile and edge applications where connections may well switch between Wi-Fi and cellular depending on position and conditions.
It’s not a big step to move from HTTP/2 to HTTP/3 for gRPC, but there should be significant performance improvements, especially for mobile devices, Internet of Things hardware, and other edge implementations. You can experiment with it now if you enable HTTP/3 support in your code and then implement a gRPC interface. Client code should auto-negotiate an HTTP/3 connection if it’s supported in the host OS.
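On the client side, that negotiation looks something like the following C# sketch. HTTP/3 is a preview feature in .NET 6, so it has to be enabled explicitly (here via the `AppContext` switch Microsoft documents for the preview); the URL is a placeholder, and the version policy lets the request fall back to HTTP/2 or HTTP/1.1 where QUIC isn’t available:

```csharp
using System;
using System.Net;
using System.Net.Http;

// HTTP/3 support is a preview feature in .NET 6 and must be switched on.
AppContext.SetSwitch("System.Net.SocketsHttpHandler.Http3Support", true);

using var client = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com/")
{
    // Prefer HTTP/3, but allow negotiation down to HTTP/2 or HTTP/1.1.
    Version = HttpVersion.Version30,
    VersionPolicy = HttpVersionPolicy.RequestVersionOrLower
};

using var response = await client.SendAsync(request);
Console.WriteLine(response.Version); // 3.0 only if both ends support QUIC
```

The same fallback behavior is what a gRPC client built on `HttpClient` relies on: request the newest protocol, accept whatever the server and host OS can actually negotiate.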
While Windows and Linux get .NET 6’s HTTP/3 support, it won’t be there on macOS, as Apple doesn’t yet provide a QUIC API. However, with QUIC becoming increasingly popular, macOS is likely to gain support relatively quickly. In the meantime, any code that uses gRPC over HTTP/3 should be written to respond to standard gRPC calls over HTTP/2 as well.
An open .NET
Another interesting development in .NET 6 is the ability to build a runtime that’s provably open source. Most Linux distributions require their packages to be built entirely from open source tools, which for .NET means a two-stage build process. To enable this, Microsoft can now deliver .NET source code in a source tarball like any other major Linux component. What used to be a manual process that often delayed distributions is now an automated part of the .NET build process, ensuring that builds from Linux distributors like Red Hat stay in sync with Microsoft’s own.
Support for source tarballs is an important sign of Microsoft’s and the .NET Foundation’s commitment to an open .NET. There’s a lot of work in the latest version that comes from outside Microsoft, though Redmond is still by far the biggest contributor. But Microsoft is using its blogs to call out contributors to its libraries and to reference partnerships with companies like Red Hat. It makes sense for Microsoft to bet on an open .NET: it needs to target a huge number of platforms as the computing environment expands and new architectures, like ARM’s new instruction sets, arrive.
Copyright © 2021 IDG Communications, Inc.