Monday, February 23, 2015

Creating a drop-in replacement for the Rust compiler

Many tools benefit from being a drop-in replacement for a compiler. By this, I mean that any user of the tool can use `mytool` in all the ways they would normally use `rustc` - whether manually compiling a single file or as part of a complex make project or Cargo build, etc. That could be a lot of work; rustc, like most compilers, takes a large number of command line arguments which can affect compilation in complex and interacting ways. Emulating all of this behaviour in your tool is annoying at best, especially if you are making many of the same calls into librustc that the compiler is.

The kind of things I have in mind are tools like rustdoc or a future rustfmt. These want to operate as closely as possible to real compilation, but have totally different outputs (documentation and formatted source code, respectively). Another use case is a customised compiler. Say you want to add a custom code generation phase after macro expansion; then creating a new tool should be easier than forking the compiler (and keeping it up to date as the compiler evolves).

I have gradually been trying to improve the API of librustc to make creating a drop-in tool easier (many others have also helped improve these interfaces over the same time frame). It is now pretty simple to make a tool which is as close to rustc as you want it to be. In this tutorial I'll show how.

Note/warning: everything I talk about in this tutorial is internal API for rustc. It is all extremely unstable and likely to change often and in unpredictable ways. Maintaining a tool which uses these APIs will be non-trivial, although hopefully easier than maintaining one that does similar things without using them.

This tutorial starts with a very high level view of the rustc compilation process and of some of the code that drives compilation. Then I'll describe how that process can be customised. In the final section of the tutorial, I'll go through an example - stupid-stats - which shows how to build a drop-in tool.

Continue reading on GitHub...

Sunday, January 11, 2015

Recent syntactic changes to Rust

The last few weeks I implemented a few syntactic changes in Rust. I wanted to go over those and explain the motivation so it doesn't just seem like churn. None of the designs were mine, but I agree with all of them. (Oh, by the way, did you see we released the 1.0 Alpha?!!!!!!).

Slicing syntax


Slicing syntax has changed from `foo[a..b]` to `&foo[a..b]`. Although that might not look like much of a change, it is actually the deepest one I'll cover here. Under the covers we moved from having a `Slice` trait to using the `Index` trait. So `&foo[a..b]` works exactly the same way as overloaded indexing `foo[a]`, the difference being that slicing is indexing using a range. One advantage of this approach is that ranges are now first-class expressions - you can write `for i in 0..5 { ... }` and get the expected result. The other advantage is that the borrow becomes explicit (the newly required `&`). Since borrowing is important in Rust, it is great to see where things are borrowed, so we are trying to make that as explicit as possible. Previously, the borrow was implicit. The final benefit of this approach is that there is one fewer 'blessed' trait and one fewer kind of expression in the language.
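
For example, here is a quick sketch of the new pieces fitting together (modulo churn on nightly - exact details may shift before 1.0):

fn main() {
    let v = vec![1, 2, 3, 4, 5];
    let s = &v[1..3];            // slicing is an explicit borrow plus indexing with a range
    println!("{}", s.len());     // 2
    for i in 0..5 {              // ranges are first-class expressions
        println!("{}", i);
    }
}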

Fixed length and repeating arrays


The syntax for these changed from `[T, ..n]` and `[expr, ..n]` to `[T; n]` and `[expr; n]`, respectively. This is a pretty minor change, and the motivation was to allow the wider use of range syntax (see above) - the `..n` could ambiguously be a range or part of these expressions. The `..` syntax is somewhat overloaded in Rust already, so I think this is a nice change in that respect. The semicolon seems just as clear to me, involves fewer characters, and there is less confusion with other expressions.
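
Concretely (a minimal sketch):

fn main() {
    let xs: [i32; 4] = [0, 1, 2, 3];   // fixed length array type
    let ys = [0u8; 256];               // repeating expression: 256 zero bytes
    println!("{} {}", xs.len(), ys.len());
}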

Sized bound


Used for dynamically sized types, the `Sized?` bound indicated that a type parameter may or may not be bounded by the `Sized` trait (type parameters have the `Sized` bound by default). Being bound by the `Sized` trait indicates to the compiler that the object has statically known size (as opposed to DST objects). We changed the syntax so that rather than writing `Sized? T` you write `T: ?Sized`. This has the advantage of being more regular - now all bounds (regular or optional) come after the type parameter name. It will also fit with negative bounds when they come about, which will have the syntax `!Foo`.
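
In code, the change looks like this (a trivial sketch; `dump` is a made-up function):

// The old spelling:
//     fn dump<Sized? T>(x: &T) { /* ... */ }
// becomes:
fn dump<T: ?Sized>(x: &T) { /* ... */ }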

`self` in `use` imports


We used to accept `mod` in `use` imports to allow the import of the module itself, e.g., `use foo::bar::{mod, baz};` would import `foo::bar` and `foo::bar::baz`. We changed `mod` to `self`, making the example `use foo::bar::{self, baz};`. This is a really minor change, but I like it - `self`/`Self` has a very consistent meaning as a variable of some kind, whereas `mod` is a keyword; I think that made the original syntax a bit jarring. We also recently introduced scoped enums, which make the pattern of including a module (in this case an enum) and its components more common. Especially in the enum case, I think `self` fits better than `mod` because you are referring to data rather than a module.

`derive` in attributes


The `derive` attribute is used to derive some standard traits (e.g., `Copy`, `Show`) for your data structures. It was previously called `deriving`, and now it is `derive`. This is another minor change, but makes it more consistent with other attributes, and consistency is great!
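
For example (with a made-up `Point` struct):

// Previously #[deriving(Copy, Show)]:
#[derive(Copy, Show)]
struct Point {
    x: i32,
    y: i32,
}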


Thanks to Japaric for doing a whole bunch of work converting our code to the new syntaxes.

Saturday, December 27, 2014

My thoughts on Rust in 2015

Disclaimer: these are my personal thoughts on where I would like to see Rust go in 2015 and what I would like to work on. They are not official policy from the Rust team or anything like that.

The obvious big thing coming up in 2015 is the 1.0 release. All my energy will be pretty much devoted to that in the first quarter. From the compiler's point of view, we are in pretty good shape for the release. I'll be mainly working on associated types (with Niko) - trying to make them as feature complete and as bug free as possible. This is primarily motivated by the standard library; we want to provide everything the standard library needs here. Other things on my work list are a number of syntactic changes (`?Sized`, slicing (this is not purely syntactic), other minor changes as they come up) and coercions (here, there are a few backwards incompatible things, and providing a really solid, ergonomic story around coercions is important). Any other time is likely to be spent on 'polish' things such as improving error messages and refactoring the compiler in minor ways; possibly even some hacking on the libraries if that would be most useful.

We (the Rust community) also need to do some planning for post-1.0 Rust. There are obviously a lot of features people would like post-1.0 and it would be chaos to try to implement them all asap. So, we need to decide what is in scope for the immediate future of Rust. My personal preference is to focus on stability for a while and only add a minimum of new language features. Stability for me means building better, broader libraries, supporting the ecosystem (work on Cargo and crates.io, for example), fixing bugs, and making the compiler friendlier to work with - both for regular users (error messages, build times, etc.) and for tooling.

'A minimum of language features' will probably mean quite a few new things actually, since we postponed a lot of high-priority work because of 1.0. UFCS, custom DST coercions, cross-borrowing coercions, and being able to return a bare trait from a function are high on my wish list. There are a couple of large features which seem debatable for immediate work - higher-kinded types and efficient inheritance. Ideally, we would put both of these off, but the former is in great demand for improving the standard libraries and the latter would really help work on Servo (and also improve the compiler in many ways, depending on the chosen solution). Both of these need a lot of design work as well as being big implementation jobs.

Finally, the biggest piece of Rust making me uncomfortable right now is macros. We will have limited macro support for 1.0. I would like to start looking at macro rules 2.0 and a better system for syntax extensions as soon as possible. The longer we leave this, the more people will use and get used to the existing solution or start using external tools to replace syntax extensions.

My vision for macro rules is basically a tweaked version of today's. Fundamentally, I think hygiene needs to be built into everything from the ground up, including more sophisticated 'type' annotations on macro arguments. Unfortunately, I think this means a lot of work, including rewriting the compiler's name resolution machinery (but then we want to do that anyway).

Syntax extensions/procedural macros have many more open questions. I'm not sure how to make these hygienic and not too painful to write. There is also the question of the interface we make available. Using the compiler's AST and giving extensions access to the entire compiler is pretty awful. There are a number of alternatives: my preference is to separate libsyntax's AST from the compiler's and make the former available as an interface, along with a library of convenience functions. Other alternatives are using source text or token trees as input and output, or relying much more heavily on quasi-quoting.

Looking further ahead, I don't have too many big language features I'm wanting to push forward (efficient inheritance and nested enums/refinement types being the exceptions; perhaps parameterised modules at some point). There are a bunch of smaller ones I'm keen on though (type ascription, explicit lifetimes in the syntax, parameterising with ints to make fixed length arrays more usable (which is related to CTFE), etc.), but they are mostly low priority.

I am keen, however, on pushing forward on making the compiler a better piece of software and on improving support for tooling. I think these are two sides of the same coin really. Some ideas I have, in rough order of importance/priority/ease of implementation:

  • Fix parallel codegen. This is currently broken with what I suspect is a minor bug. There is another bug preventing it from being turned on by default. Given that this halves build times of the compiler for me, I am keen to get it fixed.
  • Land incremental codegen. Stuart Pernsteiner came super-close to finishing this. It should be relatively easy to finish it and land it. It should then have a massive effect on compilation performance.
  • Improve the output of save-analysis to be more useful and general purpose. I hope this can become an API of sorts for many compiler-based tools, especially in the medium term, before we are ready to expose a better and more permanent API.
  • Separate the libsyntax and compiler ASTs. This is primarily in support of the macro changes described above, but I think it is important for allowing long term evolution of the compiler too.
  • Refactor name resolution. Again important for macro reform, but also it is a confusing and buggy part of the compiler.
  • Make the CFG the first class data structure for all stages from borrow checking onwards.
  • Refactor trans to use its own IR, i.e., make it a series of lowering steps: CFG -> IR -> LLVM. This should hopefully make trans easier to understand and extend, and allow for adding our own optimisation passes.
  • Refactor metadata. This is a really ugly part of the compiler. We could make much better use of metadata for tools (there is a lot of overlap with save-analysis output and debuginfo, and it is used by RustDoc), but it is really hard to use at the moment. It is also crucial to have good metadata support for incremental compilation. I hope metadata can become just a straightforward serialisation of some compiler IR, this may mean we need another IR between the compiler's AST and the CFG.
  • Incremental compilation. This is a huge amount of work, but really necessary for a lot of tools and would go a long way to solving our compile time problems. It is very dependent on a better compiler architecture.

I guess that not all of this will get done in 2015, but I can dream...

Side projects


Some things I want to work on in my spare time (and possibly a little bit of Mozilla time, if it is prioritised right): get DXR up and going. Frustratingly, this was working back in June or July, but then DXR underwent a massive refactoring and I had to port across the Rust plugin. That has dragged on because I have been low on both time and motivation. But it is such a useful tool that I am keen to get this working, and it is getting close now. Once it is 'done' there are always ways to make it better (for example, my latest addition is showing everything imported by a glob import).

Syntax extensions on methods. I hacked this up on the plane to Portland last month, but fixing the details has taken a long while. I think it is ready now, but I need to implement something that uses it to make sure that the design is right. This project meant changing a few things with syntax extensions, and I hope to deprecate and remove some of the older, less flexible versions. Plus, there are some other follow up things.

The motivation for syntax extensions on methods is libhoare. I want to be able to put pre- and postconditions on methods. I have this working in some cases, but there is a lot of duplicate code and I think I can do better. Perhaps not though. Perhaps it can also inform some of the design around libraries for syntax extensions.

A while back I thought of adding `scanln`, a complement to `println` using the same infrastructure, but for easy input rather than easy output. I ended up forgetting about this because the RFC for adding it was rejected due to there being no out-of-tree implementation, but my implementation required big changes to the existing `format!` support. I would like to resurrect this project, since the lack of really easy input is the number one thing which puts me off using Rust for a lot of small tasks. I believe it also raises the bar for Rust being taught at universities, etc.

I have some crazy ideas around a tool that would be a hybrid of grep/sed-with-knowledge-of-syntax-and-types, refactoring tools, and a Rustfix/Rustfmt kind of thing. It seems I need this kind of thing a lot - I spend a lot of time on tasks which are marginally more complicated than sed can handle, but much easier than needing a full refactoring tool.

Tuesday, December 23, 2014

rustaceans.org

I was getting frustrated trying to map people's irc nicks to their GitHub usernames (and back again). I assume other people were having the same problem too. It's pretty hard to envisage a good technical solution to this. The best I could come up with was having a community phone book for the Rust community. I had been meaning to experiment a bit with some modern web dev technologies, so I thought this would be a good opportunity.

Some of the technologies I was interested in were the modern, client-side, JS frameworks (Ember, Angular, React, etc.), node.js, and RESTful APIs. I ended up using Ember, node.js, and the GitHub API. I had fun learning about these technologies and learnt a lot, although I don't think I did more than scratch the surface, especially with Ember, which is HUGE.

What made the project a little bit more interesting is that I have absolutely no interest in getting involved with user credentials - there is simply too much that can go wrong, security-wise, and no one wants to remember another username and password. To deal with this, I observed that pretty much everyone in the Rust community already has a GitHub account, so why not let GitHub do the hard work with security and logins, etc. It is possible to use GitHub authentication on your own website, but I thought it would be fun to use pull requests to maintain user data in the phone book, rather than having to develop a UI for adding and editing user data.

The design of rustaceans.org follows from the idea of making it pull request based: there is a repository, hosted on GitHub, which contains a JSON file for each user. Users can add or update their information by sending a pull request. When a PR is submitted, a GitHub hook sends a request to the rustaceans.org backend (the node.js bit). The backend does a little sanity checking (most importantly that the user has only updated their own data), then merges the PR, then updates the backing database with the user's new data (the db can be considered a cache for the user data repository; it can be completely rebuilt from the repo when necessary).

The backend exposes a web service to access the db. This provides two functions as an http API (I would say it is RESTful, but I'm not 100% sure that it is) - search for a user with some string, and get a specific user by GitHub username. These just pull data out of the database and return it as JSON (not quite the same JSON as users submit - the data has been processed a little bit, for example, parsing the 'notes' field as markdown).

The frontend is the actual rustaceans.org webpage, which is just a small Ember app, and is a pretty simple UI wrapper around the backend web service. There is a front page with some info and a search box, and you can use direct links to users, e.g., http://www.rustaceans.org/nick29581.

All the implementation is pretty straightforward, which I think validates the design to some extent. The hardest part was learning the new technologies. While using the site is certainly different from a regular setup, where you would modify your own details on the site itself, it seems to be pretty successful. I've had no complaints, and we have a fair number of rustaceans in the db. Importantly, it has needed very little manual intervention - users presumably understand the procedures, and the automation is working well.

Please take a look! And if you know any of those technologies, have a look at the source code and let me know where I could have done better (also, patches and issues are very welcome). And of course, if you are part of the Rust community, please add yourself!

Monday, December 15, 2014

Notes on training for sport

I like training for sports. I have rock climbed a lot, and done a bit of swimming and kick boxing. I wouldn't say I'm very good at any of those, but I have definitely improved a lot. I think some of what I learned might be interesting, so here it is. None of this is very scientific, especially since the number of participants in the study is one. There are also lots of better sources - articles by coaches and sports scientists, rather than amateurs like me. Still, if you are interested, read on.

* Don't get injured. Injury prevention should be your number one goal when training. That means knowing your limits, warming up, and doing 'pre-hab' exercises (these are exercises that don't directly get you closer to your goals, but reduce the risk of injury by training the antagonistic muscles or improving mobility, etc). The weeks you miss due to injury will affect progress more than any other factor in your training.

* Training must be super-specialised. Training works because the body adapts to the pressures put on it by training. But that adaptation is much more specialised than you might think, especially once you get more advanced in your training. This can lead to some surprises. For example, most climbs are most demanding on the fingers (but see the section on technique, below), which means doing pullups will not help you achieve those kinds of climbs. Likewise, be very precise about where you are training on the power-endurance spectrum: being able to run for many kms will not help you run 100m any faster.

This also goes for training the antagonists. For example, doing pressups or bench presses makes climbers more imbalanced, not less. This is because it is usually the shoulders which cause more trouble than the elbows (those exercises are good for balancing the elbows). Rebalancing the shoulders requires exercises that bring the shoulder blades back and down, such as rowing-type exercises. Balancing the elbow is also interesting - climbing tends to overuse the brachioradialis (used for pull ups with the palms out or curls with the palms down) vs the biceps, so doing curls (palms up) can improve muscle balance in the elbow, even though the biceps are usually thought of as a climbing muscle.

Also, yoga is terrible for climbers, balance-wise.

* Technique is important. You always think your technique is good enough, but it can usually be better. This is obvious for technique-based sports like climbing and kick boxing, but it holds true for basically all exercise, even lifting weights - my biggest gains in the bench press and deadlift came from technique coaching, and that was starting from 'good' technique.

* Stretch when warm. Stretching is not a warm up, and stretching cold is really bad - I got injured this way a few times. This is important for yoga in particular: you really need to go gently until you are warmed up.

* To train effectively you need to repeat the same thing and make it progressively more difficult. Doing one session of an exercise doesn't make any difference; you have to do that session once a week (or more) for six weeks (or more) and keep increasing the resistance or going for longer. However, if you keep doing the same thing for too long, you'll hit a plateau and won't improve. I found this hardest with the things I have most fun doing - I don't want to change them up because I love doing them, but if you don't make changes, you don't keep improving.

* Protein is the king of nutrients, at least as far as training is concerned. I had this vividly illustrated when I turned vegetarian: my performance dropped and I lost muscle mass. I solved that problem with protein shakes, and I think getting plenty of protein is the best thing you can do for your diet when training hard.

There is a meme that too much protein is bad for you in some way, but I don't think that is true, at least as long as you have an otherwise balanced diet (plenty of fibre, vitamins, etc.). Research linking excess protein to kidney damage only indicates that if you already have kidney damage, then excess protein can make it worse. No research (afaik) indicates that excess protein can cause kidney damage in the first place.

I haven't found any other supplements to be anywhere near as worthwhile. BCAAs seem to have a significant but small effect. Some carbs in your drinking water when training also seem to help a little. Creatine made a big difference, but the weight gain (water retention) meant it was not worth it for climbing (except maybe to escape a plateau), and when you stop taking it, the withdrawal is harsh, training-wise.

Wednesday, October 22, 2014

Thoughts on numeric types

Rust has fixed width integer and floating point types (`u8`, `i32`, `f64`, etc.). It also has pointer width types (`int` and `uint` for signed and unsigned integers, respectively). I want to talk a bit about when to use which type, and comment a little on the ongoing debate around the naming and use of these types.

Choosing a type


Hopefully you know whether you want an integer or a floating point number. From here on in I'll assume you want an integer, since they are the more interesting case. Hopefully you know if you need your integer to be signed or not. Then it gets interesting.

All other things being equal, you want to use the smallest integer you can for performance reasons. Programs run as fast or faster on smaller integers (especially if the larger integer has to be emulated in software, e.g., `u64` on a 32 bit machine). Furthermore, smaller integers will give smaller code.

At this point you need to think about overflow. If you pick a type which is too small for your data, then you will get overflow and usually bugs. You very rarely want overflow. Sometimes you do - if you want arithmetic modulo 2^n where n is 8, 16, 32, or 64, you can use the fixed width type and rely on overflow behaviour. Sometimes you might also want signed overflow for some bit twiddling tricks. But usually you don't want overflow.
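
For example, with the current wrap-on-overflow semantics, `u8` arithmetic is arithmetic modulo 2^8 (a small sketch):

fn main() {
    let x: u8 = 250;
    let y = x + 10;        // wraps: (250 + 10) mod 256
    println!("{}", y);     // 4
}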

If your data could grow to any size, you should use a type which will never overflow, such as Rust's `num::bigint::BigInt`. You might be able to do better performance-wise if you can prove that values might only overflow in certain places and/or you can cope with overflow without 'upgrading' to a wider integer type.

If, on the other hand, you choose a fixed width integer, you are asserting that the value will never exceed that size. For example, if you have an ascii character, you know it won't exceed 8 bits, so you can use `u8` (assuming you're not going to do any arithmetic which might cause overflow).

So far, so obvious. But, what are `int` and `uint` for? These types are pointer width, which means they are the same size as a pointer on the system you are compiling for. When using these types, you are asserting that a value will never grow larger than a pointer (taking into account details about the sign, etc.). This is actually quite a rare situation; the usual case is indexing into an array, which is itself quite rare in Rust (since we prefer using an iterator).
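
As a sketch of that rare legitimate case: an index into a slice is bounded by the slice's size, which can never exceed the address space, so `uint` is the right width:

fn sum(v: &[int]) -> int {
    let mut total = 0;
    let mut i = 0u;             // uint - an index can never exceed pointer width
    while i < v.len() {
        total += v[i];
        i += 1;
    }
    total
}

(Though, as noted, an iterator - `for x in v.iter()` - is usually better style.)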

What you should never do is think "this number is an integer, I'll use `int`". You must always consider the maximum size of the integer and thus the width of the type you'll use.

Language design issues


There are a few questions that keep coming up around numeric types - how to name the types? Which type to use as a default? What should `int`/`uint` mean?

It should be clear from the above that there are only very few situations when using `int`/`uint` is the right choice. So, it is a terrible choice for any kind of default. But what is a good choice? Well first of all, there are two meanings for 'default': the 'go to' type to represent an integer when programming (especially in tutorials and documentation), and the default when a numeric literal does not have a suffix and type inference can't infer a type. The first is a matter of recommendation and style, and the second is built-in to the compiler and language.

In general programming, you should use the right width type, as discussed above. For tutorials and documentation, it is often less clear which width is needed. We have had an unfortunate tendency to reach for `int` here because it is the most familiar and the least scary looking. I think this is wrong. We should probably use a variety of sized types so that newcomers to Rust get acquainted with the fixed width integer types and don't perpetuate the habit of using `int`.

For a long time, Rust had `int` as the default type when the compiler couldn't decide on something better. Then we got rid of the default entirely and made it a type error if no precise numeric type could be inferred. Now we have decided we should have a default again. The problem is that there is no good default. If you aren't being explicit about the width of the value, you are basically waving your hands about overflow and taking a "it'll be fine, whatever" approach, which is bad programming. There is no default choice of integer which is appropriate for that situation (except a growable integer like BigInt, but that is not an appropriate default for a systems language). We could go with `i64` since that is the least worst option we have in terms of overflow (and thus safety). Or we could go with `i32` since that is probably the most performant on current processors, but neither of these options is future-proof. We could use `int`, but this is wrong since it is so rare to be able to reason that you won't overflow when you have an essentially unknown width. Also, on processors with less than 32 bit pointers, it is far too easy to overflow. I suspect there is no good answer. Perhaps the best thing is to pick `i64` because "safety first".

Which brings us to naming. Perhaps `int`/`uint` are not the best names, since they suggest they should be the default types to use when they are not. Names such as `index`/`uindex`, `int_ptr`/`u_ptr`, and `size`/`usize` have been suggested. All of these are better in that they suggest the proper use for these types. They are not such nice names, though, but perhaps that is OK, since we should (mostly) discourage their use. I'm not really sure where I stand on this; again, I don't really like any of the options, and at the end of the day, naming is hard.

Saturday, September 13, 2014

A gotcha with raw pointers and unsafe code

This bit me today. It's not actually a bug and it only happens in unsafe code, but it is non-obvious and something to be aware of.

There are a few components to the issue. First off, we must look at `&expr` where `expr` is an rvalue, that is, a temporary value. Rust allows you to write (for example) `&42` and, through some magic, `42` will be allocated on the stack and you get a reference to it with an inferred lifetime shorter than the value's. For example, `let x = &42i;` works, as does

struct Foo<'a> {
    f: &'a int,
}
fn main() {
    let x = Foo { f: &42 };
}

Next, we must know that borrowed pointers (`&T`) can be implicitly coerced to raw pointers (`*const T`). So if you write `let x: *const int = &42;`, `x` is a raw pointer produced by coercing the borrowed pointer. Once this happens, you have no safety guarantees - a raw pointer can point at memory that has already been freed. This is fine, since you see the raw pointer type and must be aware of it, but if the type comes from a struct field, what looks like a borrowed pointer could actually be a raw pointer:

struct Bar {
    f: *const int,
}
fn main() {
    let x = Bar { f: &42 };
}

Imagine that `Bar` is in some other module or crate; then you might assume that `main` is OK here. But it is not. Since the borrowed pointer has a narrow scope and is not stored (the raw pointer does not count for this analysis), the compiler can choose to delete the `42` allocated on the stack and reuse that memory straight after (or even during, probably) the `let` statement. So `x.f` is potentially a dangling pointer as soon as `x` is available, and accessing it will give you bugs. That is OK in the sense that you can only do so in an `unsafe` block, and thus you (as the programmer) should check that you can't get a dangling pointer. You must do this whenever you dereference a raw pointer, and the fact that it must happen in unsafe code is your cue to do so.
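
To make the failure mode concrete, here is roughly how the bug manifests, continuing the example above (whether you actually observe garbage depends on platform and optimisation level):

fn main() {
    let x = Bar { f: &42 };
    // By this point, the stack slot holding `42` may already have been reused.
    unsafe {
        println!("{}", *x.f);   // the unsafe block is your cue to check this deref
    }
}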

The final part of this gotcha was that I was already in unsafe code and was transmuting. Of course transmuting is awful and you should never do it, but sometimes you have to. If you do `unsafe { transmute(x) }` then there is no cue in the code that you have a raw pointer. You have no cue to check the dereference, because there is no dereference! You just get a weird bug that only appears on some platforms and depends on the optimisation level of compilation.
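
As a hypothetical sketch (`Baz` is made up), the transmuted type can make the raw pointer look entirely safe:

struct Baz {
    f: &'static int,   // looks like a safe borrowed pointer...
}

fn main() {
    let x = Bar { f: &42 };
    let y: Baz = unsafe { std::mem::transmute(x) };
    // No raw pointer and no unsafe block from here on:
    println!("{}", *y.f);   // ...but this may read reused stack memory
}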

Unfortunately, there is nothing we can really do from the language point of view - you just have to be super-careful around unsafe code, and especially transmutes.

Hat-tip to eddyb for figuring out what was going on here.

Wednesday, July 23, 2014

LibHoare - pre- and postconditions in Rust

I wrote a small macro library for writing pre- and postconditions (design-by-contract style) in Rust. It is called LibHoare (named after Hoare logic, which is in turn named after Tony Hoare) and is here (along with installation instructions). It should be easy to use in your Rust programs, especially if you use Cargo. If it isn't, please let me know by filing issues on GitHub.

The syntax is straightforward: you add `#[precond="predicate"]` annotations before a function, where `predicate` is any Rust expression which will evaluate to a bool. You can use any variables which would be in scope where the function is defined, as well as any arguments to the function. Preconditions are checked dynamically before a function is executed, on every call to that function.

You can also write `#[postcond="predicate"]`, which is checked on leaving a function, and `#[invariant="predicate"]`, which is checked both before and after. You can write any combination of annotations too. In postconditions you can use the special variable `result` (soon to be renamed to `return`) to access the value returned by the function.
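
Putting it together, a toy example (my own, not from the LibHoare docs) might look like this:

#[precond="x > 0"]
#[postcond="result > 1"]
fn double_positive(x: int) -> int {
    x * 2
}

fn main() {
    double_positive(4);      // fine: 4 > 0 and the result, 8, is > 1
    // double_positive(0);   // would fail the precondition assertion at runtime
}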

There are also `debug_*` versions of each annotation which are not checked in --ndebug builds.

The biggest limitation at the moment is that you can only write conditions on functions, not methods (even static ones). This is due to a restriction on where any annotation can be placed in the Rust compiler. That should be resolved at some point and then LibHoare should be pretty easy to update.

If you have ideas for improvement, please let me know! Contributions are very welcome.

Implementation

The implementation of these syntax extensions is fairly simple. Where the old function used to be, we create a new function with the same signature and an empty body. Then we declare the old function inside the new function and call it with all the arguments (generating the list of arguments is the only interesting bit here because arguments in Rust can be arbitrary patterns). We then return the result of that function call as the result of the outer function. Preconditions are just an `assert!` inserted before calling the inner function and postconditions are an `assert!` inserted after the function call and before returning.
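
For the toy example above, the expansion would look roughly like this (a hand-written sketch - the real generated code also has to cope with arbitrary argument patterns):

fn double_positive(x: int) -> int {
    assert!(x > 0);                        // precondition
    fn inner(x: int) -> int { x * 2 }      // the original function, moved inside
    let result = inner(x);
    assert!(result > 1);                   // postcondition
    result
}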

Thursday, July 17, 2014

Rust for C++ programmers - part 9: destructuring pt2 - match and borrowing

(Continuing from part 8, destructuring).

(Note this was significantly edited as I was wildly wrong the first time around. Thanks to /u/dpx-infinity for pointing out my mistakes.)

When destructuring, there are some surprises in store where borrowing is concerned. Hopefully there is nothing surprising once you understand borrowed references really well, but it is worth discussing (it took me a while to figure out, that's for sure; longer than I realised, in fact, since I screwed up the first version of this blog post).

Imagine you have some `&Enum` variable `x` (where `Enum` is some enum type). You have two choices: you can match `*x` and list all the variants (`Variant1 => ...`, etc.) or you can match `x` and list reference-to-variant patterns (`&Variant1 => ...`, etc.). (As a matter of style, prefer the first form where possible since there is less syntactic noise). `x` is a borrowed reference and there are strict rules for how a borrowed reference can be dereferenced; these interact with match expressions in surprising ways (at least surprising to me), especially when you are modifying an existing enum in a seemingly innocuous way and then the compiler explodes on a match somewhere.

Before we get into the details of the match expression, let's recap Rust's rules for value passing. In C++, when assigning a value into a variable or passing it to a function there are two choices - pass-by-value and pass-by-reference. The former is the default case and means a value is copied either using a copy constructor or a bitwise copy. If you annotate the destination of the parameter pass or assignment with `&`, then the value is passed by reference - only a pointer to the value is copied and when you operate on the new variable, you are also operating on the old value.

Rust has the pass-by-reference option, although in Rust the source as well as the destination must be annotated with `&`. For pass-by-value in Rust, there are two further choices - copy or move. A copy is the same as C++'s semantics (except that there are no copy constructors in Rust). A move copies the value but destroys the old value - Rust's type system ensures you can no longer access the old value. As examples, `int` has copy semantics and `Box<int>` has move semantics:

fn foo() {
    let x = 7i;
    let y = x;                // x is copied
    println!("x is {}", x);   // OK

    let x = box 7i;
    let y = x;                // x is moved
    //println!("x is {}", x); // error: use of moved value: `x`
}

Rust determines if an object has move or copy semantics by looking for destructors. Destructors probably need a post of their own, but for now, an object in Rust has a destructor if it implements the `Drop` trait. Just like C++, the destructor is executed just before an object is destroyed. If an object has a destructor then it has move semantics. If it does not, then all of its fields are examined and if any of those do then the whole object has move semantics. And so on down the object structure. If no destructors are found anywhere in an object, then it has copy semantics.
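
For example, a minimal sketch of a type which gets move semantics purely because of its destructor:

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) { println!("destroyed"); }
}

fn main() {
    let x = Noisy;
    let y = x;      // Noisy has a destructor, so `x` is moved, not copied
    // using `x` from here on would be a compile-time error
}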

Now, it is important that a borrowed object is not moved, otherwise you would have a reference to the old object which is no longer valid. This is equivalent to holding a reference to an object which has been destroyed after going out of scope - it is a kind of dangling pointer. If you have a pointer to an object, there could be other references to it. So if an object has move semantics and you have a pointer to it, it is unsafe to dereference that pointer. (If the object has copy semantics, dereferencing creates a copy and the old object will still exist, so other references will be fine).

OK, back to match expressions. As I said earlier, if you want to match some `x` with type `&T` you can dereference once in the match clause or match the reference in every arm of the match expression. Example:

enum Enum1 {
    Var1,
    Var2,
    Var3
}

fn foo(x: &Enum1) {
    match *x {  // Option 1: deref here.
        Var1 => {}
        Var2 => {}
        Var3 => {}
    }

    match x {
        // Option 2: 'deref' in every arm.
        &Var1 => {}
        &Var2 => {}
        &Var3 => {}
    }
}

In this case you can take either approach because `Enum1` has copy semantics. Let's take a closer look at each approach: in the first approach we dereference `x` to a temporary variable with type `Enum1` (which copies the value in `x`) and then do a match against the three variants of `Enum1`. This is a 'one level' match because we don't go deep into the value's type. In the second approach there is no dereferencing. We match a value with type `&Enum1` against a reference to each variant. This match goes two levels deep - it matches the type (always a reference) and looks inside the type to match the referred type (which is `Enum1`).

Either way, we (or rather, the compiler) must ensure we respect Rust's invariants around moves and references - we must not move any part of an object if it is referenced. If the value being matched has copy semantics, that is trivial. If it has move semantics, then we must make sure that moves don't happen in any match arm. This is accomplished either by ignoring data which would move, or by taking references to it (so we get by-reference passing rather than by-move).

enum Enum2 {
    // Box has a destructor so Enum2 has move semantics.
    Var1(Box<int>),
    Var2,
    Var3
}

fn foo(x: &Enum2) {
    match *x {
        // We're ignoring nested data, so this is OK
        Var1(..) => {}
        // No change to the other arms.
        Var2 => {}
        Var3 => {}
    }

    match x {
        // We're ignoring nested data, so this is OK
        &Var1(..) => {}
        // No change to the other arms.
        &Var2 => {}
        &Var3 => {}
    }
}

In either approach we don't refer to any of the nested data, so none of it is moved. In the first approach, even though `x` is referenced, we don't touch its innards in the scope of the dereference (i.e., the match expression) so nothing can escape. We also don't bind the whole value (i.e., bind `*x` to a variable), so we can't move the whole object either.

We can take a reference to any variant in the second match, but not in the dereferenced version. So, in the second approach, replacing the second arm with `a @ &Var2 => {}` is OK (`a` is a reference), but under the first approach we couldn't write `a @ Var2 => {}`, since that would mean moving `*x` into `a`. We could write `ref a @ Var2 => {}` (in which `a` is also a reference), although it's not a construct you see very often.

But what about if we want to use the data nested inside `Var1`? We can't write:

    match *x {
        Var1(y) => {}
        _ => {}
    }

or

    match x {
        &Var1(y) => {}
        _ => {}
    }

because in both cases it means moving part of `x` into `y`. We can use the `ref` keyword to get a reference to the data in `Var1`: `&Var1(ref y) => {}`. That is OK, because now we are not dereferencing anywhere and thus not moving any part of `x`. Instead we are creating a pointer which points into the interior of `x`.

Alternatively, we could destructure the Box (this match is going three levels deep): `&Var1(box y) => {}`. This is OK because `int` has copy semantics and `y` is a copy of the `int` inside the `Box` inside `Var1` (which is 'inside' a borrowed reference). Since `int` has copy semantics, we don't need to move any part of `x`. We could also create a reference to the int rather than copy it: `&Var1(box ref y) => {}`. Again, this is OK, because we don't do any dereferencing and thus don't need to move any part of `x`. If the contents of the Box had move semantics, then we could not write `&Var1(box y) => {}`; we would be forced to use the reference version. We could also use similar techniques with the first approach to matching, which look the same but without the first `&`. For example, `Var1(box ref y) => {}`.

Now let's get more complex. Let's say you want to match against a pair of reference-to-enum values. Now we can't use the first approach at all:

fn bar(x: &Enum2, y: &Enum2) {
    // Error: x and y are being moved.
    // match (*x, *y) {
    //     (Var2, _) => {}
    //     _ => {}
    // }

    // OK.
    match (x, y) {
        (&Var2, _) => {}
        _ => {}
    }
}

The first approach is illegal because the value being matched is created by dereferencing `x` and `y` and then moving them both into a new tuple object. So in this circumstance, only the second approach works. And of course, you still have to follow the rules above for avoiding moving parts of `x` and `y`.

If you do end up only being able to get a reference to some data and you need the value itself, you have no option except to copy that data. Usually that means using `clone()`. If the data doesn't implement `Clone`, you're going to have to further destructure to make a manual copy, or implement `Clone` yourself.

What if we don't have a reference to a value with move semantics, but the value itself? Now moves are OK, because we know no one else has a reference to the value (the compiler ensures that if they do, we can't use the value). For example,

fn baz(x: Enum2) {
    match x {
        Var1(y) => {}
        _ => {}
    }
}

There are still a few things to be aware of. Firstly, you can only move to one place. In the above example we are moving part of `x` into `y` and we'll forget about the rest. If we wrote `a @ Var1(y) => {}` we would be attempting to move all of `x` into `a` and part of `x` into `y`. That is not allowed; an arm like that is illegal. Making one of `a` or `y` a reference (using `ref a`, etc.) is not an option either; then we'd have the problem described above where we move whilst holding a reference. We can make both `a` and `y` references and then we're OK - neither is moving, so `x` remains intact and we have pointers to the whole and a part of it.

Similarly (and more commonly), if we have a variant with multiple pieces of nested data, we can't take a reference to one datum and move another. For example, if we had a `Var4` declared as `Var4(Box<int>, Box<int>)`, we can have a match arm which references both (`Var4(ref y, ref z) => {}`) or a match arm which moves both (`Var4(y, z) => {}`), but not one which moves one and references the other (`Var4(ref y, z) => {}`). This is because a partial move still destroys the whole object, so the reference would be invalid.

Sunday, July 13, 2014

Rust for C++ programmers - part 8: destructuring

First, an update on progress. You probably noticed that this post took quite a while to come out. Fear not, I have not given up (yet). I have been busy with other things, and there is a section on match and borrowing which I found hard to write and which, it turns out, I didn't understand very well. It is complicated and probably deserves a post of its own, so after all the waiting, the interesting bit is going to need more waiting. Sigh.

I've also been considering the motivation of these posts. I really didn't want to write another tutorial for Rust; I don't think that is a valuable use of my time when there are existing tutorials and a new guide in the works. I do think there is something to be said for targeting tutorials at programmers with different backgrounds. My first motivation for this series of posts was that a lot of energy in the tutorial was expended on things like pointers and the intuition of ownership, which I understood well from C++, and I wanted a tutorial that concentrated on the things I didn't know. That is hopefully where this has been going, but it is a lot of work, and I haven't really got on to the interesting bits. So I would like to change the format a bit to be less like a tutorial and more like articles aimed at programmers who know Rust to some extent, but know C++ a lot better and would like to bring their Rust skills up to their C++ level. I hope that complements the existing tutorials better and is more interesting for readers. I still have some partially written posts in the old style, so they will get mixed in a bit. Let me know what you think of the idea in the comments.

Destructuring


Last time we looked at Rust's data types. Once you have some data structure, you will want to get that data out. For structs, Rust has field access, just like C++. For tuples, tuple structs, and enums you must use destructuring (there are various convenience functions in the library, but they use destructuring internally). Destructuring of data structures doesn't happen in C++, but it might be familiar from languages such as Python or various functional languages. The idea is that just as you can create a data structure by filling out its fields with data from a bunch of local variables, you can fill out a bunch of local variables with data from a data structure. From this simple beginning, destructuring has become one of Rust's most powerful features. To put it another way, destructuring combines pattern matching with assignment into local variables.

Destructuring is done primarily through the let and match statements. The match statement is used when the structure being destructured can have different variants (such as an enum). A let expression pulls the variables out into the current scope, whereas match introduces a new scope. To compare:
fn foo(pair: (int, int)) {
    let (x, y) = pair;
    // we can now use x and y anywhere in foo

    match pair {
        (x, y) => {
            // x and y can only be used in this scope
        }
    }
}

The syntax for patterns (used after `let` and before `=>` in the above example) in both cases is (pretty much) the same. You can also use these patterns in argument position in function declarations:
fn foo((x, y): (int, int)) {
}

(Which is more useful for structs or tuple-structs than tuples).
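
For example, with a made-up tuple-struct:

struct Point(int, int);

// The tuple-struct is destructured directly in the argument list.
fn magnitude_squared(Point(x, y): Point) -> int {
    x * x + y * y
}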

Most initialisation expressions can appear in a destructuring pattern and they can be arbitrarily complex. That can include references and primitive literals as well as data structures. For example,
struct St {
    f1: int,
    f2: f32
}

enum En {
    Var1,
    Var2,
    Var3(int),
    Var4(int, St, int)
}

fn foo(x: &En) {
    match x {
        &Var1 => println!("first variant"),
        &Var3(5) => println!("third variant with number 5"),
        &Var3(x) => println!("third variant with number {} (not 5)", x),
        &Var4(3, St{ f1: 3, f2: x }, 45) => {
            println!("destructuring an embedded struct, found {} in f2", x)
        }
        &Var4(_, x, _) => {
            println!("Some other Var4 with {} in f1 and {} in f2", x.f1, x.f2)
        }
        _ => println!("other (Var2)")
    }
}

Note how we destructure through a reference by using `&` in the patterns and how we use a mix of literals (`5`, `3`, `St { ... }`), wildcards (`_`), and variables (`x`).

You can use `_` wherever a variable is expected if you want to ignore a single item in a pattern, so we could have used `&Var3(_)` if we didn't care about the integer. In the first `Var4` arm we destructure the embedded struct (a nested pattern) and in the second `Var4` arm we bind the whole struct to a variable. You can also use `..` to stand in for all fields of a tuple or struct. So if you wanted to do something for each enum variant but don't care about the content of the variants, you could write:
fn foo(x: En) {
    match x {
        Var1 => println!("first variant"),
        Var2 => println!("second variant"),
        Var3(..) => println!("third variant"),
        Var4(..) => println!("fourth variant")
    }
}


When destructuring structs, the fields don't need to be in order and you can use `..` to elide the remaining fields. E.g.,
struct Big {
    field1: int,
    field2: int,
    field3: int,
    field4: int,
    field5: int,
    field6: int,
    field7: int,
    field8: int,
    field9: int,
}

fn foo(b: Big) {
    let Big { field6: x, field3: y, ..} = b;
    println!("pulled out {} and {}", x, y);
}

As a shorthand with structs, you can use just the field name, which creates a local variable with that name. The let statement in the above example created two new local variables, `x` and `y`. Alternatively, you could write
fn foo(b: Big) {
    let Big { field6, field3, ..} = b;
    println!("pulled out {} and {}", field3, field6);
}

Now we create local variables with the same names as the fields, in this case `field3` and `field6`.

There are a few more tricks to Rust's destructuring. Let's say you want a reference to a variable in a pattern. You can't use `&` because that matches a reference, rather than creating one (and thus has the effect of dereferencing the object). For example,
struct Foo {
    field: &'static int
}

fn foo(x: Foo) {
    let Foo { field: &y } = x;
}

Here, `y` has type `int` and is a copy of the field in `x`.

To create a reference to something in a pattern, you use the `ref` keyword. For example,
fn foo(b: Big) {
    let Big { field3: ref x, ref field6, ..} = b;
    println!("pulled out {} and {}", *x, *field6);
}

Here, `x` and `field6` both have type `&int` and are references to the fields in `b`.

One last trick when destructuring is that if you are destructuring a complex object, you might want to name intermediate objects as well as individual fields. Going back to an earlier example, we had the pattern `&Var4(3, St{ f1: 3, f2: x }, 45)`. In that pattern we named one field of the struct, but you might also want to name the whole struct object. You could write `&Var4(3, s, 45)`, which would bind the struct object to `s`, but then you would have to use field access for the fields, or, if you wanted to only match with a specific value in a field, you would have to use a nested match. That is not fun. Rust lets you name parts of a pattern using `@` syntax. For example, `&Var4(3, s @ St{ f1: 3, f2: x }, 45)` lets us name both a field (`x`, for `f2`) and the whole struct (`s`).

That just about covers your options with Rust pattern matching. There are a few features I haven't covered, such as matching vectors, but hopefully you know how to use `match` and `let` and have seen some of the powerful things you can do. Next time I'll cover some of the subtle interactions between match and borrowing which tripped me up a fair bit when learning Rust.