How much materials science can be done purely at the computer? More than could be done ten years ago, certainly, just as has been true in every decade since the computer was invented. But curiously, it's not exponentially more: the extent of what can be predicted by computational methods hasn't scaled with Moore's law. Partly this is the curse of simple volumetrics: an order-of-magnitude increase in the linear dimensions of a simulated system corresponds to three orders of magnitude more atoms (and for various reasons the computational cost can increase faster still). It's also a question of complexity: many materials properties, for example, involve dynamic non-equilibrium phenomena that unfold over long timescales.

For these and other reasons, designing materials computationally is still a challenging, often overwhelming task. But it is happening (S. Curtarolo et al. Nature Mater. 12, 191–201; 2013) — it has become possible, for example, to speak meaningfully of first-principles metallurgy, in which alloys are designed for structural and electronic applications, generally based on density-functional calculations of the relevant bulk properties.

However, one of the biggest problems for any approach to materials design, whether experimental or computational, is that the range of options is so vast. Even screening ternary alloys presents a dizzying number of candidates, and today's engineering alloys can have ten or more elemental components. The hope is that computation can at least winnow the list of candidates, even if the best of them must still be tested experimentally.

But what does 'best' mean? Designing materials has always been a question of compromise, trading one desirable property against another (and cost must almost always be factored in somewhere). It is within this context that Lejaeghere et al. present a new methodology for selecting the best candidates from computational screening of materials (Phys. Rev. Lett. 111, 075501; 2013). They point out that, although one can often use computation to identify a set of materials that outperforms the rest for a particular set of selection criteria, it is harder to rank these candidates in terms of their optimality. This is what the new procedure accomplishes.


It does so by defining a 'win fraction' for each ordered pair of candidates, which quantifies, summed over all the criteria, how much of the trade-off in design criteria favours one candidate over the other. Each candidate's ranking factor is then the minimum of its win fractions against all the others: the larger this worst-case value, the better the candidate's trade-off compared with the rest of the set. In effect it is a maximin criterion: the best material is the one whose least favourable pairwise comparison is still the strongest.
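To make the logic concrete, here is a minimal Python sketch of a maximin ranking of this kind. The normalization is an assumption made for the sketch (each criterion contributes its scale-free relative difference, so that the two win fractions of any pair sum to one); the paper's exact definition, and any weighting of the criteria, may well differ.

```python
import numpy as np

def win_fraction(a, b, signs):
    """Fraction of the pairwise trade-off favouring candidate a over b.

    a, b  : 1-D arrays of property values, one entry per criterion.
    signs : +1 where larger is better, -1 where smaller is better
            (e.g. -1 for price).

    Assumed normalization: each criterion contributes its scale-free
    relative difference, so win_fraction(a, b) + win_fraction(b, a) == 1.
    """
    delta = signs * (a - b) / np.abs(a + b)   # signed relative advantage of a
    total = np.abs(delta).sum()
    if total == 0.0:                          # identical candidates: a draw
        return 0.5
    return delta[delta > 0].sum() / total

def rank_candidates(props, signs):
    """Maximin ranking: each candidate is scored by its worst win
    fraction against any rival; higher scores mean better trade-offs."""
    n = len(props)
    scores = np.empty(n)
    for i in range(n):
        scores[i] = min(win_fraction(props[i], props[j], signs)
                        for j in range(n) if j != i)
    order = np.argsort(scores)[::-1]          # best candidate first
    return order, scores
```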

Lejaeghere et al. show that their approach produces intuitively sensible results when searching among tungsten, its binary alloys and other pure elements for economical materials with high mass density. They then use the same candidate set to identify materials that optimize hardness (for which the computed bulk modulus serves as a proxy), thermal resistance (cohesive energy) and price. A third, more demanding case seeks a material for nuclear reactors that balances ductility, temperature resistance and price.
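A toy screen shaped like the second of these cases, reusing win_fraction and rank_candidates from the sketch above, might look as follows. The labels and property values are invented placeholders, not the paper's computed data; only the sign conventions (bulk modulus and cohesive energy to be maximized, price minimized) come from the text.

```python
# Reuses win_fraction/rank_candidates from the sketch above.
# Columns: bulk modulus (GPa, proxy for hardness), cohesive energy
# (eV per atom, proxy for thermal resistance) and price ($ per kg).
# All numbers are invented placeholders, not the paper's results.
import numpy as np

labels = ["W", "W-Ta", "Mo"]                  # illustrative labels only
props = np.array([[310.0, 8.9, 35.0],
                  [295.0, 8.6, 55.0],
                  [230.0, 6.8, 40.0]])
signs = np.array([+1, +1, -1])                # maximize, maximize, minimize

order, scores = rank_candidates(props, signs)
for i in order:
    print(f"{labels[i]:>6}  worst-case win fraction = {scores[i]:.3f}")
```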

Including more complex materials formulations, or examining properties that demand more than a scale-independent density-functional calculation of bulk properties, will doubtless bring a steep rise in computational cost. But for certain types of problem, at least, the method can find the best of the bunch.