Volume 5 Issue 10, October 2023

Folding with large-scale protein language models

The cover image shows a protein folded into a stable 3D structure. AlphaFold has revolutionized the ability to predict protein structures. Work in this issue by Fang et al. further improves prediction capability and efficiency by combining a large-scale protein language model, trained on billions of primary structures in a self-supervised way, with the geometric learning capability of AlphaFold2.

See Fang et al.

Image: Baidu Inc. Cover design: Thomas Phillips

Editorial

  • AI-generated media are on the rise and are here to stay. Regulation is urgently needed, but in the meantime creators, users and content distributors need to adopt practices and tools for the responsible generation, sharing and detection of AI-generated content.

    Editorial

Correspondence

News & Views

  • Efficient quantum-control protocols are required to utilize the full power of quantum computers. A new reinforcement learning approach can realize efficient, robust control of quantum many-body states, promising a practical advance in harnessing present-day quantum technologies.

    • Ying Lu
    • Shi-Ju Ran
    News & Views

Research

  • Identifying interventions that can induce a desired effect is challenging owing to the combinatorial number of possible choices in design space. Zhang and colleagues propose an active learning approach with theoretical guarantees to discover optimal interventions in causal models, and demonstrate the framework in the context of genetic perturbation design using single-cell transcriptomic data.

    • Jiaqi Zhang
    • Louis Cammarata
    • Caroline Uhler
    Article
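
As a rough illustration of the active-learning idea in the entry above (and not the authors' method), the sketch below scores a pool of candidate interventions with an upper-confidence-bound-style acquisition function, runs the most promising one, and updates its estimates; the candidate pool, target value and noise model are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each candidate intervention shifts an outcome variable
# by an unknown amount; the goal is to find the intervention whose effect is
# closest to a desired target value.
n_candidates = 50
true_effect = rng.normal(size=n_candidates)   # unknown to the learner
target = 2.0                                  # desired outcome

est_mean = np.zeros(n_candidates)             # running estimate per candidate
n_tried = np.zeros(n_candidates)              # number of times each was run

def acquisition(mean, count, kappa=1.0):
    """UCB-style score: prefer candidates whose estimated effect is close to
    the target, plus an exploration bonus for rarely tried candidates."""
    exploration = kappa / np.sqrt(count + 1.0)
    return -np.abs(mean - target) + exploration

for step in range(100):
    i = int(np.argmax(acquisition(est_mean, n_tried)))   # design the next experiment
    outcome = true_effect[i] + 0.1 * rng.normal()         # run it, observe noisy result
    n_tried[i] += 1
    est_mean[i] += (outcome - est_mean[i]) / n_tried[i]   # update the estimate

best = int(np.argmin(np.abs(est_mean - target)))
print(f"selected intervention {best}, estimated effect {est_mean[best]:.2f}")
```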
  • The recent accessibility of large language models has brought them into contact with a large number of users and, owing to the social nature of language, it is hard to avoid ascribing human characteristics such as intentions to a chatbot. Pataranutaporn and colleagues investigated how framing a bot as helpful or manipulative can influence this perception and the behaviour of the humans who interact with it.

    • Pat Pataranutaporn
    • Ruby Liu
    • Pattie Maes
    Article
  • AlphaFold2 has revolutionized bioinformatics, but its ability to predict protein structures with high accuracy comes at the price of a costly database search for multiple sequence alignments. Fang and colleagues pre-train a large-scale protein language model and use it in conjunction with AlphaFold2 as a fully trainable and efficient model for structure prediction.

    • Xiaomin Fang
    • Fan Wang
    • Le Song
    Article Open Access
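
The self-supervised pre-training mentioned above is commonly implemented as masked-residue prediction over raw amino-acid sequences. The toy sketch below shows that recipe with placeholder sizes and a generic transformer encoder; it is not the architecture or scale of the model described in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 21   # 20 standard amino acids (indices 0-19) plus a MASK token
MASK = 20

class TinyProteinLM(nn.Module):
    """Minimal transformer encoder predicting masked residues."""
    def __init__(self, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))   # per-residue logits

def mask_tokens(tokens, p=0.15):
    """Randomly replace a fraction of residues with the MASK token."""
    select = torch.rand_like(tokens, dtype=torch.float) < p
    masked = tokens.clone()
    masked[select] = MASK
    return masked, select

# Toy batch of random "sequences"; a real run would stream database-scale data.
tokens = torch.randint(0, 20, (8, 128))
model = TinyProteinLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

masked, select = mask_tokens(tokens)
logits = model(masked)
loss = F.cross_entropy(logits[select], tokens[select])   # predict the hidden residues
loss.backward()
optimizer.step()
print(f"masked-residue loss: {loss.item():.3f}")
```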
  • Deep learning can help develop non-invasive technology for decoding speech from brain activity, which could improve the lives of patients with brain injuries. Défossez et al. report a contrastive-learning approach that decodes perceived speech from human participants, using public datasets of non-invasive magnetoencephalography and electroencephalography recordings.

    • Alexandre Défossez
    • Charlotte Caucheteux
    • Jean-Rémi King
    Article Open Access
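
A minimal sketch of the contrastive objective named above: windows of brain recordings are embedded and trained to be most similar to the representation of the speech segment presented at the same time, using an InfoNCE-style loss over a batch. The encoder architecture, tensor shapes and names are placeholder assumptions, not the model from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BrainEncoder(nn.Module):
    """Toy encoder mapping (batch, sensors, time) recordings to unit vectors."""
    def __init__(self, n_sensors=64, d=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_sensors, 128, kernel_size=5, padding=2), nn.GELU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(128, d))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(brain_z, speech_z, temperature=0.1):
    """InfoNCE: the matching (brain, speech) pair on the diagonal should score
    higher than every mismatched pair in the batch."""
    logits = brain_z @ speech_z.t() / temperature
    targets = torch.arange(len(brain_z))
    return F.cross_entropy(logits, targets)

# Toy data: 32 windows of 64-sensor recordings, plus precomputed speech
# embeddings (for example from a frozen pretrained speech model).
brain = torch.randn(32, 64, 200)
speech_z = F.normalize(torch.randn(32, 128), dim=-1)

loss = contrastive_loss(BrainEncoder()(brain), speech_z)
loss.backward()
print(f"contrastive loss: {loss.item():.3f}")
```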
  • Online matching platforms are increasingly used for applications with positive social impact such as matching blood donors with recipients, where matching algorithms need to balance fairness with an efficiency objective. The authors demonstrate, both in computational simulations and using real data from the Facebook Blood Donations tool, that introducing a simple online matching policy can substantially increase the likelihood of donor action.

    • Duncan C. McElfresh
    • Christian Kroer
    • John P. Dickerson
    Article
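
For intuition only, the toy simulation below implements one simple kind of online matching policy: each arriving request is offered to the compatible donor with the highest estimated response probability who has not been contacted recently, so notifications are spread across the donor pool. The numbers, compatibility rule and cooldown are illustrative assumptions, not the policy evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n_donors = 100
response_prob = rng.uniform(0.05, 0.5, n_donors)   # estimated per-donor response rates
blood_type = rng.integers(0, 4, n_donors)          # toy compatibility attribute
last_contacted = np.full(n_donors, -np.inf)        # day each donor was last notified
COOLDOWN = 30                                      # minimum days between notifications

def match(request_type, day):
    """Pick the best compatible, rested donor for a request arriving today."""
    eligible = (blood_type == request_type) & (day - last_contacted >= COOLDOWN)
    if not eligible.any():
        return None
    donor = int(np.argmax(np.where(eligible, response_prob, -np.inf)))
    last_contacted[donor] = day
    return donor

# Simulate a year of arriving requests and count successful donations.
successes = 0
for day in range(365):
    donor = match(request_type=rng.integers(0, 4), day=day)
    if donor is not None and rng.random() < response_prob[donor]:
        successes += 1
print(f"successful donations over a year: {successes}")
```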
  • Despite their efficiency advantages, the performance of photonic neural networks is hampered by the accumulation of inherent systematic errors. Zheng et al. propose a dual backpropagation training approach, which allows the network to adapt to systematic errors, thus outperforming state-of-the-art in situ training approaches.

    • Ziyang Zheng
    • Zhengyang Duan
    • Xing Lin
    Article
  • The immense number of Wikipedia articles makes it challenging for volunteers to ensure that cited sources support the claims they are attached to. Petroni et al. use an information-retrieval model to assist Wikipedia users in improving verifiability.

    • Fabio Petroni
    • Samuel Broscheit
    • Sebastian Riedel
    Article Open Access
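
As a schematic of the retrieval step, the snippet below ranks candidate sources against a cited claim and flags the citation when even the best source scores poorly. TF-IDF similarity stands in for the neural retriever, and the example texts and threshold are made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

claim = "The Amazon rainforest produces roughly 20 percent of the world's oxygen."
candidate_sources = [
    "A study attributes about 20% of terrestrial oxygen production to the Amazon.",
    "The Amazon river has the largest discharge volume of any river in the world.",
    "Deforestation rates in the Amazon increased sharply in recent years.",
]

# Embed the claim and candidates in the same TF-IDF space and rank by similarity.
vectorizer = TfidfVectorizer().fit([claim] + candidate_sources)
scores = cosine_similarity(
    vectorizer.transform([claim]), vectorizer.transform(candidate_sources))[0]

best = scores.argmax()
print(f"best source: #{best} (score {scores[best]:.2f})")
if scores[best] < 0.2:   # arbitrary threshold for illustration
    print("citation flagged for human review")
```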
  • Fine motor skill recovery in hand rehabilitation is challenging because existing rehabilitation gloves offer limited finger-movement sensing and closed-loop control. Sui et al. develop a soft-packaged rehabilitation glove that integrates sensing, actuation, a human–machine interface, power, electronics and a closed-loop algorithm, helping patients recover fine motor skills of the fingers after a stroke in a portable manner.

    • Mengli Sui
    • Yiming Ouyang
    • Shiwu Zhang
    Article
  • With the rapid development of natural language processing (NLP) models in the last decade came the realization that high performance levels on test sets do not imply that a model robustly generalizes to a wide range of scenarios. Hupkes et al. review generalization approaches in the NLP literature and propose a taxonomy based on five axes to analyse such studies: motivation, type of generalization, type of data shift, the source of this data shift, and the locus of the shift within the modelling pipeline.

    • Dieuwke Hupkes
    • Mario Giulianelli
    • Zhijing Jin
    Analysis Open Access
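
One way to picture the five-axis taxonomy is as a record attached to each generalization study. The sketch below encodes the axes named above as fields of a small data structure; the example values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GeneralizationStudy:
    motivation: str            # why generalization is tested, e.g. "practical"
    generalization_type: str   # e.g. "cross-domain", "compositional", "robustness"
    shift_type: str            # nature of the data shift, e.g. "covariate shift"
    shift_source: str          # how the shift arises, e.g. "naturally occurring"
    shift_locus: str           # where in the pipeline it occurs, e.g. "train-test"

example = GeneralizationStudy(
    motivation="practical",
    generalization_type="cross-domain",
    shift_type="covariate shift",
    shift_source="naturally occurring",
    shift_locus="train-test",
)
print(example)
```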