Good writing is about having something interesting and original to say. Generative AI tools might provide technical help, but they are no substitute for your unique perspective.
In an age of expensive experiments and hype around new data-driven methods, researchers understandably want to ensure they are gleaning as much insight from their data as possible. Rachel C. Kurchin argues that there is still plenty to be learned from older approaches without turning to black boxes.
The organizers of Science Fiction and the Future of Detection and Imaging, a series of workshops exploring the role of technology in future societies, share what they learned from these events.
In 2023, a number of experiments on trilayer 2D structures uncovered new exciton states that have an electrically tunable dipole moment and exhibit a quantum many-body phase diagram.
Despite recent breakthroughs in quantum error correction experiments with trapped ions, superconducting circuits and reconfigurable atom arrays, there are still several technological challenges to overcome.
Quantum nanophotonics examines the interaction between emitters and light confined at the nanoscale. This Review highlights the experimental progress in the field, explains new light–matter interaction regimes and emphasizes their potential applications in quantum technologies.
Understanding the W boson as accurately as possible, including knowing its mass, has been a priority in particle physics for decades. This Perspective article gives an overview of the role of the W boson mass in the Standard Model and its extensions and compares techniques for measuring it.
Generative machine learning models seek to approximate and then sample the probability distribution of the data sets on which they are trained. This Perspective article connects these methods to historical studies of information processing and attractor geometry in nonlinear systems.