Discover out-of-this-world objects on our new trail in collaboration with Disney and Pixar’s Elio

For centuries, people have called out to the universe looking for answers—in Disney and Pixar’s all-new feature film Elio, the universe calls back! The cosmic misadventure introduces Elio, a space fanatic with an active imagination and a huge alien obsession. So, when he’s beamed up to the Communiverse, an interplanetary organization with representatives from galaxies far and wide, Elio’s all in for the epic undertaking. Mistakenly identified as Earth’s leader, he must form new bonds with eccentric alien lifeforms, navigate a crisis of intergalactic proportions, and somehow discover who and where he is truly meant to be.  

To celebrate the final weeks of our iconic Exploring Space gallery, which closes on 2 June, and to mark the release of Disney and Pixar’s all-new feature film Elio, we’ve put together an exciting free trail, inspired by the film, which celebrates the power of friendship and imagination.

Elio, a space fan with an active imagination, finds himself on a cosmic misadventure where he must form new bonds with eccentric alien lifeforms © 2025 Disney/Pixar. All Rights Reserved.

Visitors will be able to pick up a trail from the entrance to our Exploring Space gallery and find game-changing space objects across both this iconic gallery and around the rest of the museum. 

In Exploring Space, you can pass beneath suspended rockets, walk around a full-sized replica of Eagle, the lander that took astronauts Armstrong and Aldrin to the Moon in 1969, and discover how we are able to live in space: to breathe, eat, drink and even go to the toilet.

Visitors can also study a suspended model of the Hubble Space Telescope and full-size replicas of the Beagle 2 Mars lander and the Huygens Titan spacecraft, which will remain on display in the gallery until 2 June.

Go on a cosmic adventure in our Exploring Space gallery before it closes on 2 June

Having encountered breathtaking objects across the museum, visitors can shoot for the stars and submit a completed trail for a chance to win a glamping adventure under the skies in a space-inspired geodome, plus a goody bag full of Science Museum gifts.

After inspiring tens of millions of visitors for almost forty years, our Exploring Space gallery partially closed on 15 May and will fully close on 2 June 2025. But there is still plenty of time to visit, so don’t miss your last chance to see this stellar gallery. Channel your inner space explorer and join us for a trail that’s truly out of this world.


The free Exploring Space Trail, inspired by Disney and Pixar’s Elio, will be available from Friday 16 May to Sunday 2 June 2025, starting at the Exploring Space gallery.


Addendum to o3 and o4-mini system card: Codex

Codex is a cloud-based coding agent. Codex is powered by codex-1, a version of OpenAI o3 optimized for software engineering. codex-1 was trained using reinforcement learning on real-world coding tasks in a variety of environments to generate code that closely mirrors human style and PR preferences, adheres precisely to instructions, and iteratively runs tests until passing results are achieved.

Introducing Codex

Introducing Codex: a cloud-based software engineering agent that can work on many tasks in parallel, powered by codex-1. With Codex, developers can simultaneously deploy multiple agents to independently handle coding tasks such as writing features, answering questions about your codebase, fixing bugs, and proposing pull requests for review.

Descriptive Image-Text Matching with Graded Contextual Similarity

arXiv:2505.09997v1 Announce Type: new Abstract: Image-text matching aims to build correspondences between visual and textual data by learning their pairwise similarities. Most existing approaches have adopted sparse binary supervision, indicating whether a pair of images and sentences matches or not. However, such sparse supervision covers a limited subset of image-text relationships, neglecting their inherent many-to-many correspondences; an image can be described in numerous texts at different descriptive levels. Moreover, existing approaches overlook the implicit connections from general to specific descriptions, which form the underlying rationale for the many-to-many relationships between vision and language. In this work, we propose descriptive image-text matching, called DITM, to learn the graded contextual similarity between image and text by exploring the descriptive flexibility of language. We formulate the descriptiveness score of each sentence with cumulative term frequency-inverse document frequency (TF-IDF) to balance the pairwise similarity according to the keywords in the sentence. Our method leverages sentence descriptiveness to learn robust image-text matching in two key ways: (1) to refine the false negative labeling, dynamically relaxing the connectivity between positive and negative pairs, and (2) to build more precise matching, aligning a set of relevant sentences in a generic-to-specific order. By moving beyond rigid binary supervision, DITM enhances the discovery of both optimal matches and potential positive pairs. Extensive experiments on MS-COCO, Flickr30K, and CxC datasets demonstrate the effectiveness of our method in representing complex image-text relationships compared to state-of-the-art approaches. In addition, DITM enhances the hierarchical reasoning ability of the model, supported by the extensive analysis on HierarCaps benchmark.
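To make the descriptiveness idea concrete, here is a minimal sketch of scoring sentences by cumulative TF-IDF, assuming (as the abstract suggests but does not fully specify) that a sentence's score is simply the sum of the TF-IDF weights of its terms, with higher scores indicating more specific descriptions. The captions and scikit-learn vectorizer are illustrative choices, not the authors' implementation.

```python
# Hypothetical sketch: cumulative TF-IDF as a sentence "descriptiveness" score.
# Assumption: the score is the sum of TF-IDF weights over a sentence's terms;
# rarer, more specific terms push the score up.
from sklearn.feature_extraction.text import TfidfVectorizer

captions = [
    "A dog.",
    "A brown dog running.",
    "A brown dog running across a grassy park chasing a red ball.",
]

# Fit TF-IDF over the caption corpus; specific terms receive larger weights.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(captions)

# Cumulative TF-IDF per sentence: sum the weights of all its terms.
descriptiveness = tfidf.sum(axis=1).A1

# Order captions from generic to specific by ascending descriptiveness,
# mirroring the generic-to-specific alignment described in the abstract.
for score, caption in sorted(zip(descriptiveness, captions)):
    print(f"{score:.3f}  {caption}")
```

Under this reading, the score can then be used to relax false-negative labels for generic captions and to order relevant sentences from generic to specific during matching.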

Mission Balance: Generating Under-represented Class Samples using Video Diffusion Models

arXiv:2505.09858v1 Announce Type: new Abstract: Computer-assisted interventions can improve intra-operative guidance, particularly through deep learning methods that harness the spatiotemporal information in surgical videos. However, the severe data imbalance often found in surgical video datasets hinders the development of high-performing models. In this work, we aim to overcome the data imbalance by synthesizing surgical videos. We propose a unique two-stage, text-conditioned diffusion-based method to generate high-fidelity surgical videos for under-represented classes. Our approach conditions the generation process on text prompts and decouples spatial and temporal modeling by utilizing a 2D latent diffusion model to capture spatial content and then integrating temporal attention layers to ensure temporal consistency. Furthermore, we introduce a rejection sampling strategy to select the most suitable synthetic samples, effectively augmenting existing datasets to address class imbalance. We evaluate our method on two downstream tasks-surgical action recognition and intra-operative event prediction-demonstrating that incorporating synthetic videos from our approach substantially enhances model performance. We open-source our implementation at https://gitlab.com/nct_tso_public/surgvgen.
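The rejection-sampling step can be illustrated with a short sketch: generate several candidate videos for an under-represented class, score each with a pretrained classifier, and keep only the most confident ones to augment the real dataset. `generate_video` and `classifier` below are placeholders for the authors' components (their actual code is at the linked GitLab repository), and the tensor shapes are assumptions.

```python
# Hypothetical sketch of rejection sampling over synthetic surgical videos.
# `generate_video` stands in for the two-stage text-conditioned diffusion model,
# and `classifier` for a pretrained downstream model; neither is the authors' API.
import torch

def select_synthetic_videos(generate_video, classifier, prompt, target_class,
                            n_candidates=32, n_keep=8):
    """Keep the n_keep candidates the classifier scores highest for target_class."""
    scored = []
    for _ in range(n_candidates):
        video = generate_video(prompt)               # assumed shape (T, C, H, W)
        with torch.no_grad():
            logits = classifier(video.unsqueeze(0))  # (1, num_classes)
            prob = torch.softmax(logits, dim=-1)[0, target_class].item()
        scored.append((prob, video))
    # Reject low-confidence candidates; the accepted videos are added to the
    # training set for the under-represented class.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [video for _, video in scored[:n_keep]]
```

The design choice here is to spend extra generation compute and then filter, so that only synthetic samples the downstream model already recognizes as the target class are used to rebalance the dataset.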