On Model and Data Scaling for Skeleton-based Self-Supervised Gait Recognition

arXiv:2504.07598v1 Announce Type: new Abstract: Gait recognition from video streams is a challenging problem in computer vision biometrics due to the subtle differences between gaits and numerous confounding factors. Recent advancements in self-supervised pretraining have led to the development of robust gait recognition models that are invariant to walking covariates. While neural scaling laws have transformed model development in other domains by linking performance to data, model size, and compute, their applicability to gait remains unexplored. In this work, we conduct the first empirical scaling study of skeleton-based self-supervised gait recognition to quantify the effect of data quantity, model size, and compute on downstream gait recognition performance. We pretrain multiple variants of GaitPT, a transformer-based architecture, on a dataset of 2.7 million walking sequences collected in the wild. We evaluate zero-shot performance across four benchmark datasets to derive scaling laws for data, model size, and compute. Our findings demonstrate predictable power-law improvements in performance with increased scale and confirm that data and compute scaling significantly influence downstream accuracy. We further isolate architectural contributions by comparing GaitPT with GaitFormer under controlled compute budgets. These results provide practical insights into resource allocation and performance estimation for real-world gait recognition systems.
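For readers unfamiliar with how such power laws are derived, the sketch below shows the standard procedure: fit a line to (log N, log error) pairs, which recovers the exponent of error = a * N^(-b). The data points here are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np

# Illustrative only: hypothetical (pretraining size, downstream error) pairs.
# The paper's actual measurements are not reproduced here.
n_sequences = np.array([1e5, 3e5, 1e6, 2.7e6])
error_rate = np.array([0.42, 0.35, 0.28, 0.24])

# A power law error = a * N^(-b) is linear in log-log space, so an
# ordinary least-squares line fit recovers the exponent b and prefactor a.
slope, intercept = np.polyfit(np.log(n_sequences), np.log(error_rate), deg=1)
a, b = np.exp(intercept), -slope
print(f"error ~ {a:.2f} * N^(-{b:.3f})")

# Extrapolate to a larger, hypothetical pretraining set.
print("predicted error at N=1e7:", a * 1e7 ** (-b))
```

Extrapolations like the last line are what make scaling laws useful for resource allocation: they estimate the payoff of more data or compute before it is spent.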

CEC-MMR: Cross-Entropy Clustering Approach to Multi-Modal Regression

arXiv:2504.07301v1 Announce Type: new Abstract: In practical applications of regression analysis, it is not uncommon to encounter a multitude of values for each attribute. In such a situation, a unimodal distribution, which is typically Gaussian, is suboptimal because the mean may be situated between modes, resulting in a predicted value that differs significantly from the actual data. Consequently, to address this issue, a mixture distribution with parameters learned by a neural network, known as a Mixture Density Network (MDN), is typically employed. However, this approach has an important inherent limitation: the number of mixture components cannot be determined in advance with a reasonable degree of accuracy. In this paper, we introduce CEC-MMR, a novel approach based on Cross-Entropy Clustering (CEC), which allows for the automatic detection of the number of components in a regression problem. Furthermore, given an attribute and its value, our method can uniquely associate it with the underlying component. The experimental results demonstrate that CEC-MMR yields superior outcomes compared to classical MDNs.
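For context, here is a minimal sketch of the classical MDN baseline that the abstract contrasts against; it does not implement CEC-MMR itself, whose details are not given here. Note how the number of components K must be fixed up front, which is exactly the limitation CEC-MMR targets. All names and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Minimal Mixture Density Network: predicts a K-component Gaussian
    mixture over a scalar target y given input x. K is fixed in advance."""
    def __init__(self, in_dim: int, n_components: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.pi = nn.Linear(hidden, n_components)          # mixture weight logits
        self.mu = nn.Linear(hidden, n_components)          # component means
        self.log_sigma = nn.Linear(hidden, n_components)   # component log-stds

    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    # Negative log-likelihood of y under the predicted mixture.
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(y.unsqueeze(-1))              # (batch, K)
    log_mix = torch.log_softmax(pi_logits, dim=-1) + log_prob
    return -torch.logsumexp(log_mix, dim=-1).mean()

# Usage: model = MDN(in_dim=1, n_components=3); loss = mdn_nll(*model(x), y)
```

If the true data has more modes than K, components must be shared across modes; if it has fewer, redundant components waste capacity. Detecting K automatically, as CEC-MMR proposes, removes this tuning burden.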

Unleashing the Power of Quantum Computing: Revolutionizing the Future

In the world of computing, a new era is dawning. Quantum computing, a revolutionary technology that harnesses the principles of quantum mechanics, is poised to transform the way we solve complex problems and revolutionize the future.

Traditional computers, known as classical computers, operate using bits, which represent information as either a 0 or a 1. Quantum computers, on the other hand, utilize quantum bits, or qubits, which can exist in multiple states simultaneously. This unique property, known as superposition, allows a quantum computer to work with an exponentially large space of states at once, enabling dramatic speedups over classical computers for certain classes of problems.
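To make superposition concrete, here is a toy state-vector simulation in plain Python, with no quantum hardware or quantum library involved: applying a Hadamard gate to a qubit that starts in the definite state |0> leaves it in an equal superposition, with a 50/50 chance of reading 0 or 1 when measured.

```python
import numpy as np

# A single qubit is a 2-component complex state vector: |psi> = a|0> + b|1>.
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate maps |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0

# Measurement probabilities are the squared amplitudes (Born rule).
probs = np.abs(psi) ** 2
print(probs)  # -> [0.5 0.5]: both outcomes equally likely until measured
```

A classical simulation like this needs 2^n complex numbers for n qubits, which is precisely why simulating large quantum systems classically becomes infeasible, and why real qubits are interesting.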

The potential applications of quantum computing are vast and far-reaching. One of the most promising areas is cryptography. A sufficiently large quantum computer could break many of the public-key encryption algorithms that currently protect our sensitive data, for example RSA, via Shor's algorithm. However, quantum technology also offers alternatives such as quantum key distribution, whose security rests on the laws of physics rather than on computational hardness, helping to secure our digital communications in the future.

Another area where quantum computing could have a profound impact is in drug discovery and development. The process of discovering new drugs is incredibly complex and time-consuming. With the power of quantum computing, scientists could simulate and analyze the behavior of molecules at a level of detail that is out of reach for classical machines. This could lead to the discovery of new drugs and treatments for diseases that have long eluded traditional methods.

Quantum computing also holds great promise in optimizing complex systems. From traffic management to supply chain logistics, quantum algorithms may help find near-optimal solutions to problems that overwhelm classical heuristics. This could lead to significant improvements in transportation networks, reducing congestion and improving overall efficiency.

Furthermore, quantum computing has the potential to revolutionize the field of artificial intelligence (AI). Machine learning algorithms, which are at the heart of AI, require vast amounts of computational power to train and optimize. Quantum computers may significantly speed up parts of this process, enabling the development of more advanced AI systems that can tackle complex problems with greater accuracy and efficiency.

While the potential of quantum computing is immense, there are still significant challenges to overcome. One of the major obstacles is the issue of quantum decoherence, which refers to the loss of quantum information due to interactions with the environment. Scientists are actively working on developing error correction techniques to mitigate this problem and make quantum computers more reliable.

Another challenge is the scalability of quantum computers. Currently, quantum computers are limited in terms of the number of qubits they can reliably operate with. Scaling up the number of qubits is crucial to unlock the full potential of quantum computing and tackle even more complex problems.

Despite these challenges, the progress in quantum computing has been remarkable. Major technology companies, such as IBM, Google, and Microsoft, are investing heavily in research and development in this field. Governments around the world are also recognizing the potential of quantum computing and are allocating significant resources to support its advancement.

In conclusion, quantum computing has the potential to revolutionize the future in ways we can only begin to imagine. From cryptography to drug discovery, optimization, and artificial intelligence, the power of quantum computing is set to transform industries and solve problems that were once considered unsolvable. While there are challenges to overcome, the progress being made in this field is promising, and we can expect to see quantum computing become an integral part of our lives in the not-so-distant future.

EDIT: Enhancing Vision Transformers by Mitigating Attention Sink through an Encoder-Decoder Architecture

arXiv:2504.06738v1 Announce Type: new Abstract: In this paper, we propose EDIT (Encoder-Decoder Image Transformer), a novel architecture designed to mitigate the attention sink phenomenon observed in Vision Transformer models. Attention sink occurs when an excessive amount of attention is allocated to the [CLS] token, distorting the model's ability to effectively process image patches. To address this, we introduce a layer-aligned encoder-decoder architecture, where the encoder utilizes self-attention to process image patches, while the decoder uses cross-attention to focus on the [CLS] token. Unlike the traditional encoder-decoder framework, where the decoder depends solely on high-level encoder representations, EDIT allows the decoder to extract information starting from low-level features, progressively refining the representation layer by layer. EDIT is naturally interpretable, as demonstrated through sequential attention maps illustrating the refined, layer-by-layer focus on key image features. Experiments on ImageNet-1k and ImageNet-21k, along with transfer learning tasks, show that EDIT achieves consistent performance improvements over DeiT3 models. These results highlight the effectiveness of EDIT's design in addressing attention sink and improving visual feature extraction.
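The core mechanism the abstract describes, a [CLS] token that reads patch features via cross-attention instead of competing with them inside self-attention, can be sketched generically as below. This is an assumption-laden illustration, not the paper's specified layer design: the block name, layer sizes, and residual wiring are all placeholders.

```python
import torch
import torch.nn as nn

class CLSCrossAttentionBlock(nn.Module):
    """Sketch of a decoder block in the spirit of EDIT's description: the
    [CLS] token is the sole query and patch features from the aligned
    encoder layer serve as keys/values, so the class token never sits
    inside patch self-attention (the source of the attention sink)."""
    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, cls_tok, patches):
        # cls_tok: (B, 1, D) query; patches: (B, N, D) from the encoder
        # layer aligned with this decoder layer.
        attn_out, _ = self.attn(self.norm_q(cls_tok),
                                self.norm_kv(patches),
                                self.norm_kv(patches))
        cls_tok = cls_tok + attn_out
        return cls_tok + self.mlp(cls_tok)
```

Stacking one such block per encoder layer would let the [CLS] representation be refined from low-level to high-level features, matching the layer-by-layer refinement the abstract reports.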

Unveiling the Mysteries of the Cosmos: Exploring the Frontiers of Modern Cosmology

Since the dawn of humanity, we have gazed up at the night sky, captivated by the vastness and beauty of the cosmos. Our ancestors wondered about the stars, the planets, and the mysteries that lay beyond our reach. Today, thanks to the advancements in modern cosmology, we are closer than ever to unraveling the secrets of the universe.

Modern cosmology is the scientific study of the origin, evolution, and structure of the universe. It combines observations from various fields such as astronomy, physics, and mathematics to develop theories and models that explain the fundamental workings of the cosmos. Through the use of powerful telescopes, satellites, and advanced computer simulations, scientists have made remarkable progress in understanding the universe’s past, present, and future.

One of the most significant breakthroughs in modern cosmology is the development of the Big Bang theory. This theory holds that the universe originated from a hot and dense state approximately 13.8 billion years ago. It explains the expansion of the universe, the formation of galaxies, and the cosmic microwave background radiation that permeates all of space. The Big Bang theory has revolutionized our understanding of the universe's origins and has provided a solid foundation for further exploration.

Another fascinating aspect of modern cosmology is the study of dark matter and dark energy. These mysterious entities together account for roughly 95 percent of the universe's mass-energy content, yet their nature remains elusive. Dark matter is believed to be an invisible substance that interacts only through gravity, while dark energy is responsible for the accelerated expansion of the universe. Scientists are actively researching these enigmatic phenomena, hoping to shed light on their properties and their role in shaping the cosmos.

Furthermore, modern cosmology has led to the discovery of exoplanets, planets that orbit stars outside our solar system. The identification and characterization of exoplanets have opened up a new realm of possibilities for the existence of life beyond Earth. Scientists are now searching for habitable exoplanets, studying their atmospheres, and investigating the potential for extraterrestrial life. These findings have sparked a renewed sense of wonder and curiosity about our place in the universe.

The exploration of the frontiers of modern cosmology is not without its challenges. The vastness of the universe, the complexity of its phenomena, and the limitations of our technology present obstacles that scientists must overcome. However, through collaboration and the relentless pursuit of knowledge, researchers continue to push the boundaries of our understanding.

In recent years, advancements in technology have allowed scientists to observe the cosmos with unprecedented precision. Telescopes like the Hubble Space Telescope and the James Webb Space Telescope have provided breathtaking images and data that have revolutionized our understanding of the universe. Additionally, the development of supercomputers has enabled scientists to simulate the evolution of the universe, allowing them to test theories and models in a virtual environment.

As we delve deeper into the mysteries of the cosmos, new questions arise. What is the ultimate fate of the universe? Are there other universes beyond our own? How did life originate? These profound inquiries drive scientists to explore and innovate, pushing the boundaries of our knowledge further.

In conclusion, modern cosmology has allowed us to embark on a journey of discovery, unveiling the mysteries of the cosmos. Through the study of the Big Bang, dark matter, dark energy, and exoplanets, we have gained profound insights into the origins and evolution of the universe. While challenges persist, the advancements in technology and the collective efforts of scientists continue to propel us forward. As we explore the frontiers of modern cosmology, we are not only unraveling the secrets of the universe but also expanding our understanding of our place within it.

A Multimedia Analytics Model for the Foundation Model Era

arXiv:2504.06138v1 Announce Type: new Abstract: The rapid advances in Foundation Models and agentic Artificial Intelligence are transforming multimedia analytics by enabling richer, more sophisticated interactions between humans and analytical systems. Existing conceptual models for visual and multimedia analytics, however, do not adequately capture the complexity introduced by these powerful AI paradigms. To bridge this gap, we propose a comprehensive multimedia analytics model specifically designed for the foundation model era. Building upon established frameworks from visual analytics, multimedia analytics, knowledge generation, analytic task definition, mixed-initiative guidance, and human-in-the-loop reinforcement learning, our model emphasizes integrated human-AI teaming based on visual analytics agents from both technical and conceptual perspectives. Central to the model is a seamless, yet explicitly separable, interaction channel between expert users and semi-autonomous analytical processes, ensuring continuous alignment between user intent and AI behavior. The model addresses practical challenges in sensitive domains such as intelligence analysis, investigative journalism, and other fields handling complex, high-stakes data. We illustrate through detailed case studies how our model facilitates deeper understanding and targeted improvement of multimedia analytics solutions. By explicitly capturing how expert users can optimally interact with and guide AI-powered multimedia analytics systems, our conceptual framework sets a clear direction for system design, comparison, and future research.