by jsendak | Feb 10, 2025 | Cosmology & Computing

arXiv:2502.04566v1 Announce Type: new Abstract: Real-time vehicle detection is a challenging task in urban traffic surveillance. Increasing urbanization leads to more accidents and traffic congestion at junctions, resulting in delayed travel times. To address these problems, an intelligent system with automatic detection and tracking is essential, but this becomes difficult at road intersections, which require a wide field of view. For this reason, fisheye cameras are widely used for real-time vehicle detection, providing large-area coverage and a 360-degree view at junctions. However, they introduce challenges such as light glare from vehicles and street lights, shadows, non-linear distortion, vehicle scaling issues, and proper localization of small vehicles. To overcome each of these challenges, a modified YOLOv5 object detection scheme is proposed. YOLOv5 is a deep-learning-oriented convolutional neural network (CNN) based object detection method. The proposed scheme for detecting vehicles in fisheye images includes a lightweight day-night CNN classifier so that two different solutions can be applied to day and night detection. Furthermore, challenging instances are upsampled in the dataset for proper localization of vehicles, and the detection model is then ensembled and trained on different combinations of vehicle datasets for better generalization, detection, and accuracy. For testing, a real-world fisheye dataset provided by the Video and Image Processing (VIP) Cup organizer ISSD has been used, which includes images from video clips of different fisheye cameras at junctions in different cities during day and night. Experimental results show that the proposed model outperforms the baseline YOLOv5 model on this dataset by 13.7% mAP@0.5.
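The described pipeline first decides whether a frame was captured in daylight or at night and then applies the detector tuned for that condition. The sketch below illustrates only this routing idea; the classifier architecture, the weight files, and the torch.hub loading calls are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the day/night routing idea, assuming placeholder weights.
import torch
import torch.nn as nn

class DayNightClassifier(nn.Module):
    """Lightweight CNN that predicts whether a fisheye frame is day or night."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # 0 = day, 1 = night

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def detect(frame, classifier, day_detector, night_detector):
    """Route a single (3, H, W) frame to the detector for its lighting condition."""
    with torch.no_grad():
        is_night = classifier(frame.unsqueeze(0)).argmax(1).item() == 1
    detector = night_detector if is_night else day_detector
    return detector(frame.unsqueeze(0))

# Example usage (weight files are hypothetical placeholders):
# day_detector = torch.hub.load("ultralytics/yolov5", "custom", path="day.pt")
# night_detector = torch.hub.load("ultralytics/yolov5", "custom", path="night.pt")
```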
by jsendak | Feb 9, 2025 | Cosmology & Computing

Knowledge distillation is a technique aimed at enhancing the performance of a smaller student network without increasing its parameter size by transferring knowledge from a larger, pre-trained…
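The excerpt above is truncated, so for context the block below sketches the standard knowledge-distillation objective (Hinton et al., 2015): a temperature-softened KL term between teacher and student plus a hard-label cross-entropy term. It is a generic illustration of the technique, not the specific method proposed in the summarized paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    # Soft targets: temperature-scaled teacher probabilities.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # KL term scaled by T^2, as in the standard formulation.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Hard-label cross-entropy on the ground-truth targets.
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1.0 - alpha) * ce
```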
by jsendak | Feb 8, 2025 | Cosmology & Computing
Unveiling the Enigmatic Singularities of Black Holes
Black holes have long captivated the imagination of scientists and the general public alike. These enigmatic cosmic entities possess an immense gravitational pull that not even light can escape. However, it is the singularities within black holes that truly baffle scientists, as they represent a point of infinite density and gravity, where the known laws of physics break down.
To understand the concept of singularities, one must delve into the heart of a black hole. At the center lies a singularity, a point of infinite density and zero volume. According to Albert Einstein’s theory of general relativity, the gravitational force within a black hole becomes so strong that it causes spacetime to curve infinitely. This curvature leads to the formation of a singularity, a point where all matter is crushed into an infinitely small space.
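For the simplest (non-rotating, Schwarzschild) case, this divergence can be stated concretely: the Kretschmann curvature invariant grows without bound as the radial coordinate approaches zero. The standard textbook expression is quoted below for illustration.

```latex
K \;=\; R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}
  \;=\; \frac{48\,G^{2}M^{2}}{c^{4}\,r^{6}}
  \;\longrightarrow\; \infty \quad \text{as } r \to 0 .
```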
The existence of singularities poses a significant challenge to our current understanding of the laws of physics. At the singularity, general relativity and quantum mechanics, the two pillars of modern physics, fail to provide a coherent combined description. This tension between the two theories has been a long-standing problem in physics, and one of its sharpest expressions is the so-called “black hole information paradox.”
According to quantum mechanics, information cannot be destroyed. However, if an object falls into a black hole and is crushed into a singularity, the information it carries would seemingly be lost forever. This contradiction has led scientists to explore various theories and hypotheses to reconcile the paradox.
One proposed resolution of the information paradox is the concept of “firewalls.” In this picture, an object falling into a black hole meets a wall of high-energy radiation at the event horizon and is destroyed there, rather than passing smoothly through on its way to the singularity. This idea challenges the conventional understanding of black holes, as it contradicts the smooth spacetime at the horizon predicted by general relativity.
Another theory that attempts to resolve the information paradox is the concept of “holography.” Holography suggests that the information of an object falling into a black hole is encoded on the surface of the event horizon, rather than being lost within the singularity. This idea is based on the holographic principle, which proposes that the information within a three-dimensional volume can be represented by a two-dimensional surface.
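A concrete expression of this area scaling is the Bekenstein-Hawking entropy, which ties a black hole's information content to the area of its event horizon rather than to the volume behind it (a standard formula, quoted here for context):

```latex
S_{\mathrm{BH}} \;=\; \frac{k_{B}\,c^{3}\,A}{4\,G\,\hbar},
\qquad A = 4\pi r_{s}^{2},
```

where $r_{s}$ is the Schwarzschild radius of the horizon.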
While these theories provide potential explanations for the enigmatic nature of black hole singularities, they are still highly speculative and require further research and experimentation to be confirmed. The study of black holes is a complex and ongoing field of research, with scientists continually pushing the boundaries of our understanding.
In recent years, advancements in observational techniques and the discovery of gravitational waves have provided new avenues for studying black holes. By observing the gravitational waves emitted during the merger of two black holes, scientists can gain insights into the behavior of these cosmic entities and potentially uncover clues about the nature of singularities.
The enigmatic singularities of black holes continue to challenge our understanding of the laws of physics. They represent a frontier of knowledge, where the known theories break down, and new ideas must be explored. As scientists delve deeper into the mysteries of black holes, they hope to unravel the secrets of these cosmic enigmas and gain a deeper understanding of the fundamental nature of the universe.
by jsendak | Feb 7, 2025 | Cosmology & Computing
Unveiling the Enigmatic Nature of Black Hole Singularities
Black holes have long captivated the imagination of scientists and the general public alike. These mysterious cosmic entities, with their immense gravitational pull, have been the subject of numerous scientific studies and have even made their way into popular culture. However, one aspect of black holes that continues to baffle scientists is the enigmatic nature of their singularities.
A singularity is a point in space-time where the gravitational field becomes infinitely strong and the laws of physics as we know them break down. In the case of black holes, the singularity is believed to be located at the center, hidden behind the event horizon, which is the boundary beyond which nothing can escape the black hole’s gravitational pull.
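For a non-rotating black hole of mass M, the event horizon lies at the Schwarzschild radius, a standard result quoted here for concreteness:

```latex
r_{s} \;=\; \frac{2GM}{c^{2}} \;\approx\; 3\,\mathrm{km} \times \frac{M}{M_{\odot}} .
```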
The concept of a singularity arises from Albert Einstein’s theory of general relativity. According to this theory, gravity is a result of the curvature of space-time caused by massive objects. When a sufficiently massive star collapses under its own gravity, it forms a black hole, and at its core a singularity is predicted to form.
However, the nature of these singularities remains a mystery. General relativity fails to describe what happens within a singularity, as it predicts infinite density and curvature. This breakdown of our current understanding of physics has led scientists to seek a more comprehensive theory that can explain the behavior of singularities.
One possible approach to understanding singularities is through the framework of quantum mechanics, which describes the behavior of particles at the smallest scales. Quantum mechanics introduces the concept of uncertainty, where certain properties of particles, such as their position and momentum, cannot be precisely determined simultaneously. Applying quantum mechanics to black hole singularities could potentially provide insights into their nature.
Another avenue of exploration is the study of black hole evaporation. In 1974, physicist Stephen Hawking proposed that black holes are not completely black, but instead emit radiation due to quantum effects near the event horizon. This phenomenon, known as Hawking radiation, suggests that black holes gradually lose mass and energy over time, eventually leading to their complete evaporation.
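Hawking's derivation assigns the radiation a temperature that is inversely proportional to the black hole's mass, so smaller black holes radiate more fiercely and the evaporation time grows steeply with mass (standard results, shown here for illustration):

```latex
T_{\mathrm{H}} \;=\; \frac{\hbar\,c^{3}}{8\pi\,G\,M\,k_{B}},
\qquad t_{\mathrm{evap}} \;\propto\; M^{3} .
```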
Hawking’s theory has sparked much debate and research, as it implies that the information about the matter that fell into a black hole is lost forever. This contradicts the fundamental principle of quantum mechanics, which states that information cannot be destroyed. Resolving this paradox could provide valuable insights into the nature of black hole singularities.
Recent advancements in theoretical physics, such as the development of string theory and the holographic principle, have also shed light on the enigmatic nature of black hole singularities. String theory proposes that all particles are made up of tiny vibrating strings, and it offers a possible framework for reconciling general relativity and quantum mechanics. The holographic principle suggests that the information contained within a black hole is encoded on its surface, rather than within its singularity.
While these theories provide intriguing possibilities, the true nature of black hole singularities remains elusive. The extreme conditions within a singularity make it impossible to directly observe or study them with current technology. However, ongoing research and advancements in theoretical physics continue to push the boundaries of our understanding, offering hope that one day we may unravel the mysteries of these enigmatic cosmic entities.
In conclusion, black hole singularities represent one of the most enigmatic and challenging puzzles in modern physics. The breakdown of our current understanding of physics within these singularities calls for the development of a more comprehensive theory that can reconcile general relativity and quantum mechanics. Through the exploration of quantum effects, black hole evaporation, and advancements in theoretical physics, scientists are slowly unraveling the secrets of these cosmic enigmas. The quest to unveil the nature of black hole singularities is a testament to the relentless pursuit of knowledge and the boundless curiosity of the human mind.
by jsendak | Feb 7, 2025 | Cosmology & Computing

arXiv:2502.03777v1 Announce Type: new Abstract: Mainstream test-time adaptation (TTA) techniques endeavor to mitigate distribution shifts via entropy minimization for multi-class classification, inherently increasing the probability of the most confident class. However, when encountering multi-label instances, the primary challenge stems from the varying number of labels per image, and prioritizing only the highest-probability class inevitably undermines the adaptation of the other positive labels. To address this issue, we investigate TTA in the multi-label scenario (ML-TTA), developing a Bound Entropy Minimization (BEM) objective to simultaneously increase the confidence of multiple top predicted labels. Specifically, to determine the number of labels for each augmented view, we retrieve a paired caption with yielded textual labels for that view. These labels are allocated to both the view and the caption, forming a weak label set and a strong label set of the same size k. Following this, the proposed BEM treats the top-k predicted labels from the view and from the caption as single entities, respectively, learning view and caption prompts concurrently. By binding the top-k predicted labels, BEM overcomes the limitation of vanilla entropy minimization, which exclusively optimizes the most confident class. Across the MSCOCO, VOC, and NUSWIDE multi-label datasets, our ML-TTA framework equipped with BEM exhibits superior performance compared to the latest SOTA methods across various model architectures, prompt initializations, and label scenarios. The code is available at https://github.com/Jinx630/ML-TTA.
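As a rough illustration of the binding idea described in the abstract, the sketch below merges the top-k class probabilities into a single entity before computing the entropy, so that minimizing it raises the joint confidence of all k labels rather than only the single most confident one. The exact objective, the choice of k, and the prompt-learning machinery in the paper may differ; this is an interpretation for intuition only.

```python
import torch

def vanilla_entropy(probs):
    # Standard entropy-minimization objective over all classes: pushes mass
    # toward the single most confident class.
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()

def bound_entropy(probs, k):
    # Merge the top-k class probabilities into one "bound" entry, keep the
    # remaining classes as-is, then compute entropy of the reduced distribution.
    topk_vals, topk_idx = probs.topk(k, dim=-1)
    bound = topk_vals.sum(dim=-1, keepdim=True)      # joint mass of the top-k labels
    rest = probs.scatter(-1, topk_idx, 0.0)          # zero out the top-k slots
    merged = torch.cat([bound, rest], dim=-1)
    return -(merged * merged.clamp_min(1e-12).log()).sum(dim=-1).mean()

# Minimizing bound_entropy increases the confidence of the k top labels jointly,
# rather than only the single most confident one.
```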
by jsendak | Feb 7, 2025 | Cosmology & Computing

arXiv:2502.03738v1 Announce Type: new Abstract: Since the introduction of the Vision Transformer (ViT), patchification has long been regarded as the de facto image tokenization approach for plain visual architectures. By compressing the spatial size of images, this approach effectively shortens the token sequence and reduces the computational cost of ViT-like plain architectures. In this work, we aim to thoroughly examine the information loss caused by this patchification-based compressive encoding paradigm and how it affects visual understanding. We conduct extensive patch-size scaling experiments and observe an intriguing scaling law in patchification: models consistently benefit from decreased patch sizes and attain improved predictive performance, down to the minimum patch size of 1×1, i.e., pixel tokenization. This conclusion is broadly applicable across different vision tasks, various input scales, and diverse architectures such as ViT and the recent Mamba models. Moreover, as a by-product, we discover that with smaller patches, task-specific decoder heads become less critical for dense prediction. In the experiments, we successfully scale the visual sequence to an exceptional length of 50,176 tokens, achieving a competitive test accuracy of 84.6% with a base-sized model on the ImageNet-1k benchmark. We hope this study can provide insights and theoretical foundations for future work on building non-compressive vision models. Code is available at https://github.com/wangf3014/Patch_Scaling.
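One quick way to see where the 50,176-token sequence length comes from is to count the tokens produced by a generic ViT-style patch embedding at different patch sizes on a 224×224 input (224 × 224 = 50,176 for 1×1 patches). The code below is a generic illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

def patchify(images, patch_size, embed_dim=768):
    # Non-overlapping patch embedding implemented as a strided convolution.
    proj = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
    tokens = proj(images)                      # (B, embed_dim, H/p, W/p)
    return tokens.flatten(2).transpose(1, 2)   # (B, num_tokens, embed_dim)

x = torch.randn(1, 3, 224, 224)
for p in (16, 8, 4, 2, 1):
    print(p, patchify(x, p).shape[1])  # 196, 784, 3136, 12544, 50176
```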