Rethinking Web Image Optimization


[This article was first published on The Jumping Rivers Blog, and kindly contributed to R-bloggers].





Adding images to a web page used to be straightforward. You’d add the img tag to the HTML, set the src attribute to the appropriate URL and, hopefully, write some informative alt text. (You might also add some CSS, either inline or via a stylesheet.)

<img src="plot.png" alt="Scatter plot of age vs score. Line of best fit runs through the points, and an outlier can be seen at age 28, score 40." />

It’s slightly more complicated today, with monitor and browser technology changing the requirements, at least if you are using raster images (like JPEGs, PNGs and GIFs) and want things to look good for all your users. High-density screens on smartphones have been popular for a while, but 4K and 5K monitors are also becoming more affordable. To make text easy to read, these are often set to 200% scaling, so that one measured pixel corresponds to 2 real pixels in each dimension. (For smartphones and tablets this scaling can even be 300%, though their true pixel counts are lower than those of 4K and 5K monitors.) A result of all this is that, for images not to look pixelated on these screens, they need twice as many pixels in each direction – that’s four times the number of pixels for a given image display size. So what can we do about this?

Using the srcset Attribute

Fortunately, browsers added the srcset attribute to make it easier for the developer to specify multiple images to use. The browser then picks the “best” option for a given user based on the information given in the srcset attribute and information the browser already has about the device on which the page is being viewed. The simplest way to utilise this attribute is to specify an image that is twice as large in the srcset property alongside a “2x” marker. By convention, we name the larger image the same as the smaller image, but with @2x in the name just before the extension:

<img src="plot.png" srcset_temp="plot@2x.png 2x" alt="Scatter plot of age vs score. Line of best fit runs through the points, and an outlier can be seen at age 28, score 40.">

This tells the browser to serve the base image to users with “regular” screens and the larger image to those with scaled screens. You could also add a “3x” version here if you wanted, though that would require an image with nine times as many pixels as the base image. The actual file size may not be nine times that of the base image, because the compression algorithms scale well, but it will still be considerably bigger.
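For completeness, a “3x” version might look like the following (the @3x file name just extends the same naming convention; it’s illustrative rather than a file from this article):

<img src="plot.png" srcset="plot@2x.png 2x, plot@3x.png 3x" alt="Scatter plot of age vs score. Line of best fit runs through the points, and an outlier can be seen at age 28, score 40.">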

The shortcoming with the above syntax is that it’s not really targeting the right thing. It tells the browser to choose based only on scaling factors and not on the actual rendered image sizes. An image could be set to display at 600 “CSS” pixels on a wide screen, like a desktop monitor, and 300 CSS pixels on a narrower one, like a phone. For a phone with 2 times scaling the 600 pixel image would then look fine, but the browser doesn’t inherently know that the 1200 pixel image is unnecessary. So it will (probably) load the 1200 pixel image, making page-load slower than necessary and potentially gobbling up more of the user’s mobile data than warranted.

The specification for srcset offers an alternative that seems to solve this issue: just directly list the widths of available images by specifying a number and the letter “w”:

<img
 srcset_temp="plot-small.png 300w, plot.png 600w, plot-large.png 1200w"
 alt="Scatter plot of age vs score. Line of best fit runs through the points, and an outlier can be seen at age 28, score 40.">

If the browser knows what size the img element will be rendered at, the sizes of the image options, and the pixel density of the screen, it can pick the best image for the job. The catch is that, at least when the browser sees the img tag for the first time, it won’t know what size it will be rendered at unless we specifically tell it. We can do that using the sizes attribute on the img element. Unfortunately, for responsive layouts this can get very messy and very confusing very quickly.
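As a sketch of what that can look like, the sizes attribute below declares that the image renders at 300 CSS pixels on viewports up to 600 pixels wide and at 600 CSS pixels otherwise (the breakpoint and values are assumptions for illustration, not from the original layout):

<img
 srcset="plot-small.png 300w, plot.png 600w, plot-large.png 1200w"
 sizes="(max-width: 600px) 300px, 600px"
 src="plot.png"
 alt="Scatter plot of age vs score. Line of best fit runs through the points, and an outlier can be seen at age 28, score 40.">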

If you want to get into the nitty-gritty of using srcset with sizes then there is a great article on CSS Tricks that goes into way more detail than we have space for here. Let’s, instead, look at alternative ways of reducing the burden of large images.

Using Vector Graphics

The solution that makes life easy… when it’s applicable. Instead of using a PNG (or JPEG), use an SVG – a scalable vector graphic.

Advantages of SVG

  • Instead of storing data about the colours of millions of pixels, these files store a set of instructions for constructing an image. This is usually the perfect solution for company logos and most common chart types, precisely because a list of instructions can be scaled however you like. No need to serve multiple images. (A minimal example follows this list.)
  • They can be added to the page in a number of ways, including using a simple img tag.
  • With a bit of JavaScript they can be made interactive and they’re easy to animate.
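To make the “instructions, not pixels” point concrete, here is a minimal hand-written SVG (a hypothetical logo, not an asset from this article). The browser redraws the shapes from their geometry, so they stay crisp at any scale:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100" width="100" height="100">
 <!-- one drawing instruction: a circle centred at (50, 50) with radius 40 -->
 <circle cx="50" cy="50" r="40" fill="steelblue"/>
 <text x="50" y="55" text-anchor="middle" font-size="12" fill="white">Logo</text>
</svg>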

Shortcomings of SVG

  • They’re essentially useless for detailed images, like photography.
  • Fonts may not be rendered properly when an SVG is added through the src attribute of an img tag if the font isn’t already on the user’s system. A work-around for this is to open a vector-image editor and find the option for rendering text as paths. While this will likely increase the file size a bit and cause minor imperfections in text rendering, the bigger problem may be that it adds an extra step to the workflow when the SVGs are generated programmatically. (A command-line sketch follows this list.)
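If Inkscape is available, one way to script that conversion is via its command-line interface. This is a sketch based on Inkscape 1.x options, so check the flags against your installed version:

inkscape plot.svg --export-text-to-path --export-plain-svg --export-filename=plot-paths.svg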

Illustrative example

Use the controls below to change between image formats and scaling to see the effect. It should be apparent that when you scale up a PNG or JPEG the image becomes more blurred, while the SVG, for the most part, remains crisp regardless of the scale factor. (You may notice small artefacts with the SVG text when scaled up. These are seen because the characters are rendered using SVG paths rather than fonts, as described in the previous section.)



Litmus dashboard hex logo: a purple hexagon with charts in the background and the words 'Litmus Dashboard' written in the centre.

Using New Image Formats

Given the above, you may think the available image options for the web look something like this:

  • JPEG (with lossy compression) for images with (up to) millions of colours;
  • PNG for images with large consistent blocks of colours (like logos) or images that require transparency;
  • SVG for vector graphics;
  • GIF for your favourite animated meme.

But for images that can’t be easily represented in vector format there are several newer raster formats – JPEG XL, WebP, AVIF and HEIC (a.k.a. HEIF) – that offer better compression (lossy and lossless) than PNG, JPEG and GIF. Of these, only WebP and AVIF have meaningful browser support, but that support is very good: currently 95.4% for WebP and 93.5% for AVIF. In fact, support may be good enough that neither format needs a fallback. However, if you want one, you can use the picture and source elements to cover even more browsers:

<picture>
 <source srcset="/images/home/whale-deep-dive-light-blue.webp 1x, /images/home/whale-deep-dive-light-blue@2x.webp 2x" type="image/webp">
 <img src="/images/home/whale-deep-dive-light-blue.png" alt="Jumping Rivers' cartoon whale with Moon in background">
</picture>

In the above example we use the srcset attribute to provide two different sizes in the WebP format and the img tag to provide a PNG fallback for older browsers (we assume users of older browsers aren’t using modern high-definition screens). The alt text also still needs to be included in the img tag rather than moved into the source or picture tags.

When it comes to choosing between WebP and AVIF, WebP has marginally better browser support, but the consensus is that AVIF offers better compression. This is perhaps not surprising, since it’s a much newer format than WebP, which turns fifteen in 2025. The downside is that we have found support for AVIF in editing tools to be much lower than it is for WebP, though that landscape is always changing. WebP has one other advantage over AVIF: it supports lossy images with transparency, so if you need small image sizes and transparency it’s the only format in town.
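As an illustration of the tooling, both formats’ reference encoders can be scripted. The commands below use commonly documented flags; the quality setting and file names are arbitrary choices for illustration:

cwebp -q 80 catalyst.png -o catalyst.webp # lossy WebP at quality 80, via Google's cwebp encoder
avifenc catalyst.png catalyst.avif # AVIF via libavif's avifenc, default settings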

Both WebP and AVIF support image animation but, as you will see in the next section, there’s another alternative for replacing our old friend the GIF.

The example below shows a 300-pixel-wide image of The Catalyst building in Newcastle, where Jumping Rivers is headquartered. You can choose between viewing a lossless PNG, lossless WebP, lossy JPEG, and a lossy WebP image. The two lossless formats should look the same, but the WebP image is about 20% smaller in file size than the PNG. The lossy images both have “medium” levels of compression so should be of roughly comparable quality, but not identical (since they use different compression algorithms). The lossy WebP image is only about one third the file size of the JPEG!



Photo of The Catalyst building in Newcastle

Using Videos Instead of GIFs

GIFs, particularly animated GIFs, have been a big part of internet culture. However, they are a very old format with large file sizes and a poor colour gamut: they are limited to a maximum of just 256 distinct pixel colours. All modern browsers support video natively through the video element, which offers much better compression and huge colour palettes.

<video src="assets/hex-dissolve.mp4" aria-label="Litmusverse hex sticker animation" autoplay="true" loop="true" muted="true"></video>

The aria-label attribute is used like the alt text of an img element. The other attributes should be fairly self-explanatory: autoplay tells the browser to play the video automatically, loop to start the video again from the beginning when it finishes, and muted not to play any sound. The latter is required because, thankfully, browsers will no longer autoplay videos with sound.
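To produce such a video from an existing animated GIF, a commonly used ffmpeg recipe looks like this (the scale filter rounds the dimensions to even numbers, which the widely compatible yuv420p pixel format requires; the file names are illustrative):

ffmpeg -i hex-dissolve.gif -movflags faststart -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" hex-dissolve.mp4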


Implications and Future Developments of Image Formatting on The Web

The evolving technology of devices and their screens has necessitated a sharper focus on the effectiveness of image formats used on the web. From raster images like JPEGs, PNGs, and GIFs to newer formats, there have been significant developments. However, each format comes with its own long-term considerations and implications.

Exploring the srcset Attribute

To keep up with ever-improving screen resolutions, the srcset attribute was introduced. This attribute allows developers to specify multiple images, from which the browser can choose the most suitable considering the user’s device specifications. This handy feature contributes significantly to ensuring better image quality for all users, regardless of the device they’re using.

The catch with srcset lies in its dependence on image scaling factors rather than actual rendered image sizes. This discrepancy has been addressed with the option of listing the widths of available images directly. However, defining the sizes attribute for responsive layouts can quickly become complex.

Introduction of Vector Graphics

Scalable Vector Graphics (SVG) emerged as an uncomplicated alternative to PNGs and JPEGs. SVGs, in essence, contain a set of instructions for constructing an image instead of storing individual pixel information. As a result, they have significant advantages such as easy scaling, interactivity, and relatively smooth animation.

However, SVGs aren’t a panacea—they remain ill-suited for intricate images like photography and also face challenges with proper rendering of fonts.

Emergence of New Image Formats

Some new raster image formats promise better lossy and lossless compression. Of these, WebP and AVIF hold the most promise, thanks to their extensive browser support. Yet choosing between the two is a trade-off: while WebP enjoys marginally superior browser support, AVIF appears to offer better compression. This underscores the fluid nature of the image format landscape as technology advances.

Substituting GIFs with Videos

The widespread popularity of GIFs, despite their limitations, is nothing short of remarkable. However, the newer alternative – the video element – offers better compression options, extensive color palettes, and improved overall quality. It’s safe to say that videos may soon overshadow GIFs as the de facto choice for animated content on the web.

Actionable Advice for Developers

  1. Consider adopting the srcset attribute to cater to users with varied screen sizes and resolutions for raster images.
  2. Incorporate Vector Graphics (SVG) for logos and charts for better scalability and interactivity.
  3. Stay updated with new image formats like WebP and AVIF, and consider using them based on their compression capabilities and browser support.
  4. Think about replacing GIFs with videos for better color palette and enhanced compression options.

In conclusion, developers should give equal weight to the visual quality and the loading performance of the images they embed on their websites, given the increasing complexity of device technologies.


“Mastering Python Error Handling: 5 Essential Patterns for App Stability”

Stop letting errors crash your app. Master these 5 Python patterns that handle failures like a pro!

Mastering Python Error Handling: Long-term Implications and Future Developments

The Python programming language has numerous beneficial features. However, as with all programming languages, errors can sometimes crash your application. This setback can be overcome by using Python patterns that adeptly handle failures. Mastering these five patterns will result in fewer program crashes and a more seamless coding experience, and they could shape the future of Python programming, leading to more efficient applications.

Key Points From the Prior Discussion

  • Errors often crash applications coded with Python.
  • Five Python patterns can effectively handle errors and prevent crashes.
  • Mastering these patterns enhances the efficiency of the coding experience (one such pattern is sketched after this list).
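The original post’s five patterns aren’t reproduced in this summary, but a representative example of the genre is retrying a flaky operation with exponential backoff instead of letting it crash the program. The sketch below is our own illustration, not code from the original article:

import logging
import time

logger = logging.getLogger(__name__)

def fetch_with_retry(fetch, retries=3, backoff=1.0):
    """Retry a flaky operation with exponential backoff instead of crashing."""
    for attempt in range(1, retries + 1):
        try:
            return fetch()
        except (ConnectionError, TimeoutError) as exc:  # catch narrowly, never a bare except
            logger.warning("attempt %d/%d failed: %s", attempt, retries, exc)
            if attempt == retries:
                raise  # out of retries: re-raise so the caller can decide
            time.sleep(backoff * 2 ** (attempt - 1))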

Long-term Implications

Mastering the Python patterns that effectively handle errors is crucial for both beginner and expert programmers, as it significantly reduces program crashes. This proficiency not only ensures smoother application functionality but also boosts your application’s reliability and users’ trust in it. As more Python developers adopt these best practices, they will eventually become standard in Python coding, leading to superior, error-resilient software.

Future Developments

Focusing on mastering these patterns could further drive the development and evolution of Python. As programmers encounter different and new types of errors, these patterns may be tweaked or modified to deal with these new failure scenarios. This implies that these error handling patterns are not static and can grow and adapt according to changing programming conditions.

Actionable Advice

  1. Practice: Constantly use these patterns in your coding practice to gain proficiency in handling errors.
  2. Keep Up To Date: Stay informed about new advancements in Python error handling strategies and test them out in your code whenever possible.
  3. Don’t Shy Away From Errors: Don’t avoid errors as they are a part of the learning process. Instead, focus on handling and correcting them efficiently.
  4. Share Knowledge: Share what you learn with other developers. This collective growth will greatly benefit the Python programming community as a whole and lead to the development of more error-resilient software.


Remember, learning from errors and mastering patterns to handle them is an integral part of programming. So don’t fear the crashes, instead focus on mastering Python patterns that effectively handle these errors.


Introduction – Breaking the cloud barrier. Cloud computing has been the dominant paradigm of machine learning for years. Massive datasets are uploaded to a centralized server, routed through a super-powerful GPU, and turned into a model that produces recommendations, forecasts, and inferences. But what if there is not ‘only one way’? We live in… (continued in “Decentralized ML: Developing federated AI without a central cloud”)

Decentralized Machine Learning: A Diverse Method for AI Development

Cloud computing has long dominated the realm of machine learning, with immense volumes of data typically stored and processed on centralized servers. But what if this weren’t the only means of AI development? Decentralized Machine Learning, or Federated AI, suggests a different possibility altogether. But what are the long-term implications and potential future developments of this approach?

Implications and Future Developments

The concept of Decentralized Machine Learning presents us with tremendous potential across numerous fields. For instance, by distributing tasks to several devices instead of just one central server, a host of fascinating possibilities emerge for data privacy, security, global collaboration, and real-time problem-solving. Could this be the future of AI development?

Improved Data Privacy

Under the Cloud Computing model, the complete dataset must be uploaded to one central server, which inevitably raises privacy concerns. With a decentralized approach, however, data remains on local devices, only sending the model updates to servers. This method greatly enhances data privacy, relieving concerns surrounding data protection.
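To make that concrete, here is a minimal sketch of federated averaging for a linear model, assuming only numpy. It is a toy illustration of the idea, not production federated learning (real systems use dedicated frameworks and secure aggregation):

import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few gradient-descent epochs on one client's private data."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    return w

def federated_round(w_global, clients):
    """Each client trains locally; only weights, never raw data, reach the server."""
    return np.mean([local_update(w_global, X, y) for X, y in clients], axis=0)

# toy setup: two clients, each holding a private sample of the same linear task
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # close to [2, -1], learned without pooling any client's raw data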

Enhanced Data Security

Centralized data servers are often vulnerable to hacking. In a decentralized model, the potential attack surface is reduced as hackers would need to compromise many devices rather than one centralized server. This makes decentralized machine learning a more secure alternative.

Encouraging Global Collaboration

As Federated AI doesn’t require a central cloud, it presents an opportunity to create a truly global network. Developers from around the world could contribute to machine learning models – opening up new avenues for international collaboration.

Real-Time Problem Solving

By distributing tasks across multiple devices, data processing and problem-solving can occur in real-time. This accelerates the pace of machine learning and could revolutionize how AI approaches complex problems.

Actionable Advice

In view of these potential developments, there are a few steps organizations should consider. Firstly, businesses and institutions should begin to investigate decentralized models of data storage and machine learning. Regional laws surrounding data protection, including the GDPR in Europe, may soon make this practice less of a choice and more of a necessity.

  • Investigate Decentralized Models: The potential benefits suggest that time spent understanding and applying Federated AI would be well-invested.
  • Stay Ahead of Data Privacy Laws: With data privacy laws becoming increasingly stringent, adopting a decentralized approach can offer a reliable solution.
  • Facilitate International Collaboration: Embracing Federated AI will lead to an increase in global cooperation. Building partnerships with developers and organizations across the globe could be highly beneficial.
  • Equip for Real-Time Problem Solving: With the potential for real-time data processing and machine learning, businesses must be prepared to adapt quickly to change and take advantage of fast-paced problem-solving.

Federated AI is a promising approach that could potentially change the future trajectory of machine learning. As with all innovations, early adoption and adaptation will be crucial in making the most of this exciting possibility.


Plotting Survival Curves with rush and techtonique.net’s API in R and Python


[This article was first published on T. Moudiki’s Webpage – R, and kindly contributed to R-bloggers].



In today’s post, we’ll see how to use rush and the probabilistic survival analysis API provided by techtonique.net (along with R and Python) to plot survival curves. Note that the web app also contains a page for plotting these curves in one click. You can also read this post for more Python examples.

First, you’d need to install rush. Here is how I did it:

cd /Users/t/Documents/Python_Packages
git clone https://github.com/jeroenjanssens/rush.git
export PATH="/Users/t/Documents/Python_Packages/rush/exec:$PATH"
source ~/.zshrc # or source ~/.bashrc
rush --help # check if rush is installed

Now, download and save the following script in your current directory (note that there’s nothing malicious in it). Replace AUTH_TOKEN below with a token that can be found at techtonique.net/token:

Then, at the command line, run:

./2025-05-31-survival.sh

The resulting plot can be found in your current directory as a PNG file.



Analysis of Using Rush and the Probabilistic Survival Analysis API with R and Python

The primary focus of the text is on using rush and the probabilistic survival analysis API provided by techtonique.net in conjunction with R and Python to plot survival curves. Other highlights of the text include the steps involved in installing ‘rush’ and utilizing it, as well as how to generate a result plot using these tools.

Long-Term Implications and Future Developments

Understanding survival analysis via multiple programming languages and tools like rush, R, and Python, combined with specific survival analysis API, streamlines critical data analysis tasks in healthcare, finance, and various other industries. In the long term, the ability to plot survival curves swiftly and conveniently can significantly enhance predictive decision-making processes.

At the core of such future developments is continuous advancement in the related technologies and tools. As the probabilistic survival analysis API of techtonique.net evolves to implement more sophisticated algorithms, the tools used alongside it – R, Python, and rush – will evolve to leverage these enhancements. We may anticipate significant improvements in predictive modelling and data visualization, leading to better survival analyses, among other things.

Actionable Advice

Improving Skill Set

If you regularly deal with survival analysis, it would benefit you to become proficient in using multiple tools, including rush, R, and Python. Training yourself in these tools will increase your efficiency and expand your analytical capabilities. Consider taking some online courses or attending workshops to improve your know-how in using these programming languages.

Continuous Updates

Regularly update your versions of rush, R, and Python, and keep up with changes to techtonique.net’s probabilistic survival analysis API, to leverage the latest capabilities. Update these tools in a mutually compatible manner to avoid compatibility issues.

Validation of Scripts

Even though it was mentioned that “there’s nothing malicious” in the script provided in the text, it’s a good practice to always verify and validate any scripts before downloading and running them. This step ensures minimized risk to your data and systems.

Authenticity Check

Replace the AUTH_TOKEN used in the script with a token provided by techtonique.net. This precaution ensures the integrity of your operations and keeps your analysis accurate and authentic.


“Online XGBoost Model Building and Tuning: No Installations Required”

Build and fine-tune XGBoost models entirely online — no installations, just data, tuning, and results inside your browser.

The Future of XGBoost Modelling: Embracing Online Functionality

Not too long ago, constructing and fine-tuning a machine learning model meant installing a lot of software, a great deal of data wrangling, and plenty of computation power. However, online functionality – such as building and tuning XGBoost models entirely in the browser – is revolutionizing this process. Now all that is necessary is data, tuning, and results inside your browser, eliminating the need for installations.

Long-Term Implications

Greater Accessibility

This development implies that more people will have the ability to learn, experiment with, and leverage machine learning. By eliminating installation constraints and software costs, this democratizes access to machine learning tools and propels open-source culture.

Increased Efficiency

Hosting the XGBoost models on the web increases efficiency by eliminating the need for local computation resources. This means data scientists can spend more time fine-tuning models and less time managing resources or dealing with technical maintenance.

New Business Opportunities

The transition to a web-based model also opens up new business opportunities, such as the creation of SaaS (Software as a Service) platforms for Machine Learning. Companies could offer premium services for advanced features, analytics and more robust computational power.

Potential Future Developments

A look at this trend hints at several promising future developments:

  1. Further Streamlining: Optimization of the online platforms will result in user-friendly interfaces and a streamlined experience for model builders. Machine Learning could become even more straightforward.

  2. Expanded Offerings: Other machine learning models may become web-based, further expanding the range of tools that data scientists have at their disposal.

  3. Advanced Integrations: As more processes move online, there will likely be an increase in advanced integrations between various tools and platforms, creating a more connected, efficient environment.

Actionable Advice

Leverage the Benefits

The shift to web-based machine learning tools like XGBoost presents new opportunities. Data scientists and businesses should leverage these for increased efficiency and cost-effectiveness. It is advisable to stay updated on developments and embrace these tools to streamline machine learning endeavors.

Invest in Learning

Given that more machine learning tools are becoming accessible and user-friendly, investing in learning these tools could prove profitable for professionals and businesses alike. Online courses, webinars and tutorials could be a good starting point.

Anticipate Changes

With the rapid evolution of technology, it is crucial to anticipate changes in the industry. Keeping an eye on new integrations, features, and services will help data scientists and businesses stay one step ahead and fully leverage the potential of this trend.


This post breaks down the biggest threats to enterprise communication and succinctly walks you through the strategies that actually work.

Addressing the Biggest Threats to Enterprise Communication

Enterprise communication forms the backbone of any business. In fact, decisive communication lies at the heart of successful project completion, team collaboration, customer service, and eventually, business growth. However, like any other business procedure, enterprise communication also bristles with a host of threats that pose a challenge to its effectiveness. This article decodes the crucial threats to enterprise communication and provides strategic recommendations to navigate them.

Understanding the Threats

Identifying the key issues that disrupt enterprise communication must come before turning to curative measures. These hazards span technical glitches, human errors, and even strategic missteps:

  • Technical issues like software malfunction
  • Human errors like miscommunication
  • Strategic mistakes, like the lack of effective communication channels

Deploying Effective Strategies to Overcome Threats

Once identified, these threats can be dealt with strategically through a flexible approach, including technological solutions, training and development programs, and enriched communication channels.

Employ Technological Solutions

Technological solutions like enterprise communication software and AI-powered chatbots can eliminate the technical issues disrupting the communication flow.

Implement Training and Development Programs

Training and development programs can rectify human errors in communication. Such programs can educate employees in effective communication, thereby reducing miscommunication.

Enrich Communication Channels

Companies should analyze existing communication channels and work towards their improvement. This can include introducing more efficient tools, improving the accessibility of tools, and offering seamless connectivity.

Long-term Implications and Possible Future Developments

The evolution of enterprise communication is likely to continue incessantly, paving the way for new strategies and technologies. It’s essential to stay vigilant and adapt to these changes. With the advent of technology in communication, we can envisage a future of seamless communication free of human and technical errors.

Holistic Views on Future Developments

“Companies will continue to leverage technology to transform their communication landscape. AI and machine learning will play an even more central role in streamlining communication.”

Actionable Advice

Navigating the threats to enterprise communication needs a proactive approach and strategic planning. Embracing technology, conducting regular training and development sessions, and enriching communication channels are the triad of strategies you can base your plan on. With these, communication in your enterprise will not only survive the threats but thrive in response to them.
