by jsendak | Feb 20, 2025 | DS Articles
Whatever role is best for you—data scientist, data engineer, or technology manager—Northwestern University’s MS in Data Science program will help you to prepare for the jobs of today and the jobs of the future.
Understanding the Long-term Implications of a Degree in Data Science
Data science has emerged as a critical field in today’s highly technological era, and the need for data science expertise is expected to intensify in the coming years as big data and AI adoption continue to grow across all sectors. With this context in mind, there are long-term implications and potential future developments that prospective students and professionals should consider when enrolling in a program such as Northwestern University’s MS in Data Science.
Long-term Implications
The increase in data collection, processing, and usage has created an ever-present need for professionals well-versed in data science. This demand has led to the rise in data-specific roles like data scientists, data engineers, and technology managers. By investing in a data science program, individuals position themselves to meet this demand and enjoy promising career prospects.
Over the long term, the outlook for a data science degree is positive. As industries continue to evolve and require data-driven insights for strategy formulation and decision-making, the relevance of and need for data science graduates will only grow. Additionally, the versatility of data science skills favors job stability amid dynamic market changes.
Potential Future Developments
The field of data science is set to see significant evolution and growth. Machine Learning, AI, and Big Data are predicted to dominate the landscape, and their intersections with other disciplines like healthcare, finance, and marketing will present unique application scenarios and job opportunities.
An anticipated trend is the increase in demand for professionals with a robust understanding of ethical data handling in light of burgeoning data privacy concerns. Those able to combine a high level of technical competence with a strong understanding of evolving data regulations will be particularly sought after.
Actionable Advice
- Invest in Continued Learning: Data science is a rapidly evolving field. Graduates should commit to life-long learning to stay abreast of new technologies and methods.
- Develop Soft Skills: Alongside technical prowess, the ability to communicate complex data findings in an understandable manner will distinguish effective data scientists. Graduates should seek opportunities to refine these soft skills.
- Stay Ethically Informed: With growing concerns around data privacy, graduates need to ensure they stay informed about data ethics and regulation.
- Build a Versatile Skill Set: Future-proof your career by cultivating skills that are broadly applicable, such as coding, statistics, and problem-solving.
In today’s world, where data is the new gold, a degree such as Northwestern University’s MS in Data Science arms you with the skills needed to mine this resource efficiently and ethically. This investment will likely pay dividends well into the future, as the need for such professionals only looks set to grow.
Read the original article
by jsendak | Feb 11, 2025 | DS Articles
[This article was first published on R on Jason Bryer, and kindly contributed to R-bloggers.]
During a recent class a student asked whether bootstrap confidence intervals are more robust than confidence intervals estimated using the standard error (i.e., \(SE = \frac{s}{\sqrt{n}}\)). To answer this question I wrote a function that simulates drawing many random samples from a population, calculates the confidence interval for each sample using the standard error approach (the t distribution is used by default; see the cv parameter. To use the normal distribution, for example, set cv = 1.96.), and then also calculates a confidence interval using the bootstrap.
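Concretely, for each sample the function below reports two intervals: a standard-error interval centered at the sample mean, and a bootstrap interval centered at the mean of the bootstrap means with the bootstrap standard error taken as the standard deviation of those means:
\[ \bar{x} \pm cv \cdot \frac{s}{\sqrt{n}} \qquad \text{and} \qquad \bar{x}^{*} \pm cv \cdot sd(\bar{x}^{*}_{1}, \ldots, \bar{x}^{*}_{B}) \]
where \(\bar{x}^{*}\) is the mean of the \(B\) bootstrap means and \(cv\) is the critical value.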
library(dplyr)
library(ggplot2)
#' Simulate random samples to estimate confidence intervals and bootstrap
#' estimates.
#'
#' @param pop a numeric vector representing the population.
#' @param n sample size for each random sample from the population.
#' @param n_samples the number of random samples.
#' @param n_boot number of bootstrap samples to take for each sample.
#' @param seed a seed to use for the random process.
#' @param cv critical value to use for calculating confidence intervals.
#' @return a data.frame with the sample and bootstrap mean and confidence
#' intervals along with a logical variable indicating whether a Type I
#' error would have occurred with that sample.
bootstrap_clt_simulation <- function(
    pop,
    n = 30,
    n_samples = 500,
    n_boot = 500,
    cv = abs(qt(0.025, df = n - 1)),
    seed,
    verbose = interactive()
) {
    if(missing(seed)) {
        seed <- sample(100000, 1) # pick a single random seed if none is supplied
    }
    results <- data.frame(
        seed = 1:n_samples,
        samp_mean = numeric(n_samples),
        samp_se = numeric(n_samples),
        samp_ci_low = numeric(n_samples),
        samp_ci_high = numeric(n_samples),
        samp_type1 = logical(n_samples),
        boot_mean = numeric(n_samples),
        boot_ci_low = numeric(n_samples),
        boot_ci_high = numeric(n_samples),
        boot_type1 = logical(n_samples)
    )
    if(verbose) {
        pb <- txtProgressBar(min = 0, max = n_samples, style = 3)
    }
    for(i in 1:n_samples) {
        if(verbose) {
            setTxtProgressBar(pb, i)
        }
        set.seed(seed + i)
        samp <- sample(pop, size = n)
        boot_samp <- numeric(n_boot)
        for(j in 1:n_boot) {
            boot_samp[j] <- sample(samp, size = length(samp), replace = TRUE) |>
                mean()
        }
        results[i,]$seed <- seed + i
        results[i,]$samp_mean <- mean(samp)
        results[i,]$samp_se <- sd(samp) / sqrt(length(samp))
        results[i,]$samp_ci_low <- mean(samp) - cv * results[i,]$samp_se
        results[i,]$samp_ci_high <- mean(samp) + cv * results[i,]$samp_se
        results[i,]$samp_type1 <- results[i,]$samp_ci_low > mean(pop) |
            mean(pop) > results[i,]$samp_ci_high
        results[i,]$boot_mean <- mean(boot_samp)
        results[i,]$boot_ci_low <- mean(boot_samp) - cv * sd(boot_samp)
        results[i,]$boot_ci_high <- mean(boot_samp) + cv * sd(boot_samp)
        results[i,]$boot_type1 <- results[i,]$boot_ci_low > mean(pop) |
            mean(pop) > results[i,]$boot_ci_high
    }
    if(verbose) {
        close(pb)
    }
    return(results)
}
Uniform distribution for the population
Let’s start with a uniform distribution for our population.
pop_unif <- runif(1e5, 0, 1)
ggplot(data.frame(x = pop_unif), aes(x = x)) + geom_density()

The mean of the population is 0.4999484. We can now simulate samples and their corresponding bootstrap estimates.
results_unif <- bootstrap_clt_simulation(pop = pop_unif, seed = 42, verbose = FALSE)
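The Type I error rates can be read directly from the returned data frame; a quick check (mirroring what is done for the skewed population below):
mean(results_unif$samp_type1) * 100 # percent of standard-error intervals missing the population mean
mean(results_unif$boot_type1) * 100 # percent of bootstrap intervals missing the population mean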
4% of our samples did not contain the population mean in the confidence interval (i.e. the Type I error rate), compared to 4.6% of the bootstrap estimates (23 of the 500 bootstrap intervals missed the population mean, as the table below shows). The following table compares the Type I errors for each sample with the Type I errors for the bootstrap estimated from that sample.
tab <- table(results_unif$samp_type1, results_unif$boot_type1, useNA = 'ifany')
tab
##        
##         FALSE TRUE
##   FALSE   477    3
##   TRUE      0   20
In general, whether a Type I error is committed is the same regardless of method, though there were 3 instances where the bootstrap would have led to a Type I error when the standard error approach would not.
The following plots show the relationship between the estimated mean (left) and the confidence interval width (right) for each sample and its corresponding bootstrap.
results_unif |>
    ggplot(aes(x = samp_mean, y = boot_mean)) +
    geom_vline(xintercept = mean(pop_unif), color = 'blue') +
    geom_hline(yintercept = mean(pop_unif), color = 'blue') +
    geom_abline() +
    geom_point() +
    ggtitle("Sample mean vs bootstrap mean")

results_unif |>
    dplyr::mutate(samp_ci_width = samp_ci_high - samp_ci_low,
                  boot_ci_width = boot_ci_high - boot_ci_low) |>
    ggplot(aes(x = samp_ci_width, y = boot_ci_width)) +
    geom_abline() +
    geom_point() +
    ggtitle('Sample vs bootstrap confidence interval width')

Skewed distribution for the population
We will repeat the same analysis using a positively skewed distribution.
pop_skewed <- rnbinom(1e5, 3, .5)
ggplot(data.frame(x = pop_skewed), aes(x = x)) + geom_density(bw = 0.75)

The mean of the population for this distribution is 2.99792.
results_skewed <- bootstrap_clt_simulation(pop = pop_skewed, seed = 42, verbose = FALSE)
mean(results_skewed$samp_type1) # Percent of samples with Type I error
## [1] 0.05
mean(results_skewed$boot_type1) # Percent of bootstrap estimates with Type I error
## [1] 0.052
# CLT vs Bootstrap Type I error rate
table(results_skewed$samp_type1, results_skewed$boot_type1, useNA = 'ifany')
##        
##         FALSE TRUE
##   FALSE   473    2
##   TRUE      1   24
results_skewed |>
    ggplot(aes(x = samp_mean, y = boot_mean)) +
    geom_vline(xintercept = mean(pop_skewed), color = 'blue') +
    geom_hline(yintercept = mean(pop_skewed), color = 'blue') +
    geom_abline() +
    geom_point() +
    ggtitle("Sample mean vs bootstrap mean")

results_skewed |>
    dplyr::mutate(samp_ci_width = samp_ci_high - samp_ci_low,
                  boot_ci_width = boot_ci_high - boot_ci_low) |>
    ggplot(aes(x = samp_ci_width, y = boot_ci_width)) +
    geom_abline() +
    geom_point() +
    ggtitle('Sample vs bootstrap confidence interval width')

We can see the results are very similar to those of the uniform distribution. Exploring the cases where the bootstrap would have resulted in a Type I error but the standard error approach would not reveals that they are very close calls, with the bootstrap interval missing the population mean by less than 0.1.
results_differ <- results_skewed |>
    dplyr::filter(!samp_type1 & boot_type1)
results_differ
## seed samp_mean samp_se samp_ci_low samp_ci_high samp_type1 boot_mean
## 1 443 3.866667 0.4516466 2.942946 4.790388 FALSE 3.924733
## 2 474 3.933333 0.4816956 2.948155 4.918511 FALSE 3.956800
## boot_ci_low boot_ci_high boot_type1
## 1 3.044802 4.804665 TRUE
## 2 3.018549 4.895051 TRUE
set.seed(results_differ[1,]$seed)
samp <- sample(pop_skewed, size = 30)
boot_samp <- numeric(500)
for(j in 1:500) {
    boot_samp[j] <- sample(samp, size = length(samp), replace = TRUE) |>
        mean()
}
cv <- abs(qt(0.025, df = 30 - 1))
mean(pop_skewed)
## [1] 2.99792
ci <- c(mean(samp) - cv * sd(samp) / sqrt(30), mean(samp) + cv * sd(samp) / sqrt(30))
ci
## [1] 2.942946 4.790388
mean(pop_skewed) < ci[1] | mean(pop_skewed) > ci[2]
## [1] FALSE
ci_boot <- c(mean(boot_samp) - cv * sd(boot_samp), mean(boot_samp) + cv * sd(boot_samp))
ci_boot
## [1] 3.044802 4.804665
mean(pop_skewed) < ci_boot[1] | mean(pop_skewed) > ci_boot[2]
## [1] TRUE
Adding an outlier
Let’s consider a sample that forces the largest value from the population to be in the sample.
set.seed(2112)
samp_outlier <- c(sample(pop_skewed, size = 29), max(pop_skewed))
boot_samp <- numeric(500)
for(j in 1:500) {
    boot_samp[j] <- sample(samp, size = length(samp), replace = TRUE) |>
        mean()
}
ci <- c(mean(samp_outlier) - cv * sd(samp_outlier) / sqrt(30), mean(samp_outlier) + cv * sd(samp_outlier) / sqrt(30))
ci
## [1] 1.647006 4.952994
mean(pop_skewed) < ci[1] | mean(pop_skewed) > ci[2]
## [1] FALSE
ci_boot <- c(mean(boot_samp) - cv * sd(boot_samp), mean(boot_samp) + cv * sd(boot_samp))
ci_boot
## [1] 2.905153 4.781381
mean(pop_skewed) < ci_boot[1] | mean(pop_skewed) > ci_boot[2]
## [1] FALSE
In this example we do see that the presence of the outlier has a bigger impact, with the bootstrap confidence interval being much narrower than the standard error confidence interval.
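A common alternative not used in this post is the percentile bootstrap interval, which takes empirical quantiles of the bootstrap means directly rather than assuming a symmetric interval around the bootstrap mean. A minimal sketch, applied to the boot_samp vector computed above:
# Percentile bootstrap interval: the 2.5th and 97.5th percentiles of the bootstrap means.
quantile(boot_samp, probs = c(0.025, 0.975))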
Sample and bootstrap size related to standard error
Let’s also explore the relationship of n, number of bootstrap samples, and standard error. Recall the formula for the standard error is:
\[ SE = \frac{\sigma}{\sqrt{n}} \]
The figure below plots the standard error as a function of the sample size, assuming sigma (the standard deviation) is one. As you can see, simply increasing the sample size decreases the standard error (and therefore the width of the confidence interval).
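For instance, quadrupling the sample size halves the standard error:
\[ \frac{\sigma}{\sqrt{4n}} = \frac{1}{2} \cdot \frac{\sigma}{\sqrt{n}} \]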
se <- function(n, sigma = 1) {
    sigma / sqrt(n)
}
ggplot() + stat_function(fun = se) + xlim(c(0, 100)) +
    ylab('Standard Error') + xlab('Sample Size (n)')

Considering again a population with a uniform distribution, the following code will draw random samples with n ranging from 30 to 500 in increments of 15. For each of those random samples, we will also estimate bootstrap standard errors with the number of bootstrap samples ranging from 50 to 1,000 in increments of 50.
n <- seq(30, 500, by = 15)
n_boots <- seq(50, 1000, by = 50)
results <- expand.grid(n, n_boots)
attributes(results) <- NULL
results <- as.data.frame(results)
names(results) <- c('n', 'n_boots')
results$samp_mean <- NA
results$samp_se <- NA
results$boot_mean <- NA
results$boot_se <- NA
for(i in seq_len(nrow(results))) {
    samp <- sample(pop_unif, size = results[i,]$n)
    results[i,]$samp_mean <- mean(samp)
    results[i,]$samp_se <- sd(samp) / sqrt(length(samp))
    boot_samp_dist <- numeric(results[i,]$n_boots)
    for(j in seq_len(results[i,]$n_boots)) {
        boot_samp_dist[j] <- sample(samp, size = length(samp), replace = TRUE) |> mean()
    }
    results[i,]$boot_mean <- mean(boot_samp_dist)
    results[i,]$boot_se <- sd(boot_samp_dist)
}
The figure to the left plots the sample size against the standard error which, like above, shows that as the sample size increases the standard error decreases. On the right is a plot of the number of bootstrap samples against the standard error where the point colors correspond to the sample size. Here we see the standard error is constant. That is, the number of bootstrap samples is not related to the standard error. The variability in standard error is accounted for by the sample size.
y_limits <- c(0, 0.075)
p_samp_size_se <- ggplot(results, aes(x = n, y = samp_se)) +
    geom_point(fill = '#9ecae1', color = 'grey50', shape = 21) +
    geom_smooth(color = 'darkgreen', se = FALSE, method = 'loess', formula = y ~ x) +
    ylim(y_limits) +
    ylab('Standard Error') +
    xlab('Sample size (n)') +
    ggtitle(latex2exp::TeX("Standard Error $SE = \\frac{\\sigma}{\\sqrt{n}}$")) +
    scale_fill_gradient(low = '#deebf7', high = '#3182bd') +
    theme(legend.position = 'bottom')
p_boot_size_se <- ggplot(results, aes(x = n_boots, y = boot_se)) +
    geom_point(aes(fill = n), color = 'grey50', shape = 21) +
    geom_smooth(color = 'darkgreen', se = FALSE, method = 'loess', formula = y ~ x) +
    ylim(y_limits) +
    ylab('Standard Error') +
    xlab('Number of Bootstrap Samples') +
    ggtitle('Bootstrap Standard Error',
            subtitle = '(i.e. the standard deviation of the bootstrap means)') +
    scale_fill_gradient(low = '#deebf7', high = '#3182bd') #+ theme(legend.position = 'none')
cowplot::plot_grid(p_samp_size_se, p_boot_size_se)

Lastly, we can plot the relationship between the two standard error estimates, the correlation between which is extremely high (r = 1, rounded to two digits).
ggplot(results, aes(x = samp_se, y = boot_se)) +
    geom_abline() +
    geom_point() +
    xlab('Sample Standard Error') +
    ylab('Bootstrap Standard Error') +
    ggtitle(paste0('Correlation between standard errors = ', round(cor(results$samp_se, results$boot_se), digits = 2))) +
    coord_equal()

Continue reading: Bootstrap vs Standard Error Confidence Intervals
Analysis and Long-term Implications of Bootstrap vs Standard Error Confidence Intervals
Whether bootstrap or standard-error-based confidence intervals are the superior way to estimate uncertainty continues to be a point of discussion. Both methods show indications of robustness, but the preference depends on their Type I error rates and overall performance. The article presented a simulation to demonstrate which of the two tends to be more resilient.
Key Understanding
In the article, an illustrative function was developed to compare the two methods by simulation. Using repeated random sampling, the confidence interval for each sample was calculated with both methods. This was first performed on a population with a uniform distribution and then repeated for a positively skewed distribution. After these two scenarios, the author explored a case with an outlier included in the sample. Finally, the author ran a simulation to gauge the relationship between sample size, the number of bootstrap samples, and the standard error.
Uniform Distribution Population
A simulation revealed that 4% of the samples examined failed to include the population mean in their confidence intervals (the Type I error rate). For the bootstrap intervals in this scenario, the error rate was similar.
Positively Skewed Distribution Population
For a positively skewed population, similar Type I error rates were recorded for the standard-error-based and bootstrap confidence intervals (5% and 5.2%, respectively).
Inclusion of an Outlier
When an outlier was incorporated into the sample data, the bootstrap confidence interval was found to be much narrower than the standard-error interval, indicating that the outlier had a larger impact on the latter.
Sample and Bootstrap Size Related to Standard Error
When sample size was increased, the standard error decreased, whereas the number of bootstrap samples did not have any significant impact on the standard error. That is, the variability in the standard error was not impacted by the number of bootstrap samples but was significantly influenced by the sample size.
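As a quick illustration, the standard error formula from the original post can be evaluated at a few sample sizes (sigma is assumed to be 1 here):
se <- function(n, sigma = 1) sigma / sqrt(n) # SE = sigma / sqrt(n)
se(c(30, 120, 480)) # approximately 0.183, 0.091, 0.046: quadrupling n halves the SE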
Future Implications
This study provides a clear representation of the impact of sample data and bootstrap sample data on the standard error. If applied correctly, this insight could be used extensively in situations where the estimation of confidence intervals is crucial. Furthermore, this analysis also indicates that more considerations must be applied in case of outliers, as their presence can significantly skew the results.
Actionable Advice
Given the implications of these findings, it is recommended to carefully evaluate both methods for confidence interval estimation when designing future studies or applications. Considering whether the population may be skewed or include outliers is important. As a rule, increasing the sample size reduces the standard error and therefore narrows the confidence interval. Also, given the significant effect of outliers, robust techniques should be used to more accurately estimate confidence intervals in such scenarios.
Read the original article
by jsendak | Feb 11, 2025 | DS Articles
This article will explore initiating the RAG system and making it fully voice-activated.
Understanding the Future of the RAG System: Full Voice Activation
In the ever-evolving realm of technology, there’s an emerging trend that’s poised to revolutionize the way we interact with systems altogether. Initiating a RAG (Retrieval-Augmented Generation) system and making it fully voice-activated introduces a plethora of possibilities for increased efficiency, accessibility, and user-friendly interfaces. This innovative approach perfectly encapsulates the ongoing transition from traditional manual interfaces to intuitive voice-activated operations.
Long-Term Implications
The transition to voice-activated RAG systems represents a major stride toward enhancing user experience. Below are some of the potential long-term implications of this technological shift:
- Improved Accessibility: The ability to interact with the RAG system using voice commands opens up an array of opportunities for individuals with mobility impairments or other disabilities. This shift caters more inclusively to present-day demands, moving closer to technology that is entirely accessible to everyone.
- Optimized Efficiency: By eliminating the need for manual input, the voice-activated RAG system promises an improvement in speed and efficiency when compared to traditional systems. It enables effortless coordination and rapid communication, streamlining tasks and operations.
- Intuitive User Experience: A voice-activated RAG system provides a more intuitive and hands-free interface. This marks another step toward predictive and intuitive technology, paving the way for seamless interaction between the user and the interface.
Predicted Future Developments
Whilst the move towards fully voice-activated systems is transforming the tech landscape, it stands to reason that this technology will continue to evolve. Here are a few predictions for how this trend could develop in the future:
- AI Integration: Integrating Artificial Intelligence (AI) with the voice-activated RAG system could result in smarter, self-learning systems. These AI-powered systems would be capable of analyzing user behavior, understanding patterns, and making predictive suggestions.
- Enhanced Security: As voice-activated systems become more common, focus on security will intensify. Future developments might include voice biometric authentication, ensuring personalized and secure access to the RAG system.
Actionable Advice
For anyone or any company planning to leverage the power of a voice-activated RAG system, here’s some actionable advice:
- Begin with clear, feasible goals for your system.
- Understand your users’ needs and design the interface accordingly.
- Since data security is a concern with voice-activated systems, implement robust security measures from the outset.
- Embrace change: the capabilities of voice-activated systems are vastly expanding, so stay updated with the latest developments to ensure maximum utilization of the technology.
Read the original article
by jsendak | Feb 11, 2025 | DS Articles
HRI and generative AI raise ethical and regulatory concerns. This paper explores the complex relationship between humans and generative AI models like ChatGPT. Along with rational empathy, personification, and advanced linguistics, the study predicts generative AI will become popular. However, these models blur human-robot boundaries, raising moral and philosophical issues. This study extrapolates HRI trends and explains how generative AI’s technical aspects make it more effective than rule-based AI. A research agenda for AI optimization in education, entertainment, and healthcare is presented.
Analyzing the Futuristic Landscape of Human-Robot Interaction (HRI) and Generative AI
With the rapid evolution of generative AI models such as ChatGPT and their increasing integration into various aspects of life, it’s important to discuss the long-term implications and potential developments that this technology is expected to bring. These advancements also bring forth a myriad of ethical and regulatory concerns that need to be addressed.
The Blurring Boundaries of Human-Robot Interaction (HRI)
The study highlights the increasingly complex relationship between humans and generative AI, specifically its ability to blur the boundaries between human and robot interactions. As AI becomes more adept at rational empathy, personification, and advanced linguistics, the ethical and philosophical issues that arise will inevitably become more complex. The question of how far we can push the boundaries of generative AI is yet to be answered, but it’s a discussion we must continue to engage in.
Predicted Advancements in Generative AI
Another key point predicted in the study is that generative AI will become popular due to its superior efficacy compared to rule-based AI. By understanding and extrapolating current HRI trends, we can anticipate the future landscape of generative AI and its potential impact on various aspects of societal living.
Potential Applications of AI in Entertainment, Education, and Healthcare
The study proposes an intriguing research agenda for AI optimization in the fields of education, entertainment, and healthcare. These sectors could reap immense benefits from advanced AI models, resulting in enhanced user engagement and potentially revolutionizing the delivery of services.
Actionable Advice Based on these Insights
As we move towards this future, there is a great need for collective action and thought. Policymakers, entrepreneurs, educators, healthcare professionals, and the public need to explore and shape these technologies’ ethical and pragmatic boundaries. The following are a few actionable insights:
- Engage in Open Discussion: Encourage and engage in open, public discussions about the ethical and regulatory concerns brought about by the increasing sophistication of generative AI.
- Work on Clear Policy Guidelines: Policymakers should work on clear guidelines to ensure responsible use of AI. This could include a regulatory framework that balances innovation with risks associated with AI.
- Invest in AI Research: Encouraging further research into the technical aspects, potential applications, and implications of generative AI is of utmost importance. This could be facilitated through research funding and collaboration across sectors.
- Implement AI Ethical Education: As AI becomes a larger part of our everyday lives, there is a greater need for educating the public about AI ethics. Universities, schools, and workplaces should incorporate AI ethics into their curriculum and training programs.
The future of generative AI holds great promise, but it also requires constant vigilance, self-reflection, and a willingness to adapt and course-correct where necessary.
Read the original article
by jsendak | Feb 10, 2025 | DS Articles
Subqueries are popular tools for more complex data manipulation in SQL. If you’re a beginner on a quest to understand subqueries, this is the article for you.
Understanding the Long-Term Implications and Future Developments with SQL Subqueries
As technology continues its relentless march, SQL subqueries remain an indispensable tool in data manipulation, especially for novice and intermediate users aiming to hone their programming skills. The proper understanding and efficient use of subqueries can empower developers with an easier, more organized approach toward analyzing and manipulating complex datasets.
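To make the idea concrete, here is a minimal example of a subquery run from R via the sqldf package (the data frame and column names are hypothetical; any SQL interface would work the same way):
library(sqldf)
employees <- data.frame(
    name = c('Ada', 'Ben', 'Cal', 'Dee'),
    dept = c('eng', 'eng', 'ops', 'ops'),
    salary = c(120, 95, 80, 105)
)
# The subquery in the WHERE clause computes the overall average salary,
# and the outer query keeps only the employees earning more than that.
sqldf("SELECT name, salary
       FROM employees
       WHERE salary > (SELECT AVG(salary) FROM employees)")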
Long-Term Implications
As the data economy continues to evolve at a rapid pace, SQL and its subquery functionality are expected to remain critical tools in the programming arsenal. A proficient grasp of SQL subqueries is not only relevant now but will remain so into the future, given the rising trends of Big Data and artificial intelligence:
- Big Data: The incoming explosion of data from diverse sources will greatly increase the need for proficient SQL coders. Knowing how to use subqueries will aid in efficient data analysis.
- Artificial Intelligence: Machine learning modules that rely on analyzing large datasets to make predictions and build models would need proficient manipulation of databases through SQL subqueries.
Additionally, subqueries will likely evolve and become even more sophisticated, providing users with a greater array of tools for data management.
Possible Future Developments
In the future, we might see more interactive and intuitive versions of SQL subqueries where complex queries can be executed with even simpler syntax. The gradual evolution of SQL subqueries may also enable more connectivity with other programming languages to make development work easier and more efficient.
Actionable Advice
To keep pace with these developments, aspiring and professional coders alike should:
- Invest time in mastering SQL: Given that SQL is still a highly relevant language, a strong foundation in SQL can provide many opportunities in a variety of coding-related fields.
- Learn subqueries thoroughly: Subqueries, being an integral SQL component, are necessary for handling complex data sets. Mastering them will boost effectiveness and speed in coding.
- Regularly update skills: The technology industry is constantly evolving. Hence, it is vital to stay informed about the latest developments, including those related to SQL subqueries.
- Practice, practice, practice: Nothing beats hands-on experience. Keep practicing different subqueries on various datasets to improve proficiency and speed.
In conclusion, in today’s data-driven society SQL subqueries hold an important place in data manipulation. Therefore, understanding the use of subqueries will provide a significant advantage in an increasingly data-dominated world.
Read the original article