Best Practices for Building the AI Development Platform in Government 

By John P. Desmond, AI Trends Editor 

The AI stack defined by Carnegie Mellon University is fundamental to the approach being taken by the US Army for its AI development platform efforts, according to Isaac Faber, Chief Data Scientist at the US Army AI Integration Center, speaking at the AI World Government event held in-person and virtually from Alexandria, Va., last week.  

Isaac Faber, Chief Data Scientist, US Army AI Integration Center

“If we want to move the Army from legacy systems through digital modernization, one of the biggest issues I have found is the difficulty in abstracting away the differences in applications,” he said. “The most important part of digital transformation is the middle layer, the platform that makes it easier to be on the cloud or on a local computer.” The desire is to be able to move your software platform to another platform, with the same ease with which a new smartphone carries over the user’s contacts and histories.  

Ethics cuts across all layers of the AI application stack, which positions the planning stage at the top, followed by decision support, modeling, machine learning, massive data management and the device layer or platform at the bottom.  
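
As a rough illustration only (not an official CMU or Army artifact), that layering can be sketched as an ordered structure in code, with ethics treated as a concern that cuts across every layer rather than as a layer of its own:

```python
# Hypothetical sketch of the AI application stack as described above.
# Layer names follow the talk; ordering runs from the top (planning)
# to the bottom (device layer or platform).
AI_STACK = [
    "planning",
    "decision_support",
    "modeling",
    "machine_learning",
    "massive_data_management",
    "device_or_platform",
]

CROSS_CUTTING = ["ethics"]  # applies to every layer, not a layer itself

def describe_stack() -> None:
    """Print each layer along with the concerns that cut across it."""
    for layer in AI_STACK:
        print(f"{layer}: cross-cutting -> {', '.join(CROSS_CUTTING)}")

if __name__ == "__main__":
    describe_stack()
```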

“I am advocating that we think of the stack as a core infrastructure and a way for applications to be deployed and not to be siloed in our approach,” he said. “We need to create a development environment for a globally-distributed workforce.”   

The Army has been working on a Common Operating Environment Software (Coes) platform, first announced in 2017, a design for DOD work that is scalable, agile, modular, portable and open. “It is suitable for a broad range of AI projects,” Faber said. For executing the effort, “The devil is in the details,” he said.   

The Army is working with CMU and private companies on a prototype platform, including with Visimo of Coraopolis, Pa., which offers AI development services. Faber said he prefers to collaborate and coordinate with private industry rather than buying products off the shelf. “The problem with that is, you are stuck with the value you are being provided by that one vendor, which is usually not designed for the challenges of DOD networks,” he said.  

Army Trains a Range of Tech Teams in AI 

The Army engages in AI workforce development efforts for several groups, including: leadership; professionals with graduate degrees; technical staff, who are put through training to get certified; and AI users.

Tech teams in the Army have different areas of focus, including: general-purpose software development, operational data science, deployment (which includes analytics), and machine learning operations, such as the large team required to build a computer vision system. “As folks come through the workforce, they need a place to collaborate, build and share,” Faber said.

Types of projects include diagnostic, which might combine streams of historical data; predictive; and prescriptive, which recommends a course of action based on a prediction. “At the far end is AI; you don’t start with that,” said Faber. The developer has to solve three problems: data engineering; the AI development platform, which he called “the green bubble”; and the deployment platform, which he called “the red bubble.”

“These are mutually exclusive and all interconnected. Those teams of different people need to programmatically coordinate. Usually a good project team will have people from each of those bubble areas,” he said. “If you have not done this yet, do not try to solve the green bubble problem. It makes no sense to pursue AI until you have an operational need.”   

Asked by a participant which group is the most difficult to reach and train, Faber said without hesitation, “The hardest to reach are the executives. They need to learn what the value is to be provided by the AI ecosystem. The biggest challenge is how to communicate that value,” he said.   

Panel Discusses AI Use Cases with the Most Potential  

In a panel on Foundations of Emerging AI, moderator Curt Savoie, program director, Global Smart Cities Strategies for IDC, the market research firm, asked what emerging AI use case has the most potential.  

Jean-Charles Lede, autonomy tech advisor for the US Air Force, Office of Scientific Research, said, “I would point to decision advantages at the edge, supporting pilots and operators, and decisions at the back, for mission and resource planning.”

Krista Kinnard, Chief of Emerging Technology for the Department of Labor

Krista Kinnard, Chief of Emerging Technology for the Department of Labor, said, “Natural language processing is an opportunity to open the doors to AI in the Department of Labor. Ultimately, we are dealing with data on people, programs, and organizations.”

Savoie asked the panelists what big risks and dangers they see in implementing AI.

Anil Chaudhry, Director of Federal AI Implementations for the General Services Administration (GSA), said in a typical IT organization using traditional software development, the impact of a decision by a developer only goes so far. With AI, “You have to consider the impact on a whole class of people, constituents, and stakeholders. With a simple change in algorithms, you could be delaying benefits to millions of people or making incorrect inferences at scale. That’s the most important risk,” he said.  

He said he asks his contract partners to have “humans in the loop and humans on the loop.”   

Kinnard seconded this, saying, “We have no intention of removing humans from the loop. It’s really about empowering people to make better decisions.”   

She emphasized the importance of monitoring the AI models after they are deployed. “Models can drift as the underlying data changes,” she said. “So you need a level of critical thinking to not only do the task, but to assess whether what the AI model is doing is acceptable.”

She added, “We have built out use cases and partnerships across the government to make sure we’re implementing responsible AI. We will never replace people with algorithms.”  

Lede of the Air Force said, “We often have use cases where the data does not exist. We cannot explore 50 years of war data, so we use simulation. The risk is that in teaching an algorithm with simulation, you have a ‘simulation-to-real gap,’ which is a real risk. You are not sure how the algorithms will map to the real world.”

Chaudhry emphasized the importance of a testing strategy for AI systems. He warned of developers “who get enamored with a tool and forget the purpose of the exercise.” He recommended the development manager design in an independent verification and validation strategy. “Your testing, that is where you have to focus your energy as a leader. The leader needs an idea in mind, before committing resources, on how they will justify whether the investment was a success.”

Lede of the Air Force talked about the importance of explainability. “I am a technologist. I don’t do laws. The ability for the AI function to explain in a way a human can interact with, is important. The AI is a partner that we have a dialogue with, instead of the AI coming up with a conclusion that we have no way of verifying,” he said.  

Learn more at AI World Government. 

Advance Trustworthy AI and ML, and Identify Best Practices for Scaling AI 

By John P. Desmond, AI Trends Editor  

Advancing trustworthy AI and machine learning to mitigate agency risk is a priority for the US Department of Energy (DOE), and identifying best practices for implementing AI at scale is a priority for the US General Services Administration (GSA).  

That’s what attendees learned in two sessions at the AI World Government live and virtual event held in Alexandria, Va. last week.   

Pamela Isom, Director of the AI and Technology Office, DOE

Pamela Isom, Director of the AI and Technology Office at the DOE, who spoke on Advancing Trustworthy AI and ML Techniques for Mitigating Agency Risks, has been involved in proliferating the use of AI across the agency for several years. With an emphasis on applied AI and data science, she oversees risk mitigation policies and standards and has been involved with applying AI to save lives, fight fraud, and strengthen the cybersecurity infrastructure.  

She emphasized the need for the AI project effort to be part of a strategic portfolio. “My office is there to drive a holistic view on AI and to mitigate risk by bringing us together to address challenges,” she said. The effort is assisted by the DOE’s AI and Technology Office, which is focused on transforming the DOE into a world-leading AI enterprise by accelerating research, development, delivery and the adoption of AI.  

“I am telling my organization to be mindful of the fact that you can have tons and tons of data, but it might not be representative,” she said. Her team looks at examples from international partners, industry, academia and other agencies for outcomes “we can trust” from systems incorporating AI.  

“We know that AI is disruptive, in trying to do what humans do and do it better,” she said. “It is beyond human capability; it goes beyond data in spreadsheets; it can tell me what I’m going to do next before I contemplate it myself. It’s that powerful,” she said.  

As a result, close attention must be paid to data sources. “AI is vital to the economy and our national security. We need precision; we need algorithms we can trust; we need accuracy. We don’t need biases,” Isom said, adding, “And don’t forget that you need to monitor the output of the models long after they have been deployed.”   

Executive Orders Guide GSA AI Work 

Executive Order 14028, a detailed set of actions to address the cybersecurity of government agencies, issued in May 2021, and Executive Order 13960, promoting the use of trustworthy AI in the Federal government, issued in December 2020, provide valuable guides to her work.

To help manage the risk of AI development and deployment, Isom has produced the AI Risk Management Playbook, which provides guidance around system features and mitigation techniques. It also has a filter for ethical and trustworthy principles which are considered throughout AI lifecycle stages and risk types. Plus, the playbook ties to relevant Executive Orders.  

And it provides examples, such as when your results come in at 80% accuracy but you wanted 90%. “Something is wrong there,” Isom said, adding, “The playbook helps you look at these types of problems and what you can do to mitigate risk, and what factors you should weigh as you design and build your project.”
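
As a minimal illustration of that kind of check (it is not drawn from the playbook itself, which remains internal to DOE), an accuracy gate might look like the following sketch; the follow-up questions are illustrative assumptions:

```python
def accuracy_gate(measured_accuracy: float, target_accuracy: float) -> list:
    """Return follow-up questions when a model misses its accuracy target.

    Hypothetical sketch of the kind of gate a risk playbook might prompt
    a team to apply; the questions below are illustrative, not official.
    """
    if measured_accuracy >= target_accuracy:
        return []  # nothing flagged; proceed with normal review
    return [
        "Is the training data representative of the population the model serves?",
        "Were the evaluation data and metric appropriate for the mission need?",
        "Which mitigation (more data, different features, another model) is feasible?",
    ]

# The example from the talk: results came in at 80% accuracy, but 90% was wanted.
for question in accuracy_gate(measured_accuracy=0.80, target_accuracy=0.90):
    print(question)
```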

While internal to DOE at present, the agency is looking into next steps for an external version. “We will share it with other federal agencies soon,” she said.   

GSA Best Practices for Scaling AI Projects Outlined  

Anil Chaudhry, Director of Federal AI Implementations, AI Center of Excellence (CoE), GSA

Anil Chaudhry, Director of Federal AI Implementations for the AI Center of Excellence (CoE) of the GSA, who spoke on Best Practices for Implementing AI at Scale, has over 20 years of experience in technology delivery, operations and program management in the defense, intelligence and national security sectors.   

The mission of the CoE is to accelerate technology modernization across the government, improve the public experience and increase operational efficiency. “Our business model is to partner with industry subject matter experts to solve problems,” Chaudhry said, adding, “We are not in the business of recreating industry solutions and duplicating them.”   

The CoE is providing recommendations to partner agencies and working with them to implement AI systems as the federal government engages heavily in AI development. “For AI, the government landscape is vast. Every federal agency has some sort of AI project going on right now,” he said, and the maturity of AI experience varies widely across agencies.  

Typical use cases he is seeing include having AI focus on increasing speed and efficiency, on cost savings and cost avoidance, on improved response time and increased quality and compliance. As one best practice, he recommended that agencies vet their industry partners’ commercial experience with the large datasets they will encounter in government.

“We’re talking petabytes and exabytes here, of structured and unstructured data,” Chaudhry said. [Ed. Note: A petabyte is 1,000 terabytes.] “Also ask industry partners about their strategies and processes on how they do macro and micro trend analysis, and what their experience has been in the deployment of bots such as in Robotic Process Automation, and how they demonstrate sustainability as a result of drift of data.”   

He also asks potential industry partners to describe the AI talent on their team or what talent they can access. If the company is weak on AI talent, Chaudhry would ask, “If you buy something, how will you know you got what you wanted when you have no way of evaluating it?”  

He added, “A best practice in implementing AI is defining how you train your workforce to leverage AI tools, techniques and practices, and to define how you grow and mature your workforce. Access to talent leads to either success or failure in AI projects, especially when it comes to scaling a pilot up to a fully deployed system.”  

In another best practice, Chaudhry recommended examining the industry partner’s access to financial capital. “AI is a field where the flow of capital is highly volatile. You cannot predict or project that you will spend X amount of dollars this year to get where you want to be,” he said, because an AI development team may need to explore another hypothesis, or clean up some data that may not be transparent or is potentially biased. “If you don’t have access to funding, it is a risk your project will fail,” he said.

Another best practice is access to logistical capital, such as the data that sensors collect for an AI IoT system. “AI requires an enormous amount of data that is authoritative and timely. Direct access to that data is critical,” Chaudhry said. He recommended that data sharing agreements be in place with organizations relevant to the AI system. “You might not need it right away, but having access to the data, so you could immediately use it, and having thought through the privacy issues before you need the data, is a good practice for scaling AI programs,” he said.

A final best practice is planning of physical infrastructure, such as data center space. “When you are in a pilot, you need to know how much capacity you need to reserve at your data center, and how many endpoints you need to manage” when the application scales up, Chaudhry said, adding, “This all ties back to access to capital and all the other best practices.”

Learn more at AI World Government. 

Promise and Perils of Using AI for Hiring: Guard Against Data Bias 

By AI Trends Staff  

While AI in hiring is now widely used for writing job descriptions, screening candidates, and automating interviews, it poses a risk of widespread discrimination if not implemented carefully.

Keith Sonderling, Commissioner, US Equal Employment Opportunity Commission

That was the message from Keith Sonderling, Commissioner with the US Equal Employment Opportunity Commission, speaking at the AI World Government event held live and virtually in Alexandria, Va., last week. Sonderling is responsible for enforcing federal laws that prohibit discrimination against job applicants because of race, color, religion, sex, national origin, age or disability.

“The thought that AI would become mainstream in HR departments was closer to science fiction two years ago, but the pandemic has accelerated the rate at which AI is being used by employers,” he said. “Virtual recruiting is now here to stay.”

It’s a busy time for HR professionals. “The great resignation is leading to the great rehiring, and AI will play a role in that like we have not seen before,” Sonderling said.  

AI has been employed for years in hiring (“It did not happen overnight,” he said) for tasks including chatting with applicants, predicting whether a candidate would take the job, projecting what type of employee they would be, and mapping out upskilling and reskilling opportunities. “In short, AI is now making all the decisions once made by HR personnel,” which he did not characterize as good or bad.

“Carefully designed and properly used, AI has the potential to make the workplace more fair,” Sonderling said. “But carelessly implemented, AI could discriminate on a scale we have never seen before by an HR professional.”  

Training Datasets for AI Models Used for Hiring Need to Reflect Diversity  

This is because AI models rely on training data. If the company’s current workforce is used as the basis for training, “It will replicate the status quo. If it’s one gender or one race primarily, it will replicate that,” he said. Conversely, AI can help mitigate risks of hiring bias by race, ethnic background, or disability status. “I want to see AI improve on workplace discrimination,” he said.  

Amazon began building a hiring application in 2014, and found over time that it discriminated against women in its recommendations, because the AI model was trained on a dataset of the company’s own hiring records for the previous 10 years, which came primarily from male candidates. Amazon developers tried to correct it but ultimately scrapped the system in 2017.

Facebook has recently agreed to pay $14.25 million to settle civil claims by the US government that the social media company discriminated against American workers and violated federal recruitment rules, according to an account from Reuters. The case centered on Facebook’s use of what it called its PERM program for labor certification. The government found that Facebook refused to hire American workers for jobs that had been reserved for temporary visa holders under the PERM program.   

“Excluding people from the hiring pool is a violation,” Sonderling said.  If the AI program “withholds the existence of the job opportunity to that class, so they cannot exercise their rights, or if it downgrades a protected class, it is within our domain,” he said.   

Employment assessments, which became more common after World War II, have provided high value to HR managers, and with help from AI they have the potential to minimize bias in hiring. “At the same time, they are vulnerable to claims of discrimination, so employers need to be careful and cannot take a hands-off approach,” Sonderling said. “Inaccurate data will amplify bias in decision-making. Employers must be vigilant against discriminatory outcomes.”

He recommended researching solutions from vendors who vet data for risks of bias on the basis of race, sex, and other factors.   

One example is from HireVue of South Jordan, Utah, which has built a hiring platform predicated on the US Equal Employment Opportunity Commission’s Uniform Guidelines, designed specifically to mitigate unfair hiring practices, according to an account from allWork.

A post on AI ethical principles on its website states in part, “Because HireVue uses AI technology in our products, we actively work to prevent the introduction or propagation of bias against any group or individual. We will continue to carefully review the datasets we use in our work and ensure that they are as accurate and diverse as possible. We also continue to advance our abilities to monitor, detect, and mitigate bias. We strive to build teams from diverse backgrounds with diverse knowledge, experiences, and perspectives to best represent the people our systems serve.”  

Also, “Our data scientists and IO psychologists build HireVue Assessment algorithms in a way that removes data from consideration by the algorithm that contributes to adverse impact without significantly impacting the assessment’s predictive accuracy. The result is a highly valid, bias-mitigated assessment that helps to enhance human decision making while actively promoting diversity and equal opportunity regardless of gender, ethnicity, age, or disability status.”  
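
For context, the “adverse impact” HireVue refers to is commonly screened with the four-fifths (80%) rule from the Uniform Guidelines: the selection rate for any protected group should be at least 80% of the rate for the group selected at the highest rate. A minimal sketch of that screen follows; the group names and rates are invented, and this is an illustration, not legal guidance:

```python
def adverse_impact_ratios(selection_rates: dict) -> dict:
    """Compute each group's selection rate relative to the highest-rate group.

    Under the four-fifths rule of thumb, a ratio below 0.8 is treated as
    evidence of possible adverse impact and warrants closer review.
    Selection rate = candidates advanced / candidates considered, per group.
    """
    highest = max(selection_rates.values())
    return {group: rate / highest for group, rate in selection_rates.items()}

# Hypothetical outcomes from an AI-scored assessment.
rates = {"group_a": 0.50, "group_b": 0.35}
for group, ratio in adverse_impact_ratios(rates).items():
    status = "review for adverse impact" if ratio < 0.8 else "within the four-fifths guideline"
    print(f"{group}: ratio {ratio:.2f} -> {status}")
```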

Dr. Ed Ikeguchi, CEO, AiCure

The issue of bias in datasets used to train AI models is not confined to hiring. Dr. Ed Ikeguchi, CEO of AiCure, an AI analytics company working in the life sciences industry, stated in a recent account in HealthcareITNews, “AI is only as strong as the data it’s fed, and lately that data backbone’s credibility is being increasingly called into question. Today’s AI developers lack access to large, diverse data sets on which to train and validate new tools.”  

He added, “They often need to leverage open-source datasets, but many of these were trained using computer programmer volunteers, which is a predominantly white population. Because algorithms are often trained on single-origin data samples with limited diversity, when applied in real-world scenarios to a broader population of different races, genders, ages, and more, tech that appeared highly accurate in research may prove unreliable.” 

Also, “There needs to be an element of governance and peer review for all algorithms, as even the most solid and tested algorithm is bound to have unexpected results arise. An algorithm is never done learning; it must be constantly developed and fed more data to improve.”

And, “As an industry, we need to become more skeptical of AI’s conclusions and encourage transparency in the industry. Companies should readily answer basic questions, such as ‘How was the algorithm trained? On what basis did it draw this conclusion?’”

Read the source articles and information at AI World Government, from Reuters and from HealthcareITNews. 

Predictive Maintenance Proving Out as Successful AI Use Case 

By John P. Desmond, AI Trends Editor  

More companies are successfully exploiting predictive maintenance systems that combine AI and IoT sensors to collect data that anticipates breakdowns and recommends preventive action before parts break or machines fail, in a demonstration of an AI use case with proven value.

This growth is reflected in optimistic market forecasts. The predictive maintenance market is sized at $6.9 billion today and is projected to grow to $28.2 billion by 2026, according to a report from IoT Analytics of Hamburg, Germany. The firm counts over 280 vendors offering solutions in the market today, projected to grow to over 500 by 2026.  
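
A quick back-of-the-envelope check of what those figures imply, assuming the $6.9 billion figure is the 2021 base and the projection compounds over five years to 2026:

```python
# Implied compound annual growth rate (CAGR) from the IoT Analytics figures above.
start, end, years = 6.9, 28.2, 5  # $B today (assumed 2021), $B projected for 2026, assumed span
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 32-33% per year under these assumptions
```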

Fernando Bruegge, Analyst, IoT Analytics, Hamburg, Germany

“This research is a wake-up call to those that claim IoT is failing,” stated analyst Fernando Bruegge, author of the report, adding, “For companies that own industrial assets or sell equipment, now is the time to invest in predictive maintenance-type solutions.” And, “Enterprise technology firms need to prepare to integrate predictive maintenance solutions into their offerings,” Bruegge suggested.  

Here is a review of some specific experience with predictive maintenance systems that combine AI and IoT sensors. 

Aircraft engine manufacturer Rolls-Royce is deploying predictive analytics to help reduce the amount of carbon its engines produce, while also optimizing maintenance to help customers keep planes in the air longer, according to a recent account in CIO. 

Rolls-Royce built an Intelligent Engine platform to monitor engine flight, gathering data on weather conditions and how pilots are flying. Machine learning is applied to the data to customize maintenance regimes for individual engines. 

Stuart Hughes, chief information and digital officer, Rolls-Royce

“We’re tailoring our maintenance regimes to make sure that we’re optimizing for the life an engine has, not the life the manual says it should have,” stated Stuart Hughes, chief information and digital officer at Rolls-Royce. “It’s truly variable service, looking at each engine as an individual engine.” 

Customers are seeing less service interruption. “Rolls-Royce has been monitoring engines and charging per hour for at least 20 years,” Hughes stated. “That part of the business isn’t new. But as we’ve evolved, we’ve begun to treat the engine as a singular engine. It’s much more about the personalization of that engine.”  

Predictive analytics is being applied in healthcare as well as in the manufacturing industry. Kaiser Permanente, the integrated managed care consortium based in Oakland, Calif., is using predictive analytics to identify non-intensive care unit (ICU) patients at risk of rapid deterioration.

While non-ICU patients that require unexpected transfers to the ICU constitute less than 4% of the total hospital population, they account for 20% of all hospital deaths, according to Dr. Gabriel Escobar, research scientist, Division of Research, and regional director, Hospital Operations Research, Kaiser Permanente Northern California. 

Kaiser Permanente Practicing Predictive Maintenance in Healthcare 

Kaiser Permanente developed the Advanced Alert Monitor (AAM) system, leveraging three predictive analytic models to analyze more than 70 factors in a given patient’s electronic health record to generate a composite risk score. 

“The AAM system synthesizes and analyzes vital statistics, lab results, and other variables to generate hourly deterioration risk scores for adult hospital patients in the medical-surgical and transitional care units,” stated Dick Daniels, executive vice president and CIO of Kaiser Permanente in the CIO account. “Remote hospital teams evaluate the risk scores every hour and notify rapid response teams in the hospital when potential deterioration is detected. The rapid response team conducts bedside evaluation of the patient and calibrates the course treatment with the hospitalist.” 
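
The CIO account does not say how the three models are combined, but a composite score of this general shape can be sketched as a weighted blend of individual model outputs, recomputed each hour. The weights, factor groupings, and alert threshold below are purely illustrative assumptions, not Kaiser Permanente’s actual method:

```python
def composite_risk_score(model_scores: dict, weights: dict) -> float:
    """Blend several model risk scores (each in [0, 1]) into one composite score.

    Hypothetical sketch: the AAM combines three predictive models over 70+
    EHR factors, but its actual weighting and calibration are not public.
    """
    total_weight = sum(weights.values())
    return sum(model_scores[name] * w for name, w in weights.items()) / total_weight

# Illustrative hourly evaluation for one patient.
scores = {"vitals_model": 0.62, "labs_model": 0.48, "history_model": 0.55}
weights = {"vitals_model": 0.5, "labs_model": 0.3, "history_model": 0.2}
risk = composite_risk_score(scores, weights)

ALERT_THRESHOLD = 0.6  # illustrative; real thresholds are clinically validated
if risk >= ALERT_THRESHOLD:
    print(f"Risk {risk:.2f}: notify the rapid response team for bedside evaluation")
else:
    print(f"Risk {risk:.2f}: continue hourly monitoring")
```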

In advice to other practitioners, Daniels recommended a focus on how the tool will be fit into the workflow of health care teams. “It took us about five years to perform the initial mapping of the electronic medical record backend and develop the predictive models,” Daniels stated. “It then took us another two to three years to transition these models into a live web services application that could be used operationally.” 

In an example from the food industry, a PepsiCo Frito-Lay plant in Fayetteville, Tenn. is using predictive maintenance successfully, with year-to-date equipment downtime at 0.75% and unplanned downtime at 2.88%, according to Carlos Calloway, the site’s reliability engineering manager, in an account in PlantServices. 

Examples of monitoring include: vibration readings confirmed by ultrasound helped to prevent a PC combustion blower motor from failing and shutting down the whole potato chip department; infrared analysis of the main pole for the plant’s GES automated warehouse detected a hot fuse holder, which helped to avoid a shutdown of the entire warehouse; and increased acid levels were detected in oil samples from a baked extruder gearbox, indicating oil degradation, which enabled prevention of a shutdown of Cheetos Puffs production. 

The Frito-Lay plant produces more than 150 million pounds of product per year, including Lays, Ruffles, Cheetos, Doritos, Fritos, and Tostitos.  

The types of monitoring include vibration analysis, used on mechanical applications, which is processed with the help of a third-party company which sends alerts to the plant for investigation and resolution. Another service partner performs quarterly vibration monitoring on selected equipment. All motor control center rooms and electrical panels are monitored with quarterly infrared analysis, which is also used on electrical equipment, some rotating equipment, and heat exchangers. In addition, the plant has done ultrasonic monitoring for more than 15 years, and it is “kind of like the pride and joy of our site from a predictive standpoint,” stated Calloway.  

The plant has a number of products in place from UE Systems of Elmsford, NY, supplier of ultrasonic instruments, hardware and software, and training for predictive maintenance.

Louisiana Alumina Plant Automating Bearing Maintenance   

Bearings, which wear over time under varying conditions of weather and temperature in the case of automobiles, are a leading candidate for IoT monitoring and predictive maintenance with AI. The Noranda Alumina plant in Gramercy, La. is finding a big payoff from its investment in a system to improve the lubrication of bearings in its production equipment.  

The system has resulted in a 60% decline in bearing changes in the second year of using the new lubrication system, translating to some $900,000 in savings on bearings that did not need to be replaced and avoided downtime.  

“Four hours of downtime is about $1 million worth of lost production,” stated Russell Goodwin, a reliability engineer and millwright instructor at Noranda Alumina, in the PlantServices account, which was based on presentations at the Leading Reliability 2021 event.

The Noranda Alumina plant is the only alumina plant operating in the US. “If we shut down, you’ll need to import it,” stated Goodwin. The plant experiences pervasive dust, dirt, and caustic substances, which complicate efforts at improved reliability and maintenance practices.  

Noranda Alumina tracks all motors and gearboxes at 1,500 rpm and higher with vibration readings, and most below 1,500 with ultrasound. Ultrasonic monitoring, of sound in ranges beyond human hearing, was introduced to the plant after Goodwin joined the company in 2019. At the time, grease monitoring had room for improvement. “If grease was not visibly coming out of the seal, the mechanical supervisor did not count the round as complete,” stated Goodwin.  
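
A minimal sketch of that routing rule as described in the account (the 1,500 rpm cutoff comes from the text; the asset names are made up):

```python
def monitoring_technique(asset_rpm: float) -> str:
    """Route an asset to a predictive-maintenance technique by rotating speed.

    Sketch of the rule described for Noranda Alumina: motors and gearboxes at
    1,500 rpm and higher get vibration readings; most below that get ultrasound.
    """
    return "vibration" if asset_rpm >= 1500 else "ultrasound"

# Hypothetical assets, for illustration only.
for name, rpm in [("conveyor_gearbox", 900), ("fan_motor", 1800)]:
    print(f"{name} ({rpm} rpm): monitor with {monitoring_technique(rpm)}")
```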

After introducing automation, the greasing system has improved dramatically, he stated. The system was also able to detect bearings in a belt that were wearing out too quickly due to contamination. “Tool-enabled tracking helped to prove that it wasn’t improper greasing, but rather the bearing was made improperly,” stated Goodwin.

Read the source articles and information in IoT Analytics, in CIO and in PlantServices.

Novelty In The Game Of Go Provides Bright Insights For AI And Autonomous Vehicles 

By Lance Eliot, the AI Trends Insider  

We already expect humans to exhibit flashes of brilliance. It might not happen all the time, but the act itself is welcomed and not altogether disturbing when it occurs.

What about when Artificial Intelligence (AI) seems to display an act of novelty? Any such instance is bound to get our attention; questions arise right away.   

How did the AI come up with the apparent out-of-the-blue insight or novel indication? Was it a mistake, or did it fit within the parameters of what the AI was expected to produce? There is also the immediate consideration of whether the AI somehow is slipping toward the precipice of becoming sentient.   

Please be aware that no AI system in existence is anywhere close to reaching sentience, despite the claims and falsehoods tossed around in the media. As such, if today’s AI seems to do something that appears to be a novel act, you should not leap to the conclusion that this is a sign of human insight within technology or the emergence of human ingenuity among AI.   

That’s an anthropomorphic bridge too far.   

The reality is that any such AI “insightful” novelties are based on various concrete computational algorithms and tangible data-based pattern matching.   

In today’s column, we’ll be taking a close look at an example of an AI-powered novel act, illustrated via the game of Go, and relate these facets to the advent of AI-based true self-driving cars as a means of understanding the AI-versus-human related ramifications. 

Realize that the capacity to spot or suggest a novelty is being done methodically by an AI system, while, in contrast, no one can say for sure how humans can devise novel thoughts or intuitions. 

Perhaps we too are bound by some internal mechanistic-like facets, or maybe there is something else going on. Someday, hopefully, we will crack open the secret inner workings of the mind and finally know how we think. I suppose it might undercut the mystery and magical aura that oftentimes goes along with those of us that have moments of outside-the-box visions, though I’d trade that enigma to know how the cups-and-balls trickery truly functions (going behind the curtain, as it were).   

Speaking of novelty, a famous game match involving the playing of Go can provide useful illumination on this overall topic.   

Go is a popular board game in the same complexity category as chess. Arguments are made about which is tougher, chess or Go, but I’m not going to get mired into that morass. For the sake of civil discussion, the key point is that Go is highly complex and requires intense mental concentration especially at the tournament level.   

Generally, Go consists of trying to capture territory on a standard Go board, consisting of a 19 by 19 grid of intersecting lines. For those of you that have never tried playing Go, the closest similar kind of game might be the connect-the-dots that you played in childhood, which involves grabbing up territory, though Go is magnitudes more involved.    

There is no need for you to know anything in particular about Go to get the gist of what will be discussed next regarding the act of human novelty and the act of AI novelty.   

A famous Go competition took place about four years ago that pitted one of the world’s top professional Go players, Lee Sedol, against an AI program that had been crafted to play Go, coined as AlphaGo. There is a riveting documentary about the contest and plenty of write-ups and online videos that have in detail covered the match, including post-game analysis.   

Put yourself back in time to 2016 and relive what happened. 

Most AI developers did not anticipate that the AI of that time would be proficient enough to beat a top Go player. Sure, AI had already been able to best some top chess players, and thus offered a glimmer of expectation that Go would eventually be equally undertaken, but there weren’t any Go programs that had been able to compete at the pinnacle levels of human Go players. Most expected that it would probably be around the year 2020 or so before the capabilities of AI would be sufficient to compete in world-class Go tournaments.  

DeepMind Created AlphaGo Using Deep Learning, Machine Learning   

A small-sized tech company named DeepMind Technologies devised the AlphaGo AI playing system (the firm was later acquired by Google). Using techniques from Machine Learning and Deep Learning, the AlphaGo program was being revamped and adjusted right up to the actual tournament, a typical kind of last-ditch developer contortions that many of us have done when trying to get the last bit of added edge into something that is about to be demonstrated.   

This was a monumental competition that had garnered global interest.   

Human players of Go were doubtful that the AlphaGo program would win. Many AI techies were doubtful that AlphaGo would win. Even the AlphaGo developers were unsure of how well the program would do, including the stay-awake-at-night fears that the AlphaGo program would hit a bug or go into a kind of delusional mode and make outright mistakes and play foolishly.   

A million dollars in prize money was put into the pot for the competition. There would be five Go games played, one per day, along with associated rules about taking breaks, etc. Some predicted that Sedol would handily win all five games, doing so without cracking a sweat. AI pundits were clinging to the hope that AlphaGo would win at least one of the five games, and otherwise, present itself as a respectable level of Go player throughout the contest. 

In the first match, AlphaGo won.   

This was pretty much a worldwide shocker. Sedol was taken aback. Lots of Go players were surprised that a computer program could compete and beat someone at Sedol’s level of play. Everyone began to give some street cred to the AlphaGo program and the efforts by the AI developers.   

Tension grew for the next match.   

For the second game, it was anticipated that Sedol might significantly change his approach to the contest. Perhaps he had been overconfident coming into the competition, some harshly asserted, and the loss of the first game would awaken him to the importance of putting all his concentration into the tournament. Or, possibly he had played as though he was competing with a lesser capable player and thus was not pulling out all the stops to try and win the match.   

What happened in the second game? 

Turns out that AlphaGo prevailed, again, and also did something that was seemingly remarkable for those that avidly play Go. On the 37th move of the match, the AlphaGo program opted to place a stone on the Go board in a spot that nobody especially anticipated. It was a surprise move, coming partway through a match that otherwise was relatively conventional in the nature of the moves being made by both Sedol and AlphaGo.

At the time, in real-time, rampant speculation was that the move was an utter gaffe on the part of the AlphaGo program.   

Instead, it became famous as a novel move, known now as “Move 37,” heralded in Go lore and used colloquially to describe any instance of AI doing something novel or unexpected.

In the third match, AlphaGo won again, having now clinched the best-of-five competition. They continued, though, to play a fourth and a fifth game.

During the fourth game, things were tight as usual and the match play was going head-to-head (well, head versus AI). Put yourself into the shoes of Sedol. In one sense, he wasn’t just a Go player, he was somehow representing all of humanity (an unfair and misguided viewpoint, but pervasive anyway), and the pressure was on him to win at least one game. Just even one game would be something to hang your hat on, and bolster faith in mankind (again, a nonsensical way to look at it).   

At the seventy-eighth move of the fourth game, Sedol made a so-called “wedge” play that was not conventional and surprised onlookers. The next move by AlphaGo was rotten and diminished the likelihood of a win by the AI system. After additional play, AlphaGo ultimately tossed in the towel and resigned from the match, so Sedol finally had a win against the AI under his belt. He ended up losing the fifth game (so AlphaGo won four games and Sedol won one). His move also became famous, generally known as “Move 78” in the lore of Go playing.

Something else that is worthwhile to know about involves the overarching strategy that AlphaGo was crafted to utilize.   

When you play a game, let’s say connect-the-dots, you can aim to grab as many squares as possible at each moment of play, doing so under the belief that inevitably you will then win by the accumulation of those tactically-oriented successes. Human players of Go are often apt to play that way, as can be said of chess players too, and of nearly any kind of game playing altogether.

Another approach involves playing to win, even if only by the thinnest of margins, as long as you win. In that case, you might not be motivated for each tactical move to gain near-term territory or score immediate points, and be willing instead to play a larger scope game per se. The proverbial mantra is that if you are shortsighted, you might win some of the battles, but could eventually lose the war. Therefore, it might be a better strategy to keep your eye on the prize, winning the war, albeit if it means that there are battles and skirmishes to be lost along the way.   

The AI developers devised AlphaGo with that kind of macro-perspective underlying how the AI system functioned.   
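
The distinction can be made concrete with a small sketch: a “territory” player picks the move with the highest expected short-term point gain, while a “win probability” player, the macro-perspective described here, picks the move with the highest estimated chance of winning, even if the margin is thin. The move names, numbers, and evaluation values below are invented for illustration; they are not AlphaGo’s internals:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CandidateMove:
    name: str
    expected_territory_gain: float    # short-term points the move is expected to grab
    estimated_win_probability: float  # long-run chance of winning if the move is played

def pick_by_territory(moves: List[CandidateMove]) -> CandidateMove:
    """Tactical style: maximize immediate territory gain."""
    return max(moves, key=lambda m: m.expected_territory_gain)

def pick_by_win_probability(moves: List[CandidateMove]) -> CandidateMove:
    """Macro style described above: maximize the chance of winning, however narrowly."""
    return max(moves, key=lambda m: m.estimated_win_probability)

# Invented example: the quiet move gives up points now but wins more often.
moves = [
    CandidateMove("aggressive_invasion", expected_territory_gain=8.0, estimated_win_probability=0.52),
    CandidateMove("quiet_reinforcement", expected_territory_gain=1.5, estimated_win_probability=0.61),
]
print("Territory player chooses:", pick_by_territory(moves).name)
print("Win-probability player chooses:", pick_by_win_probability(moves).name)
```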

Humans can have an especially hard time choosing, in the moment, to make a move that might look bad or ill-advised, such as giving up territory; they find themselves unable to grit their teeth and take a lump or two during play. The embarrassment at the instant is difficult to offset by betting that it is going to ultimately be okay, and that you will prevail in the end.

For an AI system, there is no semblance of that kind of sentiment involved, and it is all about calculated odds and probabilities.   

Now that we’ve covered the legendary Go match, let’s consider some lessons learned about novelty.   

The “Move 37” played by the AI system was not magical. It was an interesting move, for sure, and the AI developers later indicated that the move was one that the AI had calculated would rarely be undertaken by a human player.

This can be interpreted in two ways (at least).   

One interpretation is that a human player would not make that move because humans are right and know that it would be a lousy move.   

Another interpretation is that humans would not make that move due to a belief that the move is unwise, but this could be a result of the humans insufficiently assessing the ultimate value of the move in the long run, and getting caught up in a shorter-term view of play.

In this instance, it turned out to be a good move, perhaps a brilliant one, and it turned the course of the game to the advantage of the AI. Thus, what looked like brilliance was in fact a calculated move that few humans would have imagined as valuable, and one that jostled humans to rethink how they think about such matters.

Some useful recap lessons:   

Showcasing Human Self-Limited Insight. When the AI does something seemingly novel, it might be viewed as novel simply because humans have already predetermined what is customary and anything beyond that is blunted by the assumption that it is unworthy or mistaken. You could say that we are mentally trapped by our own drawing of the lines of what is considered as inside versus outside the box.   

Humans Exploiting AI For Added Insight. Humans can gainfully assess an AI-powered novelty to potentially re-calibrate human thinking on a given topic, enlarging our understanding via leveraging something that the AI, via its vast calculative capacity, might detect or spot that we have not yet so ascertained. Thus, besides admiring the novelty, we ought to seek to improve our mental prowess by whatever source shines brightly including an AI system.   

AI Novelty Is A Dual-Edged Sword. We need to be mindful of all AI systems and their possibility of acting in a novel way, which could be good or could be bad. In the Go game, it worked out well. In other circumstances, the AI exploiting the novelty route might go off the tracks, as it were.   

Let’s see how this can be made tangible via exploring the advent of AI-based true self-driving cars.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. 

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Self-Driving Cars And Acts Of Novelty   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.   

You could say that the AI is playing a game, a driving game, requiring tactical decision-making and strategic planning, akin to when playing Go or chess, though in this case involving life-or-death matters driving a multi-ton car on our public roadways.   

Our base assumption is that the AI driving system is going to always take a tried-and-true approach to any driving decisions. This assumption is somewhat shaped around a notion that AI is a type of robot or automata that is bereft of any human biases or human foibles.   

In reality, there is no reason to make this kind of assumption. Yes, we can generally rule out the AI displaying emotions of a human ilk, and we also know that the AI will not be drunk or DUI in its driving efforts. Nonetheless, if the AI has been trained using Machine Learning (ML) and Deep Learning (DL), it can pick up subtleties of human behavioral patterns in the data about human driving, which it will likewise utilize or mimic in choosing its driving actions (for example, see my column postings involving an analysis of potential racial biases in AI and the possibility of gender biases).

Turning back to the topic of novelty, let’s ponder a specific use case.   

A few years ago, I was driving on an open highway, going at the prevailing speed of around 65 miles per hour, and something nearly unimaginable occurred. A car coming toward me in the opposing lane, likely traveling at around 60 to 70 miles per hour, suddenly and unexpectedly veered into my lane. It was one of those moments that you cannot anticipate.

There did not appear to be any reason for the other driver to be headed toward me, in my lane of traffic, and coming at me for an imminent and bone-chillingly terrifying head-on collision. If there had been debris on the other lane, it might have been a clue that perhaps this other driver was simply trying to swing around the obstruction. No debris. If there was a slower moving car, the driver might have wanted to do a fast end-around to get past it. Nope, there was absolutely no discernible basis for this radical and life-threatening maneuver. 

What would you do? 

Come on, hurry, the clock is ticking, and you have just a handful of split seconds to make a life-or-death driving decision.   

You could stay in your lane and hope that the other driver realizes the error of their ways, opting to veer back into their lane at the last moment. Or, you could proactively go into the opposing lane, giving the other driver a clear path in your lane, but this could be a chancy game of chicken whereby the other driver chooses to go back into their lane (plus, there was other traffic further behind that driver, so going into the opposing lane was quite dicey).   

Okay, so do you stay in your lane or veer away into the opposing lane?   

I dare say that most people would be torn between those two options. Neither one is palatable. 

Suppose the AI of a self-driving car was faced with the same circumstance.   

What would the AI do?   

The odds are that even if the AI had been fed with thousands upon thousands of miles of driving via a database about human driving while undergoing the ML/DL training, there might not be any instances of this head-on nature and thus no prior pattern to utilize for making this onerous decision.

Anyway, here’s a twist.   

Imagine that the AI calculated the probabilities involving which way to go, and in some computational manner came to the conclusion that the self-driving car should go into the ditch that was at the right of the roadway. This was intended to avoid entirely a collision with the other car (the AI estimated that a head-on collision would be near-certain death for the occupants). The AI estimated that going into the ditch at such high speed would indisputably wreck the car and cause great bodily injury to the occupants, but the odds of assured death were (let’s say) calculated as lower than the head-on option possibilities (this is a variant of the infamous Trolley Problem, as covered in my columns).   
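
In highly simplified form, the kind of calculation being described is an expected-harm comparison across candidate maneuvers. The probabilities below are invented purely to illustrate the structure of the choice; no real AI driving system reduces the decision to a single number per option:

```python
def lowest_risk_maneuver(options: dict) -> str:
    """Return the maneuver with the lowest estimated probability of a fatal outcome.

    Purely illustrative: real driving systems weigh many outcomes (injury severity,
    harm to others, uncertainty in every estimate), not one scalar per option.
    """
    return min(options, key=options.get)

# Invented estimates for the scenario described above.
maneuvers = {
    "stay_in_lane": 0.45,             # hope the other driver veers back in time
    "swerve_to_opposing_lane": 0.40,  # a chancy game of chicken with oncoming traffic
    "steer_into_ditch": 0.15,         # certain wreck, but lower estimated chance of death
}
print("Lowest estimated fatality risk:", lowest_risk_maneuver(maneuvers))
```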

I’m betting that you would concede that most humans would be relatively unwilling to aim purposely into that ditch, which they know for sure is going to be a wreck and potential death, while instead willing (reluctantly) to take a hoped-for chance of either veering into the other lane or staying on course and wishing for the best.   

In some sense, the AI might seem to have made a novel choice. It is one that (we’ll assume) few humans would have given any explicit thought toward.   

Returning to the earlier recap of the points about AI novelty, you could suggest that in this example, the AI has exceeded a human self-imposed limitation by the AI having considered otherwise “unthinkable” options. From this, perhaps we can learn to broaden our view for options that otherwise don’t seem apparent.   

The other recap element was that the AI novelty can be a dual-edged sword.   

If the AI did react by driving into the ditch, and you were inside the self-driving car, and you got badly injured, would you later believe that the AI acted in a novel manner or that it acted mistakenly or adversely?   

Some might say that if you lived to ask that question, apparently the AI made the right choice. The counter-argument is that if the AI had gone with one of the other choices, perhaps you would have sailed right past the other car and not gotten a single scratch.   

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion   

For those of you wondering what actually did happen, my lucky stars were looking over me that day, and I survived with nothing more than a close call. I decided to remain in my lane, though it was tempting to veer into the opposing lane, and by some miracle, the other driver suddenly went back into the opposing lane.   

When I tell the story, my heart still gets pumping, and I begin to sweat.   

Overall, AI that appears to engage in novel approaches to problems can be advantageous, and in some circumstances, such as playing a board game, it can be right or wrong, where being wrong does not especially put human lives at stake.

For AI-based true self-driving cars, lives are at stake.   

We’ll need to proceed mindfully and with our eyes wide open about how we want AI driving systems to operate, including calculating odds and deriving choices while at the wheel of the vehicle.  

Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website